Evidence for Restricted Reactivity of ADAMDEC1 with Protein Substrates and Endogenous Inhibitors*
Background: ADAMDEC1 is an ADAM-like metalloprotease with a rare active site affecting the proteolytic activity. Results: Reconstruction of the ADAMDEC1 active site, based on the ADAM family consensus, increases proteolytic activity and susceptibility for inhibition. Conclusion: Specific structural features may protect ADAMDEC1 from endogenous metalloprotease inhibitors. Significance: ADAMDEC1 has evolved features resulting in narrow substrate specificity and restricted reactivity with endogenous protease inhibitors. ADAMDEC1 is a proteolytically active metzincin metalloprotease displaying rare active site architecture with a zinc-binding Asp residue (Asp-362). We previously demonstrated that substitution of Asp-362 for a His residue, thereby reconstituting the canonical metzincin zinc-binding environment with three His zinc ligands, increases the proteolytic activity. The protease also has an atypically short domain structure with an odd number of Cys residues in the metalloprotease domain. Here, we investigated how these rare structural features in the ADAMDEC1 metalloprotease domain impact the proteolytic activity, the substrate specificity, and the effect of inhibitors. We identified carboxymethylated transferrin (Cm-Tf) as a new ADAMDEC1 substrate and determined the primary and secondary cleavage sites, which suggests a strong preference for Leu in the P1′ position. Cys392, present in humans but only partially conserved within sequenced ADAMDEC1 orthologs, was found to be unpaired, and substitution of Cys392 for a Ser increased the reactivity with α2-macroglobulin but not with casein or Cm-Tf. Substitution of Asp362 for His resulted in a general increase in proteolytic activity and a change in substrate specificity was observed with Cm-Tf. ADAMDEC1 was inhibited by the small molecule inhibitor batimastat but not by tissue inhibitor of metalloproteases (TIMP)-1, TIMP-2, or the N-terminal inhibitory domain of TIMP-3 (N-TIMP-3). 
However, N-TIMP-3 displayed profound inhibitory activity against the D362H variants with a reconstituted consensus metzincin zinc-binding environment. We hypothesize that these unique features of ADAMDEC1 may have evolved to escape from inhibition by endogenous metalloprotease inhibitors.
ADAMDEC1 (decysin-1) is a novel "a disintegrin and metalloprotease" (ADAM)-like protease bearing closest resemblance to ADAM-28 and ADAM-7, with sequence identities of 47 and 36%, respectively. The ADAM family comprises type I transmembrane metzincin metalloproteases, which, in addition to the metzincin metalloprotease domain, have multiple C-terminal ancillary domains, including a disintegrin-like domain, a Cys-rich domain, an EGF-like domain, a transmembrane domain, and a cytoplasmic tail (1,2). In proteolytically active ADAM family members, the metalloprotease domain contains an active site consisting of an elongated zinc-binding motif, HEXXHXXGXXH, in which the three His residues serve as zinc-coordinating ligands (2,3).
Like other ADAMs, ADAMDEC1 mRNA encodes an N-terminal signal peptide directing secretion and a relatively large (173-residue) prodomain, which is processed during maturation, presumably by a member of the proprotein convertase family (4,5). The mature protein comprises a metzincin metalloprotease domain and a short disintegrin-like domain. Consequently, ADAMDEC1 lacks most of the ancillary domains otherwise present in the ADAMs and is secreted as a soluble protein (5-7). The metalloprotease domain harbors a rare metzincin-type active site consensus sequence (HEXXHXXGXXD), where the third zinc-binding ligand is an Asp residue, otherwise only found in a few bacterial metzincins like snapalysin (2). ADAMDEC1 is suggested to be the first (and only) member of a novel subgroup of mammalian ADAMs based on the differences in primary structure (6). ADAMDEC1 mRNA has been detected in monocytes, and the level increases during differentiation into mature macrophages (8). ADAMDEC1 mRNA is undetectable in immature dendritic cells, but expression is strongly induced during maturation (6,8). In addition, ADAMDEC1 expression has been found to be differentially regulated in a number of pathological conditions (9-15). However, the physiological roles of ADAMDEC1 remain elusive.
* This work was supported, in whole or in part, by National Institutes of Health, NIAMS, Grant AR40994 (to H. N.).
Here we show that ADAMDEC1 cleaves carboxymethylated transferrin (Cm-Tf) and determine the cleavage sites. We also demonstrate that special features of ADAMDEC1, including an unpaired Cys residue (Cys392) near the active site in the metalloprotease domain and the active site Zn2+-ligating Asp residue, have functional impacts on proteolytic activity, specificity, and the effect of inhibitors. Finally, we present a homology-based model of the tertiary structure of mature ADAMDEC1.
Cloning, Site-directed Mutagenesis, and Protein Expression-An expression system for human ADAMDEC1 variants was described previously (5). A similar expression vector was generated for expression of the murine ADAMDEC1. cDNA encoding murine ADAMDEC1 was purchased from ImaGenes (Berlin, Germany) and cloned into a pCI-neo vector (Promega) using a forward (tacgactcactataggctagcatgctgcctgggact) and reverse (cctcactaaagggaagcggccgctcattctgtgatgtgg) primer. An HPC4 affinity tag inserted two amino acids downstream of the furin recognition site (RTSR203 (4)) was created by overlap extension PCR using two internal primers: cttcatttgggttttttttgccatcaatcagacgcggatccacc and gaacttccaggtcactcgaagatcaggtggatccgcgtctgattg. Site-directed mutagenesis of C392S in hADAMDEC1 was carried out using the QuikChange XL site-directed mutagenesis kit (Agilent Technologies) with mutagenesis primers (ggacttcagcacaagctccagagcccacttcgag and the reverse complementary primer). All primers were purchased from MWG Eurofins (Ebersberg, Germany). For protein expression in HEK293-6E, cells were grown at 37°C and 8% CO2 in FreeStyle 293 expression medium supplemented with 25 μg/ml G-418 Geneticin and 0.1% (v/v) Pluronic F-68. pCI-neo-based expression plasmids encoding ADAMDEC1 variants were used for transfection according to the FreeStyle 293 protocol (Invitrogen). As a control, the supernatant from non-transfected cells (mock transfection) was collected. When applicable, the HEK293-6E conditioned medium containing ADAMDEC1 variants was concentrated using Amicon Ultra 10,000 molecular weight cut-off centrifugal filters (Millipore).
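Mutagenesis primer pairs like the one above consist of a forward primer and its reverse complement. As an illustrative aside (not part of the original protocol), the reverse complement can be generated with a few lines of Python; the input sequence below is the C392S forward primer quoted in the text.

```python
# Compute the reverse complement of a DNA primer (illustrative sketch).
COMPLEMENT = str.maketrans("acgtACGT", "tgcaTGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

# C392S mutagenesis forward primer from the text
forward = "ggacttcagcacaagctccagagcccacttcgag"
reverse = reverse_complement(forward)
print(reverse)  # the "reverse complementary primer" of the text
```

Applying the function twice returns the original sequence, a quick sanity check when designing complementary primer pairs.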
Determination of Disulfide Pattern-Protein bands were excised from a Coomassie Brilliant Blue-stained gel and split into two parts; reducing agent (40 mM DTT final concentration) was added to one part and buffer alone to the other. Both samples were alkylated using iodoacetic acid and subjected to in-gel digestion (19). The peptides were purified using C18 StageTips (20) and eluted onto an AnchorChip target plate (Bruker) using 2,5-dihydroxybenzoic acid matrix in 0.1% (w/v) trifluoroacetic acid and 70% (v/v) acetonitrile. Samples were analyzed on a Bruker Reflex III MALDI-TOF mass spectrometer (Bruker Daltonik GmbH (Bremen, Germany), Basis-REFLEX SCOUT 384) in reflector mode. External calibration of the mass spectrometer was performed immediately before data acquisition using angiotensin II (Sigma-Aldrich), ACTH clip 18-39 (Sigma-Aldrich), bombesin (Fluka), and somatostatin (Sigma-Aldrich).
Proteolytic Activity Assays-The relative concentrations of ADAMDEC1 variants were evaluated by densitometric analysis of a Coomassie Brilliant Blue-stained SDS-polyacrylamide gel using the Phoretix 1D software (TotalLab). Proteolysis of human plasma-derived α2M was carried out by incubating 25 nM ADAMDEC1 variant protein with 800 nM α2M in reaction buffer (50 mM HEPES (pH 7.5), 100 mM NaCl, 5 mM CaCl2, 5 μM ZnCl2) for ~20 h at 37°C. Cross-linking of ADAMDEC1 to α2M was visualized by reducing SDS-PAGE followed by Western blot analysis using 1.3 μg/ml anti-HPC4 tag antibody and 1.0 μg/ml HRP-labeled goat anti-human IgG (PerkinElmer Life Sciences) for detection or 1.0 μg/ml anti-hADAMDEC1 (Abcam ab57224) with 1.0 μg/ml HRP-labeled rabbit anti-mouse IgG (Dako) for detection. Carboxymethylated transferrin (3.75 mg/ml) was used as a substrate for hADAMDEC1 variants (100-500 nM) in reaction buffer. Samples were analyzed by reducing SDS-PAGE, and densitometry analysis was carried out using Phoretix 1D software (TotalLab). Proteolytic activity against azocasein was measured by incubating 7.8 μM ADAMDEC1 with 94 μM substrate at 37°C for 72 h. The proteolysis was terminated by the addition of 6% (w/v) trichloroacetic acid (TCA) followed by a 30-min incubation on ice and removal of undigested protein by centrifugation at 10,000 × g and 4°C for 10 min. NaOH (0.26 mM) was added to the supernatants, and the absorbance was measured at 440 nm using a SpectraMax 190 microplate reader (Molecular Devices). The reaction buffer contribution was subtracted from the absorbance data, and the data were compared with the data obtained with supernatant from mock-transfected cells using one-way analysis of variance with Bonferroni's adjustment. Individual sample pairs were additionally compared using an unpaired, two-tailed Student's t test. For casein zymography, 4-16% zymogram blue casein gels and buffers (Invitrogen) were used according to the manufacturer's protocol.
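The one-way analysis of variance used above reduces to comparing between-group variance with within-group variance. A minimal, stdlib-only Python sketch of the F-statistic computation (the A440 readings below are hypothetical, not the paper's data):

```python
# One-way ANOVA F statistic, computed from scratch (no SciPy).
def anova_f(groups: list[list[float]]) -> float:
    """Return the one-way ANOVA F statistic for k groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n_total - k degrees of freedom)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

mock = [0.10, 0.12, 0.11]     # hypothetical background A440 readings
hwt = [0.35, 0.33, 0.36]      # hypothetical hWT readings
variant = [0.60, 0.58, 0.62]  # hypothetical variant readings
f_stat = anova_f([mock, hwt, variant])
```

The F statistic is then compared against the F distribution with (k-1, N-k) degrees of freedom; a Bonferroni adjustment for the follow-up pairwise tests divides the significance threshold by the number of comparisons made.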
For inhibition of ADAMDEC1 with batimastat (BB-94), the ADAMDEC1 variants were preincubated with a 10-100-fold molar excess of BB-94 for 1 h at 37°C before the addition of substrate. Characterization of TIMP inhibition of ADAMDEC1 was performed by preincubation of ADAMDEC1 with TIMP-1 (5-10-fold molar excess), TIMP-2 (5-10-fold molar excess), or N-TIMP-3 (up to 3-fold molar excess), respectively, for 4 h at 37°C prior to the addition of substrate.
N-terminal Sequence Analysis-Proteins were separated by SDS-PAGE, transferred to PVDF membranes, and stained using Coomassie Brilliant Blue. Individual bands were excised and subjected to Edman amino acid sequence analysis using an Applied Biosystems Procise HT protein sequencer with on-line identification of phenylthiohydantoin derivatives.
Homology Modeling-Amino acid sequences were obtained from the UniProt Knowledgebase and aligned using MUSCLE (21). In order to generate a structure model of hADAMDEC1, a homology modeling approach was pursued as implemented in the computer program Modeller (22). Based on an input sequence alignment between the mature hADAMDEC1 amino acid sequence and template proteins with known structures, a three-dimensional model was generated. Loop regions not resolved in the template protein were predicted by the program's limited ab initio structure prediction functionality. In the present study, the template protein was ADAM-8 (PDB entry 4DD8, resolution 2.1 Å (23)) for the metalloprotease domain (residues 204-410) and ADAM-22 (PDB entry 3G5C, resolution 2.36 Å (24)) for the disintegrin-like domain structure (residues 411-470) and the relative domain orientation. The resulting model was processed by CHARMM (25). The active site region of hADAMDEC1 WT was modeled after snapalysin (PDB entry 1C7K, resolution 1 Å (26)) and relaxed by energy minimization, resulting in close coordination of active site residues to the catalytic zinc ion. Furthermore, N-TIMP-3 was docked in the model based on the structure of the ADAM-17·N-TIMP-3 complex (PDB entry 3CKI, resolution 2.30 Å (27)), and a substrate model addressing both the non-prime and prime substrate binding sites was generated inspired by the crystal structure of MMP-9 with bound active site probe (PDB entry 4JQG, resolution 1.85 Å (28)).
Human ADAMDEC1 Has an Unpaired, Reactive Cys in the Metalloprotease Domain-Comparing the amino acid sequences of human and murine ADAMDEC1 with those of the related hADAM-28 and hADAM-8 illustrates a modified Cys pattern of the ADAMDEC1 metalloprotease domain (Fig. 1A). In the majority of the mammalian ADAMs, the metzincin metalloprotease domain contains six Cys residues forming three intrachain disulfide bonds in a C1-C6, C2-C5, C3-C4 pattern (29,30). However, the hADAMDEC1 metalloprotease domain contains only five Cys residues. Cys328, Cys369, Cys374, and Cys407 of hADAMDEC1 align with Cys residues in the related ADAMs, presumably forming the two disulfide bonds corresponding to C1-C6 (Cys328-Cys407) and C3-C4 (Cys369-Cys374) (Fig. 1A). Both human and murine ADAMDEC1 lack Cys residues corresponding to the C2-C5 disulfide bond. However, hADAMDEC1 contains a fifth Cys residue in the metalloprotease domain (Cys392), which we hypothesized to be unpaired. Indeed, hADAMDEC1 wild type (hWT) formed conjugates with thiol-reactive 5-kDa PEG-maleimide, resulting in a decrease in electrophoretic mobility. Furthermore, substitution of Cys392 for a Ser (hC392S) blocked the reaction with the PEG-maleimide (Fig. 1B), strongly suggesting that Cys392 exists as an unpaired and reactive Cys residue. The status of Cys392 was further probed by tryptic peptide mass fingerprinting analyzed by MALDI-TOF mass spectrometry (Fig. 1, C and D). hADAMDEC1 was excised from a non-reducing SDS-PAGE gel, alkylated under either reducing or non-reducing conditions, and digested with trypsin (Fig. 1C). In the non-reduced alkylated sample, a peptide with a mass corresponding to hADAMDEC1 residues 387-398, containing Cys392 in an alkylated state (MH+ 1513.2/1512.6 (observed/expected)), was observed, demonstrating that Cys392 is not bound to any other Cys residue (Fig. 1D).
In addition, we observed a fragment mass in the non-reduced sample consistent with the expected tryptic peptide CPSGSCVMNQYLSSK containing Cys369 and Cys374 engaged in an internal disulfide bond (MH+ 1601.2/1601.7) (Fig. 1D). A peptide equivalent to the same fragment modified by oxidation (ox), presumably affecting the Met residue in the peptide, was also observed. Both of these fragments were absent in the reduced sample, and a new peak appeared, corresponding to the same peptide modified with two carboxymethyl groups (MH+ 1719.3/1717.69) (Fig. 1D). Together, this demonstrates that Cys369 and Cys374 form a disulfide bond corresponding to C3-C4 in other ADAMs. The mass spectrometry analysis was consistent with the presence of a Cys328-Cys407 disulfide bond; however, due to the close proximity of the disintegrin domain Cys residues with no Lys or Arg residues in between, the disulfide bond pattern of this peptide cluster cannot be fully resolved by the method employed (data not shown).
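The expected MH+ values quoted above follow from summing monoisotopic residue masses, adding water and a proton, and subtracting 2 H per disulfide bond (or adding ~58.005 Da per carboxymethyl group). A quick Python check for the CPSGSCVMNQYLSSK peptide, using standard monoisotopic masses (a sketch, not the authors' software):

```python
# Monoisotopic residue masses (Da) for the amino acids in CPSGSCVMNQYLSSK.
RESIDUE = {
    "C": 103.00919, "P": 97.05276, "S": 87.03203, "G": 57.02146,
    "V": 99.06841, "M": 131.04049, "N": 114.04293, "Q": 128.05858,
    "Y": 163.06333, "L": 113.08406, "K": 128.09496,
}
H2O, PROTON, DISULFIDE, CARBOXYMETHYL = 18.01056, 1.00728, -2.01565, 58.00548

def mh_plus(peptide: str, n_disulfides: int = 0, n_cm: int = 0) -> float:
    """Singly protonated monoisotopic mass (MH+) of a peptide."""
    mass = sum(RESIDUE[aa] for aa in peptide) + H2O + PROTON
    return mass + n_disulfides * DISULFIDE + n_cm * CARBOXYMETHYL

pep = "CPSGSCVMNQYLSSK"
oxidized = mh_plus(pep, n_disulfides=1)  # Cys369-Cys374 disulfide intact
reduced_cm = mh_plus(pep, n_cm=2)        # reduced, both Cys carboxymethylated
print(f"{oxidized:.1f}")  # close to the expected 1601.7 quoted above
```

The disulfide-intact form comes out near the quoted expected value of 1601.7; the reduced, doubly carboxymethylated form comes out near 1719.7, in line with the observed 1719.3 peak.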
Cys392 Affects hADAMDEC1 Activity in a Substrate-specific Manner-Where hADAMDEC1 has an unpaired Cys at position 392, mADAMDEC1 has a Ser residue instead (Fig. 1A). To investigate the possible significance of Cys392 for the proteolytic activity of hADAMDEC1, it was substituted for a Ser residue (hC392S). The reactivity with human α2M was analyzed using an α2M-cross-linking assay, demonstrating a small increase in the activity of hC392S relative to hADAMDEC1 (Fig. 2A). Thus, a Ser residue at position 392 changes the reactivity of hADAMDEC1 toward human α2M. Interestingly, murine ADAMDEC1 (mWT), also with a Ser residue at position 392, exhibits significantly higher reactivity toward human α2M than both hADAMDEC1 and hC392S (Fig. 2A). In contrast to the observation in the α2M-cross-linking assay, the C392S substitution does not impact the caseinolytic activity of hADAMDEC1, measured by the release of TCA-soluble azo-labeled peptides (Fig. 2B). In addition to the two known substrates of ADAMDEC1, Cm-Tf is cleaved into two major products, with apparent molecular masses of 76 and 20 kDa, respectively, when incubated with hADAMDEC1. These products are not present when incubating with supernatant from mock-transfected cells, demonstrating hADAMDEC1-specific cleavage of Cm-Tf at one primary site (Fig. 2C). As with casein, the C392S substitution does not change the activity toward Cm-Tf. Interestingly, mADAMDEC1 does not display any measurable caseinolytic activity or any observable cleavage of Cm-Tf (data not shown). Substitution of the catalytic Glu353 for an Ala in hADAMDEC1 (hE353A) completely abrogated the catalytic activity against all substrates (Fig. 2, B and C; shown previously (5)).
The Activity Enhancement of the D362H Active Site Reconstruction Is Augmented by the C392S Substitution-We previously demonstrated that reconstruction of the hADAMDEC1 active site by substituting the zinc-binding Asp362 for a His residue (hD362H) increases the proteolytic activity of hADAMDEC1 toward α2M and azocasein (5). Interestingly, when combining the D362H and C392S substitutions (hD362H/C392S), we observe a synergistic effect on the proteolytic activity toward human α2M relative to hADAMDEC1, hC392S, and hD362H (Fig. 3A). Also, both hD362H and hD362H/C392S demonstrated significantly increased proteolysis of Cm-Tf compared with hADAMDEC1 and hC392S, again with hD362H/C392S being the most active (Fig. 3B). Whereas digestion of Cm-Tf by hADAMDEC1 primarily generated two fragments, hD362H and hD362H/C392S produced a range of fragments under the conditions employed (Fig. 3B). Finally, hD362H/C392S also exhibited significantly increased caseinolytic activity, shown by the proteolysis of azo-labeled casein (Fig. 3C) and further demonstrated by casein zymography (Fig. 3D). In fact, hD362H/C392S is the only variant displaying visible activity in the casein zymogram.
Identification of hADAMDEC1 Cleavage Sites in Cm-Tf-As described in the legend to Fig. 2B, proteolysis of Cm-Tf by hC392S generated a cleavage pattern similar to that of hADAMDEC1. The combined molecular mass of the two primary bands (76 and 20 kDa) is in good agreement with the observed mass of the undigested Cm-Tf (95 kDa; marked by a black dot). Thus, the fragment pattern is consistent with a single cleavage event giving rise to the two primary proteolytic fragments. In comparison, hD362H and hD362H/C392S generated multiple cleavage products, indicating not only an increased proteolytic activity but also a change in substrate specificity compared with hADAMDEC1 and hC392S (Fig. 3B). The most intense fragments were sequenced by N-terminal Edman degradation, and four cleavage sites were identified (Fig. 4A). The 76-kDa fragment stems from cleavage of the Cys(Cm)194-Leu195 peptide bond, identifying the major cleavage site of hADAMDEC1 in Cm-Tf. The N-terminal sequence of the 20-kDa fragment is identical to the N terminus of mature transferrin, suggesting that the 76- and 20-kDa fragments together make up mature Cm-Tf (Fig. 4B).
FIGURE 1. … (4,5). The numbering above selected key residues refers to the hADAMDEC1 full-length sequence. B, thiol-specific PEGylation of hWT and hC392S analyzed by SDS-PAGE. C, experimental setup for tryptic peptide mass fingerprinting using MALDI-TOF mass spectrometry. D, tryptic peptide mass fingerprinting of non-reduced (blue) and reduced (red) human pro-ADAMDEC1 (hADAMDEC1 R56A/R200K/R203A (5)). The mass spectrum of m/z 1500-1730 is shown. Two additional expected ADAMDEC1 tryptic peptides, VVPSASTTFDNFLR (*, residues 279-292) and QTPELTLHEIVCmPK (**, residues 35-48), where Cm denotes a carboxymethylated Cys residue, are also visible in the depicted mass range.
MARCH 6, 2015 • VOLUME 290 • NUMBER 10
Three additional main cleavage sites, between Ala346 and Leu347, Gly465 and Leu466, and Cys(Cm)523 and Leu524 in Cm-Tf, were identified when incubated with hD362H and hD362H/C392S. Our previous studies identified a cis-autocleavage of hADAMDEC1 between Pro161 and Leu162 in the prodomain (5). The sequences flanking these sites are remarkably similar to each other, and all identified sites have a Leu residue in the P1′ position (Fig. 4C). In addition, the identified cleavage sites may suggest a preference for a charged group in P2 and a hydrophobic residue in P3. Finally, both scissile bonds identified as cleaved by hADAMDEC1 wild type contain a Lys at P2′, whereas variability at this position is observed in D362H-based variants (Fig. 4C).
ADAMDEC1 Is Inhibited by Batimastat-To investigate whether the ADAMDEC1-specific features influence the conformation of the active site, inhibition by the metzincin active site inhibitor batimastat (BB-94) was evaluated using the α2M-cross-linking and Cm-Tf proteolysis assays (Fig. 5). BB-94 inhibited the α2M cross-linking of both human and mouse ADAMDEC1 wild type and the hC392S, hD362H, and hD362H/C392S variants (Fig. 5, A-C). BB-94 also inhibited the proteolysis of Cm-Tf by all tested variants, and neither of the substitutions had an apparent effect on the inhibition (Fig. 5D). Because BB-94 primarily probes the S1′ pocket, inhibition by BB-94 indicates that neither the C392S nor the D362H substitution imposes dramatic changes on the conformation of the S1′ pocket, as also illustrated by the retained preference for a P1′ Leu residue in the identified cleavage sites.
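Tallying the P1/P1′ pairs reported above makes the conserved Leu preference explicit. A small illustrative sketch, using only the scissile-bond residues named in the text:

```python
from collections import Counter

# P1 / P1' residues of the cleavage sites named in the text
# (four Cm-Tf sites plus the prodomain cis-autocleavage site).
sites = [
    ("Cys(Cm)194", "Leu195"),  # primary hADAMDEC1 site in Cm-Tf
    ("Ala346", "Leu347"),      # D362H-variant site
    ("Gly465", "Leu466"),      # D362H-variant site
    ("Cys(Cm)523", "Leu524"),  # D362H-variant site
    ("Pro161", "Leu162"),      # prodomain cis-autocleavage (5)
]

# Strip residue numbers to count amino acid identities at P1'
p1_prime = Counter(res.rstrip("0123456789") for _, res in sites)
print(p1_prime)  # every identified site has Leu at P1'
```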
TIMPs Do Not Inhibit ADAMDEC1, but N-TIMP-3 Inhibits ADAMDEC1 with a Reconstituted ADAM Active Site-The TIMPs regulate matrix metalloproteinase activities in vivo, and especially TIMP-3 has been shown to have a broad specificity toward the ADAMs and ADAMTSs (17, 31-33). To investigate whether members of this endogenous inhibitor family affect hADAMDEC1 proteolytic activity, mature TIMP-1 and TIMP-2 as well as N-TIMP-3 were incubated with hADAMDEC1 and variants before the addition of Cm-Tf. TIMP-1 and TIMP-2 did not inhibit the proteolysis of Cm-Tf by any of the investigated hADAMDEC1 variants (data not shown). Similarly, N-TIMP-3 in up to a 3-fold molar excess did not show any inhibition of Cm-Tf digestion by hADAMDEC1 or hC392S (Fig. 6, A and B). However, the hD362H variant with enhanced proteolytic activity was efficiently inhibited by a 3-fold molar excess of N-TIMP-3 (Fig. 6C). In addition, an equimolar concentration of N-TIMP-3 demonstrated efficient inhibition of the superactive hD362H/C392S variant (Fig. 6D). To account for the higher proteolytic activity of hD362H and hD362H/C392S compared with that of hWT and hC392S, the D362H-based variants were incubated half as long with Cm-Tf. Thus, the D362H substitution, reconstituting the consensus three-His Zn2+-binding site, appears to be needed for ADAMDEC1 susceptibility to inhibition by N-TIMP-3.
Homology Modeling of the hADAMDEC1 Structure-To gain insight into how the rare and unique features of the ADAMDEC1 amino acid sequence may impact the structure of ADAMDEC1, a homology model was created using the crystal structures of hADAM-8, hADAM-22, and snapalysin as templates (23,24,26). The metalloprotease domain was modeled after the hADAM-8 structure; the active site was based on snapalysin and refined by energy minimization. The disintegrin-like domain and its relative positioning were based on that of hADAM-22 (Fig. 7). The homology model of hADAMDEC1 contains the hallmarks of a metzincin metalloprotease, including a central, highly twisted, five-stranded β-sheet (I-V) and three conserved α-helices (A-C) as well as two additional α-helices specific for the ADAMs (Fig. 7, A and B) (1, 34). Contrary to the metalloprotease domain, the disintegrin-like domain consists mainly of loops and turns (Fig. 7B).
FIGURE 2. A, α2M-cross-linking assay with hADAMDEC1 variants and mWT visualized by Western blot analysis using an anti-HPC4-tag mAb against an N-terminal HPC4 tag on the ADAMDEC1 variants. The different α2M·ADAMDEC1 complexes (I) are observed at the top of the blots, whereas the added protease (III) is seen at the bottom. Also, cross-reactivity of antibodies with an impurity in the α2M preparation is observed (II). B, proteolytic activity against soluble, azo-labeled casein. The sample means are shown by thick horizontal lines. For one-way analysis of variance: ***, p < 0.0001; ns, not significant relative to mock. Student's t test comparing hWT and hC392S is shown with a horizontal bracket. C, proteolytic assay using Cm-Tf (marked by a black dot) as substrate for the indicated hADAMDEC1 variants (500 nM). Samples were analyzed by reducing SDS-PAGE, and primary product bands are marked by black arrows.
… Cys442, and Cys446-Cys460, where the latter four appear to hold the disintegrin-like domain together. In hADAM-22, the Cys residues aligning with ADAMDEC1 Cys446 and Cys460 are not connected to each other but form disulfide bonds with Cys residues absent in ADAMDEC1. However, the model orients Cys446 and Cys460 within disulfide bonding distance, suggesting that these could be connected. In the metalloprotease domain, a putative calcium binding site is formed by the side chains of Asp221, Asp304, and Gln410, the backbone carbonyl oxygen of Cys407, and two water molecules (Fig. 7B). Another putative calcium binding site in the disintegrin-like domain is made up by the side chains of Asn425, Glu429, Glu432, and Asp435 in addition to the backbone carbonyls of Val422 and Leu427. To accommodate Zn2+ coordination, Asp362 is predicted to influence the active site architecture, as illustrated by a superposition of the modeled active sites of hWT and hD362H (Fig. 8A) showing a 1.9-Å offset of the Cα of hWT Asp362 compared with hD362H His362. The active site architecture of ADAMDEC1 is suggested to be similar to that of snapalysin, and substitution of Asp362 for His is predicted to yield an active site like that of hADAM-8. The model places the unpaired Cys392 at the N-terminal end of α-helix C below the active site and suggests that Ser392 in the hC392S variant stabilizes α-helix C by forming an N-cap hydrogen bond to Ser389 (Fig. 8A). The modeled active site readily accommodates the AFKCm↓LKDG peptide representing the dominant cleavage site of hADAMDEC1 in Cm-Tf (Fig. 8, B and C). The modeled peptide docking suggests that the conserved P1′ Leu residue fits easily into the deep and hydrophobic S1′ cavity and that the preferred hydrophobic residue in the P3 position is also bound in a hydrophobic pocket formed by Phe287, Leu321, and Ala323.
The model further predicts favorable electrostatic interactions between the negatively charged carboxymethylated Cys in the P1 position and Arg318 as well as between the P3′ Asp and Lys339/Lys340 of hADAMDEC1 (Fig. 8B). The crystal structure of N-TIMP-3 docks into the hADAMDEC1 model with the N-terminal ridge of N-TIMP-3 binding the catalytic Zn2+ through the backbone of Cys1 (Fig. 8D). In addition, residues flanking Cys1 and Cys68 are predicted to interact with the substrate binding pockets. Whereas hADAMDEC1 is not inhibited by N-TIMP-3, the proposed backbone rearrangement in the active site of the hD362H variant is predicted to accommodate interactions with N-TIMP-3 Glu65, thereby enabling inhibition of the metalloprotease (Fig. 8E).
DISCUSSION
ADAMDEC1 is an unusual metzincin metalloprotease with an unknown biological function. It has a rare active site with a zinc-coordinating Asp residue and a short domain structure terminating in the disintegrin-like domain. Here we have demonstrated that ADAMDEC1 also contains a different Cys pattern in the metalloprotease domain, resulting in an unpaired Cys residue, Cys392, in the human ortholog. We show that Cys392 of hADAMDEC1 can react with a maleimide group coupled to a 5-kDa polyethylene glycol moiety, indicating that Cys392 is reactive and at least partially surface-exposed. Cys392 appears to modulate the ability of ADAMDEC1 to be cross-linked by α2M because its relatively subtle substitution for Ser increases α2M-mediated trapping of the protease. Substituting Cys392 for a Ser is predicted by structural modeling to stabilize α-helix C by forming an N-cap hydrogen bond to Ser389. Interestingly, the C392S substitution does not change the activity against the other identified ADAMDEC1 substrates; thus, Cys392 may have evolved in the human ortholog to limit the risk of inhibitory trapping by the α2M protease inhibitor. We previously showed that the zinc-coordinating Asp dampens the proteolytic activity of ADAMDEC1, in that reconstruction of the canonical metzincin active site (hD362H) increases the activity (5). Surprisingly, the combination of the D362H and C392S substitutions (hD362H/C392S) has a synergistic effect, resulting in a significant enhancement of the proteolytic activity against all identified hADAMDEC1 substrates.
Human Cm-Tf is a previously used metalloprotease substrate that contains two homologous transferrin domains. hADAMDEC1 cleaves Cm-Tf at one primary site, between Cys(Cm)194 and Leu195 in transferrin domain 1, generating two major Cm-Tf fragments. The D362H and D362H/C392S variants not only displayed enhanced proteolytic activity but also exhibited altered specificity, identifying the Ala346-Leu347 bond as an additional preferred scissile bond. In addition, we identified secondary cleavage sites at Gly465↓Leu466, Cys(Cm)523↓Leu524, and the site preferred by wild-type hADAMDEC1, Cys(Cm)194↓Leu195. The data demonstrate a clear change in the substrate specificity of the D362H and D362H/C392S variants, yet all identified sites have a Leu residue at the P1′ position. We previously determined an autocatalytic processing site in the prodomain of hADAMDEC1 (RYQIKP↓LKSTDE (5)), which also contains a Leu residue at the P1′ position as well as a hydrophobic residue in the P3 position. Interestingly, the main cleavage site of hADAMDEC1 at Cys(Cm)194-Leu195 and the secondary cleavage site of hD362H and hD362H/C392S at Cys(Cm)523-Leu524 have both been found to be proteolyzed by ADAMTS-4 and -5 (35,36). In contrast, the primary cleavage site of D362H-substituted variants at Ala346↓Leu347 has been shown to be cleaved by ADAM-12 (37). To our knowledge, proteolysis of the Gly465-Leu466 peptide bond has not previously been reported. Comparison of the two identified cleavage sites for hADAMDEC1 and the three sites specific for the D362H substitution suggests that a positively charged Lys residue in the P2′ position may be preferred or needed for hADAMDEC1 (Fig. 4C).
Because a single change of the third Zn²⁺-binding residue from the negatively charged Asp362 to the uncharged His seems to alleviate the need for a positively charged P2′ residue in the substrate, we suggest that the P2′ residue may interact directly with Asp362, or through a hydrogen bond network, to reduce the increased electron density or negative charge of the Zn²⁺ ligand. The P2′ Lys may thus help create a Zn²⁺ environment for ADAMDEC1 closer to that of the three-His-coordinated ion. hADAMDEC1 is not inhibited by TIMP-1 to -3 under the investigated conditions, and substitution of Cys392 for Ser has no effect on TIMP inhibition. However, substitution of the zinc-binding Asp362 for a His increases the susceptibility to N-TIMP-3 inhibition. The crystal structure of hADAM-17 in complex with N-TIMP-3 reveals that Glu65 of N-TIMP-3 interacts with the backbone nitrogen of His415, the third zinc ligand of hADAM-17 (27). Because the amino acid side chain of Asp is shorter than that of His, the backbone of the active site Asp362 in ADAMDEC1 is predicted to be moved closer to the Zn²⁺ in order to accommodate binding (Fig. 8A). Hence, the D362H substitution is predicted to push the backbone down 1.9 Å, allowing the substituting His362 to coordinate the active site zinc ion while at the same time allowing Glu65 of N-TIMP-3 to interact with the backbone nitrogen of His362. We suggest that this may in part account for the increased inhibition by N-TIMP-3 when Asp362 is substituted for a His residue, in addition to the changed chemical environment of the catalytic Zn²⁺ ion. Although TIMP-1, -2, and -3 are structurally similar, only N-TIMP-3 inhibits hD362H. This selectivity may in part arise from a favorable interaction of N-TIMP-3 Glu62 with Arg318 of ADAMDEC1 (data not shown). In contrast, TIMP-1 has a Pro and TIMP-2 an Ala at the corresponding position, and they are thus incapable of forming a hydrogen bond or a salt bridge to Arg318.
Additionally, TIMP-1 Thr97 and TIMP-2 Thr99 are predicted to interfere with binding to ADAMDEC1 by coming into too close proximity to Arg318. TIMP-3, on the other hand, has Gly93 at the corresponding position, which is not predicted to influence the binding of N-TIMP-3 to hD362H.
All investigated hADAMDEC1 variants are inhibited by the synthetic active site inhibitor batimastat (BB-94), suggesting that neither of the substitutions changes the S1′ binding pocket. This is in accordance with the finding that all tested variants proteolyze Cm-Tf N-terminal to a Leu residue.
Taken together, the ADAMDEC1-specific features result in a narrower substrate specificity but also a dampening of the proteolytic activity. The observed effects lead us to hypothesize that hADAMDEC1 has evolved with the rare active site and the unpaired Cys392 to allow escape from inhibition by TIMP-3 and α2M. [Figure legend fragment (27): N-TIMP-3 interacts directly with the catalytic Zn²⁺ through the backbone amino and carbonyl groups of Cys1 and with the substrate-binding pockets through multiple amino acid side chains (as indicated in the figure). E, the interaction of N-TIMP-3 Glu65 with the backbone of Asp362 in hWT (gray) and His362 of the hD362H variant (green).]
|
v3-fos-license
|
2020-07-30T02:06:57.850Z
|
2020-07-01T00:00:00.000
|
222124008
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journal.trunojoyo.ac.id/jsmb/article/download/7979/4701",
"pdf_hash": "55a299f675d91ed13beb7e9a29a262dab16d31e9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2576",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"sha1": "fb959c1d6dba80d0cef52a5e5aa64c3ed627f9e5",
"year": 2020
}
|
pes2o/s2orc
|
Macroeconomic Changes and Stock Prices in Real Estate and Property Firms
This research examines the effect of changes in macroeconomic conditions on stock prices in real estate and property companies. The aim of this research is to explain how changes in macroeconomic conditions affect the stock prices of real estate and property companies. The sampling technique used was purposive sampling, yielding a sample of 14 companies out of the 48 real estate and property companies listed on the JSX. The analysis technique used is multiple regression. Based on the results of the analysis, it was found that one macroeconomic condition, inflation, had no positive effect on the stock prices of real estate and property companies. This is because demand for real estate and property increases in line with population growth. The findings also show that the interest rate, proxied by the BI Rate, has a negative effect on the stock prices of real estate and property companies. An increase in the BI Rate causes investors to lose interest in the capital market, because investing in the money market at high interest rates is considered more profitable.
INTRODUCTION
Free trade pushes firms toward sharper competition. As a consequence, competition has a positive impact on world economic growth, because limited economic resources are exploited more efficiently. In the free trade era, the capital market plays an important role as a source of external financing for the business world and as a means of investment for society (Hamrita & Trifi, 2011; Sailendra & Suratno, 2014). The capital market has become one of the success factors of a country's economic development. In addition, capital markets are a venue for lenders and capital seekers (borrowers) to raise funds from the public to be channeled into more productive sectors (Brigham & Houston, 2006). By committing the excess funds they hold, lenders expect to gain a reward from the transfer of those funds. On the other hand, borrowers can use the funds for investment without having to wait for funds from the firm's own operations to become available. Generally, the capital market has three main objectives: first, to accelerate the expansion of community inclusion in the ownership of company shares; second, to equalize income among the people; and third, to increase people's participation in gathering funds productively. A previous study has shown that there are several indicators investors can consider when investing, one of which is information about stock price developments (Ramadani, 2018).
Capital markets are one of the preferred alternatives for long-term funding among the financing alternatives available to firms (Barro, 1990; Qaisi, Al-qudah, & Tahtamouni, 2016). As the market has developed, the number of firms selling their shares in the capital market has increased. In relation to stock investing, investors will choose firm stocks that are worth selecting based on specific criteria. The criteria commonly used are stocks that are actively traded and have good fundamentals. Rational investors consider two things, namely the expected income (expected return) and the risk, both of which are contained in any alternative investment.
The stock market price is a market-clearing price determined by the strength of demand and supply (Brigham & Houston, 2006). Thus, the stock price is the value of a share that reflects the wealth of the firm issuing the stock, where its change or fluctuation is determined by the forces of supply and demand on the exchange (secondary market). When more investors want to buy or retain a stock, its price rises. In contrast, the more investors who want to sell or release a stock, the further its price moves down.
Theoretically, there are several perspectives investors can use in predicting changes in the stock price, i.e., the foreign exchange rate, the interest rate, inflation, and financial ratios. In line with that argument, Mishkin (2008) stated that portfolio theory shows that there are many factors influencing the demand for a security, one of which is macroeconomic variables, consisting of interest rates, exchange rates, and inflation rates. Further, Mishkin (2008) also explains that the demand for marketable securities is influenced by firm profitability, expected inflation, and government activities.
Inflation is an increase in the prices of goods and services that has a broad influence, including on share prices in the capital market. Investors' ability to understand and predict future macroeconomic conditions is very useful in making profitable investment decisions. For that reason, investors should consider macroeconomic indicators that can help in making investment decisions.
Empirically, some research has shown that many people are interested in investing their funds in the real estate and property sector because prices tend to rise continually (Geriadi & Wiksuana, 2017; Ramadani, 2018). Prices in the real estate and property sector increase because the supply of land is fixed. On the other hand, along with population growth and increasing human needs for housing, offices, shopping centers, amusement parks, and so on, demand for this sector has always increased. Under these conditions, developer firms will try to increase profits as property prices rise. With the increased profits gained, a developer firm can improve its financial performance and thereby increase its stock price.
However, investments in the real estate and property sector are long-term by nature and very sensitive to growth. Several macroeconomic indicators, such as gross domestic product, the inflation rate, interest rates, and the exchange rate, will determine investment success. Considering the factors that affect stock market activity, which in turn lead to increases and decreases in the number of stock bids and offers on the exchanges and thereby change stock prices, information plays an important role for investors (Ramadani, 2018). Information about a firm can be obtained from internal and external parties. External information relates to domestic economic conditions and the political situation, interest rates, government policies, inflation, and so on, while internal information affecting share trading relates, among other things, to the stock price, the level of profit gained, the level of risk, the firm's performance, and corporate actions the firm undertakes.
The stock price is the price formed by the interaction of sellers and buyers of stocks, backed by their hopes of sharing in the firm's profit (Mok, 1993). The stock price that occurs most recently on a single trading day is called the closing price. Stock prices are formed by the process of bids and offers occurring on the exchange, and the rise or fall of traded stocks on the exchange floor is determined by market forces. If the market assesses that the firm issuing the stock is in good condition, its stock price will usually rise, whereas if the firm is rated poorly by the market, its stock price will decrease, possibly even below the price in the secondary market. Therefore, the interactions among investors strongly determine a firm's stock price. For this reason, investors need information related to the formation of the stock price when making decisions to sell or buy stocks.
Theoretically, Brigham & Houston (2006) explained that there are several factors that affect stock prices: (1) fundamental factors, which provide information about the firm's performance and the factors that can affect it, including management's ability to manage the firm's operational activities, the firm's future business prospects, the marketing prospects of the business undertaken, and the development of the technology used in the firm's operations; and (2) technical factors, which describe how the market, either individually or in groups, assesses the share price, such as the development of the exchange rate, the state of the capital market, the volume and frequency of transactions, interest rates, and the strength of the capital market in influencing the firm's stock price.
Economic risk analysis is a part of stock analysis based on technical analysis. It concerns external and macro factors in the form of events that occur outside the firm and affect all firms, so that they cannot be controlled by any single firm. The direction of economic movement affects capital market movements, which is useful for investor decision making. Good growth is good news for investors, so it positively affects the capital market (Weston & Copeland, 2008).
Previous research shows that stock prices are influenced by macroeconomic conditions (Demir, 2019; Hussain, Aamir, Rasool, Fayyaz, & Mumtaz, 2012). Furthermore, those authors explained that several macroeconomic conditions cause changes in stock prices, for example, inflation and interest rates. Inflation is the tendency of prices to increase generally and continuously. Basically, inflation can be distinguished as permanent or temporary (Mankiw, 2007). The core inflation rate is the rate of inflation due to the increasing pressure of demand for goods and services, or aggregate demand, in the economy.
The inflation rate is determined by the forces of demand and supply, reflecting the behaviour of market actors and the public. One of the factors affecting society's behaviour is its expectation of the future rate of inflation. High inflation expectations encourage people to transfer their financial assets into real assets, and vice versa. High inflation will result in decreased stock prices, because high inflation causes the prices of goods generally to increase. This condition raises production costs and, in turn, the selling prices of goods (Nurdin, 1999). High goods prices reduce the purchasing power of the community, which lowers the firm's profit and eventually its stock price as well. In line with that argument, Soebagiyo (2017) explained that inflation has a positive effect on the IDX when it is followed by a proportional increase in the amount of money circulating, demonstrating good economic performance.
A high rate of inflation shows that the risk of investing in all major business sectors is high, because high inflation reduces investors' rate of return. In addition, high inflation tends to raise the prices of goods. The increase in prices raises production costs, which reduces the number of purchases both individually and in aggregate. As a result, sales decline, automatically decreasing the firm's revenue. Furthermore, this adversely affects the firm's performance, reflected in a decline in the firm's stock price (Nurdin, 1999).
A study conducted by Suyati (2015) explains that the interest rate is the annual interest payment on a loan, expressed as a percentage of the loan: the amount of interest received annually divided by the loan amount. Definitively, the interest rate is described as the price of a loan (Sunariyah, 2003). Interest rates are expressed as a percentage of the principal per unit of time.
Interest is a measure of the price of the resources used by a debtor that must be paid to creditors. The interest rate itself is determined by two forces, namely the supply of savings and the demand for capital investment, especially from the business sector. Savings are the difference between revenue and consumption. Essentially, interest is the main incentive for the community to be willing to save. The amount of savings is determined by the level of the interest rate: the higher the interest rate, the more the public is encouraged to save, and vice versa. The supply of investment funds is likewise determined by the interest rate on community savings. If the interest rate rises, it has a negative influence on the equity market. A decrease in the interest rate reduces issuers' burden and thereby increases the share price, and it can encourage investors to move from savings to the capital market (Mardiyati & Rosalina, 2013).
Based on the explanation above, the hypotheses in this study are as follows: H1: Inflation influences the stock price of the real estate and property sector. H2: Interest rates influence the stock price of the real estate and property sector.
Population and Sample
The population in this research is the real estate and property companies listed on the Indonesia Stock Exchange (IDX), consisting of 48 real estate and property firms. The sampling technique used is purposive sampling, with sampling criteria covering real estate and property firms listed on the IDX. This population was chosen because, in the period 2014-2016, real estate and property firms experienced fairly good development. The firms issued complete financial statements from 2014 to 2016, ending in the December 31st period, to facilitate the research process. The resulting sample comprises 14 firms.
Research Variables
The variables used in this research are the stock prices of real estate and property companies that reported complete financial statements in the period 2014-2016. The stock price referred to in this study is the closing price, because it is this price that registers the increase or decrease of a stock. The stock price data are the average closing price three days after the publication date, calculated from 2014 to 2016. The stock price is the dependent variable (Y).
The inflation rate (X1) is the tendency of the prices of goods to increase, stated as a percentage using the year-on-year inflation calculation, namely by comparing the index of the month with the index in December of the previous year, processed from the annual report of BI and the financial and economic statistics of Indonesia. The interest rate (X2) is proxied by the BI Rate, a policy interest rate reflecting the stance of monetary policy established by the Indonesian central bank and announced to the public.
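The year-on-year inflation measure described above can be sketched as follows; the index values used here are illustrative, not actual BI/BPS data:

```python
# Year-on-year inflation as described in the text: the price index of a
# given month is compared with the index in December of the previous year.

def yoy_inflation(cpi_month: float, cpi_dec_prev_year: float) -> float:
    """Return the inflation rate in percent."""
    return (cpi_month - cpi_dec_prev_year) / cpi_dec_prev_year * 100.0

# Example: an index of 118.5 in June vs 115.0 in December of the prior year
rate = yoy_inflation(118.5, 115.0)
print(f"{rate:.2f}%")  # → 3.04%
```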
Data Source
The data used in this research are secondary data, namely numeric data that are physically observed and classified according to the place and time of the underlying events, such as financial statements (Sugiyono, 2004). Specifically, the data required in this research are stock price data, inflation data, and interest rate data, all for 2014-2016. The data sources are the IDX corner, the Indonesian Capital Market Directory (ICMD), the Central Bureau of Statistics (BPS), and Bank Indonesia.
Data Analysis
This research aims to determine the influence of macroeconomic conditions on stock prices. The macroeconomic conditions are measured by the inflation rate (X1) and the interest rate (X2), proxied by the BI Rate. To test the proposed hypotheses, this study uses multiple regression analysis. Data processing is done using SPSS software version 15 for Windows (Basuki, 2015).
The multiple regression analysis of the influence of inflation (X1) and interest rates (X2) on the stock price (Y) yielded the following equation: Y = 1.488 + 0.287 X1 + 1.139 X2 + e. The equation shows that if the inflation variable (X1) and the interest rate variable (X2) are 0 (zero), then the stock price variable (Y) is 1.488. The inflation regression coefficient of 0.287 indicates that a one-unit increase in inflation raises the share price by 0.287, assuming the interest rate is constant, while the interest rate regression coefficient of 1.139 states that each one-unit increase in the interest rate raises the share price by 1.139, assuming inflation is constant.
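The fitted equation can be evaluated directly; a minimal sketch using the coefficients reported above (the function name and input values are mine, for illustration only):

```python
# Fitted regression from the paper: Y = 1.488 + 0.287*X1 + 1.139*X2
# X1 = inflation, X2 = interest rate (BI Rate).

B0, B1, B2 = 1.488, 0.287, 1.139

def predicted_price(inflation: float, bi_rate: float) -> float:
    return B0 + B1 * inflation + B2 * bi_rate

# With both predictors at zero, the prediction equals the intercept:
print(predicted_price(0.0, 0.0))  # → 1.488
# A one-unit increase in inflation (interest rate held fixed) adds ≈ 0.287:
print(round(predicted_price(1.0, 0.0) - predicted_price(0.0, 0.0), 3))  # → 0.287
```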
The results also indicate that the correlation coefficient (R) is 0.314, which means that the relationship between the X variables and Y is weak. The coefficient of determination (R²) is 0.099, which means that the macroeconomic variables, interest rates and inflation, contribute 9.9% to the share price, with the remaining 90.1% influenced by other factors not examined in this study. The adjusted coefficient of determination (adjusted R²) is 0.052, which means that 5.2% of the variation in the share price (Y) can be explained by the inflation variable (X1) and the interest rate variable (X2), with the remaining 94.8% influenced by other variables not examined in this study.
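The reported adjusted R² can be checked against the standard formula; the sketch below assumes n = 42 observations (14 firms × 3 years, an assumption on my part) and k = 2 predictors:

```python
# Adjusted R² from R², sample size n, and number of predictors k:
#   adj R² = 1 - (1 - R²) * (n - 1) / (n - k - 1)

def adjusted_r2(r2: float, n: int, k: int) -> float:
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# With R² = 0.099, n = 42, k = 2 this gives ≈ 0.053, consistent with the
# reported 0.052 up to rounding of the inputs.
print(round(adjusted_r2(0.099, 42, 2), 3))  # → 0.053
```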
Based on the multicollinearity test results, with tolerance values > 0.1 and VIF < 10, it can be concluded that the independent variables, inflation (X1) and interest rates (X2), have no multicollinearity relationship and can be used for prediction during the research period. The autocorrelation test shows a Durbin-Watson statistic of 2.588, which indicates no autocorrelation in the data because the result lies between 2.46 and 2.90. The heteroskedasticity test shows points spread randomly, forming no clear pattern and scattered above and below zero on the Y axis, indicating no heteroskedasticity. The normality test indicates that the data spread around the diagonal line and follow its direction, so the research data can be said to meet the normality assumption.
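The two diagnostics above follow standard formulas; a minimal sketch with illustrative numbers (the auxiliary R² and residual series below are made up, not taken from the paper):

```python
# Variance inflation factor from the auxiliary-regression R²
# (regressing one predictor on the others); tolerance is 1 - r2_aux.
def vif(r2_aux: float) -> float:
    return 1.0 / (1.0 - r2_aux)

# Durbin-Watson statistic from a residual series:
#   DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2
def durbin_watson(residuals) -> float:
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# An auxiliary R² of 0.20 gives tolerance 0.80 (> 0.1) and VIF 1.25 (< 10):
print(vif(0.20))  # → 1.25
# Strictly alternating residuals give DW well above 2 (negative
# autocorrelation); uncorrelated residuals give DW near 2.
print(durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0]))  # → 3.2
```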
The effect of the independent variables on the dependent variable, both simultaneously and partially, can be determined from the ANOVA (F) test and the t-test with SPSS 15 for Windows. The F test gives an F value of 2.132 with a significance level of 0.132. Since the probability of 0.132 > 5%, the inflation variable (X1) and the interest rate variable (X2) together have no significant effect on the stock price (Y). The t-test results indicate that the inflation variable has a t value of 1.137 with a significance of 0.263. Because the probability of 0.263 > 5%, it can be concluded that the inflation variable (X1) partially has no significant effect on the stock price (Y). The interest rate variable (X2) has a t value of 0.927 with a significance of 0.360. Because the probability of 0.360 > 5%, it can be concluded that the interest rate variable (X2) partially has no significant effect on the stock price (Y).
Discussion
Overall, this research has produced several findings. The descriptive analysis of the macroeconomic variables, measured by inflation and interest rates, against the stock prices of companies listed on the Jakarta Stock Exchange (JSX) from 2014 to 2016 indicates that inflation and interest rates fluctuated up and down. This shows that the Board of Governors sets the interest rate to steer inflation toward the medium-term inflation target while remaining conducive to maintaining the momentum of economic growth. The interest rate is valid during the third quarter, but adjustments in subsequent months remain possible, in line with economic developments and the overall monetary condition. Since the interest rate is a reference instrument, a benefit of the BI Rate is that market participants do not need to speculate wildly about its changes; they need only predict how much the BI Rate will change, typically in increments of 25 basis points or multiples thereof. Whether the BI Rate rises or falls depends on the inflation trend and capital market conditions. The partial statistical test of the influence of the inflation rate on the stock price gives a significance value of 0.263, and the simultaneous test 0.132; the partial test of the BI Rate gives a significance value of 0.360. Since these significance levels exceed 0.05, the inflation rate and the BI Rate have no influence on the stock price. This accords with the research conducted by Park (1997) and Mok (1993).
Based on the discussion above, it can be concluded that overall macroeconomic conditions have no significant effect on the stock price. This is because investors or prospective investors first look at the firm's internal information when deciding to buy or sell securities on the IDX. In addition, real estate and property firms are mostly unaffected by the macroeconomy because supply is fixed while demand keeps increasing, related to needs that must be met as the population grows. Investment in the real estate and property sector is also encouraged by the increasing human need for housing, offices, shopping centers, amusement parks, and so on. It is natural that developer firms can gain a profit from the price increases of real estate and property. With the profit gained, a developer firm can improve its financial performance and thereby boost its stock price.
Improving macroeconomic conditions should improve the financial performance of the real estate and property sector, because a decline in interest rates and inflation can increase the purchasing power of society. In this case, developers can increase the number of transactions in the real estate and property they offer. The increasing number of transactions will improve the financial performance of real estate and property firms, as reflected in their financial statements. However, the improvement in macroeconomic indicators has not yet shown an effect on increasing the volume of real estate and property sales. This condition may make many real estate and property investors hesitant to invest, raising questions about predictions for the real estate and property business that could serve as a guideline for investing securely in real estate and property stocks.
CONCLUSION
Based on the results of the multiple regression test, the macroeconomic condition of inflation has no positive effect on the stock prices of real estate and property firms, because demand for real estate and property increases in line with population growth. The BI Rate, by contrast, negatively affects the stock prices of real estate and property firms. An increase in the BI Rate causes investors to lose interest in the capital market, because investing in the money market at high interest rates is considered more profitable.
Based on the research, discussion, and research limitations, some suggestions can be made. The stocks of real estate and property firms are among those sensitive to macroeconomic indicators. Therefore, in addition to considering fundamental factors in the firms' financial ratios and systematic risk, investors should consider other factors that affect stock prices, such as macroeconomic factors, the currency exchange rate, and others. This study uses only two macroeconomic variables, so future researchers are advised to add other variables suspected to affect the stock price.
|
v3-fos-license
|
2021-12-26T16:08:14.219Z
|
2021-12-23T00:00:00.000
|
245476582
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.mdpi.com/1420-3049/27/1/76/pdf",
"pdf_hash": "4c743b717bea2aff29846b7f72189c04ae998068",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2577",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "1ace10d1cd7ac812ae36dc7d287b24a8f7de7ee0",
"year": 2021
}
|
pes2o/s2orc
|
Hermetic Seal of Organic Light Emitting Diode with Glass Frit
OLEDs encapsulated with a hermetic seal, with a 1026-day lifetime, were characterized with a PXI-1033. The optimal characteristics were obtained when the thickness of the TPBi layer was 20 nm. This OLED reached a maximum luminance (Lmax) of 25,849 cd/m² at a current density of 1242 mA/cm², an external quantum efficiency (EQE) of 2.28%, a current efficiency (CE) of 7.20 cd/A, and a power efficiency (PE) of 5.28 lm/W. The efficiency was enhanced: Lmax by 17.2%, EQE by 0.89%, CE by 42.1%, and PE by 41.9%. The CIE coordinates of (0.32, 0.54) correspond to green OLED emission with a wavelength of 532 nm. The shear strength and leakage tests gave results of 16 kgf and 8.92 × 10⁻⁹ mbar/s, respectively. The reliability test showed that the MIL-STD-883 standard was met.
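Current efficiency in cd/A is luminance divided by current density; since the reported Lmax and CE values refer to different operating points, the sketch below only illustrates the unit conversion, with made-up values:

```python
# Current efficiency CE (cd/A) = luminance L (cd/m²) / current density J (A/m²).
# With J given in mA/cm², note that 1 mA/cm² = 10 A/m², so CE = L / (10 * J).
# Input values are illustrative and do not reproduce the paper's device data.

def current_efficiency(luminance_cd_m2: float, j_ma_cm2: float) -> float:
    return luminance_cd_m2 / (10.0 * j_ma_cm2)

print(current_efficiency(1000.0, 10.0))  # → 10.0 cd/A
```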
Introduction
Optoelectronic devices based on organic materials have several advantages. They are low-cost, their power efficiency is high, and they create mechanically flexible devices. The concept of cheap solar cells providing clean green energy on a large scale is interesting. Furthermore, organic light-emitting diodes (OLEDs) are a promising technology for use in energy-efficient flexible light sources and displays [1]. A major drawback of organic electronics is their relatively poor environmental stability. Moisture and oxygen can penetrate into the organic stack through pinholes in the metal cathode layer. These pinholes are induced by particles that are present during the processing of the devices [1][2][3]. Lateral diffusion of water and oxygen in the organic stack enables cathode oxidation over a continuously growing area [4]. The improvement of the encapsulation technique, electrode materials, and substrate processing in recent years has helped to overcome the problem of OLED degradation [2]. However, organic materials are usually very susceptible to humidity, water, and oxygen, which cause damage to the organic material layer and electrode (cathode) of the device, affecting its lifetime. As a result, favorable encapsulation is indispensable [3]. A material with low moisture diffusion and absorption properties needs to be identified in order to avoid damage to the OLED [4].
The main reason why OLEDs have not completely replaced other display products is that their stability and yield are worse than those of other displays: their luminous efficiency and other characteristics drop sharply over time [4,5]. Research on the hermetic packaging process of OLEDs is therefore a key issue, and many technologies have been proposed in recent years. Among them is a method developed in Japan whereby a closed cover outside the component is used to isolate the device from outside air and moisture. However, this method increases the thickness of the device due to the shape of the sealed cover and is susceptible to physical impact and damage. In large-scale manufacturing, its poor heat dissipation properties also become a problem [6]. In South Korea, another technology was developed in which a moisture absorbent is added to epoxy resin. The volume expansion caused by the reaction of moisture absorbents with moisture may cause physical damage to organic components, and when metal oxides are used as moisture absorbents, they react with moisture to produce strongly alkaline substances that cause chemical damage to the organic layer or cathode [7].
In the OLED packaging process, organic materials are extremely sensitive to temperature, so the processing temperature and packaging method are extremely important [8]. An excessive temperature during the packaging process will lead to degradation of, or damage to, the light-emitting characteristics and lifetime of OLED components [9]. However, a hermetic seal made with traditional UV glue in the packaging process does not reach 10⁻⁸ Torr [10]. Thus, the packaging material currently used in industry is glass frit. Glass frit glue is made mainly of inorganic materials; when used as an OLED packaging material, it blocks the penetration of moisture and oxygen from the external environment [11]. No moisture-absorbent material needs to be attached inside the OLED package, and the device can survive high-temperature, high-humidity conditions (85 °C/85% RH) for more than 7000 h [12]. The glass frit glue coating can be applied by screen printing or with a dispenser onto the glass packaging cover; the packaging cover and the vapor-deposited OLED materials are then brought together in a nitrogen-filled environment. The glass glue is melted and adhered by a laser welding system. Glass frit glue must be able to quickly absorb the energy of the laser beam and reach its melting point in a short time. In addition, its coefficient of thermal expansion (CTE) must be close to that of the ITO glass substrate, since a large CTE mismatch causes misalignment of the package [13]. The laser welding method uses local heat to melt the glass frit glue and therefore does not damage temperature-sensitive organic materials.
Laser welding technology is used in encapsulation due to its coherence, non-contact processing, and ability to process complex shapes [14][15][16]. It is not only suitable for OLEDs with temperature-sensitive materials but also fulfills the requirement of blocking water and oxygen. The use of glass frit glue has advantages such as a relatively low joining temperature of about 350 °C and less rigorous requirements for contact-surface smoothness [17][18][19][20].
Experiment and Devices
The light-emitting principle of OLEDs is carrier injection. In the basic single-layer OLED structure, the emitted light must pass out through the device, so the OLED must have one transparent electrode. When a forward bias is applied to the positive and negative electrodes, holes and electrons are generated at the anode and cathode ends, respectively. The holes are injected into the energy level of the highest occupied molecular orbital (HOMO) of the luminescent material, and the electrons into that of the lowest unoccupied molecular orbital (LUMO). The potential difference between the two electrodes drives the two carrier types through the organic layers until they recombine in the light-emitting layer, and photons are generated through radiative recombination. The energy released by carrier recombination leads to the formation of a radiative exciton. The carrier returns from the high energy level of the excited state to the low energy level of the ground state, and the energy difference is released as heat or photons; the wavelength emitted by the device depends on the inherent fluorescent properties of the organic luminescent material. In more detail, under an applied bias voltage the cathode and anode inject electron and hole carriers toward the organic layer; the injected electrons/holes travel from the electron/hole transport layers into the organic structure. As the two carrier types move toward the light-emitting layer, they combine there and generate excitons, which migrate under the applied voltage and transfer their energy to the light-emitting layer.
Here, electrons are excited from the ground state to the excited state, and the energy of the excited state is finally released back to the ground state as radiated photons, i.e., as emitted light. The light-emitting layer exhibits two kinds of transition: fluorescence and phosphorescence. The difference between the two lies in the initial state of the transition. Fluorescence starts from the singlet excited state, S1, in which the spin of the excited electron is opposite to that of the unexcited electron in the ground state; phosphorescence starts from the triplet excited state, T1, in which the spin of the excited electron is the same as that of the unexcited electron in the ground state. Under general conditions, the S1 energy is greater than that of T1. Direct excitation of a carrier into T1 is essentially spin-forbidden; however, the singlet excited state (S1) can convert into the triplet excited state (T1) through intersystem crossing. Under ideal conditions, the numbers of injected holes and electrons are equal. Theoretically, the ratio of excitons formed in the singlet excited state to those in the triplet excited state is 1:3. It is therefore generally believed that the internal quantum efficiency limit of fluorescent materials is 25%, with the remaining 75% of the energy lost non-radiatively through the triplet excited state. The light extraction rate is about 1/(2n²), where n is the refractive index. As the refractive index of the glass substrate is 1.5, its light extraction rate is about 20%, and the theoretical upper limit of the external quantum efficiency (EQE) is about 5%.
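The efficiency ceiling described above follows from simple arithmetic; a short Python sketch, using only the spin-statistics ratio and the refractive index quoted in the text, reproduces the numbers:

```python
# Fluorescent OLED efficiency ceiling from spin statistics and
# classical out-coupling, as described in the text.

# Spin statistics: excitons form in a 1:3 singlet-to-triplet ratio,
# so the fluorescent internal quantum efficiency is the singlet fraction.
iqe_fluorescent = 1 / (1 + 3)            # 25%

# Classical light out-coupling estimate for a planar glass substrate:
# eta_out ~ 1 / (2 n^2), with n the substrate refractive index.
n_glass = 1.5
eta_out = 1 / (2 * n_glass ** 2)         # ~22%, commonly rounded to ~20%

eqe_limit = iqe_fluorescent * eta_out    # theoretical EQE ceiling
print(f"IQE limit:     {iqe_fluorescent:.0%}")
print(f"Out-coupling:  {eta_out:.1%}")
print(f"EQE limit:     {eqe_limit:.1%}")  # ~5.6%, i.e. the ~5% quoted
```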
In the experiment, we used a basic un-doped OLED with a 0.3 × 0.3 cm² active area and a three-layered structure. The cathode consisted of aluminum (Al) deposited on lithium fluoride (LiF). The HTL consisted of N, while Alq3 acted as the emitting layer (EML). The ETL consisted of 2,2′,2″-(1,3,5-benzinetriyl)-tris(1-phenyl-1-H-benzimidazole) (TPBi). The device structure is shown in Figure 1. All organic layers were deposited under high-vacuum conditions of 1.2 × 10⁻⁶ Torr, and the OLED devices were transferred directly into an automated laser processing system purged with nitrogen (N₂) gas at 10 L/min for encapsulation. A laser power of 2.595 W and a scanning speed of 0.1 mm/s were used to cure the glass frit, which has a melting point of 320-350 °C. N₂ gas was blown at 10 L/min during the encapsulation process so that humidity and oxygen from the ambient air could not enter the encapsulation area and degrade the organic materials.
The OLED device encapsulation procedure was as follows. (1) The groove glass packaging cover was cleaned for 10 min in an ultrasonic oscillator with DI water/pure alcohol/alcohol. After the cleaning process had been completed, the groove glass packaging cover was blown dry with N₂ gas, placed in a petri dish, and heated in an oven at 60 °C for 10 min. (2) For the dispenser parameters, we used a dispensing speed of 1 mm/s, a dispensing time of 2 s, and a stop time of 1 s. The dispensing pressure was adjusted depending on the packaging glue: the syringe (G 30) was filled with glue at a pressure of 1 kg/cm² for UV glue and 2 kg/cm² for glass frit glue. The dispensing path was set after the parameters were set. (3) To calibrate the focal length of the laser and ensure that it was focused on the packaging glue during encapsulation on the automated laser packaging platform, a Vernier caliper was used to measure the distance between the laser and the platform (15 cm), and the focal length was adjusted using white paper to observe the minimum light point of the laser spot. The laser power was set to 0.35 W with a current of 10 A, and an automated laser path was defined. (4) After the groove glass packaging cover was dispensed with glass frit glue, it was aligned to the OLED substrate and the laser scanning path was set. (5) The output mode and output time of the laser were chosen: the laser output mode was set to CW, and the output time was matched to the laser scanning time. During the laser welding process, N₂ gas was applied at a flow rate of 10 L/min to prevent water and oxygen from entering the component and reducing its lifetime under atmospheric conditions.
Finally, the OLED component was taken out from the plastic vacuum drying vessel and the encapsulation cover was aligned to perform the laser welding encapsulation process.
Space-Charge-Limited Current (SCLC) Model for the OLED Device
Under a low electric field (≤3 × 10⁵ V/cm), carrier injection across the energy barrier between the organic layer and the metal interface is limited, and the current is dominated by the intrinsic conductive carriers of the organic layer; an ohmic contact is formed [21].
Under a greater electric field (>3 × 10⁵ V/cm), the current density measured in the experiment was greater than that given by the theoretical calculation; the field dependence of the carrier mobility was therefore taken into account [22].
J_SCLC = (9/8) ε₀ ε_r μ (V²/d³) exp(0.89 β √E), with E = V/d, where J_SCLC is the current density, ε₀ is the vacuum permittivity, ε_r is the dielectric constant of the material, μ is the carrier mobility, V is the applied voltage, d is the thickness, β is the factor of the Poole-Frenkel effect, and E is the electric field. The parameters were substituted into the SCLC theoretical model using Mathcad software. The curve of current density versus voltage for each film thickness is shown in Figure 2. As the TPBi thickness increased from 10 to 20 nm, the electron charge density gradually approached the hole charge density, and the electron-hole recombination rate was enhanced at a thickness of 15 nm; at a thickness of 20 nm, the recombination rate gradually decreased. Finally, at thicknesses of 15 to 20 nm, the electron charge density became much greater than the hole charge density.
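A numerical sketch of this field-dependent SCLC expression (the Murgatroyd form of the Mott-Gurney law, which matches the symbols listed above) is shown below. The material parameters are illustrative assumptions, not values fitted by the authors:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def j_sclc(v, d, eps_r, mu0, beta):
    """Field-dependent SCLC current density (Murgatroyd form):
    J = (9/8) eps0 eps_r mu0 V^2/d^3 * exp(0.89 beta sqrt(V/d))."""
    e_field = v / d
    return (9 / 8) * EPS0 * eps_r * mu0 * v ** 2 / d ** 3 \
        * math.exp(0.89 * beta * math.sqrt(e_field))

# Illustrative (assumed) organic-layer parameters -- not from the paper:
eps_r = 3.0     # relative permittivity of the organic layer
mu0 = 1e-9      # zero-field mobility, m^2/(V*s)
beta = 1e-4     # Poole-Frenkel factor, (m/V)^0.5

for d_nm in (10, 15, 20, 30, 40):
    d = d_nm * 1e-9
    j = j_sclc(5.0, d, eps_r, mu0, beta)   # evaluated at 5 V
    print(f"d = {d_nm:2d} nm -> J = {j:.3e} A/m^2")
```

Because J scales as V²/d³ times the field-enhancement exponential, thinner films carry a much higher current density at a fixed voltage, which is the trend the thickness comparison in the text reflects.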
OLED Device Measurement
Under different TPBi film thicknesses of 10/15/20/30/40 nm, the electroluminescence spectra all peaked at a wavelength of 532 nm, with CIE coordinates of (0.32, 0.54), as shown in Figure 3. The OLED device showed its optimal characteristics at a TPBi thickness of 20 nm. The device had a current density of 2190 mA/cm² at 10 V with encapsulation and 2203 mA/cm² at 10 V without encapsulation. The maximal luminance (Lmax) was 21,480 cd/m² with encapsulation and 21,523 cd/m² without encapsulation. An Al cathode thickness of 100 or 150 nm did not have a significant influence, as shown in Figure 4a-d. The external quantum efficiency (EQE) was 1.39% with packaging and 1.40% without packaging, and the current efficiency (CE) was 4.17 cd/A with packaging and 4.19 cd/A without packaging. The power efficiency (PE) was 3.07 lm/W with packaging and 3.08 lm/W without packaging, as shown in Figure 4e-h. The luminance loss with and without encapsulation was 0.2%, as shown in Figure 4k. The shear strain test determines the shear strength of the encapsulation, i.e., the maximum shear stress that the material can withstand before failure occurs. The shear strain test shown in Figure 4l gave tensile strength values of 2.566 kgf for thermosetting glue, 7.348 kgf for UV glue, and 16 kgf for glass frit glue; only the glass frit glue exceeded the standard value of 10.2 kgf (MIL-STD-883).
When the TPBi thickness was varied and the current density was calculated according to the SCLC theory, the maximum value of 10,510 mA/cm² was obtained at a thickness of 10 nm, and the minimum value of 1350 mA/cm² at the maximum thickness of 40 nm. At a thickness of 15 nm the current density was 8178 mA/cm², at 20 nm it was 9644 mA/cm², and at 30 nm it was 2479 mA/cm². As the SCLC theory does not take the capacitance and resistance effects of organic light-emitting diodes into account, it does not predict the maximum breakdown voltage of the device; the effective circuit of the device and the cause of device breakdown are therefore discussed. The hole charge density decreases slowly above 4 V. With an external bias of 3 to 10 V, the electron charge density gradually approached the hole charge density; at an interface thickness of 20 nm the electron-hole recombination rate gradually decreased, and eventually the electron charge density was much greater than the hole charge density.
Oxygen Plasma Bombardment of the ITO Thin-Film Substrate
To enhance the injection of hole carriers, the ITO transparent electrode was subjected to oxygen plasma bombardment for different times of 2/3/4/5/8/10 min. All devices emitted green light at a wavelength of 532 nm with CIE coordinates of (0.32, 0.54); the oxygen plasma bombardment time did not affect the emission wavelength of the OLED device. The luminance of the OLED device with an oxygen plasma bombardment time of 3 min was optimal owing to improved ohmic contact, as shown in Figure 5. The Lmax value was 25,849 cd/m² at 10 V and a current density of 1242 mA/cm² with encapsulation, and 25,901 cd/m² at 10 V without encapsulation; the luminance loss was only 0.2%, as shown in Figure 5a,b. The EQE was 2.28% with encapsulation and 2.29% without encapsulation, the CE was 7.20 cd/A both with and without encapsulation, and the PE was 5.28 lm/W with encapsulation and 5.3 lm/W without encapsulation, as shown in Figure 5c-h.
SED Model for OLED Degradation
After the hermetic sealing package had been completed, the encapsulated OLED device was measured with the NI PXI-1033 over a lifetime test of 1500 h. The luminance of the OLED device was 1000 cd/m², and a constant current source was continuously applied; this was set according to the control standard CIE 150-2003. The measured data were substituted into the stretched exponential decay (SED) model to calculate the lifetime of the component. Under the SED model of OLED degradation, the OLED luminance with respect to time is expressed as Equation (3).
L/L₀ = exp[−(t/τ)^β], (3)

where L is the OLED luminance, L₀ is the initial OLED luminance, t is the current time, τ is the characteristic time of decay, and β is a stretching exponent [7,8].
As shown in Figure 6, the OLED device burned in with thermosetting glue showed a relative luminance of 0.99813 at 8 h, 0.97479 at 16 h, and 0.95358 at 24 h. The device burned in with UV glue showed 0.99893 at 8 h, 0.98482 at 16 h, and 0.97130 at 24 h. The device burned in with glass frit glue showed 0.99896 at 8 h, 0.99833 at 16 h, and 0.99780 at 24 h. Finally, the lifetime of the OLED device was calculated to be 20 days with thermosetting glue, 60 days with UV glue, and 1026 days with glass frit glue.
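The SED model, L/L₀ = exp[−(t/τ)^β], can be fitted to such decay values by linearization: ln(−ln(L/L₀)) = β ln t − β ln τ. The sketch below applies this to the glass-frit data points quoted in the text. The lifetime criterion the authors used is not stated, so the LT50 extrapolation here is an assumed illustration and should not be expected to reproduce the 1026-day figure exactly:

```python
import math

# Normalized luminance of the glass-frit device (values from the text)
t_hours = [8, 16, 24]
l_rel   = [0.99896, 0.99833, 0.99780]

# SED model: L/L0 = exp(-(t/tau)**beta)
# Linearized: ln(-ln(L/L0)) = beta*ln(t) - beta*ln(tau)
xs = [math.log(t) for t in t_hours]
ys = [math.log(-math.log(l)) for l in l_rel]

# Ordinary least-squares slope/intercept on the linearized data
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
beta = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
     / sum((x - x_bar) ** 2 for x in xs)
tau = math.exp(-(y_bar - beta * x_bar) / beta)

# Extrapolated time to 50% luminance (LT50 -- an assumed criterion)
t50_hours = tau * math.log(2) ** (1 / beta)
print(f"beta ~ {beta:.2f}, tau ~ {tau:.3g} h, LT50 ~ {t50_hours / 24:.0f} days")
```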
The Hermetic Measurement of the OLED Device
The reliability of the device packaging with the different materials was measured by a helium leak detector, which is a small gas mass spectrometer; the measurement follows the MIL-STD-883 standard. When glass frit glue was used as the packaging material, the leakage was 8.92 × 10⁻⁹ mbar/s, which is within the MIL-STD-883 requirement (<10⁻⁶ mbar/s). When UV glue was used, the leakage was 2.1 × 10⁻⁵ or 1.4 × 10⁻⁶ mbar/s, outside the standard range.
Conclusions
Among the electron transport layer (TPBi) thicknesses tested, 20 nm gave the optimal characteristics for the hermetically sealed OLED device: a maximum luminance of 21,480 cd/m² at a current density of 2190 mA/cm², an EQE of 1.39%, a CE of 4.17 cd/A, and a PE of 3.07 lm/W. When the ITO glass substrate was pre-treated by oxygen plasma bombardment for different times, bombardment for 3 min gave the optimal characteristics: a maximum luminance of 25,849 cd/m² at a current density of 1242 mA/cm² (luminance enhanced by 17.2%), an EQE of 2.28% (enhanced by 0.89 percentage points), a CE of 7.20 cd/A (enhanced by 42.1%), and a PE of 5.28 lm/W (enhanced by 41.9%). The International Commission on Illumination (CIE) chromaticity coordinates of all OLED devices were (0.32, 0.54); the devices emitted green light at a wavelength of 532 nm.
In the package test, three different materials were used: thermosetting glue, UV glue, and glass frit glue. The glass frit glue encapsulation was cured with a continuous-wave (CW) laser at a wavelength of 800 nm. The favorable local-heating characteristic of the CW laser reduces the influence of temperature on the organic materials during encapsulation. A CW laser power of 2.595 W and a scanning speed of 0.1 mm/s were used to cure the glass frit glue for hermetic OLED sealing. The shear strain test gave 16 kgf and the leakage test gave 8.92 × 10⁻⁹ mbar/s; in the reliability tests, glass frit glue was the only material to reach the MIL-STD-883 hermetic seal standard. The hermetically sealed OLED device achieved a lifetime of 1026 days, as measured with the PXI-1033, for a TPBi thickness of 20 nm and oxygen plasma bombardment of the ITO glass substrate for 3 min.
Beryllium strain under dynamic loading
There are some data (not many) on dynamic characteristics of beryllium that are important, for example, when estimating construction performance in NPP emergencies. A number of data on stress-strain curves, spall strength, shear strength, and the fracture and structure responses of shock-loaded beryllium have been obtained in US and Russian laboratories. To date, model descriptions of this complex metal's behavior do not show reasonable agreement with the experimental data, so a wider spectrum of experimental data is required. This work presents data on dynamic compression-test diagrams of Russian beryllium. Experiments are performed using the split Hopkinson pressure bar (SHPB) method. Strain rates were ε̇ ∼ 10³ s⁻¹.
Introduction
Beryllium has an asymmetric hexagonal close-packed (hcp) lattice and a number of unique properties: the highest specific strength and heat capacity among all metals [1]. The combination of low density, high modulus of elasticity, strength, and heat conductivity makes beryllium sought after in aeronautical and space engineering [2]. Due to its small atomic mass, small capture cross section, and radiation resistance, beryllium is one of the best materials for reflectors and moderators in nuclear engineering [2]. However, data on its mechanical properties have mainly been obtained under static loading. There are some data (not many) on dynamic characteristics of beryllium that are important, for example, when estimating construction performance in NPP emergencies. A number of data on deformation curves for different types of beryllium, in particular S200F (USA) at strain rates ε̇ = 1500-8000 s⁻¹, are given in [3,4]. For the same type of beryllium, mechanical characteristics at higher strain rates ε̇ = 10⁴-10⁵ s⁻¹ (spall strength) [4,5] as well as the fracture and structure responses of shock-loaded beryllium [6] are known. In [7][8][9], data are given on investigations of the spall and shear strength of Russian beryllium at strain rates ε̇ ∼ 10⁴-10⁵ s⁻¹ using various methods.
Researchers have developed model descriptions of beryllium behavior under dynamic and shock-wave loading, e.g., [4][5][6]. To date, the model description of this complex metal's behavior does not show reasonable agreement with the experimental data [4], [6], so a wider spectrum of experimental data is required.
This investigation provides data on dynamic compression-test diagrams of Russian beryllium. Experiments are performed using the split Hopkinson pressure bar (SHPB) method. Strain rates were ε̇ ∼ 10³ s⁻¹. (Corresponding author: postmaster@ifv.vniief.ru)
Experimental
Beryllium was prepared using the method of hot vacuum pressing [1], [7] with the addition of pre-prepared beryllium powder. The beryllium density is 1.85 g/cm³, the Be content is >98 wt%, O₂ ∼ 1.5 wt%, the other major impurities are Fe and C, and the grain size is ∼50 µm.
Experiments were performed using the SHPB method at strain rates of 1000-1600 s⁻¹. The experimental setup is shown in Fig. 1.
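The one-dimensional (Kolsky) data reduction behind the SHPB method can be sketched as follows. The bar and sample dimensions match those described in the experiment, but the titanium bar modulus and density, and the synthetic strain pulses, are assumptions for illustration only:

```python
import math

# One-dimensional Kolsky (SHPB) data reduction -- illustrative sketch.
# Bar material constants below are assumed, not the authors' calibration.
E_BAR = 110e9                     # Young's modulus of a titanium bar, Pa
RHO_BAR = 4500.0                  # bar density, kg/m^3
C0 = math.sqrt(E_BAR / RHO_BAR)   # elastic wave speed in the bar, m/s
D_BAR, D_SAMPLE = 20e-3, 10e-3    # bar and sample diameters (as in the text)
L_SAMPLE = 7e-3                   # sample length (as in the text)
A_BAR = math.pi * D_BAR ** 2 / 4
A_SAMPLE = math.pi * D_SAMPLE ** 2 / 4

def reduce_shpb(times, eps_refl, eps_trans):
    """Return (strain_rate, strain, stress) histories in the sample from
    the reflected and transmitted bar strain signals."""
    # Sample strain rate from the reflected pulse
    rate = [-2.0 * C0 * er / L_SAMPLE for er in eps_refl]
    # Sample strain by trapezoidal integration of the strain rate
    strain, acc = [], 0.0
    for i in range(len(rate)):
        if i > 0:
            acc += 0.5 * (rate[i] + rate[i - 1]) * (times[i] - times[i - 1])
        strain.append(acc)
    # Sample stress from the transmitted pulse
    stress = [E_BAR * (A_BAR / A_SAMPLE) * et for et in eps_trans]
    return rate, strain, stress

# Tiny synthetic pulse: constant reflected strain -> constant strain rate
times = [i * 1e-6 for i in range(5)]    # 1 us steps
eps_r = [-1.0e-3] * 5                   # reflected pulse (compressive)
eps_t = [0.5e-3] * 5                    # transmitted pulse
rate, strain, stress = reduce_shpb(times, eps_r, eps_t)
print(f"strain rate ~ {rate[0]:.0f} 1/s")
```

With these assumed values the synthetic pulse gives a strain rate of order 10³ s⁻¹, i.e., the regime reported in the experiments.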
The bars were made of titanium BT-20 (Ø20 × 1500 mm), and the samples had the size Ø10 × 7 mm. Figure 2 presents the dynamic "σ-ε" compression-test diagrams of Be based on the experimental results. It is evident from Figure 2 that Be exhibits considerable strain hardening. The degree of hardening is 7.5-12.0 GPa/rel. units, which is close to the data in [3]. In tests 1-5 the samples did not fail; the residual strain measured after the tests was ε_res = 4.3-9%.
Results
The dynamic "σ-ε" compression-test diagrams of Be were used to determine the yield strength values presented in Table I. No dependence of the yield strength on strain rate was revealed within the range ε̇ = 1000-1600 s⁻¹.
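Yield strength values such as σ₀.₂ are conventionally read off stress-strain diagrams with the 0.2%-offset construction. A minimal sketch follows; the synthetic hardening curve and the approximate Be modulus are assumptions for illustration, not the measured data:

```python
def yield_strength_02(strain, stress_mpa, e_gpa):
    """Return the 0.2%-offset yield strength (MPa): the first stress at
    which the curve falls onto/below the offset line
    sigma = E * (eps - 0.002)."""
    e_mpa = e_gpa * 1e3
    for eps, sig in zip(strain, stress_mpa):
        if sig <= e_mpa * (eps - 0.002):
            return sig
    return None  # offset line never crossed within the data

# Synthetic elastic/strain-hardening curve (illustrative only):
E_GPA = 300.0  # approximate Young's modulus of beryllium
strain = [i * 1e-3 for i in range(0, 61)]                       # 0-6% strain
stress = [min(E_GPA * 1e3 * e, 1200 + 9000 * e) for e in strain]  # MPa

ys = yield_strength_02(strain, stress, E_GPA)
print(f"sigma_0.2 ~ {ys:.0f} MPa")
```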
In terms of the shape of the diagrams and the yield strength values σ₀.₂, beryllium is very similar to preloaded (P = 59 GPa) uranium [10]. However, the σ₀.₂ values are considerably higher than in [3].
In test No. 6, at ε̇ = 1280 s⁻¹, the sample failed; the Be yield strength was 1490 MPa, and the residual strain was 9%. The failure behavior was quasi-brittle. Figure 3 presents a photo of the damaged sample from test No. 6.
In test No. 4 the sample deformed by ∼7.5%. The sample from test No. 4 was then subjected to secondary compression at a strain rate of ε̇ = 1150 s⁻¹. Diagrams of the double dynamic compression of Be are provided in Fig. 4. In the second test the sample failed; in that case the Be strength was 1450 MPa.
Conclusion
Using the SHPB method, dynamic compression "stress-strain" diagrams of Russian beryllium were examined. The beryllium was prepared using the method of hot vacuum pressing with the admixture of pre-prepared beryllium powder. Experiments were performed at strain rates of 1000-1600 s⁻¹. The obtained data will be useful for testing the available models of Be behavior and for developing new ones that more adequately describe Be behavior under various loadings, including those typical of accidents in nuclear engineering.
Autonomic Nervous System Function in Anorexia Nervosa: A Systematic Review
Background: Autonomic nervous system (ANS) dysfunction has been suggested to contribute to the high prevalence of cardiovascular complications in individuals with anorexia nervosa (AN), yet has not been thoroughly investigated. The current review aimed to synthesize the evidence of basal ANS function in individuals with a current diagnosis of AN and those with a previous diagnosis who had achieved weight restoration, as compared to controls. Methods: A systematic review of nine databases was conducted and studies that were published in a peer-review journal, in English, that included at least one assessment of ANS function in individuals with a current or previous diagnosis of AN were selected. Forty-six studies were included with a total of 811 participants with a current diagnosis of AN and 123 participants with a previous diagnosis of AN. Results: ANS function was assessed through heart rate variability (n = 27), orthostatic challenge, blood pressure variability or baroreflex sensitivity (n = 11), adrenergic activity (n = 14), skin conductance level (n = 4), and pupillometry (n = 1). Individuals with AN demonstrated increased parasympathetic activity and decreased sympathetic activity, suggestive of autonomic dysregulation. Following weight restoration, autonomic function trended toward, or was equivalent to, control levels. Discussion: Autonomic dysregulation is indicated through a range of assessments in individuals with AN. Future investigations should utilize a variety of assessments together in order to conclusively establish the nature of autonomic dysfunction in AN, and following extended weight restoration. Moreover, investigation into the co-occurrence of ANS function and cardiovascular risk is required.
INTRODUCTION
Anorexia Nervosa (AN) is an eating disorder characterized by restriction of food intake, an intense fear of gaining weight and a distorted self-perception of body image (American Psychiatric Association, 2013). AN has been recognized as an increasingly prevalent psychiatric condition among young people in Western societies, with the incidence also increasing in a variety of racial and ethnic groups (Nakai et al., 2016), mostly in women (Hoek, 2006). AN has a typical onset in adolescence (Hoek and Van Hoeken, 2003) and has an estimated lifetime prevalence of 1.7% in the general population (Smink et al., 2014). The etiology and pathophysiology of AN are complex, involving biological, psychological, and sociocultural development and maintenance factors (Phillipou et al., 2019). The chronic nature of AN is evidenced by a 50% relapse rate (Pike, 1998), with learned maladaptive behaviors becoming deeply entrenched and difficult to alter (Steinglass and Walsh, 2016).
The energy deprivation and malnutrition associated with AN place immense pressure on the cardiovascular system, with up to 80% of patients suffering from cardiovascular complications (Spaulding-Barclay et al., 2016). These include structural, conduction, and hemodynamic abnormalities (Sachs et al., 2016;Giovinazzo et al., 2019;Smythe et al., 2020), and are a major contributor to the high mortality rate in AN (Nakai et al., 2016), which is approximately six times that of the general population (Papadopoulos et al., 2009;Arcelus et al., 2011). Cardiovascular problems occur not only during the starvation state of AN; there are also specific cardiac complications that arise during the process of re-feeding, such as arrhythmia, tachycardia, and congestive heart failure (Casiero and Frishman, 2006;Vignaud et al., 2010). Despite the profound psychological and physical burdens that accompany AN, the underlying physiological mechanisms behind the cardiovascular complications of the illness remain poorly understood. It has been suggested that disturbances in cardiac autonomic regulation may contribute to the increased cardiovascular complications and mortality in AN (Mazurak et al., 2011a). The autonomic nervous system (ANS) provides the link between the cardiovascular system and the central nervous system, and is responsible for the regulation of internal bodily processes in response to physiological and environmental changes (Palma and Benarroch, 2014). The ANS is a dynamic regulatory system that involves interpretation of sensory feedback from the organs by higher brain areas, including the brainstem and hypothalamus, in order to adapt the output of the ANS to adjust the physiological state of the body (Porges, 2007;Buijs et al., 2013).
Through the regulation of heart rate (HR), blood pressure (BP), and rate of respiration among other visceral activities, the ANS maintains cardiovascular homeostasis via the opposing inputs of its two branches; the sympathetic (SNS) and parasympathetic (PNS) nervous systems (Gordan et al., 2015). Activation of the SNS results in increased arousal, such as increased HR and blood vessel constriction through the release of noradrenaline (NE), whereas the PNS (or vagal nerve) acts in opposition to decrease HR and BP. Evaluation of the ANS can be derived from various techniques including hemodynamic, biochemical and neurophysiological assessments with each presenting its own limitations (Grassi and Esler, 1999). Therefore, multiple assessments of autonomic function should be undertaken together in order to provide an overview of neural function; some of which are briefly detailed below.
Hemodynamic assessments can provide insight into the autonomic regulation of blood flow. Sinus bradycardia (Yahalom et al., 2013) and low BP levels (Sachs et al., 2016) are commonly observed in individuals with AN and are suggestive of abnormalities in autonomic regulation of HR and BP. The majority of previous investigations into autonomic function in individuals with AN have assessed heart rate variability [HRV; the beat-to-beat variation in HR (Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, 1996;Billman, 2011)] as an estimation of autonomic cardiac regulation, with inconclusive findings (see Mazurak et al., 2011a for a review). While the review by Mazurak et al. (2011a) found that the majority of studies investigating HRV in AN reported parasympathetic dominance, some reported sympathetic dominance and others found no difference in comparison to controls; this led the authors to suggest that HRV may not be suitable for the assessment of the ANS in AN (Mazurak et al., 2011a). Another hemodynamic assessment of autonomic function is the orthostatic stress test, which provides a window into autonomic regulation through the baroreceptor reflex control of BP and HR (Grassi and Esler, 1999;Westerhof et al., 2006). Conditions related to orthostatic intolerance, such as orthostatic hypotension, syncope and postural orthostatic tachycardia syndrome (POTS) represent autonomic failure and have also been reported in AN (Sachs et al., 2016).
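As an illustration of what the time-domain HRV indices discussed above quantify, a minimal sketch over a synthetic RR-interval series is shown below (SDNN and RMSSD, with RMSSD commonly read as a vagal/parasympathetic index). The RR values are invented for illustration and are not patient data:

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR (NN) intervals, in ms."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR differences, in ms --
    commonly interpreted as an index of parasympathetic (vagal) activity."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

# Synthetic RR series (ms); the bradycardic mean (~1200 ms, ~50 bpm) is
# chosen only to echo the sinus bradycardia described in AN.
rr = [1180, 1230, 1195, 1250, 1210, 1240, 1190, 1225]
print(f"mean HR ~ {60000 / (sum(rr) / len(rr)):.0f} bpm")
print(f"SDNN  = {sdnn(rr):.1f} ms")
print(f"RMSSD = {rmssd(rr):.1f} ms")
```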
Biochemical assessment of plasma NE levels can provide an index of sympathetic neural function that have been shown to vary according to weight (Lambert et al., 2007). However, circulating NE represents only a fraction of the amount secreted from nerve terminals and is dependent on secretion, clearance and re-uptake processes (Esler et al., 1990), therefore this method provides a "confounded" index of systematic sympathetic activation (Grassi et al., 2015). Measurement of the NE metabolite, 3-methyl-4-hydroxyphenylglycol (MHPG) is another common biochemical assessment that is undertaken to further inform regional NE synthesis, release and re-uptake (Grassi and Esler, 1999).
In addition to regional NE spillover, the other "preferred" assessment for sympathetic nervous system evaluation, is the neurophysiological technique of "microneurography" (Grassi et al., 2015). Microneurography provides a direct continuous recording of muscle sympathetic nerve activity (MSNA) to give a measure of central nervous system sympathetic neural outflow to the skeletal muscles (Grassi and Esler, 1999), including blood vessels. Increased sympathetic neural drive, as assessed by microneurography, is associated with increased cardiovascular risk (Kaye et al., 1995;Grassi, 2006), yet microneurography remains less commonly used due to its semi-invasive nature.
It is beyond the scope of the current review to provide an overview of all assessments of ANS function; previous thorough reviews have been conducted (Grassi and Esler, 1999;Tjalf and Timo, 2019). To our knowledge, there has been no prior systematic review of autonomic function in individuals with AN. Moreover, most studies have primarily assessed function in individuals in the acute state of AN and it is less clear whether any abnormalities persist after weight restoration. In order to advance the knowledge of ANS function in AN, the current systematic review aims to synthesize studies investigating resting-state ANS function in individuals with AN, including those who have achieved weight restoration, as compared to healthy controls. Given the important clinical implications of abnormalities in autonomic cardiovascular control, a greater understanding of any abnormalities in ANS function in individuals with AN, and following weight restoration, is crucial.
METHODS

Search Strategy
This systematic review was carried out in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) (Supplementary Material) (Moher et al., 2010) and was registered with the International Prospective Register of Systematic Reviews (PROSPERO identifier CRD42020177195). Studies were identified through systematic searches of nine databases: Ovid MEDLINE(R) ALL 1946 to November 03, 2020; Embase 1974 to 2020 November 03 (Ovid); Ovid Emcare 1995 to 2020 Week 44; APA PsycInfo 1806 to October Week 4 2020 (Ovid); Ovid Nursing Database 1946 to October Week 4 2020; CINAHL (EBSCOhost); Health Collection, Humanities & Social Sciences Collection (Informit); Cochrane Library and Clinicaltrials.gov. Search strategies were developed by a medical librarian, HW, in consultation with the review team. Strategies combined the general concepts of anorexia nervosa AND autonomic nervous system using a combination of subject headings and textwords relevant to each database. Results were limited to English language, but no date limits were applied. Animal studies were excluded. An initial strategy was developed for Medline and then adapted for other databases (Appendix 1 in Supplementary Material). All searches were updated on 5 November 2020. Reference lists of included studies were screened for additional publications.
Study Selection
Search results were exported to Endnote bibliographic management software, duplicates removed, and the remainder uploaded to Covidence systematic review software (www.covidence.org) by HW. Two authors (Z.J., E.L.) independently screened records on title and abstract and then full text against the following exclusion criteria: primary condition not AN, no diagnostic criteria referenced, no control group, no basal ANS assessment outcome, protocol paper, review article, dissertation, conference abstract, case series/study. A third reviewer (N.E.) resolved any conflicts. Studies that included at least one of the ANS measures in basal conditions listed in Table 1 were included (see Table 1 for a summary of ANS outcomes, description of assessment and relationship to ANS functioning). A meta-analysis was not performed as there were too few similarities between study methods and measures.
Data Extraction
Two reviewers (Z.J. and E.L.) independently extracted data and consensus was confirmed by a third reviewer (N.E. or A.P.). Extracted data included information on study characteristics and basal ANS assessments and outcomes.
Risk of Bias/Quality Assessment
The risk of bias among included studies was assessed independently by two authors (Z.J. and D.C.) using a modified version of the Newcastle-Ottawa Quality Assessment Scale (NOS; see Appendix 2 in Supplementary Material) for cohort/case-control studies, in which a high score indicates a low risk of bias (Wells et al., 2006). Studies were assessed on three domains: participant selection, comparability and outcome assessment, and were classified as at low, moderate, or high risk of bias. The risk of bias was not used as an exclusion criterion in the selection of studies, in order to provide a complete overview of available data.
HRV Risk of Bias/Quality Assessment
Given the large number of included studies that assessed HRV, we used a modified version of a previously published measure of study quality in studies of HRV in functional somatic disorders to specifically evaluate the quality of HRV methods (Tak et al., 2009). We modified the tool to incorporate the items listed in the Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH) criteria (Quintana et al., 2016) to provide a more comprehensive assessment of HRV quality and risk of bias (see Appendix 3 in Supplementary Material). We assessed three general domains: appropriate selection of participants, appropriate collection and quantification of HRV, and appropriate control for confounding factors. Potential scores ranged from 0 to 22.
RESULTS
The literature search yielded 2,126 unique citations. The full texts of 105 citations were examined and, of these, 46 articles met our inclusion criteria (see Figure 1).
Study Characteristics
Characteristics of the included studies for qualitative synthesis are shown in Table 2. All included studies utilized a cross-sectional study design; 39 assessed participants at a single time point and seven included assessments at multiple time points after weight restoration (Gross et al., 1979; Riederer et al., 1982; Lesem et al., 1989; Kaye et al., 1990; Kreipe et al., 1994; Bar et al., 2006; Lachish et al., 2009). The 46 studies included assessments of 811 participants with a current diagnosis of AN (757 female, 11 male, 43 not specified), 123 participants with a previous diagnosis of AN who were at various stages of treatment and weight restoration (AN-WR; 100 female, 2 male, 21 not specified) and 867 control participants (834 female, 20 male, 13 not specified). Sample sizes ranged from 7 to 89 participants with a current diagnosis of AN, 4-18 weight-restored participants, and 8-39 controls. One study did not specify the sample size of their control group (Lechin et al., 2010), four studies did not specify the sex of the AN participants (Kaye et al., 1990; Pirke et al., 1992; Rommel et al., 2015; Palomba et al., 2017) and three studies did not specify the sex of the AN-WR participants (Riederer et al., 1982; Kaye et al., 1990; Pirke et al., 1992). The average duration of illness ranged from 8 months to 10 years and the duration of weight restoration ranged from 2 weeks to 3 years.

[Table 1, continued: rows interleaved here in the original layout define two further ANS outcomes. Short-term fractal scaling exponent α (fractal correlation): derived by detrended fluctuation analysis, α provides a measure of complexity in the heart period (RR interval) series (Peng et al., 1995); reduced α has been demonstrated in patients with congestive heart failure and depressed left ventricular function. Baroreflex sensitivity (BRS; ms/mmHg): assessed invasively by measuring the change in heart rate in response to changes in blood pressure induced by injection of vasoactive drugs that have minimal effect on the sinus node, or non-invasively via the Valsalva maneuver, head-up tilt, the neck chamber technique (which provides a selective manipulation of carotid baroreceptors), or the analysis of spontaneous variations of blood pressure and RR interval, in which consecutive systolic pressure values and the corresponding RR intervals (with a one-beat delay) are fitted by linear regression over a systolic pressure ramp; the slope of the fitted line gives the sensitivity of the baroreflex, expressed as the change in RR interval (ms) per mmHg change in systolic pressure. Cardiovascular diseases are often accompanied by impaired BRS, with reduced inhibitory activity, an imbalance in the physiological sympathetic-vagal outflow to the heart and resulting chronic adrenergic activation; sustained baroreflex-mediated increases in sympathetic activity may contribute to end-organ damage and disease progression, and a blunted baroreflex gain is predictive of increased cardiovascular risk in post-myocardial infarction and heart failure patients.]

Frontiers in Neuroscience | www.frontiersin.org
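The spontaneous sequence technique for estimating baroreflex sensitivity described in Table 1 reduces to a least-squares slope of RR interval against systolic pressure over a pressure ramp. The sketch below is a minimal, hypothetical Python illustration (the function name and the synthetic ramp are our own; real analyses impose further criteria such as minimum sequence length and regression-correlation thresholds, and average the slope over many sequences):

```python
def brs_sequence_slope(sbp, rr):
    """Estimate baroreflex sensitivity (ms/mmHg) over one spontaneous
    ramp of systolic blood pressure (sbp, mmHg) paired with RR
    intervals (rr, ms; the one-beat delay is applied by the caller),
    as the least-squares slope of rr against sbp."""
    n = len(sbp)
    mean_x = sum(sbp) / n
    mean_y = sum(rr) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(sbp, rr))
    den = sum((x - mean_x) ** 2 for x in sbp)
    return num / den

# A rising SBP ramp with reflex RR lengthening: slope = 10 ms/mmHg.
sbp_ramp = [110.0, 112.0, 114.0, 116.0]
rr_ramp = [800.0, 820.0, 840.0, 860.0]
print(brs_sequence_slope(sbp_ramp, rr_ramp))  # 10.0
```

A higher slope indicates a larger reflex RR lengthening per unit pressure rise, i.e., greater baroreflex gain.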
Study Quality Assessment
The NOS scores of the included studies ranged from 3 to 10. Among the 46 included studies, two were at high risk of bias (4.3%), 17 were at moderate risk (37.0%), and 27 were at low risk (58.7%) (see Table 2 for the classification and Appendix 4 in Supplementary Material for the detailed assessment). The HRV quality summary scores are listed in Table 3 (see Appendix 5 in Supplementary Material for a detailed assessment).
Of the four studies that undertook non-linear assessments of HRV and reported the scaling exponent (α), two found decreased α (Ishizawa et al., 2008; Vigo et al., 2008) and one reported no difference in α compared to controls (Russell et al., 2008). Platisa et al. (2006) again highlighted differences according to duration of AN, reporting decreased α in those with a shorter illness duration and no difference from controls in those with an extended illness duration.
Overall, three studies indicated differences in HRV modulation according to duration of illness. In two studies, a shorter illness duration was associated with increased parasympathetic modulation, which was attenuated over time (Platisa et al., 2006; Nakai et al., 2015). However, Wu et al. (2004) found a negative correlation between enhanced SNS activity and illness duration and a positive correlation between PNS activity and illness duration.
(ii) Weight-Restored AN
Four studies reported on HRV in individuals with a previous diagnosis of AN who were in varying stages of weight restoration. Two reported time domain HRV; one reported decreased HRV (no change from the current AN group) as compared to controls (Lachish et al., 2009) and the other reported no difference between AN-WR and controls (Bar et al., 2006). Three reported LF HRV in AN-WR; two reported maintenance of decreased LF (Rechlin et al., 1998;Lachish et al., 2009) and one reported no difference in LF (Kreipe et al., 1994) between AN-WR and controls. The same three studies also recorded HF in AN-WR; one reported maintenance of high HF in AN-WR (Lachish et al., 2009) and two reported no difference in HF (Kreipe et al., 1994;Rechlin et al., 1998) between AN-WR and controls. Three studies calculated the LF/HF ratio; one reported sustained low LF/HF after weight restoration (Lachish et al., 2009) and two reported no difference in LF/HF between AN-WR and controls (Kreipe et al., 1994;Bar et al., 2006).
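The LF and HF powers behind the LF/HF ratios reported here are obtained by spectral decomposition of the RR tachogram. As a hedged, stdlib-only illustration (synthetic, evenly resampled data and our own function names; published analyses use validated toolboxes with proper RR-interval resampling and windowing), band power can be summed from a direct periodogram:

```python
import math

def band_power(signal, fs, lo, hi):
    """Periodogram power of an evenly resampled, mean-removed RR
    tachogram within [lo, hi) Hz via a direct DFT (didactic only)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 4.0  # tachogram resampling frequency, Hz
t = [i / fs for i in range(512)]
# Synthetic tachogram: a 0.1 Hz (LF-band) and a larger 0.25 Hz (HF-band) oscillation.
rr = [900 + 10 * math.sin(2 * math.pi * 0.1 * ti)
      + 20 * math.sin(2 * math.pi * 0.25 * ti) for ti in t]
lf = band_power(rr, fs, 0.04, 0.15)
hf = band_power(rr, fs, 0.15, 0.40)
print(lf / hf)  # < 1 here: the HF (vagally mediated) component dominates
```

An LF/HF ratio below 1, as in this synthetic example, is conventionally read as relative parasympathetic dominance, the pattern most often reported in acute AN.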
(i) Current AN
Six studies assessed BP response to an orthostatic challenge in individuals with a current diagnosis of AN; five reported decreased systolic BP (SBP) and/or diastolic BP (DBP) (Gross et al., 1979;Lesem et al., 1989;Kreipe et al., 1994;Casu et al., 2002;Murialdo et al., 2007) and one did not directly compare the response to controls (Lechin et al., 2010). Four studies investigated NE levels in response to an orthostatic challenge; two found a decreased response (Gross et al., 1979;Lechin et al., 2010), one an increased response (Lesem et al., 1989), and one found no difference (Van Binsbergen et al., 1991), as compared to controls.
Three studies reported increased BRS in individuals with a current diagnosis of AN (Kollai et al., 1994; Ishizawa et al., 2008; Takimoto et al., 2014), but Tonhajzerova et al. (2020) reported no difference in BRS compared to controls. All three studies that assessed BPV in individuals with AN reported decreased LF variability of BP (Ishizawa et al., 2008; Takimoto et al., 2014; Tonhajzerova et al., 2020).
(ii) Weight-Restored AN
Two studies assessed BP response to an orthostatic challenge in AN-WR groups, with both reporting maintenance of decreased BP response (Gross et al., 1979;Lesem et al., 1989). However, both reports of NE response to an orthostatic challenge were no different from controls (Gross et al., 1979), or trended toward control levels (Lesem et al., 1989) following weight restoration.
(i) Current AN
Thirteen studies reported basal NE or MHPG levels in individuals with a current diagnosis of AN. Of these studies, 11 reported basal plasma NE levels; four reported decreased plasma NE (Gross et al., 1979; Luck et al., 1983; Pirke et al., 1992; D'Andrea et al., 2008), one reported increased plasma NE (Van Binsbergen et al., 1991) and six reported no difference in basal plasma NE (Lesem et al., 1989; Kaye et al., 1990; Bartak et al., 2004; Nedvidkova et al., 2004; Dostalova et al., 2007; Lechin et al., 2010), as compared to controls. Lechin et al. (2010) proposed that individuals with AN present with adrenal sympathetic overactivity, as evidenced by a low NE:adrenaline plasma ratio, yet did not directly compare NE levels to controls. Beta-adrenergic receptor activity was assessed by Kaye et al. (1990), who found an erratic response to increasing doses of isoproterenol in individuals with AN, as compared with controls, and proposed that altered regulation of presynaptic adrenoreceptors may account for the discrepancy in assessments of NE levels across studies.
Two studies assessed adipose tissue levels of NE and both reported increased NE (Nedvidkova et al., 2004). Two studies assessed urinary NE levels and both found decreased urinary NE, as compared to normal weight controls (De Rosa et al., 1983; Van Binsbergen et al., 1991) and lean controls (Van Binsbergen et al., 1991), despite one also reporting increased plasma NE levels (Van Binsbergen et al., 1991). Three studies assessed urinary excretion levels of MHPG; in two, MHPG levels were decreased in individuals with AN (Gross et al., 1979; Riederer et al., 1982) and in the third, there was no difference compared to controls (Van Binsbergen et al., 1991).
(ii) Weight-Restored AN
Six studies reported basal NE or MHPG levels in individuals with a previous diagnosis of AN (Gross et al., 1979; Riederer et al., 1982; Kaye et al., 1985, 1990; Lesem et al., 1989; Pirke et al., 1992). Five studies reported plasma NE levels; two of these reported decreased NE (Kaye et al., 1985; Pirke et al., 1992) and three reported no difference compared to controls (Gross et al., 1979; Lesem et al., 1989; Kaye et al., 1990). Two studies reported urinary MHPG levels and both found them to be comparable to control levels (Gross et al., 1979; Riederer et al., 1982), whereas one study assessed plasma MHPG, which was decreased in AN-WR participants (Kaye et al., 1985).
Skin Conductance Level and Pupillary Response
Four studies reported skin conductance level (SCL) as an outcome measure in individuals with a current diagnosis of AN (see Table 6); two reported decreased SCL (Abell et al., 1987;Palomba et al., 2017) and two reported no difference in SCL compared to controls (Calloway et al., 1983;Léonard et al., 1998).
The only study that assessed the pupillary response (PLR) found a decreased PLR in individuals with a current diagnosis of AN, which did not persist after weight restoration (Bar et al., 2006).
DISCUSSION
The current review provides the first synthesis of investigations into ANS function in individuals with AN and those who have a previous diagnosis and have achieved weight restoration. The assessment of ANS function across modalities is discussed below.
Heart Rate Variability
The majority of studies that assessed HRV in the time domain demonstrated increased beat-to-beat variability in HR in individuals with a current diagnosis of AN, consistent with a recent review (Peyser et al., 2020). Moreover, increased time domain HRV parameters were demonstrated in patients with AN when compared to lean controls (Petretta et al., 1997; Galetta et al., 2003). The studies that reported decreased time domain HRV presented some methodological limitations. One did not specify duration of AN and stated that participants had recently started various antidepressant and antipsychotic agents (Russell et al., 2008), which have been associated with decreased HRV (Licht et al., 2010); another did not report the length of HRV assessment (Lachish et al., 2009); and the third reported results from a small sample size of six patients (Melanson et al., 2004). Following weight restoration, one reported no difference from controls and the other reported decreased HRV, yet did not report the HRV assessment length (Lachish et al., 2009). Therefore, based on the current review results, beat-to-beat variability in HR is increased in the acute state of AN, and this increase does not persist following weight restoration.

Assessment of HRV in the frequency domain, specifically in the LF and HF frequency bands, trended toward increased HF and decreased LF, which was reflected in a trend toward decreased LF/HF ratios in patients with a current diagnosis of AN. Assessment of HRV in the frequency domain in WR participants primarily suggested normalization of HRV, with either no difference or levels trending toward controls. Akin to HRV assessed in the time domain, the acute state of AN is marked by increased parasympathetic activity and decreased sympathetic activity in the frequency domain, which appears to normalize following weight restoration.
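For concreteness, the two most common time-domain indices referred to throughout these studies, SDNN (the standard deviation of RR intervals, an overall variability measure) and RMSSD (the root mean square of successive RR differences, a short-term vagally mediated measure), can be sketched in a few lines of plain Python (a minimal illustration on synthetic RR data, not any included study's pipeline):

```python
import math

def sdnn(rr):
    """Standard deviation of RR intervals (ms): overall HRV."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR differences (ms):
    a short-term, vagally mediated HRV index."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812.0, 798.0, 820.0, 805.0, 830.0]  # synthetic RR series, ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # 11.2 19.6
```

Both indices rise with greater beat-to-beat variability, which is why elevated values in acute AN are read as increased vagal modulation of the heart.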
Non-linear analysis of HRV was also undertaken to provide a measure of complexity (α), or randomness, in the heart period series, which has been demonstrated to be reduced in individuals with congestive heart failure (Peng et al., 1995) and to be a prognostic indicator of cardiac mortality (Huikuri et al., 2000). Decreased α values were demonstrated in individuals with a current diagnosis of AN (Ishizawa et al., 2008; Vigo et al., 2008) and in those with a shorter duration of AN (termed "acute") (Platisa et al., 2006), reflective of HRV patterns seen in patients with heart failure, which was postulated to be a mechanism of cardiac autonomic dysfunction and sudden death in AN (Vigo et al., 2008).
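To make the α index concrete, the fragment below sketches detrended fluctuation analysis in plain Python (a didactic, unoptimized version with our own function name and a synthetic series; published analyses use validated implementations and finer-grained box sizes): the mean-centered RR series is integrated, piecewise linearly detrended in boxes of size n, and α is the log-log slope of the fluctuation function F(n).

```python
import math
import random

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: returns the scaling exponent
    alpha as the log-log slope of the fluctuation F(n) vs box size n."""
    mean = sum(x) / len(x)
    # Integrated, mean-centered profile.
    y, s = [], 0.0
    for v in x:
        s += v - mean
        y.append(s)
    log_n, log_f = [], []
    for n in scales:
        boxes = len(y) // n
        sq = 0.0
        tm = (n - 1) / 2.0
        den = sum((ti - tm) ** 2 for ti in range(n))
        for b in range(boxes):
            seg = y[b * n:(b + 1) * n]
            sm = sum(seg) / n
            # Least-squares linear detrend within the box.
            slope = sum((ti - tm) * (si - sm)
                        for ti, si in enumerate(seg)) / den
            for ti, si in enumerate(seg):
                resid = si - (sm + slope * (ti - tm))
                sq += resid * resid
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sq / (boxes * n))))
    # Slope of log F(n) against log n gives alpha.
    k = len(scales)
    lnm = sum(log_n) / k
    lfm = sum(log_f) / k
    return (sum((a - lnm) * (b - lfm) for a, b in zip(log_n, log_f))
            / sum((a - lnm) ** 2 for a in log_n))

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(2048)]
print(round(dfa_alpha(white), 2))  # expected near 0.5 for uncorrelated noise
```

Uncorrelated (white) noise yields α near 0.5, healthy RR series typically sit near 1.0, and strongly correlated series approach 1.5, which is why reduced α is read as a shift toward randomness in cardiac control.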
While the majority of studies indicated concordant results in HRV assessment, discrepancies are likely due in part to the duration of AN, the potential for comorbid conditions to impact HRV, and the assessment methodology. The impact of chronicity (or duration of AN) was repeatedly highlighted as a distinguishing feature of the HRV profile (Platisa et al., 2006; Nakai et al., 2015). It was suggested that the HRV profile was so distinct between the initial and chronic stages of illness that it could be used to distinguish between phases of illness, whereby initial starvation is typified by increased parasympathetic activity (increased HF) and an extended duration of illness by increased sympathetic activity (increased LF) (Petretta et al., 1997; Melanson et al., 2004; Roche et al., 2004; Platisa et al., 2006; Nakai et al., 2015). A single study found contrasting results (a positive correlation between illness duration and HF but a negative correlation between duration and LF), yet did not specify illness duration, so extrapolation from this finding is uncertain (Wu et al., 2004). A tentative conclusion may be that the relative increase or decrease in HF and LF is dependent on duration of AN. However, further investigation is required to confirm this hypothesis.
In addition to duration of illness, another potential influence on HRV that must be taken into account is the potential impact of comorbid psychiatric conditions on HRV parameters (Shinba et al., 2008). Anxiety and stress have been demonstrated to increase sympathetic activity (Lucini et al., 2002) and evoke cardiac vagal withdrawal, a physiological response thought to be related to the hypersensitivity engendered in anxiety disorders (for a review on the topic, see Friedman, 2007). Similarly, decreased HRV has frequently been associated with depression (independent from cardiovascular disease) (Musselman et al., 1998; Kemp et al., 2010) and antidepressant use (Licht et al., 2010; Michael and Kaur, 2021). Given that the majority of studies did not specify comorbid psychiatric conditions or psychoactive medication use, the impact of these in the current review cannot be ascertained. There is a wide literature on the influence of psychological state on HRV (Thayer et al., 2012), with common reference to Porges' polyvagal theory, which stipulates that HRV is associated with the experience and expression of social and emotional behavior (Porges, 2007). Given the high rate of comorbid psychiatric disorders in individuals with AN (O'Brien and Vincent, 2003), it may be difficult to reliably extrapolate the influence of AN alone on HRV.
Further consideration must be applied when considering the HRV assessment methodology. Assessments of HRV in the current review were derived from both ambulatory recordings and short-term recordings of varying length. While HRV analyses of different lengths of time are generally closely correlated (Costa et al., 1994), results between short-term and ambulatory recordings can differ (Li et al., 2019) and should not be compared (Task Force of The European Society of Cardiology The North American Society of Pacing Electrophysiology, 1996). Indeed, the only study that assessed both short-term and ambulatory HRV in the current review reported no difference in short-term HRV but decreased HRV over long-term recordings (Melanson et al., 2004).
A separate consideration is concern over whether HRV is a reflection of the autonomic state of the entire body or the regulation of the sinoatrial node alone (Hayano and Yuda, 2019). The use of HRV as a sole index of ANS activity is potentially problematic given that frequency domain analysis of HRV reportedly over-simplifies the non-linear interactions between the SNS and PNS (Billman, 2013). While HRV provides some insight into vagal activity, it has the disadvantage of giving a poor indication of sympathetic activity (Esler and Lambert, 2003;Billman, 2013). Indeed, LF heart rate spectral power (often interpreted as sympathetic activity) has been demonstrated as unrelated to direct assessments of sympathetic activity, such as NE spillover, MSNA (Kingwell et al., 1994), and cardiac sympathetic innervation quantified by positron emission tomographic neuroimaging (Rahman et al., 2011). Moreover, in the current review, 17 out of the 25 studies that assessed HRV did not use any other method to assess autonomic function in individuals with AN, a limitation underscored by Ishizawa et al. (2008) and Takimoto et al. (2014).
Overall, the assessments of HRV indicated alterations in autonomic regulation of heart rate in AN characterized by increased heart rate variance and increased vagal activity. While persistent sympathetic excitation and depressed vagal activity are associated with ventricular arrhythmias and sudden cardiac death (Task Force of The European Society of Cardiology The North American Society of Pacing Electrophysiology, 1996), the implications of persistent vagal activation and autonomic dysregulation remain unclear. However, there have been indications of increased parasympathetic activity and autonomic dysregulation at the onset of acute myocardial infarction (Webb et al., 1972), with the suggestion that autonomic dysregulation is a risk factor for sudden cardiac death in individuals with amyotrophic lateral sclerosis (Asai et al., 2007). Therefore, it remains to be determined whether consistent elevation of HRV and increased vagal modulation of cardiac control represent cardiovascular risk for individuals with AN.
Orthostatic Response, Blood Pressure Variability, and Baroreflex Sensitivity

Assessment of the physiological response to an orthostatic challenge can provide powerful insight into cardiac autonomic regulation. During a head-up tilt, the resultant peripheral venous pooling and decreased cardiac output trigger stimulation of aortic, carotid and cardiopulmonary baroreceptors, resulting in increased sympathetic outflow and inhibition of parasympathetic activity in healthy individuals (Ramírez-Marrero et al., 2007).
Observations that assessed the change in BP from a supine to an upright position were limited; while the BP response to orthostasis was blunted in individuals with AN in one study (Casu et al., 2002), it did not differ from controls in others (Lechin et al., 2010; Takimoto et al., 2014). Multiple studies compared absolute BP levels between AN and HC groups during an orthostatic challenge, a methodology of limited value for indexing autonomic regulation given that basal BP is already decreased in individuals with AN. However, assessments of HRV, BPV and the adrenergic response to orthostasis revealed that individuals with AN failed to mount an increased sympathetic response to head-up tilt. Whereas a normal response comprises a decrease in the HF and an increase in the LF components of HRV and BPV, these reflex mechanisms were not seen in individuals with AN (Casu et al., 2002; Murialdo et al., 2007; Takimoto et al., 2014). Furthermore, individuals with AN did not demonstrate increased adrenergic outflow during a change in position (Gross et al., 1979; Lechin et al., 2010), yet were comparable to controls after weight restoration (Gross et al., 1979; Lesem et al., 1989).
While at rest, individuals with AN demonstrated decreased variability in BP and increased baroreflex sensitivity, further suggesting increased parasympathetic control over the heart. Together, these assessments of orthostatic response, BPV and BRS in individuals with AN demonstrate abnormal regulation of the cardiovascular system through a failure to activate a sympathetic response and inhibit parasympathetic activity. This altered orthostatic regulation suggests that individuals with AN are at risk of associated conditions such as syncope, orthostatic hypotension, and POTS (Grubb, 2005), many of which have indeed been reported in AN. Following weight restoration, responses trended toward those of controls, consistent with the suggestion that resolution of a normal orthostatic response can determine medical stability and readiness for discharge following treatment (Shamim et al., 2003).
Adrenergic Assessment
While many of the studies that assessed static adrenergic activity in the current review found no difference in plasma NE levels between individuals with AN and controls, there was a trend toward decreased plasma NE or MHPG levels. Decreased NE was interpreted as a chronic adaptation to malnutrition by some authors (Riederer et al., 1982;Dostalova et al., 2007), which contributed to hypothalamic dysfunction during the acute state of AN (Gross et al., 1979;De Rosa et al., 1983). Another interpretation suggested that NE levels varied over the course of treatment according to stress levels and psychological (as opposed to physical) stabilization (Lesem et al., 1989). Moreover, altered regulation of presynaptic beta-adrenoreceptors was reported, suggesting that altered noradrenergic receptor function may also be present in individuals with AN (Kaye et al., 1990).
Similarly, urinary excretion of NE and MHPG was decreased in individuals with AN compared to both normal weight (Gross et al., 1979;De Rosa et al., 1983) and lean controls (Van Binsbergen et al., 1991), which increased following treatment (Gross et al., 1979;Riederer et al., 1982). While MHPG is the major metabolite of NE in the brain, urinary MHPG is predominantly the product of peripheral SNS, rather than central nervous system NE metabolism. Given that urinary catecholamine excretion is dependent on renal function (Esler et al., 1988), which has previously been shown to be impaired in individuals with AN (Stheneur et al., 2014), interpretation of decreased urinary excretion of NE and MHPG in AN is constrained.
In contrast, assessment of NE levels in adipose tissue revealed localized elevation of sympathetic activity in individuals with AN, compared to controls (Nedvidkova et al., 2004), despite no difference in overall plasma NE. Given that local adipose tissue sympathetic activity is not a reflection of overall whole body sympathetic activity (Patel et al., 2002), an increase in localized sympathetic activity within adipose tissue was suggested to be a protective mechanism to protect fat stores from further depletion through downregulation of lipolysis, a process supported by prolonged fasting models (Migliorini et al., 1997).
Each assessment of adrenergic activity in individuals with a current diagnosis of AN, and after weight restoration, provided an alternate assessment of NE presence and metabolism. Given that circulating NE levels represent a small proportion of NE secreted from nerve terminals (Grassi and Esler, 1999), it is difficult to surmise a conclusive interpretation of sympathetic activity from these results. However, there was a trend toward decreased NE levels in individuals with a current diagnosis of AN, which normalized after weight restoration.
Skin Conductance Level and Pupillary Response
In comparison to other measurements of autonomic function, SCL and PLR were less commonly assessed. Notwithstanding this, reduced sympathetic activation in SCL (Abell et al., 1987; Palomba et al., 2017) and altered SCL responses between AN subtypes (Calloway et al., 1983) were reported. All assessments of SCL were conducted on the palms, which are prone to emotional sweating (Vetrugno et al., 2003). Indeed, alterations to SCL in AN were observed to be correlated with psychological factors (including anxiety and metacognitive dysfunction) (Léonard et al., 1998; Palomba et al., 2017). Given that the sympathetic skin response has been demonstrated to be emotionally activated (Cheshire et al., 2020), the use of SCL to provide insight into thermoregulatory autonomic function is therefore limited.
The only study that investigated PLR found decreased sympathetic and increased parasympathetic pupil response in individuals with AN, yet only in the acute state, which normalized following weight restoration (Bar et al., 2006). Given that only a single investigation has been conducted into PLR, which identified changes in autonomic nervous system activity in individuals with AN, further investigations of this non-invasive parameter should be undertaken in future studies.
Limitations
The purpose of the current review was to synthesize the evidence on ANS function associated with AN. Several methodological factors must be taken into account when comparing the assessments of ANS function in the current review. Given the serious nature and medical instability associated with AN, many studies utilized small sample sizes, which no doubt contributed to the lack of consistency among results in individual studies. Moreover, the studies investigating individuals with a previous diagnosis of AN included varied durations of weight restoration, precluding a definitive conclusion. Many studies did not detail or compare differences between the restrictive and binge eating-purging subtypes of AN, so any differences related to specific AN behaviors cannot be determined by the current review. Future investigations into ANS function after a prolonged period of weight restoration would allow a better understanding of the impact of AN on any long-term alterations to ANS function. Similarly, delineation of AN subtype and assessment of comorbid psychiatric diagnoses in future assessments could reveal differences in autonomic function according to subtype and comorbidities.
Implications and Conclusion
The current review provides a synthesis of the evidence to date assessing resting autonomic function in individuals with AN, and after weight restoration. The evidence indicates that individuals with AN demonstrate autonomic dysregulation characterized by decreased sympathetic activity and increased parasympathetic activity, as well as increased complexity of the ANS, across a variety of assessment methodologies. Given the ease and convenience of HRV assessment, it is tempting to use the measure as a sole assessment of autonomic function. However, the demonstrated impact that both illness duration and psychiatric comorbidities can have on HRV implies that assessment of autonomic activity should be established via additional accompanying measures. While the duration of weight restoration in the current review was widely varied, the majority of studies to date indicated that autonomic regulation tended to normalize after weight restoration. Moreover, there has been no assessment of SNS activity in individuals with AN to date using either microneurographic measurement of muscle sympathetic nerve activity or assessment of organ-specific NE spillover, the two "preferred" assessments of human adrenergic function (Grassi and Esler, 1999).
The underlying mechanisms that contribute to the abnormalities in ANS function in acute AN remain speculative. It has been proposed that the parasympathetic dominance seen in AN is an adaptive physiological response to conserve energy in response to malnutrition (Buchhorn et al., 2016; Sachs et al., 2016; Kalla et al., 2017). However, it remains unclear whether energy preservation alone underlies the changes in ANS function, given that the three studies that included lean control groups did not find a linear relationship between BMI and ANS function. Specifically, HRV and NE excretion in patients with AN were significantly different from both normal-weight and lean controls, who satisfied the weight, but not the psychological, criterion for AN (Van Binsbergen et al., 1991; Petretta et al., 1997; Galetta et al., 2003). There is growing evidence of an intrinsic connection between the brain and the heart, including interplay between frontal-vagal (brain-heart) and depression networks (Iseger et al., 2020), that purportedly contributes to cardiovascular disease (Makovac et al., 2017). Given the demonstrated dysregulation of other neural regulatory systems in AN [including the dopaminergic and serotonergic systems, which are thought to contribute to both physiological and psychological traits seen in AN (Kaye et al., 2005; Fladung et al., 2010)], there may be central dysregulation of ANS networks in AN, yet this remains putative.
The implications of the current review are that increased vagal activity is likely to underlie the widespread bradycardia in individuals with AN. Moreover, inhibited SNS activation during orthostasis would result in insufficient blood flow to organs and contribute to episodes of syncope. Less clear are the implications of the increased autonomic complexity demonstrated by HRV and BRS parameters. While cardiovascular disease is commonly associated with sympathetic overactivity (Malpas, 2010), the consequences of sustained parasympathetic overactivity and autonomic dysregulation are yet to be determined. It remains to be ascertained whether the autonomic dysregulation indicated in individuals with AN contributes to the widespread cardiovascular complications.
This review has demonstrated that autonomic dysregulation is indicated in individuals with AN, yet there have been no thorough assessments of autonomic function utilizing multiple methodologies. Due to the variability in both methodology and quality of assessments to date, conclusions drawn from these data should be interpreted with caution. Furthermore, in order to determine the association between autonomic dysregulation and widespread cardiovascular complications in AN conclusively, future investigations should employ a variety of assessments of autonomic function in conjunction with markers of cardiovascular risk. It will also be important to assess the impact of comorbid psychiatric conditions and duration of illness in order to conclusively establish the nature of autonomic (dys)function in AN. Similarly, future investigations in individuals with an extended duration of weight restoration are still required. Determination of autonomic function through a variety of assessment methodologies in individuals with a current, and previous, diagnosis of AN alongside assessments of cardiovascular risk will aid in determining the contributing factors to cardiovascular complications. This will allow clinicians to identify individuals at risk and aid in the prevention, treatment and development of interventions to reduce the inadvertent mortality rate of AN.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
Probing analytical and numerical integrability: The curious case of $(AdS_5\times S^5)_{\eta}$
Motivated by recent studies related to integrability of string motion in various backgrounds via analytical and numerical procedures, we discuss these procedures for a well known integrable string background $(AdS_5\times S^5)_{\eta}$. We start by revisiting conclusions from earlier studies on string motion in $(\mathbb{R}\times S^3)_{\eta}$ and $(AdS_3)_{\eta}$ and then move on to more complex problems of $(\mathbb{R}\times S^5)_{\eta}$ and $(AdS_5)_{\eta}$. Discussing both analytically and numerically, we deduce that while $(AdS_5)_{\eta}$ strings do not encounter any irregular trajectories, string motion in the deformed five-sphere can indeed, quite surprisingly, run into chaotic trajectories. We discuss the implications of these results both on the procedures used and the background itself.
Introduction
String motion in curved spaces, described by two-dimensional non-linear sigma models, has been studied extensively since the birth of the subject. This is extremely interesting due to the complicated non-linear equations of motion associated with the worldsheet fields. It comes as no surprise that these equations of motion are only 'integrable' for a select subclass of target space backgrounds, and hence this notion of integrability helps one to pick out the cases where a complete quantitative analysis of classical (and perhaps quantum) string motion can be performed and compared to the flat space case. One of the most widely known cases is of course that of type IIB strings in the AdS 5 × S 5 space-time [1], which is dual to operators in maximally supersymmetric N = 4 Yang-Mills theory (sYM) via the AdS/CFT correspondence [2]. The integrability of these strings moving in the bulk AdS 5 × S 5 , in conjunction with the integrability of the dual sYM theory, makes an exceptional example for studying the AdS/CFT correspondence from the point of view of integrable systems [3]. Moreover, the finding that in the semiclassical limit the dynamics of this correspondence becomes tractable [4] has regenerated interest in classical string solutions in AdS and related geometries. Indeed, a lot of literature has been devoted to the subject of integrability in AdS/CFT in the last two decades 1 .
With this advent of integrability studies in the context of AdS/CFT, there have been much-celebrated quests to deform the symmetries on both sides of the correspondence while keeping the integrable structure intact. Most of these relied on the use of target space duality symmetries to generate new integrable backgrounds [6,7,8,9]. More recently, Klimcik's pioneering work on novel integrable deformations of σ-models [10,11,12] has paved the way for their application to string σ-models and for finding probable deformed versions of the AdS/CFT correspondence. Since then a larger family of integrable deformations of AdS × S geometries has been explored, where the deformation is given by a classical r-matrix solution to the (modified) Classical Yang-Baxter Equation (CYBE). The explicit geometry and NS-NS forms for such a 'Yang-Baxter' deformation of AdS 5 × S 5 first appeared in [13,14], were analysed in detail in [15], and various consistent truncations have been discussed in [16]. In the Yang-Baxter case, the deformation works by deforming the supercoset associated to AdS 5 × S 5 itself by a continuous parameter, which is often referred to as a q-deformation, or a quantum group deformation [17]. This replaces the Lie algebra of the classical charges by its q-deformed version, which is then incorporated into the superstring action for AdS 5 × S 5 with a real deformation parameter η ∈ [0, 1) or, equivalently, another parameter κ with κ ∈ [0, ∞) 2 . For various avenues of exploratory work on Yang-Baxter deformations, one should have a look at [18]- [72].
Being integrable string backgrounds by construction, these (AdS 5 × S 5 ) κ strings in general satisfy the Lax equations. But in the case of a generic string sigma model, where the existence of a Lax pair is not known, proving (non-)integrability is a rather complicated task. To this end, there have been a number of works that consistently truncate the two-dimensional string equations of motion of particular circular strings into one-dimensional mechanical systems and analyze the (non-)integrability properties thereof. It has been argued that it is sufficient to show that there exists at least one truncated dynamical system of differential equations where the corresponding string motion turns chaotic [73], i.e. small variations around the equations grow non-deterministically in time. Useful tools in these studies have mainly been the variational non-integrability techniques of Hamiltonian systems and numerical experiments in the associated phase space. This approach is often hailed as the equivalent of the algebraic approach of finding Lax pairs for the system, and a large number of works have appeared along these lines, see for example [74]- [89].
In the following note, we seek to understand this equivalence by studying string motion in the extremely complicated but integrable background of (AdS 5 × S 5 ) κ . One should bear in mind that the Yang-Baxter deformation breaks the supersymmetries associated to AdS 5 × S 5 and, even at the bosonic level, the isometry group SO(2, 4) × SO(6) breaks down to U (1) 3 × U (1) 3 , but the κ-deformed background still inherits the parent integrability. We must mention here [61], in which, using these analytical Hamiltonian methods, it was claimed that the associated phase space encounters chaos as the differential equations of motion are not 'integrable'. This certainly creates a tension between the different methods of studying (non-)integrability of string motion in curved backgrounds. Spearheaded by this, we revisit these claims of non-integrability of string motion in (R × S 3 ) κ and (AdS 3 ) κ and then attack the larger and more complicated problem of strings in (R × S 5 ) κ and (AdS 5 ) κ with antisymmetric B fields included. We explicitly show that string motion in the former cases does not have any non-integrable traits. However, to our surprise, we find that the phase space of the deformed five-sphere does contain chaotic string motion, as the equations describing the motion are non-integrable in nature. We emphasize that this phenomenon happens only for the dynamical phase space associated with the full five-sphere, and the sub-sectors can be presumed integrable in this sense. For the case of (AdS 5 ) κ , we also find that string trajectories remain regular throughout the motion.
The paper is organized in the following way. In section 2, we give a review of the background and fluxes associated to the (AdS 5 × S 5 ) η (or, as we will actually use, (AdS 5 × S 5 ) κ ) string background. In section 3, after revisiting the case of (R × S 3 ) κ , we will have a detailed discussion of string motion in (R × S 5 ) κ . By the use of Normal Variational Equations (NVEs) for fluctuations around equations of motion for consistent string solutions, we will arrive at the fact that strings in (R × S 3 ) κ do not run into any chaotic trajectories. In the case of the deformed five-sphere, we will, however, find chaotic trajectories as soon as we turn on a non-zero deformation parameter, a result that will be corroborated both by the NVE and by studying Poincaré sections via the numerical trajectories method. In section 4, we will essentially repeat the same exercise of the earlier section, instead for the deformed AdS backgrounds. As in the earlier case, we show that there are no chaotic trajectories in (AdS 3 ) κ . Following this, no chaotic motion is found in the case of (AdS 5 ) κ either, which we confirm via both analytical and numerical calculations. We discuss the ramifications of our results and conclude this work in section 5.
Setup
Let us start by introducing the geometry and the general setup required for our study. We first write down the full deformed metric for the κ-deformed AdS 5 × S 5 [14] in (2.1). Associated to the solution we also have the B-field B = (1/2) B M N dX M ∧ dX N [14], with components

r⁴ sin(2ξ) / (1 + κ² r⁴ sin²ξ) dφ₁ ∧ dξ + 2r / (1 + κ² r²) dφ ∧ dr . (2.3)

In this case, the single surviving component of the NS-NS flux takes the form given in (2.4). It is worthwhile to note that the (AdS 5 ) η part contains a singularity, but we will not be bothered with that in the present analysis. We write the deformed (AdS 5 ) κ part of the metric and B-field again with the redefinition ρ → sinh ρ in (2.5). The singularity surface in this coordinate system is located at a critical value of the radial coordinate,

ρ = ρ s = sinh⁻¹(1/κ) , (2.6)

so that κ → 0 recovers the usual AdS boundary at conformal infinity. One must emphasize that this is a genuine singularity of the spacetime, which cannot be removed by a simple change of coordinates. To study string solutions in this background, we use the Polyakov action coupled to an antisymmetric B-field, where λ̃ is the modified 't Hooft coupling for this case, given by λ̃ = λ(1 + κ²)^{1/2}, γ αβ is the worldsheet metric and ε αβ is the antisymmetric tensor defined by ε τσ = −ε στ = 1.

Variation of the action with respect to X M gives the equations of motion, and variation with respect to the worldsheet metric gives the two Virasoro constraints. We use the conformal gauge (i.e. √−γ γ αβ = η αβ with η ττ = −1, η σσ = 1 and η τσ = η στ = 0) to solve these equations of motion.
Strings in deformed sphere
3.1 Revisiting a warm up example: The case of (R × S 3 ) κ

Although the simplest case of an extended string in (R × S 3 ) κ has been addressed already in [61], we first take another look at those findings. The metric for this case is obtained from the setup above, and the NS-NS flux vanishes here. We now have to choose a consistent embedding for the worldsheet coordinates. We cannot help but note that the string embedding chosen in [61] is not consistent, since for that choice the second Virasoro constraint (T τσ = 0) gives rise to a condition which can only be satisfied consistently if the winding number α 1 = 0 or q = 0. For our case, we choose the former and propose a refined ansatz for a circular string with additional angular momentum in the aforementioned geometry. This is a completely consistent embedding and makes the second Virasoro constraint vanish naturally. We can now write the effective Lagrangian of this theory. From the equations of motion, it can be seen that the t equation is easily integrated in terms of a constant E, and the φ equation is trivially satisfied. The other two equations, for θ and ϕ, then read as follows,

− sin θ cos θ [ m² / (1 + κ² cos²θ)² − κ² θ̇² + (1 + κ²) φ̇² ] + θ̈ (1 + κ² cos²θ) = 0 ,
2 θ̇ φ̇ sin θ cos θ (1 + κ²) + φ̈ sin²θ (1 + κ² cos²θ) = 0 .
These equations are supplemented by the other Virasoro constraint, implying the vanishing of the 2d Hamiltonian; this is exactly equivalent to the time-integrated version of the θ equation of motion. From the above, we can see that θ → 0, θ̇ → 0 is a solution of both equations of motion, i.e. it defines an invariant plane of the system. We can demand that the Hamiltonian constraint is satisfied on the invariant plane, with the identification of the constants E² = m². Now we can consider small fluctuations ε(τ) around this invariant plane. Expanding the θ equation up to first order in ε, we obtain the Normal Variational Equation (NVE). We now have to replace φ̇ to get a differential equation for ε(τ). From the equation for ϕ, we can write φ̇ in terms of a constant of motion J, i.e. a quantity that evolves independently of time, so that near the invariant plane

φ̇ ∼ J . (3.11)

With this replacement, we can now analyze the NVE and find that it admits well-defined rational solutions. These solutions are completely well defined in the parameter space, and we can conclude that the string motion in this case does not run into chaos anywhere. Note that if we had chosen θ = π/2 as an invariant plane of the system, we would simply have gotten φ̇ ∼ J, which would also be sufficient to satisfy the equations of motion, and the NVE would just take the form of a simple harmonic oscillator equation. The Hamiltonian constraint in that case would simply become E² = J², i.e. that of a BPS point-like string. For the sake of completeness, let us also discuss the case of choosing the angular momentum along the other direction, i.e. considering the changed ansatz (3.13). Since the two-spheres inside the deformed three-sphere are not equivalent to each other (as is the case for undeformed spheres), this case has to be addressed separately. 3

3 We must point out here that solving the Hamiltonian constraint near the limit for φ̇ also gives a leading-order relation of this form.
In this case, the equations of motion are trivially satisfied by θ → π/2, θ̇ → 0, which define the other invariant plane. Repeating the above analysis for this case (expanding as θ(τ) = π/2 + ε̃(τ)) yields the same form of NVE as in (3.9). The only difference comes from the definition of the angular momentum J̃, which near θ → π/2 gives

φ̇ ∼ J̃ / 2 . (3.14)

So we can safely say that the expansion near invariant planes is not sensitive to the choice of the angular momentum direction, and in both cases the integrability properties of the equations of motion stay unchanged.
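Since the NVE near the invariant plane is an oscillator-type equation, the claim that small fluctuations stay regular can be checked directly. The sketch below assumes the κ → 0, φ̇ → 0 limit of the θ-equation, taken here in the pendulum-like form θ̈ = −m² sin θ cos θ (a simplifying assumption, since the full deformed coefficients are not reproduced above), and compares the nonlinear evolution with the linearized NVE ε̈ = −m² ε:

```python
# Numerical check (illustrative): around the invariant plane theta = 0, the
# theta-equation -- taken here in the kappa -> 0, phidot -> 0 limit, where we
# assume the pendulum-like form  theta'' = -m^2 sin(theta) cos(theta) -- is
# well approximated by its normal variational equation  eps'' = -m^2 eps.
import math

M = 2.0  # winding number, matching the m = 2 used in the numerical sections

def rk4(f, y, t, dt):
    """One classical Runge-Kutta step for y' = f(t, y), y a list."""
    k1 = f(t, y)
    k2 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k1)])
    k3 = f(t + dt/2, [a + dt/2*b for a, b in zip(y, k2)])
    k4 = f(t + dt,   [a + dt*b   for a, b in zip(y, k3)])
    return [a + dt/6*(b + 2*c + 2*d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def full(t, y):   # y = [theta, thetadot]: full nonlinear equation
    return [y[1], -M**2 * math.sin(y[0]) * math.cos(y[0])]

def nve(t, y):    # linearized fluctuation eps around theta = 0
    return [y[1], -M**2 * y[0]]

dt, steps = 1e-3, 20000                 # integrate up to tau = 20
y_full, y_lin = [0.01, 0.0], [0.01, 0.0]
max_dev = 0.0
for i in range(steps):
    y_full = rk4(full, y_full, i*dt, dt)
    y_lin  = rk4(nve,  y_lin,  i*dt, dt)
    max_dev = max(max_dev, abs(y_full[0] - y_lin[0]))
```

For a 1% initial fluctuation the two trajectories agree to better than 10⁻³ over many oscillation periods, the numerical counterpart of the statement that the NVE controls the dynamics near the invariant plane.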
Strings in the five-sphere: Analytical
In this section, we repeat the exercise of the last section for the deformed S 5 . Due to the complexity of the equations of motion in this case, more emphasis will be given to the numerical analysis here. Let us take a general spinning string ansatz in the deformed five-sphere, with the corresponding effective Lagrangian of the theory. The equation of motion for t is satisfied trivially in terms of a constant E, as shown in the last section. The equations of motion for θ and ψ, on the other hand, are lengthy, involving terms such as sin θ cos θ ψ̇², m² cos²ψ / (1 + κ² cos⁴θ sin²ψ), and 2κ² sin θ cos⁵θ sin²ψ ( m² cos²ψ − ψ̇² ). In these equations, we can see that θ = 0 and ψ = π/2 are trivial solutions of the θ and ψ equations, respectively. The non-zero Virasoro constraint is explicitly equivalent to the Hamiltonian constraint of the 2d theory.
Normal variational equations
Consider the equation of motion for ψ. This is satisfied by the trivial solution as we showed earlier.
If we now look carefully at the ψ = π/2 solution, the Virasoro constraint on the invariant dynamical plane yields a relation that effectively eliminates one variable from the equation. For us, ψ = π/2 will be the aforementioned invariant plane, making the dynamics effective only along the θ direction. This special solution to the equations of motion is given by (3.21), which can be solved with appropriate initial conditions. Now we have to study small fluctuations near this special solution in order to comment on the integrability of the system. To write the normal variational equation, we start by expanding the equation of motion for ψ using ψ(τ) = π/2 + η(τ), |η| << 1, which yields

[ − m² (1 + κ² cos⁴θ) + 2 mκ sin(2θ) θ̇ + 8κ² cos⁴θ η̇² ] η = 0 .
One should note that since a division by cos²θ is involved, this expansion does not include the case θ = π/2. Also, since we are working strictly at first order in η, we can discard the η̇²η term. Now, on the invariant plane we can simply use (3.21) to replace terms containing derivatives of θ 4 . The equation can then be put into an algebraic form using the simple replacement

τ → z = tan θ(τ) . (3.25)

Instead of working with the full complicated differential equation, we can concentrate on the κ → 0 limit, i.e. that of small deformation; we will see that this is enough for our purposes. The differential equation, written up to leading order in κ, is already in the so-called "Normal" or Schrödinger form. It is easy to see that the κ = 0 solution, i.e. the solution for the undeformed sphere, is quite simple and takes a rational form with constants c i . But the total solution η (κ) cannot be written in terms of rational functions. Specifically, the solutions are given in terms of the Doubly Confluent Heun function, making it clear that even in the small deformation limit there is no Liouvillian solution for this dynamics, rendering it effectively non-integrable in this sense. This claim will be further elucidated in the next section, where we discuss numerical simulations of these string trajectories. Of course, the above discussion does not encompass the whole story. In general, the integrability properties of classical Hamiltonian systems are associated with the behaviour of variations of the phase space curves. NVEs come in handy when we want to systematically analyse the existence of functionally independent integrals of motion. The symmetries leading to the existence of such integrals of motion are usually given by transformations between spaces of solutions of the variational differential equations.
These are often described in the mathematical literature via Picard-Vessiot theory or differential Galois group techniques [90] (also see [78]). Since determining the Galois group in a general case is hard, a different route is provided by the Kovacic algorithm [91] for establishing the existence of Liouvillian solutions. We describe this in more detail in the appendix, and explicitly work out the case of the θ NVE in (R × S 5 ) κ for any finite value of κ via this algorithm, which will clearly show that the inclusion of non-zero κ renders the solutions non-Liouvillian in this case. For the time being, we accept the above discussion and focus on the numerical analysis.
The Hamiltonian and numerical trajectories
Here we supplement our previous analysis by probing further into the chaotic behaviour. We will derive the Hamiltonian equations of motion and plot the constant energy surfaces. Then, by observing the behaviour of those trajectories, we can get some insight into this chaotic behaviour by invoking the Kolmogorov-Arnold-Moser (KAM) theorem. For integrable systems, the number of conserved charges is equal to the number of degrees of freedom of the system. The systems that we will consider are basically coupled harmonic oscillators with non-trivial potentials. They are characterized by a set of coordinates q i and their conjugate momenta p i , which together give the phase space {q i , p i } (i = 1, · · · , N). Now, if the system is integrable then there will be exactly N conserved charges. We can then plot the corresponding N-dimensional surfaces; typically, for an integrable system, these surfaces have the shape of tori, known as KAM tori. In other words, for each value of these conserved charges (one of them being the energy, which we will mainly consider in our subsequent analysis), the points of the phase space lie on a KAM torus. Now, when one adds non-integrable terms to the integrable Hamiltonian, these KAM tori get perturbed. According to the KAM theorem, most of these tori will be deformed, but if the strength of the non-integrable deformation terms is small then the trajectories will still be ordered and fall on the surface of the deformed tori (only the resonant tori, i.e. those corresponding to frequencies ω i such that Σ i α i ω i = 0 with α i ∈ Q, will be completely destroyed). But if the strength of the non-integrable deformations is large, all these tori will be completely destroyed and the trajectories can probe the entire accessible phase space (determined by the total energy) in a completely arbitrary way, and thus we will observe chaotic behaviour.
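The KAM picture sketched above can be illustrated on any two-degree-of-freedom Hamiltonian. The sketch below uses the standard Hénon-Heiles system as a stand-in (it is not the string Hamiltonian of this paper): one integrates Hamilton's equations and records a Poincaré section at x = 0, exactly the construction behind the phase-space plots that follow.

```python
# Poincare-section sketch of KAM behaviour for a 2-DOF Hamiltonian.
# Illustrative stand-in: the Henon-Heiles system,
#   H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2 y - y^3/3.
# Section: record (y, py) each time the trajectory crosses x = 0 with px > 0.
import math

def deriv(s):
    x, y, px, py = s
    return [px, py, -x - 2*x*y, -y - x*x + y*y]

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([a + dt/2*b for a, b in zip(s, k1)])
    k3 = deriv([a + dt/2*b for a, b in zip(s, k2)])
    k4 = deriv([a + dt*b   for a, b in zip(s, k3)])
    return [a + dt/6*(b + 2*c + 2*d + e)
            for a, b, c, d, e in zip(s, k1, k2, k3, k4)]

def energy(s):
    x, y, px, py = s
    return (px*px + py*py)/2 + (x*x + y*y)/2 + x*x*y - y**3/3

E_target = 1/12                       # low energy: orbits lie on KAM tori
y0, py0 = 0.1, 0.0
px0 = math.sqrt(2*(E_target - y0*y0/2 + y0**3/3) - py0*py0)
state = [0.0, y0, px0, py0]
E0 = energy(state)

section = []                          # (y, py) at upward crossings of x = 0
dt, prev_x = 0.01, state[0]
for _ in range(100_000):              # integrate up to t = 1000
    state_new = rk4_step(state, dt)
    if prev_x < 0.0 <= state_new[0] and state_new[2] > 0:
        section.append((state_new[1], state_new[3]))
    prev_x, state = state_new[0], state_new
drift = abs(energy(state) - E0)       # sanity check: energy conservation
```

At low energy the section points trace closed curves (deformed tori); raising the energy scatters them over the whole allowed region, signalling destroyed tori, which is the diagnostic applied to the string phase spaces below.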
We adopt the following strategy for our case. We first consider string motion in the (R × S 5 ) κ case as discussed in section 3.2 and use the profile mentioned in (3.15). We write down the Hamiltonian below, starting from the Lagrangian mentioned in (3.16).
where the two conjugate momenta are defined as

p θ = 2 θ̇ / (1 + κ² cos²θ) , p ψ = [ κ m cos⁴θ sin(2ψ) + 2 cos²θ ψ̇ ] / (1 + κ² cos⁴θ sin²ψ) , (3.29)

and we have identified the energy as E = −p t = 2ṫ. We next find the Hamiltonian equations of motion using the ansatz mentioned in (3.15). The phase space is defined by the four coordinates {θ, p θ , ψ, p ψ }. The constant energy surfaces (E) are defined by equation (3.20). Keeping this in mind, we solve the Hamiltonian equations of motion together with the constraint (3.15) for different values of E and plot the phase space trajectories for both canonical pairs {θ, p θ } and {ψ, p ψ }. Surprisingly, we observe that for generic initial conditions, even in the presence of small κ, the trajectories become chaotic as we increase the energy. Initially, we identify some kind of deformed tori in the phase space when the energy is small, but as we increase the energy these tori are completely destroyed and the trajectories move freely in the phase space, the motion being bounded only by the total energy. We give representative plots showing this behaviour below. This further supports our claim, stemming from the NVE analysis, that even for small κ the system shows signatures of chaos for (R × S 5 ) κ . First, as a consistency check, we set the initial condition for {ψ, p ψ } as {ψ(0) = 0, p ψ (0) = 0}, so that there are no non-trivial phase space trajectories in the {ψ, p ψ } plane and only non-trivial trajectories in the {θ, p θ } plane. In this case, what we are effectively left with is a harmonic-oscillator-type system characterized by {θ, p θ }, and we should not observe any chaotic behaviour for any values of κ and energy E (this is exactly what happens for (R × S 3 ) κ , where we get exactly one harmonic oscillator with a κ-dependent mass and hence observe no chaos whatever the values of κ and energy). Next we choose more general boundary conditions where both canonical pairs evolve.
We plot the trajectories for both {θ, p θ } and {ψ, p ψ } below for different values of E and κ. In all cases, we have set the winding number m = 2 for simplicity. Similarly, we plot these phase space trajectories for higher values of κ in subsequent plots. We see that the chaos persists, and for high values of κ, for example κ = 100, all the points in the phase space seem to concentrate near the edges of each energy contour. This is expected because the oscillators become very massive for higher values of κ and the points in the phase space do not move much, in agreement with our physical intuition.
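The numerical procedure used here, Hamilton's equations integrated on a fixed energy surface, can be sketched generically. The Hamiltonian below is an illustrative coupled oscillator, not the actual (3.28); the point of the sketch is the workflow of evaluating ∂H/∂p and ∂H/∂q (here by central finite differences, so only a callable H is needed) and checking energy conservation before trusting the phase-space plots.

```python
# Generic sketch: integrate Hamilton's equations  qdot = dH/dp, pdot = -dH/dq
# for a 2-DOF Hamiltonian given only as a callable, using central finite
# differences for the gradients. H below is an illustrative coupled
# oscillator, NOT the actual string Hamiltonian (3.28).

def H(q, p):
    th, ps = q
    pth, pps = p
    return 0.5*(pth**2 + pps**2) + 0.5*th**2 + 0.5*ps**2 + 0.1*(th*ps)**2

def grad(f, x, i, h=1e-6):
    """Central finite-difference derivative of f with respect to x[i]."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2*h)

def step(q, p, dt):
    """One RK4 step of Hamilton's equations via finite-difference gradients."""
    def rhs(q, p):
        dq = [ grad(lambda pp: H(q, pp), p, i) for i in range(2)]
        dp = [-grad(lambda qq: H(qq, p), q, i) for i in range(2)]
        return dq, dp
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs([a+dt/2*b for a, b in zip(q, k1q)], [a+dt/2*b for a, b in zip(p, k1p)])
    k3q, k3p = rhs([a+dt/2*b for a, b in zip(q, k2q)], [a+dt/2*b for a, b in zip(p, k2p)])
    k4q, k4p = rhs([a+dt*b for a, b in zip(q, k3q)],   [a+dt*b   for a, b in zip(p, k3p)])
    q = [a + dt/6*(b+2*c+2*d+e) for a, b, c, d, e in zip(q, k1q, k2q, k3q, k4q)]
    p = [a + dt/6*(b+2*c+2*d+e) for a, b, c, d, e in zip(p, k1p, k2p, k3p, k4p)]
    return q, p

q, p = [0.3, 0.2], [0.0, 0.1]
E0 = H(q, p)
for _ in range(5000):            # integrate to t = 25 with dt = 0.005
    q, p = step(q, p, 0.005)
drift = abs(H(q, p) - E0)        # energy drift: sanity check on the plots
```

The same loop, with the trajectory points stored and plotted in the {q i , p i } planes, reproduces the kind of phase-space portraits discussed in this section.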
Revisiting the case of (AdS 3 ) κ
We again revisit the case of strings in (AdS 3 ) κ , as already discussed in [61], and start by putting ζ = π/2 in the metric, which makes the NS-NS two-form vanish. Moreover, identifying ψ 2 = ψ, we write the relevant metric for this case. Starting with a simple circular string ansatz, we can write down the equations of motion, among them

cosh ρ [ 2 cosh ρ ẗ ( κ² − κ² cosh(2ρ) + 2 ) + 8 (1 + κ²) sinh ρ ṫ ρ̇ ] = 0 .

These equations are trivially satisfied by ρ = 0 and ρ̇ = 0, which give us an invariant plane to work with, provided the accompanying solution for t holds. There is also the Hamiltonian constraint to be satisfied, which in turn means that at ρ = 0 we should have ṫ = 0. Using the expansion ρ = 0 + r(τ), the desired NVE simply takes the form

r̈ + [ m² + (1 + κ²) ṫ² ] r = 0 . (4.6)

As we have discussed above, this is simply a harmonic oscillator equation of motion and hence is completely solvable.
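Since (4.6) is a harmonic oscillator with constant frequency Ω² = m² + (1 + κ²)ṫ², fluctuations around ρ = 0 stay bounded for any κ, which is the statement that no chaos develops here. A minimal numerical confirmation (the values of m, κ and ṫ are illustrative):

```python
# The NVE (4.6): r'' + (m^2 + (1 + kappa^2) tdot^2) r = 0 is a harmonic
# oscillator, so fluctuations around rho = 0 remain bounded for any kappa.
# The values of m, kappa and tdot below are illustrative.

m, kappa, tdot = 2.0, 0.5, 1.0
omega2 = m**2 + (1 + kappa**2) * tdot**2   # constant frequency squared

def f(y):
    # y = [r, rdot];  r'' = -omega2 * r
    return [y[1], -omega2 * y[0]]

def rk4(y, dt):
    k1 = f(y)
    k2 = f([y[0] + dt/2*k1[0], y[1] + dt/2*k1[1]])
    k3 = f([y[0] + dt/2*k2[0], y[1] + dt/2*k2[1]])
    k4 = f([y[0] + dt*k3[0],   y[1] + dt*k3[1]])
    return [y[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])]

y, dt = [0.01, 0.0], 1e-3                  # small initial fluctuation
max_r = 0.0
for _ in range(50_000):                    # integrate up to tau = 50
    y = rk4(y, dt)
    max_r = max(max_r, abs(y[0]))
```

The fluctuation never exceeds its initial amplitude (up to integrator error), in contrast to the deformed five-sphere, where the fluctuation equation is no longer of this constant-frequency form.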
A concrete example: Extended 'Spiky' strings
For the sake of completeness, we mention here the case of 'spiky' strings [92] in (AdS 3 ) κ which, unlike circular strings, are extended objects and have been well studied in the literature [38]. To discuss these strings in the Polyakov framework, the worldsheet embedding is quite general, and has been discussed in [93]. We start here with that particular ansatz 5 of the form,

t = τ + f (σ), ρ = ρ(σ), ψ = ωτ + g(σ). (4.9)

5 Here we again note that the ansatz for such strings discussed in [61] is completely incompatible with the Virasoro constraints. Notably, the constraint T τσ = 0 leads to the condition

sinh²ρ m ψ̇ = 0 , (4.8)

which forces either m or ψ̇ to be zero, rendering the ansatz inconsistent.
Note that here the AdS radial direction does not depend on the worldsheet time coordinate. The equations of motion for t and ψ give rise to (4.10), where C 1,2 are constants, and there is also the ρ equation. Note that ρ = 0 and ρ′ = 0 is a trivial solution of this equation of motion, although g appears to diverge in this limit. This is, as usual, supplemented by the Virasoro constraints. Here the constraint T ττ + T σσ = 0 simply gives the Hamiltonian, which in turn is consistent with the ρ equation. The other constraint T τσ = 0 gives a nice relation between the constants,

−C 1 + ωC 2 = 0 , (4.11)

so we can choose C 2 = C 1 /ω without loss of generality. With these inputs and the explicit expressions of f and g at hand, we can now expand the ρ equation as ρ = 0 + R(σ), with R << 1, to get the NVE up to first order in R. This is certainly reminiscent of the NVEs we have dealt with in earlier sections, and a representative rational solution can be given in terms of a quantity that determines the trajectory of the string along the σ direction. We note here that if we want to discuss the dynamics of an extended string whose radial direction is time dependent, we can perform a σ ↔ τ exchange in (4.9) to transform it to the 'Dual Spike' solution [94], without any change in the analysis.
Analytical strings on (AdS 5 ) κ
The most important exercise is to study string motion in the full (AdS 5 ) κ space-time in order to reach a concrete conclusion in our case. To study circular strings in this background, we choose a particular simple ansatz; note that we have put both winding numbers equal for simplicity. With this choice, the Lagrangian for the system of strings takes a definite form, and the explicit equation of motion for ζ reads

sinh ρ κ² m² sinh⁵ρ ( 4κ² sin⁵ζ cos ζ sinh⁴ρ − sin(4ζ) ) (4.17)
− 8 ρ̇ cosh ρ ( κ m sin(2ζ) sinh²ρ + ζ̇ ( κ² sin²ζ sinh⁴ρ − 1 ) )
+ 4 ζ̈ sinh ρ ( κ² sin²ζ sinh⁴ρ + 1 ) − 2κ² ζ̇² sin(2ζ) sinh⁵ρ = 0 .
Also, the equation for ρ can be written analogously.
Similarly, the t equation has the form, cosh ρ 2 cosh ρẗ κ 2 + κ 2 (− cosh(2ρ)) + 2 + 8 1 + κ 2 ρ̇ sinh ρṫ = 0. As one can see, these equations are not particularly illuminating. Instead, we note that the energy of this circular string is given by, We can show that the equations of motion in this case also vanish, as in the previous one, for ρ = 0 and ρ̇ = 0, giving us an invariant plane to work with. Using this, we expand the ρ equation of motion with, The ρ NVE can then be written in the following simple form up to first order in R, This is a remarkably simple equation for such a complicated string background. Note that the effect of the singularity surface apparently drops out here, since we stay considerably close to the centre of the AdS space. To find ζ̇ we note that the conserved angular momentum associated with ζ can be written as . (4.23) Near the invariant plane ρ = 0, this can be shown to lead to the expansion, As before, we use the above in conjunction with (4.22) to write down the full form of the NVE. This equation, evidently, is completely solvable in terms of rational functions. A representative solution can be written as, This analysis is strongly supported by the numerical calculations in the next section, where we show that no non-trivial dynamics appear for these AdS strings in the phase space. Note that throughout this section we have seen that the NVEs for string motion in deformed AdS spaces also reduce to weakly deformed harmonic-oscillator problems, indicating the inherent simplicity of the motion itself.
Explicit numerical Hamiltonian analysis
We repeat the analysis of section (3.3), performed there for the sphere, for the present case. We consider only the circular string profile given in (4.15), leaving the analysis of the extended spiky string for future investigation, and focus on these simple strings for our numerical experiment. The total Hamiltonian in this case is, as expected, very complicated. The two conjugate momenta are defined below, and the energy is again identified as As before, we solve Hamilton's equations of motion and show the plots of the phase-space trajectories for various values of κ and the energy E. From the metric (4.1) we also note that there is a singularity surface at ρ = sinh −1 (1/κ). The larger the value of κ, the smaller the accessible range of ρ becomes, and the phase-space plot of the evolution of ρ is correspondingly restricted for each value of E. Keeping this in mind, we show the phase-space trajectories in both the {ρ, p ρ } and {ζ, p ζ } planes. We also set the winding number m = 2 throughout.
At κ = 0 there is no chaos, as expected, and this provides a benchmark for our numerics. We then proceed to analyze other cases with non-vanishing κ.
From these plots we can readily infer that, within the range of values of κ and E considered, the trajectories for AdS always remain ordered (up to numerical error). This is in good agreement with our conclusion from the NVE analysis that, unlike the (S 5 ) κ case, there is no chaotic behaviour for (AdS) κ .
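The numerical procedure used above, integrating Hamilton's equations and recording phase-space crossings, can be sketched for a toy two-degree-of-freedom Hamiltonian. The Hénon-Heiles system below is chosen purely for illustration; the actual (AdS 5 ) κ string Hamiltonian is far more involved:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-degree-of-freedom Hamiltonian (Henon-Heiles), standing in for the
# string Hamiltonian of the text; illustrative only.
lam = 1.0

def rhs(t, y):
    q1, p1, q2, p2 = y
    return [p1,
            -q1 - 2.0 * lam * q1 * q2,
            p2,
            -q2 - lam * (q1**2 - q2**2)]

def poincare_section(y0, t_max=300.0):
    """Collect (q2, p2) each time the trajectory crosses q1 = 0 with p1 > 0."""
    sol = solve_ivp(rhs, (0.0, t_max), y0, max_step=0.01, dense_output=True)
    pts = []
    q1 = sol.y[0]
    for i in range(len(sol.t) - 1):
        if q1[i] < 0 <= q1[i + 1] and sol.y[1][i] > 0:
            # linear interpolation of the crossing time
            s = -q1[i] / (q1[i + 1] - q1[i])
            t_cross = sol.t[i] + s * (sol.t[i + 1] - sol.t[i])
            state = sol.sol(t_cross)
            pts.append((state[2], state[3]))
    return np.array(pts)

# low-energy initial condition: the section points trace out ordered curves
pts = poincare_section([0.0, 0.3, 0.1, 0.0])
```

Scattering these (q2, p2) points for several initial conditions at fixed energy is exactly how the ordered-versus-chaotic verdicts of this section are reached: closed curves signal regular motion, while a diffuse cloud signals chaos.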
Summary and Conclusion
Let us first briefly summarize the paper. In this note we set out on a humble quest: to settle the clash between analytic/numeric integrability techniques and the well-known algebraic formulation for the Yang-Baxter deformed AdS 5 × S 5 case. Although the various methods of studying the classical integrability of an exact string background have been believed to be equivalent, we find evidence suggesting the contrary. Starting by revisiting the calculations provided in [61], we conclude that there is no analytical or numerical evidence of chaotic motion in the string phase space for the (AdS 3 ) κ and (R × S 3 ) κ cases. However, we find that for a rigidly spinning circular string moving in (R × S 5 ) κ , the motion surprisingly becomes chaotic when we turn on a non-zero value of κ. We show, both analytically via perturbations around the classical trajectories and via numerical experiments, that irregular evolution of trajectories indeed occurs in this case. Surprisingly, the case of (AdS 5 ) κ , which has a prominent space-time singularity, shows no evidence of chaotic string motion. This rather startling observation leaves us at a crossroads regarding how the notion of classical integrability in this case should be viewed from different vantage points. If there exists even one dynamical truncation of the string system in which the differential equations are non-integrable, the phase space is genuinely problematic. Non-integrability does not always lead explicitly to chaos, but in our case it is evident in the Poincaré sections. There could well be some added subtlety in the (R × S 5 ) κ case that is not captured by our analysis and that could stabilize the solution against irregular perturbations; however, what that might be eludes us as of now. Another viable point to consider comes from the discussion presented in [65].
There, it was explicitly shown that in the fast-spinning string limit the equations of motion for strings in (R × S 5 ) κ map to those on a complex β-deformed sphere. This background has been shown to be classically non-integrable via analytical/numerical techniques [81]. We speculate that this might have deeper implications than we had previously thought, although in this paper we do not take the fast-spinning limit anywhere.
The general question about different methods of checking classical (non-)integrability, however, persists. This work has been a standalone example scratching the surface of this mystery. One might try to find an answer by exploring other well-known but non-trivial classical string backgrounds. A very useful exercise would be to study the BTZ black hole background. BTZ has been known to be classically integrable [95] for a few years now, but since this background contains event horizons, one might well guess that string motion becomes irregular near these horizons. One could then investigate string motion in the BTZ background using the procedures employed in this paper, which might give more insight into this apparent disparity in discussing string integrability. All of this, however, remains speculation and certainly requires rigorous understanding. We plan to come back to these concerns in the near future. ψ̇ 2 = E 2 (1 + κ 2 sin 2 ψ) − m 2 cos 2 ψ. (5.1) The solution for ψ(τ ) on this invariant plane can then be written in terms of a Jacobi elliptic function, cos ψ(τ ) = sn 1 + κ 2 Eτ | E 2 κ 2 + m 2 E 2 (1 + κ 2 ) .
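The Jacobi-function solution quoted above can be checked directly: with cos ψ = sn(u | k²), u = √(1+κ²) E τ and k² = (E²κ² + m²)/(E²(1+κ²)), the chain rule together with cn² = 1 − sn² gives ψ̇² = (1+κ²)E² dn², which reproduces the right-hand side of (5.1). A minimal numerical check of this identity (the sample values of E, κ, m are chosen only so that k² < 1):

```python
import numpy as np
from scipy.special import ellipj

E, kappa, m = 3.0, 0.5, 1.0   # sample parameters with elliptic modulus k^2 < 1
k2 = (E**2 * kappa**2 + m**2) / (E**2 * (1 + kappa**2))

tau = np.linspace(0.0, 2.0, 200)
u = np.sqrt(1 + kappa**2) * E * tau
sn, cn, dn, _ = ellipj(u, k2)          # Jacobi elliptic functions at parameter k^2

cos_psi = sn                            # proposed solution: cos(psi) = sn(u | k^2)
# chain rule: psi_dot^2 = (1 + kappa^2) E^2 dn^2, since cn^2 = 1 - sn^2
lhs = (1 + kappa**2) * E**2 * dn**2
# right-hand side of (5.1), with sin^2(psi) = 1 - cos^2(psi)
rhs = E**2 * (1 + kappa**2 * (1 - cos_psi**2)) - m**2 * cos_psi**2
assert np.allclose(lhs, rhs)
```

The two sides agree identically, since dn² = 1 − k² sn² turns both expressions into E²(1+κ²) − (E²κ² + m²) sn².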
(5.3) We now perform the change of variable τ → z = cos ψ(τ ). After some involved algebra, the above equation takes the following form.
(5.9) Now, (5.6) is a linear second-order differential equation, and it is in the correct form for applying the Kovacic algorithm [91] to test whether it admits Liouvillian solutions. According to the Kovacic algorithm, the potential must satisfy at least one of the following three necessary (but not sufficient) criteria in order for the differential equation (5.6), and hence (5.4), to admit Liouvillian solutions.
• I: Every pole of V (z) must have order 1 or even order, and the order of V (z) at infinity must be even or greater than 2. The order of V (z) at infinity is computed by subtracting the highest power of z in the numerator from the highest power of z in the denominator.
• II: V (z) must possess at least one pole that is either of odd order greater than 2 or of order exactly 2.
• III: The order of every pole of V (z) must be at most 2, and the order of V (z) at infinity must be at least 2.
These conditions can be shown to be equivalent to the differential Galois group treatment of such equations. One can now check that the potential V (z) in (5.6) violates all three conditions, and hence our normal variational equation for θ does not admit Liouvillian solutions. This is consistent with our numerical results presented in section (3.2). Setting κ = 0, one can easily check that the procedure succeeds, as expected for the undeformed five-sphere.
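The three pole-order criteria can be checked mechanically for any rational potential. A small sketch using a toy rational V(z) (not the paper's potential, which is not reproduced here):

```python
import sympy as sp

z = sp.symbols('z')
# toy rational potential, for illustration only
V = (3 * z**2 + 1) / (z**2 * (z - 1)**2)

num, den = sp.fraction(sp.cancel(V))
# multiplicities of the denominator roots = orders of the poles of V(z)
pole_orders = list(sp.roots(den, z).values())
# order at infinity = deg(denominator) - deg(numerator)
order_at_inf = sp.degree(den, z) - sp.degree(num, z)

# Kovacic's three necessary conditions
case1 = (all(o == 1 or o % 2 == 0 for o in pole_orders)
         and (order_at_inf % 2 == 0 or order_at_inf > 2))
case2 = any(o == 2 or (o % 2 == 1 and o > 2) for o in pole_orders)
case3 = all(o <= 2 for o in pole_orders) and order_at_inf >= 2
```

For this toy V(z) both poles have order 2 and the order at infinity is 2, so all three cases remain open; for the paper's potential the same bookkeeping rules out every case.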
Study on the Synthesis of Nano Zinc Oxide Particles under Supercritical Hydrothermal Conditions
The supercritical hydrothermal synthesis of nanomaterials has gained significant attention due to its straightforward operation and the excellent performance of the resulting products. In this study, the supercritical hydrothermal method was used with Zn(CH3COO)2·2H2O as the precursor and deionized water and ethanol as the solvent. Nano-ZnO was synthesized under different reaction temperatures (300~500 °C), reaction times (5~15 min), reaction pressures (22~30 MPa), precursor concentrations (0.1~0.5 mol/L), and ratios of precursor to organic solvent (C2H5OH) (2:1~1:4). The effects of the synthesis conditions on the morphology and size of ZnO were studied. It was found that appropriately increasing the hydrothermal temperature and pressure and extending the hydrothermal time are conducive to a more regular morphology and smaller size of the ZnO particles, mainly because the reaction conditions affect the hydrothermal reaction rate. Moreover, the addition of ethanol makes the morphology of nano-ZnO more regular and significantly inhibits agglomeration. Beyond the change in the physical properties of the solvent, this may also be related to the chemical bond established between ethanol and ZnO. The results show that the optimum synthesis conditions for ZnO are 450 °C, 26 MPa, 0.3 mol/L, and 10 min, with a precursor-to-ethanol molar ratio of 1:3.
Introduction
Zinc oxide nanoparticles are high-performance semiconductor inorganic compounds with many excellent properties, such as outstanding optical, electrical, and catalytic performance [1]. Their low cost, environmental friendliness, excellent stability, and antibacterial properties make them widely used in fields such as solar cells [2], optoelectronic materials [2], light-emitting diodes [3], and photocatalysis [4]. Among the many photocatalysts, ZnO has become the best alternative to TiO 2 . Compared with TiO 2 , the electron mobility of ZnO nanoparticles is 10-100 times higher [5]. At the same time, ZnO nanoparticles have higher quantum efficiency, better photocatalytic activity, and greater stability. ZnO nanoparticles have therefore developed into a highly promising optical material in the field of optoelectronic semiconductors and have been widely studied and applied in industry and science [6]. We can conclude that developing a simple and continuous manufacturing method for ZnO nanoparticles is of great significance.
Currently, various ZnO nanoparticle synthesis methods have been studied in order to obtain smaller ZnO nanoparticles. Smaller particles have a larger specific surface area, which gives them better catalytic activity and higher quantum efficiency. The traditional synthesis methods for zinc oxide nanoparticles are the direct precipitation method [7], the sol-gel method [8], the homogeneous precipitation method [9], etc. However, due to their high cost, long reaction times, and large product particles, they are not suitable for the commercial production of ZnO. The supercritical hydrothermal synthesis method is a novel method used for preparing various kinds of nanomaterials [10][11][12]. To reach supercritical hydrothermal conditions, zinc salt precursor compounds are rapidly heated by direct contact with supercritical water, and zinc oxide nanocrystals are formed during this fast heating period [13]. Compared to traditional chemical synthesis methods, supercritical hydrothermal synthesis technology has several advantages. Firstly, it allows a simple, one-step reaction operation, making it quite suitable for industrial applications. Secondly, it is highly efficient and fast, with high reaction rates and crystallinity; during extremely short reaction times, small and well-distributed products are obtained. Thirdly, it offers strong control over the synthesis, allowing the production of different particle sizes and crystal types by simple variation of the synthesis conditions and appropriate solution stoichiometry. Lastly, it has low production costs compared to traditional methods [14]. At present, researchers have successfully synthesized zinc oxide nanorods [15], porous zinc oxide materials [16], and zinc oxide nanospheres and nanoflowers [17] through supercritical hydrothermal synthesis.
Mao et al. [18] synthesized nano-zinc oxide with controllable morphology using the supercritical hydrothermal synthesis method and studied its photocatalytic performance. They elaborated on the process of preparing zinc oxide by the supercritical hydrothermal method, the crystal growth mechanism, and the influence of process parameters on the zinc oxide particles. They concluded that the formation of zinc oxide crystals is mainly governed by the solubility of zinc oxide under the reaction conditions. With lower solubility, the production rate of zinc oxide usually increases, because low solubility means that more zinc oxide can precipitate in supercritical water, providing more reactants to participate in the formation process. Secondly, the formation of zinc oxide is also affected by Ostwald ripening and anisotropic growth during the growth process. Satoshi Ohara et al. [19] achieved continuous production of zinc oxide nanorods through the supercritical hydrothermal synthesis method. They obtained well-shaped, highly crystalline, and pure zinc oxide nanorods at 400 °C and 30 MPa. Their study found that higher-temperature conditions benefit the formation of zinc oxide nanorods: under these conditions, the decrease in solubility facilitates the formation of zinc oxide crystal nuclei, thereby accelerating the formation rate of the nanorods.
Ludmila Motelica et al. [20] compared the properties of ZnO nanoparticles obtained by solvolysis using a series of alcohols: primary (from methanol to 1-hexanol), secondary (2-propanol and 2-butanol), and tertiary (tert-butanol). The results show that ZnO nanoparticles can be successfully synthesized in all primary alcohols, but the final ZnO product cannot be obtained using secondary or tertiary alcohols, which emphasizes the importance of the solvent used. The shape of the obtained nano-ZnO particles depends on the alcohol used: ZnO synthesized in methanol is spherical, becomes polyhedral in 1-butanol, and becomes rod-like in 1-hexanol. In addition, Ludmila Motelica et al. [21] synthesized ZnO nanoparticles (NPs) from Zn(CH 3 COO) 2 •2H 2 O in alcohols with different numbers of −OH groups. The effects of different alcohol types (n-butanol, ethylene glycol, and glycerol) on the size, morphology, and properties of the ZnO NPs were studied. The results show that the one-step synthesis of ZnO nanoparticles is suitable for alcohols with only one or two hydroxyl moieties. Moreover, the size of the synthesized ZnO particles also depends on the type of alcohol used. Alcohols with a single −OH group are most suitable for obtaining small nanoparticles; as the number of −OH groups increases, the size of the synthesized ZnO nanoparticles gradually increases.
Different operating parameters have a significant impact on the formation mechanism of nano-ZnO, and more study of the nanoparticle formation mechanism is still required. Studying the effects of different parameters on the supercritical hydrothermal synthesis of nano-ZnO particles facilitates optimizing the synthesis conditions, exploring reaction mechanisms, improving synthesis efficiency, and obtaining nanoparticles with specific morphologies and properties. Furthermore, the solvent effect in hydrothermal synthesis plays a significant role in the formation of zinc oxide nanorods. The solvent effect refers to the influence of the solvent on reaction rates, equilibrium constants, and reaction mechanisms. When a solvent is added to the reaction process, it affects the growth rate of the zinc oxide particles by modifying the solubility of intermediate products, the polarity and viscosity of the reaction medium, and the interactions between the reaction medium and the reactants.
In this study, nano-ZnO powders were prepared using supercritical hydrothermal synthesis technology. The effects of the reaction temperature, pressure, reaction time, precursor concentration, and ethanol content on the particle size, crystallinity, and morphology of the products were studied using the controlled variable method. The influence of the reaction parameters on the ZnO formation mechanism, the best synthesis conditions, and the mechanism by which the added organic solvent prevents particle aggregation were determined.
Experimental Procedure
Firstly, 1 mol/L Zn(CH 3 COO) 2 •2H 2 O and NaOH solutions were prepared separately. For each experiment, a certain amount of each solution was taken, with a Zn(CH 3 COO) 2 •2H 2 O/NaOH molar ratio of 1:2. The solutions were mixed evenly by ultrasonic mixing for 15 min in a microreactor (as shown in Figure 1; made of stainless steel 316, with an inner diameter and length of 14 and 80.0 mm, respectively). The microreactor was then properly sealed with a screw-sealed cap. The sealed reactor was placed in a tubular furnace heated to a specified temperature (i.e., the reaction temperature). The self-generated pressure inside the microreactor reached the desired value (i.e., the reaction pressure). After reacting for the predetermined time, the microreactor was quickly removed from the tubular furnace for shock cooling in a water bucket. The reaction products were then collected into a centrifuge tube and centrifuged to obtain the upper suspended liquid and the lower precipitate. The precipitate was washed with ethanol and then centrifuged at least three times. The obtained sample was prepared for characterization after being placed on a glass slide and vacuum-dried for over 12 h.
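Fixing the 1:2 molar ratio of Zn(CH3COO)2·2H2O to NaOH from the 1 mol/L stocks amounts to simple volume arithmetic; a minimal sketch (the target amount of zinc acetate below is hypothetical, not a value from the paper):

```python
def stock_volumes(n_zn_mol, zn_stock=1.0, naoh_stock=1.0, ratio=2.0):
    """Volumes (L) of the two stock solutions for a Zn:NaOH molar ratio of 1:ratio.

    zn_stock and naoh_stock are the stock concentrations in mol/L.
    """
    v_zn = n_zn_mol / zn_stock
    v_naoh = ratio * n_zn_mol / naoh_stock
    return v_zn, v_naoh

# e.g. 3 mmol of zinc acetate requires 3 mL of Zn stock and 6 mL of NaOH stock
v_zn, v_naoh = stock_volumes(0.003)
```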
Material Characterization and Analysis Methods
An X-ray diffraction (XRD) instrument with Cu-Kα radiation was used to check the crystallinity and phase of the obtained powders. The phase composition and purity of the products were analyzed using Jade 9 software, and the crystallite size of ZnO was calculated using Scherrer's Equation (1): Size = Kλ/(FW(S)·cos θ) (1) where Size represents the crystallite size (nm), K is a constant typically set as K = 1, λ is the wavelength of the X-ray (nm), FW(S) is the sample broadening (rad), and θ is the diffraction angle (rad).
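Equation (1) is straightforward to apply; a minimal sketch (the Cu-Kα wavelength of 0.15406 nm is the standard value, while the 2θ position and FWHM below are illustrative inputs, not measured data from this study):

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=1.0):
    """Crystallite size (nm) from Scherrer's equation:
    Size = K * lambda / (FW(S) * cos(theta)), with angles converted to radians."""
    theta = math.radians(two_theta_deg / 2)   # diffraction angle theta = 2theta/2
    fwhm = math.radians(fwhm_deg)             # sample broadening in radians
    return K * wavelength_nm / (fwhm * math.cos(theta))

# e.g. a reflection near the ZnO (101) position, 2-theta = 36.3 degrees,
# with a hypothetical FWHM of 0.25 degrees
size = scherrer_size(36.3, 0.25)
```

With these inputs the estimate comes out in the tens of nanometres, the same range as the crystallite sizes reported below.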
The particle size and morphology of the samples were examined using field emission scanning electron microscopy (FESEM), with an acceleration voltage of 200 V~30 kV and a magnification of 16×~1270 k×. To accurately study the particle size and its distribution, the Nano Measure 1.2 software tool was used to measure the particles in the FESEM micrographs by randomly selecting 100 measurement points. The measured particle sizes from these 100 points were then statistically plotted as a size-distribution bar graph, allowing visual observation of the size distribution. By comparing the calculated crystallite size with the measured particle size, some conclusions can be drawn.
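The 100-point measurement and bar-graph step can be sketched as follows (the sizes here are synthetic stand-ins drawn from a normal distribution, not actual Nano Measure readings):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for 100 particle sizes (nm) read off the FESEM micrographs
sizes = rng.normal(loc=64.0, scale=8.0, size=100)

mean_size = sizes.mean()                      # average particle size (nm)
counts, edges = np.histogram(sizes, bins=8)   # bar-graph style size distribution
```

The `counts` per bin are what the size-distribution bar graphs in Figures 4 and 9 display, and `mean_size` corresponds to the quoted average particle sizes.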
Experimental Conditions for Each Group
This study explored the effects of the reaction temperature, pressure, time, reactant concentration, and the molar ratio of precursor to ethanol on the size, crystallinity, and morphology of the synthesized zinc oxide particles. Based on the controlled variable method, five sets of experiments were designed, as shown in Tables 1-5.
Effect of Reaction Temperature
The reaction pressure was fixed at 26 MPa, the reaction time at 10 min, and the concentration of the Zn(CH3COO)2·2H2O solution at 0.3 mol/L, with no ethanol added. The effect of the reaction temperature on the particle size, morphology, and crystal structure was studied. The XRD patterns of the products are shown in Figure 2. By comparison with the standard cards, the peak positions located at 31.8°, 34.4°, 36.3°, 47.5°, 56.6°, 62.8°, and 67.9° are consistent with hexagonal wurtzite (JCPDS PDF#75-0576). The diffraction peaks are sharp, and no other impurity peaks are observed, indicating that the product is a well-crystallized, high-purity ZnO powder. SEM images of the nano-ZnO synthesized at temperatures ranging from 300 °C to 500 °C are shown in Figure 3. It can be observed that the morphology of the ZnO product changes with increasing temperature. At a reaction temperature of 300 °C, the particles appear clustered into flower-like clusters and fragments (Figure 3a). As the reaction temperature increases to 350 °C, the product gradually transforms into a flower-like structure composed of conical nanoneedles (Figure 3b). When heated to 400 °C, the morphology begins to transform into hexagonal prismatic aggregates, but flower clusters and flaky particles remain (Figure 3c). Subsequently, at 450 °C, the ZnO particles observed under FESEM exhibit a significant reduction in particle size and show a regular polyhedral morphology with clear boundaries (Figure 3d,e).
The change in ZnO morphology is caused by the change in ZnO solubility in supercritical water. The solubility of solutes under supercritical hydrothermal conditions mainly depends on the density of water. In the subcritical region (300-350 °C), since the water has not reached the critical point, its density is higher than that of water in the supercritical state, the ZnO solubility in the reaction medium is higher, and the supersaturation of zinc oxide is lower, leading to a slow crystal growth rate [22]. The synthesized nanoparticles are therefore irregular and uneven, which is further evidence of low crystallinity. However, as the temperature increases into the supercritical region (400-500 °C), the lower ZnO solubility in supercritical water results in a higher supersaturation, promoting crystal growth. The synthesized particles gradually take on a polyhedral shape with clear boundaries and tend to become uniform. This change indicates that a supercritical temperature is conducive to the formation of high-crystallinity crystals.
Since the morphology of the ZnO generated at 300 and 350 °C is irregular, it is difficult to measure its particle size, so only the size distribution of the product at 400-500 °C is shown here, in Figure 4. It shows the size distribution measured by randomly selecting 100 ZnO particles from the FESEM images at each temperature. Through observation and analysis, it was found that when the reaction temperature reaches 450 °C (Figure 4b), the particle size of ZnO is the smallest, with an average size of 63.96 nm. As the temperature increases from 400 °C, the size of the ZnO particles gradually decreases within a certain range. However, when the temperature reaches 500 °C, the size suddenly increases. Figure 5 shows the variation in the size of the ZnO grains synthesized at different temperatures, calculated by Equation (1). The trend is the same as in Figure 4: when the reaction temperature is 400 °C, the ZnO grain size is the smallest, 33.9 nm. When the reaction temperature reaches 500 °C, the crystallite size increases to 50.1 nm. Within a certain range (300-400 °C), the crystallite size of ZnO decreases as the reaction temperature increases. On the one hand, this is because the dielectric constant of water in the supercritical region is extremely low, resulting in low solubility of metal oxides and thereby increasing the formation rate of ZnO crystal nuclei. The increase in the nucleation rate leads to a decrease in the average particle size and crystallite size. On the other hand, according to the Born equation, the reaction rate is inversely proportional to the dielectric constant [10]. The extremely low dielectric constant therefore leads to faster hydrolysis, dehydration, nucleation, and growth rates compared to the subcritical region, resulting in a gradual decrease in the average particle size; this also improves the yield of ZnO particles to a certain extent. After the temperature reaches 400 °C, the Ostwald ripening and
collision probability between ZnO particles increase and have a growing influence on particle growth, leading to increased particle aggregation and, subsequently, an increase in the crystallite size and particle size [23,24].
Effect of Reaction Pressure
Under experimental conditions fixed at 450 °C, 10 min, a Zn(CH 3 COO) 2 •2H 2 O solution concentration of 0.3 mol/L, and no ethanol addition, the reaction pressure was varied from 22 to 30 MPa to investigate the effect of pressure on the preparation of the ZnO particles. The XRD spectra of the obtained products can be found in the Supplementary Materials (Figure S1). After matching, it was found that the crystal planes of the samples were highly consistent with the hexagonal wurtzite structure, indicating that the prepared product was high-purity zinc oxide nanoparticles.
Figure 6 shows the FESEM images of nano-ZnO synthesized under different pressure conditions. It can be observed that at 22 MPa, the synthesized ZnO crystals (Figure 6a) are irregular polygonal aggregates with varying particle sizes. As the pressure increases, the ZnO particles gradually become more homogeneous and better dispersed (Figure 6b-d). This is because increasing the pressure from the subcritical to the supercritical region during hydrothermal synthesis increases the reaction rate, promotes nucleation of condensed matter in the reaction system, and makes the diffusion and migration of species in solution more uniform, thus promoting particle nucleation and growth [24][25][26]. As the pressure increases from 24 MPa to 28 MPa, the synthesized ZnO crystals maintain a hexagonal prism shape, but the boundaries become clearer and more evenly distributed, with the average particle size increasing slightly from 70.36 to 84.08 nm (as shown in Figure 7). On further increasing the pressure to 30 MPa, the ZnO crystal size becomes larger, the distribution becomes uneven, and particle aggregation can be seen in the FESEM photograph (Figure 6e). These phenomena are attributed to the slight increase in water density and dielectric constant in the supercritical state [27,28]. A higher density and dielectric constant of the reaction medium weaken the advantages of supercritical water and are adverse to crystal nucleation, resulting in larger and even non-uniform particles [29][30][31]. The variation in crystallite size from 22 MPa to 30 MPa (shown in Figure S2 of the Supplementary Materials) exhibits a trend similar to that of the average particle size, which further supports our deduction.
Effect of Reactant Concentration
Under experimental conditions of 450 °C, 26 MPa, 10 min, and no ethanol addition, different Zn(CH3COO)2•2H2O concentrations (0.1, 0.2, 0.3, 0.4, and 0.5 mol/L) were investigated to explore the effect of the reactant concentration on the preparation of ZnO particles. The prepared particles were characterized by XRD and FESEM, as shown in Figure S3 (Supplementary Materials) and Figure 8, respectively. The XRD spectra confirm the formation of ZnO crystals, with no impurities detected.
Figures 8 and 9 show FESEM images and the particle size distributions of ZnO synthesized with different concentrations of Zn(CH3COO)2•2H2O. The particle size and shape of the ZnO nanoparticles are also affected by the precursor concentration: as the precursor concentration increases, the particle size first decreases and then increases. When the precursor concentration increases from 0.1 mol/L to 0.3 mol/L, the morphology of the ZnO particles gradually changes from irregular polyhedra to uniformly distributed hexagonal prisms, and the particle size decreases from 83.48 nm to 63.96 nm (Figure 8a-c). However, when the precursor concentration increases beyond 0.3 mol/L, the particle morphology becomes irregular again and the particle size increases to 79.94 nm (Figure 8e). This points to the existence of an optimum concentration for producing the smallest ZnO particle size.
According to Sue et al. [32], when the initial reactant concentration is much higher than the solubility of the metal oxide, a decrease in the initial reactant concentration usually results in the formation of smaller particles. When the initial precursor concentration is reduced toward the solubility limit of ZnO, the dissolution and precipitation of ZnO are accelerated. At the same time, according to the classical LaMer theory [33], the initial parent-ion concentration determines the supersaturation of the solution, and supersaturation provides the driving force for nucleation. Nucleation, the transformation of the supercritical precursor into the solid phase, requires this driving force: only when the parent-ion concentration reaches a certain supersaturation can the formation of a new phase be driven. The higher the supersaturation, the greater the phase-transition driving force and, therefore, the higher the nucleation rate. An increasing precursor concentration raises the driving force for nucleation, which becomes explosive and rapidly forms uniform particles; this explains the uniform particle morphology at 0.1-0.3 mol/L. However, with a further increase in precursor concentration, the amount of deposited solute increases and the particle size of nano-ZnO expands. Therefore, when the concentration is greater than 0.3 mol/L, the particle size of nano-ZnO increases gradually [26,34-37]. The particle size change from 0.1 to 0.5 mol/L (Figure S4, Supplementary Materials) provides further support for this conclusion.
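The LaMer-type argument above can be made concrete with classical nucleation theory, in which the steady-state nucleation rate scales as J = A·exp(-ΔG*/kT) with a barrier that falls off as 1/(ln S)^2. The sketch below is a generic illustration, not the authors' model; the interfacial energy, molecular volume, and prefactor are illustrative assumptions chosen only to show how steeply the rate rises with supersaturation S.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_rate(S: float, gamma: float, v_m: float, T: float,
                    prefactor: float = 1e30) -> float:
    """Classical homogeneous nucleation rate J = A * exp(-dG*/kT), where
    dG* = 16*pi*gamma^3*v_m^2 / (3*(kT*ln S)^2).

    S: supersaturation ratio (>1); gamma: interfacial energy (J/m^2);
    v_m: molecular volume (m^3). All parameter values used below are
    illustrative assumptions, not values taken from the paper.
    """
    if S <= 1.0:
        return 0.0  # no driving force below saturation
    dg_star = 16 * math.pi * gamma**3 * v_m**2 / (3 * (K_B * T * math.log(S))**2)
    return prefactor * math.exp(-dg_star / (K_B * T))

# The rate climbs by many orders of magnitude as S increases,
# which is the "explosive" nucleation regime described in the text.
for S in (2, 5, 10):
    print(S, nucleation_rate(S, gamma=0.1, v_m=2.4e-29, T=723.0))
```

The steep dependence on S is why a higher precursor concentration (up to the optimum) produces many small nuclei at once rather than fewer, larger particles.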
Effect of Reaction Time
Under experimental conditions of a temperature of 450 °C, a pressure of 26 MPa, a Zn(CH3COO)2•2H2O solution concentration of 0.3 mol/L, and no ethanol addition, reaction times of 5, 7.5, 10, 12.5, and 15 min were investigated to determine the effect of the reaction time on the preparation of ZnO particles. The XRD spectra of the obtained products can be found in the Supplementary Materials (Figure S5). After matching, the diffraction patterns of the samples were found to be highly consistent with the hexagonal wurtzite structure.
Figure 10 shows FESEM images of ZnO obtained at different reaction times. Between 5 and 10 min, the morphology of the generated ZnO particles is mainly foliate- and needle-like and gradually changes to a regular hexagonal prism shape. At a reaction time of 5 min (Figure 10a), the nano-ZnO is foliated because the reaction time is too short: the reaction is insufficient and the growth of the ZnO crystals is incomplete. The short heating time and uneven heating of the reactor also prevent the reaction conditions from reaching the ideal state. As the reaction time increases (Figure 10b), the leaf-shaped ZnO gradually changes into needle-shaped particles, because the longer reaction time gives the ZnO particles sufficient time to grow; at this stage both radial and axial growth are under way, so most particles are needle-shaped. With a further extension of the reaction time (10 min, Figure 10c), the ZnO nanoneedles have sufficient time to grow, and their axial and radial growth rates tend to balance, finally forming regular, uniformly distributed hexagonal prismatic particles [22,24,38]. The size distribution of ZnO particles prepared with reaction times from 10 to 15 min (Figure 11) also reflects this feature. When the reaction time increases from 10 min to 15 min, the ZnO particle size gradually increases, agglomeration intensifies, and the morphology becomes an irregular polyhedron, which can be explained by Ostwald ripening [39,40]. Moreover, the crystallite sizes of ZnO synthesized at different times, calculated by Equation (1) (Figure S6, Supplementary Materials), show the same trend.
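The crystallite sizes quoted throughout are obtained from XRD peak broadening via Equation (1), which is not reproduced in this excerpt; assuming it is the standard Scherrer relation D = Kλ/(β·cos θ) commonly used for XRD crystallite sizing, a minimal sketch of the calculation looks like the following. The wavelength, peak position, and broadening values are illustrative, not taken from the paper.

```python
import math

def scherrer_size(wavelength_nm: float, fwhm_deg: float, two_theta_deg: float,
                  shape_factor: float = 0.9) -> float:
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    wavelength_nm : X-ray wavelength (Cu K-alpha ~ 0.15406 nm, assumed here)
    fwhm_deg      : peak full width at half maximum, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    shape_factor  : dimensionless Scherrer constant K (~0.9 for near-spherical grains)
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle (half of 2-theta)
    return shape_factor * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers only: a ZnO (101)-like peak near 36.25 deg 2-theta
# with 0.15 deg broadening.
print(round(scherrer_size(0.15406, 0.15, 36.25), 1))  # → 55.7 (nm)
```

Note that Scherrer sizes reflect coherent-diffraction domains, which is why they can track, but differ from, the FESEM particle sizes reported above.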
Effect of the Amount of Ethanol Addition
Under experimental conditions of a temperature of 450 °C, a pressure of 26 MPa, a Zn(CH3COO)2•2H2O solution concentration of 0.3 mol/L, and a reaction time of 10 min, molar ratios of precursor to ethanol of 2:1, 1:1, 1:2, 1:3, and 1:4 were selected to investigate the effect of ethanol addition on the preparation of ZnO particles. The XRD spectra of the obtained products can be found in the Supplementary Materials (Figure S7). After matching, the diffraction patterns of the samples were found to be highly consistent with the hexagonal wurtzite structure.
Figures 12 and 13 show FESEM images and the particle size distributions, respectively, of ZnO synthesized at different ethanol concentrations. The addition of ethanol has little effect on the ZnO grain size: as the ratio of precursor to ethanol increases, the grain size fluctuates around 80 nm. The morphology and dispersion of the particles, however, are strongly affected, with the particles gradually changing from irregular polyhedra and rod-like structures to uniformly dispersed spherical particles. The particle size changes because of chemical bonding between ethanol and ZnO, which occurs through adsorption and chemical interactions. This bonding can occur in two ways: (1) covalent bonding between the positively charged ZnO surface and the dissociated part of R-OH; and (2) bonding of hydroxyl groups on the ZnO surface with the hydroxyl groups of R-OH. The secure attachment of R-OH on the ZnO surface leads to a reduction in particle size [41]. The "dissolution and crystallization" theory has been considered a basic mechanism of the hydrothermal process, and in the dissolution-crystallization process, physical properties of the solvent such as permittivity and surface tension also affect the outcome [41,42]. When there is no ethanol in the solvent, or the ethanol content is low, the surface tension of the solvent is high and the hydroxide precipitate aggregates, so dehydration to ZnO powder takes longer. If ethanol is added, the surface tension of the solvent is reduced, so the hydroxide precipitate is enveloped more quickly by the surrounding solvent, exhibiting better dispersibility and dehydrating more easily to form ZnO powder. In addition, introducing ethanol into water reduces the dielectric constant of the solvent, thereby reducing the solubility of ZnO and increasing its supersaturation, thus increasing the nucleation rate [43,44].
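The claim that ethanol lowers the dielectric constant of the solvent can be illustrated with a crude linear (volume-fraction) mixing rule. The room-temperature bulk values below (ε ≈ 78.4 for water, ε ≈ 24.3 for ethanol) are standard handbook numbers, and the linear rule is only a first-order assumption for illustration, not the model used by the authors; real water-ethanol mixtures deviate from linearity, and both values drop sharply near the critical point of water.

```python
def mixture_permittivity(phi_ethanol: float,
                         eps_water: float = 78.4,
                         eps_ethanol: float = 24.3) -> float:
    """Linear volume-fraction mixing rule:
    eps_mix ≈ phi_e * eps_e + (1 - phi_e) * eps_w.

    phi_ethanol: volume fraction of ethanol (0..1). A rough first-order
    approximation only, used here to show the direction of the effect.
    """
    return phi_ethanol * eps_ethanol + (1 - phi_ethanol) * eps_water

# Adding ethanol lowers the effective permittivity, which (as argued in
# the text) reduces ZnO solubility and raises supersaturation.
for phi in (0.0, 0.2, 0.4):
    print(phi, round(mixture_permittivity(phi), 1))
```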
Figure 15 shows the FT-IR (Fourier-transform infrared) spectra of ZnO NPs synthesized with the addition of different molar ratios of ethanol. The FT-IR curves of the ZnO NPs synthesized with different amounts of ethanol are very similar, and comparison with the infrared spectrum of ordinary ZnO shows that the characteristic peaks of the product are essentially consistent with those of ordinary ZnO. Absorption peaks in the range of 3500-3300 cm−1 correspond to stretching vibrations of hydroxyl (OH) groups. Several weak absorption peaks in the range of 1700-1400 cm−1 may arise from molecular and bending vibrations of hydroxyl coordination compounds in ZnO. The absorption peak between 700 and 400 cm−1, the characteristic absorption of ZnO, corresponds to the bending vibration of Zn-O bonds. The intensities of the absorption peaks differ among the ZnO NPs synthesized with different molar ratios of ethanol. This is because the particle size of the ZnO NPs synthesized with ethanol is small and the proportion of surface atoms is large, so the lattice vibration under infrared irradiation differs from that of ordinarily synthesized ZnO NPs [45-47].
Conclusions and Prospects
ZnO nanoparticles with high dispersion and small particle size were prepared using the supercritical hydrothermal synthesis technique. By adjusting the reaction temperature, pressure, precursor concentration, reaction time, and the ratio of precursor to ethanol, ZnO nanoparticles with different shapes and particle sizes were synthesized. The influence of the reaction conditions on the product was studied by analyzing its morphology, particle size, and degree of dispersion.
The results show that the morphology and particle size of nano-ZnO are strongly affected by the reaction conditions. Within an appropriate range, the particle size of ZnO decreases with increasing temperature and pressure because of the extremely low dielectric constant and density of water in the supercritical state. These physical properties cause a sharp decrease in the solubility of the product in supercritical water, leading to rapid formation.
The influence of the precursor concentration and reaction time on the particle size is mainly determined by nucleation and grain growth. At low precursor concentrations, nucleation is slowed by a lack of driving force, which leads to relatively large particles. If the reaction time is too short, the reaction between precursors is insufficient, and well-formed small particles do not develop.
Additionally, the addition of ethanol plays a key role in improving both size and morphology by establishing chemical bonds with ZnO and inhibiting agglomeration. The morphology and dispersion of ZnO synthesized at every tested ethanol ratio are better than those obtained without ethanol, indicating that an organic co-solvent may outperform pure supercritical water in nanoparticle synthesis. The optimum synthesis conditions for ZnO are therefore 450 °C, 26 MPa, 0.3 mol/L, 10 min, and a precursor-to-ethanol molar ratio of 1:3.
The supercritical hydrothermal synthesis of ZnO nanoparticles is a promising technique. The reaction conditions and parameters have a great influence on the size and morphology of the ZnO nanoparticles. The advantage of supercritical hydrothermal synthesis over other technologies is therefore that the size, morphology, and dispersion of the products can be easily controlled by changing the conditions and parameters or by adding surface modifiers during the reaction.
Figure 2. XRD spectra of nano ZnO synthesized at different temperatures.
Figure 5. Variation in crystallite size of ZnO synthesized from 300 °C to 500 °C.
Figures 12 and 13. FESEM images and particle size distributions of ZnO synthesized at different ethanol concentrations.
Figure 14 shows the dispersion test of ZnO synthesized in three different media under the conditions of 450 °C, 26 MPa, a precursor concentration of 0.3 mol/L, a reaction time of 5 min, and a precursor-to-ethanol molar ratio of 1:3. The ZnO nanoparticles synthesized under these conditions form a uniform suspension in all three media, indicating that they have good dispersion in different media.
Figure 14. The dispersion of ZnO NPs in different media, from left to right: dispersed in water, ethanol, ethylene glycol.
Figure 15. FT-IR spectra of ZnO NPs synthesized with different molar ratios of ethanol.
Table 1. Reaction conditions of Experimental Group 1 without the addition of ethanol.
Table 2. Reaction conditions of Experimental Group 2 without the addition of ethanol.
Table 3. Reaction conditions of Experimental Group 3 without the addition of ethanol.
Table 4. Reaction conditions of Experimental Group 4 without the addition of ethanol.
Table 5. Reaction conditions of Experimental Group 5 with the addition of ethanol.
Experimental study on atomization characteristics and dust-reduction performance of four common types of pressure nozzles in underground coal mines
Pressure nozzles are commonly used in spray-based dust-reduction techniques in underground coal mines. Based on the internal structure, pressure nozzles can be divided into the following types: spiral channel nozzles, tangential flow-guided nozzles, and X-swirl nozzles. To provide better guidance for nozzle selection in coal mine spray dust-reduction systems, we designed comparative experiments to study the atomization characteristics and dust-reduction performance of four nozzles with different internal structures that are commonly used underground in coal mines. The experimental results on the atomization characteristics show that both the tangential flow-guided nozzle and the X-swirl nozzle have high flow coefficients. The atomization angle is largest for the spiral non-porous nozzle and smallest for the X-swirl and spiral porous nozzles, and the spraying range and droplet velocity are inversely proportional to the atomization angle. At low water pressure, the atomization performance of the spiral non-porous nozzle is the best of the four types, whereas at high water pressure the X-swirl nozzle is superior and produces the smallest atomized droplet size. Based on the dust-reduction experiments on the four types of nozzles and a comprehensive analysis, the X-swirl nozzle is recommended for coal mine sites where the dust-reduction system operates at low water pressure, while at sites with high water pressure the spiral non-porous nozzle is recommended, as it has the lowest water consumption and clear economic advantages.
Introduction
A large amount of dust is generated at coal mine production sites such as mining, tunneling, and transport (Wang et al. 2019; Luo et al. 2017). The health and life safety of coal mine workers who work for long periods in environments with high dust concentrations are seriously threatened (Reed et al. 2018; Zhou et al. 2017). At present, spray-based dust-reduction technology is widely applied in underground coal mines because of its simple assembly, easy operation, and strong applicability (Yang et al. 2019; Wang et al. 2018a). The atomizing nozzle is a key component of the spraying system for dust reduction. According to the atomization principle, commonly used atomizing nozzles can be divided into several categories: pressure nozzles, rotary nozzles, pneumatic nozzles, and ultrasonic nozzles. Among these, the pressure atomizing nozzle is widely used for dust reduction by spraying because of its simple structure and strong adaptability (Wang et al. 2018b, 2020). The dust-reduction performance of a pressure nozzle is closely related to its atomization effect, and the droplet particle size is an important indicator for evaluating that effect. Some researchers have proposed methods based on the maximum entropy model to predict the droplet size distribution (Sellens 1989; Li and Tankin 1992; Dumouchel 2009). Wang and Lefebvre (1987) and Couto et al. (1997) established theoretical equations for the SMD of pressure swirl nozzles based on hypotheses about the breakup thickness of the liquid film. In addition, based on the finite-volume VOF method, there have been many numerical studies of the internal flow field and atomization characteristics of pressure nozzles (Fan et al. 2018; Zhao and He 2017; Chen and Ge 2013). Cheng et al. (2010) and Zhou et al. (2012) experimentally studied the atomization characteristics of the tangential flow-guided pressure nozzle commonly used in coal mines and obtained the relationship between the atomized droplet size and the water pressure. Yi et al. (2018) measured the atomization angle of the X-swirl pressure nozzle and obtained a fitted formula for calculating it. Nie et al. (2017) compared the atomization characteristics of two common pressure nozzles, the spiral channel type and the X-swirl type, and found that the spiral channel pressure nozzle has a large atomization angle but a short range. Because the internal structure of a nozzle has an important influence on its atomization characteristics, Seoksu et al. (2008, 2009) studied the inclination angle and flow angle of the swirl nozzle and concluded that, at large inclination angles, an air core forms and backflow vortices occur during flow atomization. Harshad et al. (2020) selected three solid-cone nozzles with X-type swirl inserts and orifice diameters of 1.65, 1.90, and 2.45 mm to study the discharge coefficient, spray cone angle, mass flux density in the spray, and droplet size distribution, and finally fitted distribution formulas for the spray characteristics of this kind of nozzle.
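The discharge (flow) coefficient discussed in these studies is conventionally defined as the ratio of the measured volumetric flow to the ideal Bernoulli flow through the orifice. A minimal helper for reducing such measurements is sketched below; the function, variable names, and example numbers are our own illustrative assumptions, not values from the cited works.

```python
import math

def discharge_coefficient(q_m3s: float, d_orifice_m: float,
                          dp_pa: float, rho: float = 998.0) -> float:
    """Cd = Q / (A * sqrt(2*dP/rho)) for a pressure-nozzle orifice.

    q_m3s       : measured volumetric flow rate (m^3/s)
    d_orifice_m : orifice diameter (m)
    dp_pa       : pressure drop across the nozzle (Pa)
    rho         : liquid density (kg/m^3), water assumed by default
    """
    area = math.pi * d_orifice_m**2 / 4          # orifice cross-section
    v_ideal = math.sqrt(2 * dp_pa / rho)         # ideal Bernoulli velocity
    return q_m3s / (area * v_ideal)

# Illustrative example: a 1.9 mm orifice at 4 MPa water pressure with a
# measured flow of 6.0 L/min.
cd = discharge_coefficient(6.0 / 1000 / 60, 1.9e-3, 4e6)
print(round(cd, 2))  # → 0.39
```

A swirl nozzle's Cd is well below 1 largely because the air core formed by the swirling flow reduces the effective flow area, which is why the internal-structure comparisons above matter.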
Based on studies of atomization characteristics, some researchers have carried out theoretical analyses of dust reduction by sprays from pressure nozzles. Kou (2005, 2006) established a mathematical model of dust reduction by high-pressure spray and obtained a theoretical equation for the classification efficiency using fluid mechanics and aerosol theories. Cheng et al. (2011) analyzed the dust-reduction mechanism of high-pressure spray, investigated the effect of water pressure on the dust-reduction efficiency, and proposed an equation for the minimum particle size collected by an atomized droplet. Based on the momentum and mass conservation equations of droplets and dust, Tawatchi and Wiwut (2011) established a prediction model for the inertia-interception dust-collection efficiency of water-droplet sprays in open space and verified the accuracy of the model with experimental results. Yu et al. (2018) established a gas-liquid-solid three-phase coupling mathematical model and verified its validity; the model has been applied to predict the dust-reduction efficiency of pressure nozzles in a fully mechanized mining face and gave accurate predictions in application.
The pressure nozzles commonly used for dust reduction by spraying at coal mine production sites can be roughly classified according to their internal structure into the spiral channel nozzle, the tangential flow-guide nozzle, and the X-swirl nozzle. The spiral channel nozzle can be further divided into two types: the spiral porous nozzle and the spiral non-porous nozzle. The spray flows of all these nozzles have solid conical shapes. Due to the differences in internal structure, the atomization characteristics and dust-reduction performance of these nozzles differ, and so do their applicable conditions. At coal mine application sites, the design of the dust-reduction scheme and the selection of the nozzle should depend on the actual application conditions, including the water consumption requirement of the nozzles and the water pressure in the pipe network; by comprehensively considering these conditions, an economical and rational dust-reduction scheme can be obtained. Most existing studies on pressure nozzles have focused on the analysis of single-structure nozzles, and only a few studies have compared the atomization characteristics and dust-reduction performance of several common types of pressure nozzles. As a result, the design of dust-reduction systems and the selection of spray nozzles in coal mine applications remain challenging. In this study, a custom-developed dust-reduction test platform was used to systematically compare the atomization characteristics and dust-reduction performance of the above-mentioned four types of nozzles. The results can provide a reference for the design of dust-reduction schemes by spraying and the selection of nozzles in coal mine applications.
2 Experimental system and scheme
2.1 Selection of nozzles
According to a previous on-site investigation, four pressure nozzles commonly used in coal mine fields were analyzed in this study: the tangential flow-guide nozzle, the X-swirl nozzle, the spiral porous nozzle, and the spiral non-porous nozzle. The tangential flow-guide nozzle has three internal vertical channels at the center of the nozzle. The interior of the X-swirl nozzle is an X-shaped water flow channel. The inside of the spiral porous nozzle consists of a surrounding spiral channel and a vertical channel in the middle, while the spiral non-porous nozzle has no intermediate channel. At coal mine production sites, the water used for dust reduction generally contains impurities; to reduce clogging, the outlet diameter of the nozzles cannot be too small, while to ensure the atomization effect it should not be too large. Based on these considerations, an outlet diameter of 1.2 mm is suitable for nozzles at coal mine production sites (Wang et al. 2015). Therefore, the four pressure nozzles selected in the experiment all had an outlet diameter of 1.2 mm. The pressure nozzles selected in the experiment are shown in Fig. 1.
2.2 Experimental system
An experimental system for dust reduction by spraying is shown in Fig. 2. The system can simulate processes such as dust generation, spraying, and ventilation at coal mine production sites. The experimental system contained a roadway model, a high-pressure water pump, a water tank, a control cabinet, an aerosol generator, a Malvern droplet size analyzer, a particle image velocimetry (PIV) system, and the related pipelines, valves, and measuring instruments. The roadway model consisted of an inlet section, a measurement section, a spraying section, an axial flow fan, and an outlet section. To facilitate data acquisition by the Malvern droplet size analyzer and the PIV system, the main section of the roadway model was made of transparent plexiglass with a thickness of 1 cm.
2.3 Measuring instruments and scheme for atomization characteristics
The atomization characteristics of nozzles include flow rate, atomization angle, range, droplet size, and droplet velocity. The water flow rate and water pressure of the nozzles were measured using an electromagnetic flowmeter (YY-LED15K4C) and a digital pressure gauge (DX-801XB00150), respectively. A high-performance digital camera was used to capture the spraying field, and Image-Pro Plus 6.0 post-processing software was then used to calculate the atomization angle and the range. The Malvern droplet size analyzer was used to measure the droplet size in this experiment. However, the Malvern droplet size analyzer is unable to measure the droplet velocity of the flow field, so the PIV system produced by LaVision, Germany, was used to measure the droplet velocity. Figure 3 shows the equipment used in the atomization characteristics experiments. The Malvern droplet size analyzer is based on the line-measurement principle: the particle size distribution of the droplets along the laser beam line was measured. The area 50 cm in front of the nozzle exit was selected for the particle size data acquisition. The water pressure of the nozzle in this experiment was set to 1.0-8.0 MPa based on the actual conditions of industrial application. The PIV system captured the flow field in the area 30-80 cm in front of the nozzle, with a capture range of 50 cm × 50 cm. In the PIV experiment, the exposure interval dt was set to 300 μs, and the powers of light sources A and B were set to 50% and 45% of the maximum power, respectively. In each PIV test, 20 groups of double-frame photos were obtained from the CCD cameras, and the flow fields of the twenty groups were then analyzed to produce a velocity profile.
High water pressure can cause an excessive concentration of droplets in the downstream flow field, which affects the tracing capability of the particles and reduces the measurement accuracy. Therefore, only three lower water pressures were used in the PIV experiment: 1.0, 2.0, and 3.0 MPa.
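The atomization angle extracted from the spray images can be estimated with simple cone geometry. This is a sketch of the underlying calculation, not the actual Image-Pro Plus workflow: for a measured spray width w at an axial distance L from the orifice, the full cone angle is approximately 2·arctan(w/2L); the numbers below are illustrative.

```python
import math

def atomization_angle_deg(width_m, axial_dist_m):
    """Estimate the full atomization (spray cone) angle from image geometry.

    width_m      -- spray width measured at a given axial distance (m)
    axial_dist_m -- axial distance from the nozzle orifice (m)
    Assumes a straight-sided cone with its apex at the orifice.
    """
    return 2.0 * math.degrees(math.atan(width_m / (2.0 * axial_dist_m)))

# Illustrative: a spray 0.4 m wide at 0.4 m downstream gives a cone of ~53 degrees
print(atomization_angle_deg(0.4, 0.4))
```

Such a geometric estimate is only meaningful near the nozzle, before air drag bends the spray boundary.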
2.4 Measuring instruments and scheme for dust-reduction performance of nozzles
In the dust-reduction performance experiments, the German AG420 aerosol generator was used to generate dust, and compressed air provided by an air compressor was used as the transmission power for the dust. The dust was fed into the roadway from the entrance to simulate dust production in industrial fields. Two explosion-proof dust samplers (FCC-25) were arranged in the measurement section of the model roadway, one before the spraying section and one after it. The dust in these two areas, before and after spraying, was sampled simultaneously under different working conditions. At each measuring point, three continuous measurements were collected to obtain the average value for each working condition. The filter membranes were weighed before and after sampling by an electronic analytical balance, and the total dust mass concentration c_mt and the total dust-reduction efficiency η_t were calculated. The LS13320 laser particle size analyzer was used to analyze the particle size of the collected dust samples, giving the proportions of respirable dust before and after spraying. The mass concentration of respirable dust c_mr and the dust-reduction efficiency for respirable dust η_r were then obtained by combining the proportion of respirable dust with the total dust mass concentration c_mt (Wang et al. 2020). The equipment used for measuring the dust-reduction performance of the nozzles is shown in Fig. 4. The dust-reduction efficiency by spraying using the four nozzles was measured at three water pressures: 2.0, 4.0, and 6.0 MPa. The dust in the experiment was a coal powder with a particle size of less than 106 μm, screened with a standard sieve of more than 150 mesh. The dust generation rate of the AG420 aerosol generator was set to 15 g/min and the delivery pressure to 0.2 MPa.
The sampling duration of the FCC-25 dust sampler was set to 2 min and the sampling flow rate to 15 L/min. Through frequency modulation of the axial flow fan, the air flow velocity in the model roadway was stabilized at 1.0 m/s.
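The mass concentration follows directly from the filter mass gain and the sampled air volume (flow rate × duration). A minimal sketch, using the sampler settings above; the 24 mg mass gain is an illustrative number, not a measured value:

```python
def dust_mass_concentration(mass_gain_mg, flow_l_min=15.0, duration_min=2.0):
    """Dust mass concentration (mg/m^3) from a filter-membrane sample.

    mass_gain_mg -- filter mass gain over the sampling period (mg)
    flow_l_min   -- sampler flow rate (L/min), 15 L/min in this setup
    duration_min -- sampling duration (min), 2 min in this setup
    """
    sampled_volume_m3 = flow_l_min * duration_min / 1000.0  # L -> m^3
    return mass_gain_mg / sampled_volume_m3

# A 24 mg gain over the 30 L sample corresponds to ~800 mg/m^3
print(dust_mass_concentration(24.0))
```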
3 Experimental results and analysis on atomization characteristics of nozzles
3.1 Flow rate of nozzle
Based on the relevant data, the following relationship between the flow rate of a pressure nozzle and the water pressure was obtained (Wang et al. 2018b):

Q = (πd²/4) · C_q · √(2p/ρ) × 10⁻³   (1)
where Q is the flow rate of the nozzle in m³/s, C_q is the flow coefficient, d is the outlet diameter of the nozzle in mm, p is the water pressure in MPa, and ρ is the liquid density in kg/m³. In Eq. (1), d = 1.2 mm and ρ = 1000 kg/m³. The flow rates of the four types of nozzles were measured under different water pressures, and a fitting analysis of the measured data was conducted in SPSS based on the above equation. The flow coefficients of the four types of nozzles are shown in Table 1.
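Eq. (1) can be evaluated directly in code. In the sketch below, the flow coefficient C_q = 0.7 is an illustrative placeholder, not one of the fitted values from Table 1:

```python
import math

def nozzle_flow_rate(c_q, d_mm, p_mpa, rho=1000.0):
    """Flow rate of a pressure nozzle per Eq. (1).

    c_q   -- dimensionless flow coefficient
    d_mm  -- outlet diameter in mm
    p_mpa -- water pressure in MPa
    rho   -- liquid density in kg/m^3
    Returns the flow rate in m^3/s.
    """
    area_mm2 = math.pi * d_mm ** 2 / 4.0  # outlet cross-section in mm^2
    return area_mm2 * c_q * math.sqrt(2.0 * p_mpa / rho) * 1e-3

# Example: the 1.2 mm outlet at 6.0 MPa with an assumed C_q of 0.7
q = nozzle_flow_rate(0.7, 1.2, 6.0)
print(q * 60000)  # m^3/s -> L/min, ≈ 5.2 L/min
```

The ×10⁻³ factor in Eq. (1) absorbs the mm and MPa units, so the function takes the same units as the text.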
According to the data in Table 1, the flow rates of the four types of nozzles all gradually increase as the water pressure increases. At the same time, the flow coefficients of the two types of spiral channel nozzles are small, while the flow coefficients of the tangential flow-guide nozzle and the X-swirl nozzle are similar. For the spiral channel nozzle, the effective flow-through area inside the nozzle is small and the internal resistance is large, so the flow rate is small at the same water pressure and outlet diameter. The spiral porous nozzle has a flow passage in the middle, which greatly reduces the flow resistance, resulting in a larger flow coefficient than that of the spiral non-porous nozzle. Since the internal flow-through areas of the tangential flow-guide nozzle and the X-swirl nozzle are relatively large, their internal resistance is relatively small, resulting in larger flow coefficients than those of the spiral channel nozzles. The dust-reduction efficiency by spraying is closely related to the amount of water: a nozzle with a large flow coefficient delivers more atomized water per unit volume at the same water pressure, which is beneficial for the dust-reduction efficiency.
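The paper fits C_q with SPSS; since Eq. (1) is linear in C_q, an equivalent least-squares fit has a closed form. The sketch below uses synthetic data generated with a known coefficient (the measured flows behind Table 1 are not reproduced here):

```python
import math

def fit_flow_coefficient(pressures_mpa, flows_m3s, d_mm=1.2, rho=1000.0):
    """Least-squares estimate of C_q in Eq. (1) from (p, Q) measurements.

    Because Q is linear in C_q, the fit has the closed form
    C_q = sum(Q_i * x_i) / sum(x_i^2), where x_i is Eq. (1) evaluated with C_q = 1.
    """
    xs = [math.pi * d_mm ** 2 / 4.0 * math.sqrt(2.0 * p / rho) * 1e-3
          for p in pressures_mpa]
    return sum(q * x for q, x in zip(flows_m3s, xs)) / sum(x * x for x in xs)

# Synthetic check: data generated with a known C_q of 0.65 is recovered
ps = [1.0, 2.0, 4.0, 6.0, 8.0]
qs = [0.65 * math.pi * 1.2 ** 2 / 4.0 * math.sqrt(2.0 * p / 1000.0) * 1e-3
      for p in ps]
print(fit_flow_coefficient(ps, qs))  # ≈ 0.65
```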
3.2 Atomization angle and range
The atomization angle and range are two important atomization parameters, which determine the effective area of the nozzle's spraying flow field. The larger the atomization angle, the wider the area covered by the atomized flow; a wider coverage area can reduce the number of installed nozzles, while a larger range enables long-distance dust reduction. Figure 5 shows the atomization angle and range of the four types of nozzles under different water pressures. Figure 5a shows the measured atomization angles. From Fig. 5a, it can be seen that with increasing water pressure, the atomization angle α of all four types of nozzles first increases and then decreases. Figure 6 is a photograph of the spraying field under the corresponding working conditions, which shows the same trend of the atomization angle with water pressure. When the water pressure increases, the flow rate increases accordingly, increasing both the swirling force inside the nozzle and the radial velocity of the jet at the nozzle outlet; as a result, the atomization angle increases with the water pressure. When the water pressure is higher than 6.0 MPa, a negative pressure forms in the center of the rotating droplet flow at the nozzle outlet. The higher the water pressure, the more pronounced the negative-pressure effect, which causes the droplet flow to shrink toward the center and results in a smaller atomization angle.
From Figs. 5a and 6, it can be seen that the spiral non-porous nozzle has the largest atomization angle under the same water pressure and the smallest variation with increasing water pressure; its atomization angle is always maintained at around 56°. For the spiral non-porous nozzle, the spiral channel design improves the swirl intensity at the outlet. In addition, since there is no hole in the center of the spiral non-porous nozzle, the water flow is completely swirled along the spiral passage of the inner wall and then ejected from the nozzle outlet. The water flow swirled along the spiral passage has a large centrifugal force, which results in a large radial velocity at the nozzle outlet and a large atomization angle. For the tangential flow-guide nozzle, the spraying flow is introduced into the nozzle along the tangential direction, which also gives a high swirl intensity and thus a relatively large atomization angle. For the X-swirl nozzle, the X-shaped design of the inner core is less beneficial to the swirl strength than the designs of the tangential flow-guide nozzle and the spiral non-porous nozzle, so its atomization angle is relatively small. The atomization angle of the spiral porous nozzle is relatively small because the flow is not swirled before being ejected from the nozzle. Figure 5b shows the atomization range of the four types of nozzles under different water pressures. It can be seen that the spraying range increases with the water pressure: for pressure nozzles, a higher water pressure gives a higher flow rate and thus a longer range. Among the four types of nozzles, the X-swirl type and the tangential flow-guide type have relatively large ranges under the same water pressure.
From the experimental results on the flow characteristics, the differences in the flow coefficients of the four types of nozzles were not significant, indicating that their flow rates were close under the same water pressure. When the flow rates are close, the range of the nozzle is inversely proportional to the atomization angle; thus a nozzle with a larger atomization angle has a relatively smaller range. The X-swirl and tangential flow-guide nozzles have smaller atomization angles and thus relatively large ranges, while the other two types of nozzles have larger spray angles and smaller ranges. From the above analysis, the atomization angle and range of the different types of nozzles vary because of differences in their internal structure. Therefore, in designing dust-reduction schemes by spraying at engineering sites, the nozzle should be selected based on its performance. At industrial sites where the dust distribution is extensive and large-area dust reduction is required, such as the sprays for hydraulic supports at the fully mechanized working face, spiral non-porous nozzles and tangential flow-guide nozzles with large atomization angles can be selected; the large flow coverage of these two types can reduce the number of installed nozzles. In working environments that require long-distance dust reduction, such as dust reduction for shearers and roadheaders, spiral porous or X-swirl nozzles should be selected; both types have a long range and can achieve long-distance dust reduction.
3.3 Droplet size
Figure 7 shows the variation of the droplet size of the four types of nozzles with the water pressure. In Fig. 7, D10, D50, and D90 are characteristic particle diameters: the droplets smaller than each of them account for 10%, 50%, and 90% of the total droplet volume, respectively.
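The characteristic diameters defined above can be read off a cumulative volume distribution by interpolation; the analyzer reports them directly, but the calculation is simple. A sketch with an illustrative distribution (not measured data):

```python
import numpy as np

def characteristic_diameters(sizes_um, cum_volume_frac):
    """D10, D50, D90 from a cumulative volume distribution.

    sizes_um        -- droplet sizes in micrometres, ascending
    cum_volume_frac -- cumulative volume fraction at each size (0..1)
    """
    d10, d50, d90 = np.interp([0.10, 0.50, 0.90], cum_volume_frac, sizes_um)
    return float(d10), float(d50), float(d90)

# Illustrative distribution only
sizes = [0.0, 50.0, 100.0, 150.0, 200.0]
cum = [0.0, 0.20, 0.50, 0.80, 1.00]
print(characteristic_diameters(sizes, cum))  # ≈ (25, 100, 175)
```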
Figure 7 indicates that as the water pressure increases, the characteristic droplet size parameters of all four types of nozzles decrease, and the change in particle size is more significant in the low-pressure zone. As the water pressure increases, the Weber number of the pressure nozzle increases; as a result, the growth rate of the disturbance wave on the water jet surface increases, leading to smaller break-up and atomization particle sizes. Figure 8 shows the droplet size distribution of the X-swirl nozzle under different water pressures. In Fig. 8, the solid red line represents the cumulative percentage of droplet size and the blue columns represent the droplet volume frequency. From the cumulative volume fraction curve in Fig. 8, the characteristic particle diameters D90, D50, and D10 all exhibit the same trends as in Fig. 7. From the frequency histogram of the droplet volume in Fig. 8, when the water pressure is increased, the peak frequency of the droplet volume shifts continuously toward the left, i.e., toward smaller droplet sizes. A comparative analysis of the curves in Fig. 7 reveals that the particle size exhibits different trends with the change of water pressure for the four types of nozzles. Among them, the spiral non-porous nozzle has the smallest variation of particle size: as the water pressure increases from 1.0 to 8.0 MPa, D50 of the spiral non-porous nozzle decreases from 112 to 85 μm, a reduction of less than 25%, while D50 of the other three types of nozzles decreases by about 50% or more. When the water pressure is low (p < 3 MPa), the spiral non-porous nozzle has the best atomization performance.
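One way to see the pressure dependence mentioned above: if the jet velocity is taken from Bernoulli as v = √(2p/ρ), the liquid Weber number We = ρv²d/σ reduces to 2pd/σ, so it grows linearly with the water pressure. This is a simplifying sketch (the surface tension σ = 0.072 N/m for water is an assumed value, and real jet velocities are reduced by losses):

```python
def weber_number(p_mpa, d_mm=1.2, sigma=0.072):
    """Liquid Weber number of the jet, assuming v = sqrt(2p/rho) from Bernoulli.

    We = rho * v^2 * d / sigma then simplifies to 2*p*d/sigma (density cancels).
    p_mpa -- water pressure in MPa
    d_mm  -- outlet diameter in mm
    sigma -- surface tension in N/m (0.072 assumed for water)
    """
    return 2.0 * (p_mpa * 1e6) * (d_mm * 1e-3) / sigma

print(weber_number(1.0))  # ≈ 33333
print(weber_number(8.0))  # eight times larger: stronger surface disturbance growth
```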
In the high-pressure zone, the X-swirl nozzle has an advantage in atomization performance, i.e., the smallest droplet size is obtained from the X-swirl nozzle under the same water pressure. In general, the atomization performance of the tangential flow-guide nozzle is better than that of the spiral porous nozzle: under the same water pressure, D50 of the tangential flow-guide nozzle is smaller than that of the spiral porous nozzle. The differences in the atomization performance of the four types of nozzles are caused by their different internal structures. The unique design of the swirling core in the X-swirl nozzle provides better atomization performance, especially at high water pressure.
3.4 Velocity of droplets
The droplet velocity is an important indicator of the atomization performance of the nozzle. For a given size, a higher relative velocity between dust and droplet is conducive to the deposition of dust, especially respirable dust. The flow field data measured by the PIV system were imported into Tecplot 360EX and analyzed to obtain vector diagrams of the droplet velocity of the four types of nozzles under three water pressures, as shown in Fig. 9.
From the vector diagrams of droplet velocity in Fig. 9, the droplet velocity downstream of the nozzle is continuously attenuated along the axial direction. The liquid is atomized into droplets inside and outside the nozzle, and the droplets move along the nozzle axis at a relatively high initial velocity; due to air resistance, the droplet velocity is continuously attenuated along the nozzle axis. Comparing the droplet velocities under different water pressures, the droplet velocity increases continuously with the water supply pressure: a higher supply pressure gives a higher water flow rate and thus a higher exit velocity of the droplets from the nozzle.
It is also shown in Fig. 9 that, at the same water pressure, there is a large difference in droplet velocity among the four types of nozzles. The smallest droplet velocity is observed for the spiral non-porous nozzle. From the experimental results on the flow characteristics, the spiral non-porous nozzle has the minimum flow coefficient and the smallest water flow rate at the same water pressure, resulting in the smallest exit velocity. At the same time, the spiral non-porous nozzle has the largest atomization angle, so the droplet flow disperses after leaving the nozzle outlet and the velocity attenuates sharply because of the large air resistance on individual droplets. Moreover, under low water pressure its atomization performance is superior and the average droplet size is small, so the small droplets have poor penetration ability and their velocity decays sharply downstream of the nozzle. For these reasons, the downstream velocity of the spiral non-porous nozzle is significantly lower than that of the other three types of nozzles. The flow coefficients and outlet droplet velocities of the other three types of nozzles are similar at the same water pressure. However, the spiral porous nozzle has the smallest atomization angle and its droplet flow is concentrated under low water pressure, resulting in slow decay of the droplet velocity downstream and a high droplet velocity. The X-swirl nozzle has a smaller atomization angle and thus a higher droplet velocity than the tangential flow-guide nozzle. From the above analysis, for similar flow coefficients, the atomization angle is the main factor affecting the droplet velocity downstream of the nozzle. Among the four types of nozzles, the spiral non-porous nozzle has the smallest flow coefficient and the largest atomization angle, and thus the smallest downstream droplet velocity.
The droplet velocity of the other three types of nozzles is in the following order: spiral porous nozzle > X-swirl nozzle > tangential flow-guide nozzle.
4 Experimental results and analysis on dust-reduction performance of nozzles
4.1 Mass concentration and particle size distribution of dust
Table 2 shows the mass concentration of dust in the measurement section before and after spraying for the four types of nozzles under different water pressures. From Table 2, the amount of dust generated by the aerosol generator in the experiment is basically stable, and the mass concentration of dust in the measurement section before spraying is similar across working conditions. It can also be seen from Table 2 that the mass concentration of dust in the measurement section after spraying is significantly lower than that before spraying, which proves that the spraying has a certain dust-reduction effect. Figure 10 shows the particle size distribution in the measurement section before spraying. Since the specifications of the dust used in the experiment are the same, the particle size composition of the dust before spraying is basically the same under different working conditions and the pattern of the particle size distribution is similar. From the cumulative volume curve in Fig. 10, it can also be seen that the proportion of respirable dust in the coal powder used in the experiment is about 20%. Figure 11 shows the dust particle size distribution in the measurement section after spraying. Although the particle size distribution of the dust before spraying is similar under different working conditions, there is a significant difference in the particle size distribution after spraying, which is mainly caused by the difference in the dust-reduction efficiency under each working condition. For the same nozzle, the particle size distribution of the dust varies under different water supply pressures. In general, as the water pressure increases, the dips of the volume frequency in the size distribution curve after spraying move toward the left, i.e., toward smaller particle diameters.
The position of the dips represents the particle diameters with high classification efficiency. As the water pressure increases, the droplet size decreases while the classification efficiency for small dust particles increases correspondingly, causing the dips of the volume frequency to move toward the left. It can also be seen from Fig. 11 that, under the same water pressure, the particle size distribution of the dust after spraying differs among the four nozzles, which is mainly caused by the difference in their dust-reduction performance.
4.2 Dust-reduction efficiency by spraying
According to the dust mass concentrations in the measurement section before and after spraying in Table 2, the dust-reduction efficiency by spraying can be calculated for each working condition; this is the dust-reduction efficiency for the total dust. At the same time, the proportion of respirable dust in the measurement section before and after spraying under each working condition can be obtained from the measured particle size distributions. Combining the proportion of respirable dust with the dust-reduction efficiency for the total dust, the dust-reduction efficiency for the respirable dust can also be obtained. The dust-reduction efficiencies for both total dust and respirable dust of the four types of nozzles under different water pressures are shown in Fig. 12.
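The two efficiencies described above follow from the measured concentrations and respirable fractions. A minimal sketch; the input values below are illustrative, not data from Table 2:

```python
def dust_reduction_efficiencies(c_before, c_after, f_resp_before, f_resp_after):
    """Total and respirable dust-reduction efficiencies.

    c_before, c_after           -- total dust mass concentrations (mg/m^3)
    f_resp_before, f_resp_after -- respirable-dust mass fractions (0..1)
    Respirable efficiency compares respirable concentrations c * f
    before and after spraying.
    """
    eta_total = 1.0 - c_after / c_before
    eta_resp = 1.0 - (c_after * f_resp_after) / (c_before * f_resp_before)
    return eta_total, eta_resp

# Illustrative values only
print(dust_reduction_efficiencies(800.0, 200.0, 0.20, 0.25))  # (0.75, 0.6875)
```

Note that the respirable fraction can rise after spraying even while its absolute concentration falls, because coarse dust is captured more readily.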
Comparing the dust-reduction efficiencies in Fig. 12, it is found that the dust-reduction efficiency of the same nozzle increases with the water pressure; however, once the water pressure reaches 4.0 MPa, there is no significant further change in the dust-reduction efficiency. In addition, as the water pressure increases, the dust-reduction efficiency for the respirable dust changes more markedly than that for the total dust. With increasing water pressure, the water flow rate of the nozzle increases continuously and the volume concentration of droplets in the roadway increases, so the probability of droplet-dust collision and capture improves. Meanwhile, the increase in water pressure raises the droplet velocity and reduces the droplet size, both of which are beneficial for the dust-reduction efficiency, especially for respirable dust. When the water pressure increases beyond a certain value, the volume concentration of droplets in the roadway is already large enough while the droplet size decreases only slightly; as a result, the dust-reduction efficiency for both total dust and respirable dust changes little with further increases in water pressure. Also shown in Fig. 12, when the water pressure is low (p = 2.0 MPa), the X-swirl nozzle has a large flow coefficient, only slightly lower than that of the spiral porous nozzle, together with a smaller droplet size and a higher droplet velocity. Under the combined effect of these factors, the X-swirl nozzle has the highest dust-reduction efficiency for both the total dust and the respirable dust among the four types of nozzles.
Although the spiral porous nozzle has the highest flow coefficient, its droplet size at a water pressure of 2.0 MPa is significantly larger than that of the other three nozzles, resulting in low dust-reduction efficiency, especially a much lower efficiency for the respirable dust than the other three types of nozzles. Although the spiral non-porous nozzle has the smallest droplet size, its flow rate and droplet velocity are not superior; as a result, its dust-reduction efficiency is in second place. Therefore, at coal mine application sites where the water pressure of the dust-reduction pipe network is limited, the spiral porous nozzle should be avoided because its dust-reduction efficiency is not guaranteed and its water consumption is large. From the viewpoint of dust-reduction efficiency, the X-swirl nozzle is preferred, because high dust-reduction efficiency for both the total dust and the respirable dust can be obtained at the same water pressure.
From Fig. 12, it can also be seen that with increasing water pressure, the dust-reduction efficiencies of the four types of nozzles become more and more similar. For example, when p = 6.0 MPa, the difference in the dust-reduction efficiency of the four types of nozzles is less than 2% for the total dust and less than 4% for the respirable dust. When the water pressure is increased to a certain value, the volume concentration of droplets in the roadway is sufficiently large for all four types of nozzles, resulting in very similar dust-reduction efficiencies for the total dust, while some differences remain in the efficiency for the respirable dust due to differences in droplet size and droplet velocity. Comparing the two types of spiral channel nozzles, the dust-reduction efficiency of the non-porous type is higher than that of the porous type when the water pressure is low. Although the porous nozzle has an advantageous flow rate, its droplet size is much larger than that of the non-porous nozzle at low water pressure, resulting in lower dust-reduction efficiency, especially for the respirable dust. With increasing water pressure, the droplet sizes of the two types become relatively similar, while the porous nozzle has a higher water flow rate and droplet velocity; as a result, when the water pressure reaches 6.0 MPa, the dust-reduction efficiency of the porous nozzle is slightly higher than that of the non-porous nozzle. In order to evaluate the dust-reduction performance and economic benefits of the four types of nozzles under high water pressure, their dust-reduction efficiencies and water flow rates at a water pressure of 6.0 MPa are plotted in Fig. 13. From Fig.
13, when the water pressure is 6.0 MPa, the dust-reduction efficiency for both total dust and respirable dust is similar for all four types of nozzles. However, there is a significant difference in their water flow rates: the spiral non-porous nozzle has the smallest flow rate of 4.17 L/min, while the water flow rates of the other three types of nozzles are higher than 5.0 L/min. From the above analysis, when the water pressure is high, there is no significant difference in dust-reduction efficiency among the four types of nozzles, but the spiral non-porous nozzle consumes much less water than the other three and thus has an obvious economic advantage. Therefore, based on comprehensive consideration, a spiral non-porous nozzle is recommended when the water pressure is high. Of course, as analyzed previously, the atomization angle and range of the four types of nozzles differ; the selection of nozzles should therefore be based on the on-site working conditions, and multiple factors such as range and atomization angle should be taken into account to design the most reasonable spraying scheme for dust reduction.
Conclusions
In this study, four types of nozzles commonly used in underground coal mines, with different internal structures, were selected, and their atomization characteristics were investigated and compared using a Malvern droplet size analyzer, a PIV system, and flow measurement instruments. On this basis, the dust-reduction performance of the four types of nozzles under different water pressures was studied by spraying on a custom-developed dust-reduction experimental platform. The following conclusions can be drawn:

(1) Among the four types of nozzles, both types of spiral channel nozzles have relatively small flow coefficients, with the spiral non-porous type having the smallest of all four. The flow coefficients of the tangential flow-guided nozzle and the X-swirl nozzle are higher and similar to each other.

(2) The atomization angles of all four types of nozzles first increase and then decrease with increasing water pressure. Under the same water pressure, the atomization angle of the spiral non-porous nozzle is the largest and changes little as the water pressure increases, while the X-swirl nozzle and the spiral porous nozzle have smaller atomization angles. The range of a nozzle is inversely related to its atomization angle, so a nozzle with a larger atomization angle has a relatively smaller range.

(3) When the water pressure is low (p < 3 MPa), the atomization performance of the spiral non-porous nozzle is the best among the four types. In the high-pressure zone, however, the droplet size of the spiral non-porous nozzle changes little with increasing water pressure and is significantly larger than that of the other three types. The X-swirl nozzle shows the best atomization performance under high water pressure: under the same water pressure, it produces the smallest droplet size.

(4) When a nozzle has a small atomization angle, the droplet flow is concentrated and the attenuation of the droplet velocity is relatively slow, resulting in a high droplet velocity downstream of the nozzle. Among the four types, the spiral non-porous nozzle has the smallest flow coefficient and the largest atomization angle, so its downstream droplet velocity is the smallest. The droplet velocities of the other three types follow the order: spiral porous nozzle > X-swirl nozzle > tangential flow-guided nozzle.

(5) With increasing water pressure, the dust-reduction efficiencies of the four types of nozzles, for both total dust and respirable dust, become similar. When the water pressure reaches 6 MPa, there is no significant difference in dust-reduction efficiency among the four types, but there is a significant difference in their water flow rates (Fig. 13 plots the dust-reduction efficiency and water flow rate of the four types of nozzles). On balance, the spiral non-porous nozzle is recommended for coal mine application sites with high water pressure; if the water pressure available for dust reduction is low, the X-swirl nozzle is recommended on the basis of dust-reduction efficiency.
Author contributions Data curation, HH and CT; formal analysis, HH and CT; funding acquisition, PW and RL; methodology, PW; project administration, PW; resources, HH; supervision, RL; writing-original draft, HH; writing-review and editing, PW and RL.
Compliance with ethical standards
Conflict of interest The authors declare no conflict of interest.
Ethical standards
The experiments comply with the current laws of the country in which they were performed.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.
Developing Patient Safety Standards for Quality Improvement in the NICUs: A Mixed-Methods Protocol
The neonatal intensive care unit is one of the most accident-prone environments in the health care system. A range of structural and process factors threaten the safety of infants hospitalized in this unit. These threats can be prevented by identifying safety needs and taking the right actions; in this regard, some countries have developed standards. Standards developed on the basis of current knowledge, available resources, and the context in which care is provided determine the requirements for preventing patient injury. Likewise, they can serve as a source for the national development and application of guidelines, protocols, and laws. This study aims to develop patient safety standards for the neonatal intensive care units of the Islamic Republic of Iran. This mixed-methods study will apply the Exploration, Preparation, Implementation, Sustainment framework to develop the standards. The first three phases are the focus of this study; the fourth phase (sustainment) is not considered because its effects would need to be investigated over the long term. In each of these phases, a set of activities takes place. Phase 1 (exploration) is designed on the basis of the World Health Organization model for developing standards. The validity and applicability of the developed standards will be determined in Phase 2 (preparation) and Phase 3 (implementation), respectively.
Discussion
The patient safety standards from this study will be developed on the basis of valid evidence and a comprehensive theoretical view, additionally taking into account parents' roles and the views of interdisciplinary experts in the neonatal intensive care unit. In this regard, determining the minimum requirements for maintaining patient safety and developing evidence-based practice will improve efficiency and effectiveness and contribute to equitable, higher-quality health care delivery. Applying the developed standards should improve patient safety and the quality of health care in the neonatal intensive care units of Iran.
Background
Safety is one of the basic human needs, and patient safety is an essential component of health care quality (1). Since the report "To Err is Human: Building a Safer Health System" was published, patient safety has been considered a significant health approach that has led to several movements worldwide (2). These movements prompted every health care system to work to reduce incidents and errors and to build a safe environment, in addition to providing health care services. The neonatal intensive care unit (NICU) is one of the most accident-prone environments in the health care system because of the provision of special care, the complexity of the equipment, the need for specialized knowledge and skills, and the high vulnerability of infants (3)(4)(5). In this environment, errors occur eight times more often than in others (6). Also, the rate of unexpected incidents is more than 74 incidents per 100 infants (7), and many factors can threaten the safety of hospitalized infants.
Infant safety in the NICU encompasses a wide range of structures and practices of health care professionals, as well as family involvement. Poorly designed care processes, a poorly designed environment, and a lack of facilities and human resources can endanger patient safety (8,9). Also, stressors such as light and noise, infection, sudden endotracheal tube extubation, and the implementation of invasive procedures increase the risk of infant injury and affect growth and neurodevelopmental outcomes (10)(11)(12)(13)(14)(15). Thus, organizational processes and structures should be designed to provide safe care for infants hospitalized in the NICU and to improve the expected outcomes (16).
Investigations of the processes, structures, and expected outcomes in the NICUs of the Islamic Republic of Iran (IRI) have reported low quality of care. They have shown that neonatal nutritional support processes (17) and discharge processes (18) are of low quality in the NICU. Likewise, developmental care is not yet widespread (19). Moreover, the physical space of the units and their equipment need to be standardized to achieve the expected neurodevelopmental outcomes (21).
Expected outcomes such as infant developmental status, time to start oral feeding, breastfeeding, weight gain, length of hospital stay, family satisfaction, and the infant's future cognitive development (22) play a role in assessing the effects of structures and processes and in evaluating the degree to which goals are achieved in the NICU. However, some problems in documenting hospitalized infants' information, such as uncertainty about the validity and reliability of the data, the lack of a supervisory authority over the accuracy of completed information, and the lack of access to information at the patient's subsequent visits, make it difficult to assess the expected outcomes. Identifying safety needs and taking proper, correct actions address these factors in the three areas of structure, process, and outcome (23). Designing and developing evidence-based standards is considered one of the most important aspects of modern management in the health sector. In the IRI, the Ministry of Health and Medical Education (MOHME) has established accreditation programs and has also planned to implement the World Health Organization (WHO) standards for safety-friendly hospitals. However, barriers to implementing standards, insufficient attention to safe care processes, limited resources, the specific characteristics and conditions of each health care center, the need to adapt and update to global conditions and developments, the lack of comprehensive attention to the main actors in standards development (health care professionals, infants, families, and other stakeholders), and the lack of consideration of the differences and critical characteristics of NICUs all increase the need to develop an integrated set of evidence-based standards, focused on these characteristics, to improve the safety of hospitalized infants.
Standards developed on the basis of current knowledge, available resources, and the context in which care is provided determine the requirements for preventing patient injury. They can also serve as a source for the development and national application of guidelines, protocols, and laws. Therefore, this study was designed to develop patient safety standards for the NICU. Using the developed standards may increase the efficiency and effectiveness of structures and processes, improve outputs, facilitate assessment and evaluation, and provide equitable and high-quality services.
Method And Design
This study is a sequential three-phase mixed-methods study approved by the Ethics Committee of Isfahan University of Medical Sciences (IR.MUI.RESEARCH.REC.1399.496). The study applies the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. This model is a prospective framework that identifies outer-context (system-level) and inner-context (service provider and patient organization) factors that may influence the implementation of innovations in a clinical environment (27). The first three phases are the focus of this study; the fourth phase (sustainment) is not considered because its effects would need to be investigated over the long term.
A key component of the EPIS framework, and an essential implementation strategy within this study, is consideration of the organizational relationships between stakeholders and entities. Our study represents this through a community-academic partnership (28) to improve inter-university cooperation and to facilitate the translation of research from the university to the practice field (29). In this regard, the research is planned in the form of a thesis proposal for a Doctor of Philosophy (Ph.D.) in nursing.
The Neonatal Health Department (NHD) of the MOHME proposed the idea for the study, and Isfahan University of Medical Sciences funded it. Also, other stakeholders, including various health care professionals related to the NICU (physicians, nurses, lower-, middle-, and upper-level managers, policymakers, and developers of neonatal clinical guidelines), will participate in various meetings during the study through the interdisciplinary training group affiliated with this department and will discuss the findings. The phases and the activities in each phase are described in the following sections (Table 1).
Phase 1: Exploration
The activities of Phase 1 (exploration) are planned to achieve the first and second aims of the study: identifying the structure and process needs, and developing structure and process patient safety standards and the expected outcomes in the NICU. According to the WHO model (30), this phase includes activities for scoping based on the theoretical model, determining operational definitions, deciding on the standard topics, developing the standard template, peer review, stakeholder review, and developing and drafting the patient safety standards for the NICU.
In Phase 1, for the first to fourth activities, a range of national and international guidelines and standards will be searched for and appraised in scientific databases, domestic and foreign websites, and libraries, using the desired keywords (Table 2). Organizations that may hold patient safety standards and the websites of standards development institutions will also be visited. Limits on publication date (from 2011 to 2021) and language (English and Persian) will be applied. Findings whose full texts are inaccessible, or that are irrelevant to patient safety in the NICU, will be excluded. To determine the theoretical model, decide on the standard topics, and develop the standard template, all evidence and findings will be reviewed and appraised.
The standards development team (research team) will check the completeness of the search and evaluate the literature, databases, and websites. Team members will also agree on the scoping, operational definitions, standard topics, and standard template. The results of the peer review sessions will be reviewed in a meeting with neonatal health care stakeholders (physicians, nurses, managers, and health policymakers in the area of neonates) from all over the country. All participants will be informed and will give their consent to recording of the session. After transcription, all opinions and comments will be reviewed carefully against the objectives, and the important points will be identified. Then, to provide feedback to the standards development team for decision-making, a report of the main findings and recommendations will be prepared and presented (31).
The patient safety standards will be developed on the basis of evidence. National and international clinical guidelines and standards from the last ten years whose full texts are available will be collected and appraised for each standard topic. The initial draft of the patient safety standards for the NICU will then be prepared and validated in Phase 2 (preparation). The standards development team will edit the initial draft of the proposed standards before the second phase.
Phase 2: Preparation
This phase includes two activities: reviewing the initial draft of the standards and developing the final version of the patient safety standards for the NICU. First, a group of experts will validate the draft of the standards. To this end, the RAND/UCLA Appropriateness Method (RAM) will be used (32). Following the instructions for using RAM, 9 to 15 health care professionals from different specialties will be purposefully selected and invited to participate in two rounds (32,33).
The first round of rating is conducted via email. For this purpose, the facilitator (ZSH) will contact each panelist to explain the RAM procedure and clarify any questions. The panelists will then be emailed the draft standards and asked to offer their opinions on the target and users, goal group, statement, and rationale of each standard within one month. They will also assess usefulness, clarity, relevance, and applicability, and rate the appropriateness of the components of each standard on a Likert scale of 1 to 9 (nine being the most appropriate) (32).
Following the RAM guidelines, median scores are calculated and the number of panelists rating outside the median tertile is recorded. The components are classified, and agreed to be valid, on the basis of the median appropriateness rating and the degree of panel agreement (dispersion). Accordingly, components with a median panel score in the top tertile (7-9) without disagreement are classified as "appropriate", those with median ratings in the bottom tertile (1-3) without disagreement as "inappropriate", and those with median scores between 4 and 6, or any median with disagreement, as neither appropriate nor inappropriate but "uncertain". The second round is held face-to-face to allow members to discuss their judgments and to reach a consensus among panelists on the components in the "uncertain" category (32).
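The classification rule described above can be sketched as follows. The disagreement test here is simplified to "ratings fall in both extreme tertiles"; the full RAM definition of disagreement depends on panel size:

```python
# Sketch of the RAND/UCLA appropriateness classification described above.
# Disagreement is simplified to "ratings in both extreme tertiles";
# the full RAM rule is panel-size dependent.
import statistics

def classify(ratings):
    """Classify one standard component from its 9-point panel ratings."""
    med = statistics.median(ratings)
    disagreement = any(r <= 3 for r in ratings) and any(r >= 7 for r in ratings)
    if disagreement:
        return "uncertain"
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"  # median in the 4-6 band

print(classify([7, 8, 9, 7, 8, 9, 8, 7, 9]))  # appropriate
print(classify([1, 2, 2, 3, 1, 2, 3, 2, 1]))  # inappropriate
print(classify([2, 8, 7, 3, 8, 9, 2, 7, 8]))  # uncertain (disagreement)
```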
To develop the final version of the patient safety standards for the NICU, the standards development team will review all the standards and make the required corrections according to the panelists' opinions. The final version then enters the next phase (Phase 3, implementation).
Phase 3: Implementation
Studies have indicated that service providers' perceptions of evidence-based initiatives can prevent or facilitate their acceptance and implementation (34). Thus, this phase examines the feasibility of the standards from the users' point of view in a descriptive design. For this purpose, 43 health care professionals who have at least five years of experience working in the NICU, did not participate in the first and second expert sessions of this research, and are willing to cooperate will be selected by a stratified sampling method.
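A proportional allocation of the 43 professionals across strata could look like the sketch below. The strata and their population sizes are hypothetical, as the protocol does not report them:

```python
# Proportional (largest-remainder) allocation for a stratified sample of
# 43 NICU professionals. Strata and population sizes are hypothetical.
def allocate(total_n, strata):
    pop = sum(strata.values())
    quotas = {s: total_n * n / pop for s, n in strata.items()}
    alloc = {s: int(q) for s, q in quotas.items()}
    # hand out the remaining seats to the largest fractional remainders
    remainders = sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)
    for s in remainders[: total_n - sum(alloc.values())]:
        alloc[s] += 1
    return alloc

strata = {"nurses": 120, "physicians": 40, "managers": 20}  # hypothetical
sample = allocate(43, strata)
print(sample)
```

The largest-remainder step guarantees that the per-stratum counts sum exactly to the target of 43.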
A questionnaire will be used to collect information. It consists of demographic data (age, gender, level of education, field of study, and length of employment) and the 20-item Perceived Characteristics of Intervention Scale (PCIS). This scale measures expert-validated evidence-based interventions from the health care providers' point of view on a 5-point Likert scale, based on ten characteristics including relative advantage, compatibility, complexity, trialability, the potential for reinvention, task issues, nature of knowledge, augmentation-technical support, and risk (34).
After corresponding with the questionnaire's designer, obtaining permission, and receiving the questionnaire along with its user guide, the questionnaire will be translated from English to Persian, and its reliability and validity will be determined. A person fluent in English will then revise the questionnaire. The face validity and content validity of the questionnaire will be determined, and its internal consistency will be measured using Cronbach's alpha coefficient. Data will be analyzed using SPSS-16 software and descriptive statistics. Experts will review the results.
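Cronbach's alpha, mentioned above for the internal consistency of the translated PCIS, can be computed as in this minimal sketch; the response matrix is a toy example, not real data:

```python
# Minimal Cronbach's alpha for checking internal consistency of a scale
# such as the 20-item PCIS. The data below are a toy example.
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, all aligned
    across the same respondents."""
    k = len(items)
    n_resp = len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n_resp)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly correlated items give the maximum alpha of 1.0
perfect = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
print(round(cronbach_alpha(perfect), 3))  # 1.0
```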
Discussion
The development of evidence-based standards and guidelines is an example of knowledge management in the health care system. Policymakers use them to meet the needs of individuals and the community, to evaluate health care services, to improve quality, and to achieve goals. Standards are necessary for health care quality improvement to achieve the best health outcomes (35). Moreover, the need for them has become more apparent with the growth of technologies and evidence in health care and the need to manage current knowledge in light of the available resources and the context in which health care services are provided.
Developing evidence-based standards is a dynamic scientific process that can improve the quality of health care. The relationship between safety and quality is so strong that the provision of high-quality care cannot be separated from safety (36). Therefore, considering the priority of patient safety in the health system and its use as an indicator of quality improvement (9), improving patient safety on the basis of evidence-based standards can play an important role in continuous quality improvement.
Developing patient safety standards defines the minimum requirements and leads to coordinated, integrated efforts by different individuals and organizations to promote safety. If the standards are developed consistently with the nature of the services, the specific characteristics of the population admitted to the NICU, and the partnership of the different stakeholders, they can also create a purposeful system for planning, improvement, and evaluation, and prevent the waste of available resources (37). In addition, they encourage managers and policymakers to do their best for patient safety improvement on the basis of valid, context-appropriate scientific evidence, considering the triangle of availability, quality, and cost.
The significance of, and priority need for, a comprehensive and scientifically valid collection of actions to improve patient safety in NICUs at the national level, together with the suggestion of the Neonatal Health Department (NHD) of the MOHME, led the researchers to design a protocol for developing applicable standards. These standards will be developed systematically, based on an appropriate theoretical view of patient safety, as a comprehensive guide for stakeholders. What makes this study unique, beyond its reliance on valid evidence, is its planning for stakeholder participation from across the country, its use of interdisciplinary research teams and experts, and its attention to parents' roles in infant care. These characteristics can enable the standards to increase the efficiency and effectiveness of structures and processes, improve outcomes, and provide the conditions for moving toward equitable and high-quality health care. They can also facilitate knowledge translation and the development of evidence-based practice. We will obtain informed consent from the study participants and ensure their complete anonymity and their right to withdraw from the study at any point.
List of Abbreviations

EPIS: Exploration, Preparation, Implementation, Sustainment; IRI: Islamic Republic of Iran; MOHME: Ministry of Health and Medical Education; NHD: Neonatal Health Department; NICU: Neonatal intensive care unit; PCIS: Perceived Characteristics of Intervention Scale; RAM: RAND/UCLA Appropriateness Method; WHO: World Health Organization

Ethics approval and consent to participate
Before initiating and recording conversations in the panel sessions, all attendees will be informed and their consent will be obtained.
All information (questionnaires and recorded files) will be coded with a unique identifier number and stored in a secure, password-protected file kept by the investigator (ZSH).
Consent for publication
Not applicable

Availability of data and materials

Not applicable
Competing interests
As the funding body, Isfahan University of Medical Sciences, Isfahan, Iran, has reviewed and approved this protocol (grant agreement number 399464, total budget 93,386,030 Rial). Additionally, the Neonatal Health Department of the Ministry of Health and Medical Education of the Islamic Republic of Iran provides project operational support. The head of the Neonatal Health Department and the manager of its affiliated educational group, "MH", is the project counselor.
Funding
This study protocol is related to the thesis proposal for a Ph.D. in nursing, "Development of patient safety standards in the Neonatal Intensive Care Unit", which has been reviewed and funded by Isfahan University of Medical Sciences (grant agreement number 399464) with a total budget of 93,386,030 Rial (the maximum approved budget for nursing Ph.D. research projects).
Authors' contributions
The present study protocol was a research priority of the NHD of the MOHME in the IRI. ARI is the head of the research team.
ARI, ZSH, SJM, and MH were involved in the study design and developed the framework of the work. ZSH wrote the first draft of this manuscript. ARI, SJM, and MH reviewed and worked on subsequent drafts of the protocol and manuscript. All authors read and approved the final manuscript.
|
Behavior of Calcium Phosphate–Chitosan–Collagen Composite Coating on AISI 304 for Orthopedic Applications
Calcium phosphate/chitosan/collagen composite coatings on AISI 304 stainless steel were investigated. The coatings were realized by galvanic coupling, which proceeds without an external power supply because it is driven by the coupling of two metals with different standard electrochemical potentials. The process consists of the co-deposition of the three components, with the calcium phosphate crystals incorporated into the polymeric composite of chitosan and collagen. Physical-chemical characterizations of the samples were carried out to evaluate morphology and chemical composition. Morphological analyses showed that the surface of the stainless steel is covered by the deposit, which has a very rough surface. XRD, Raman, and FTIR characterizations highlighted the presence of both calcium phosphate compounds and polymers. The coatings undergo a profound variation after aging in simulated body fluid, in both composition and structure. Tests carried out in simulated body fluid to scrutinize the corrosion resistance demonstrated the protective behavior of the coating: in particular, the corrosion potential moved toward higher values with respect to uncoated steel, while the corrosion current density decreased. This good behavior was further confirmed by the very low quantification of the metal ions (practically absent) released in simulated body fluid during aging. Cytotoxicity tests using a pre-osteoblast MC3T3-E1 cell line were also performed, attesting to the biocompatibility of the coating.
Introduction
Despite the technological development and continuous research on materials with high performance, metallic materials always remain the best choice to fabricate orthopedic devices such as screws, pins, dentures, or dental implants [1,2]. Nevertheless, the surface of a metallic implant does not have good osteointegration [3,4]. The modification of metallic surfaces represents one of the most used techniques employed to improve the interaction between human bones and orthopedic implants. Differing from physical treatments that aim to increase surface roughness and ensure good adhesion and cellular differentiation [5,6], the realization of biomimetic coatings could be a viable solution because they improve the interaction with periprosthetic tissues.
In this work, attention was focused on a composite coating of calcium phosphates (CaP), chitosan (CS), and collagen (CL), taking inspiration from the hierarchical structure of bone tissue [7,8]. Bone tissue consists of bone cells and an extracellular matrix constituted of organic and inorganic components. The organic portion of the tissue consists of 90% CL plus other proteins such as osteocalcin, osteonectin, and osteopontin, which confer tensile strength to the bone and support the mineralized matrix [9]. The latter contains mainly Ca and P in the form of hydroxyapatite crystals, which confer mechanical strength, but numerous other elements are also present [10,11]. Hydroxyapatite (HA, Ca10(PO4)6(OH)2) is a ceramic calcium phosphate compound largely used in the orthopedic field for coatings and scaffolds [12]. In addition, HA is already employed in the biomedical field owing to its great biocompatibility and osteoconductivity, which strengthen the connection between bone and implanted devices [13,14]. According to Kim et al., HA coatings deposited via sol-gel enhance osteoblastic activity in vitro because of their excellent crystallinity and roughness, which give a good response with bone cells [15]. In addition to biocompatibility, it is also important to consider that a well-adhered, compact coating can act as a barrier between the metal and the periprosthetic tissues [16][17][18]. As soon as the implant is installed inside the body, corrosion phenomena can occur on the metal surface due to the action of aggressive species such as the chlorides richly present in body fluids. The occurrence of corrosion phenomena causes the release of metal ions or nanoparticles that can produce adverse local tissue reactions around periprosthetic tissues during the post-surgery period [19].
In this context, the use of biopolymers can give added value in terms of performance; in particular, CS has been extensively adopted for composite coatings [20][21][22][23]. CS is a linear polysaccharide first synthesized by C. Rouget in 1859 through the alkaline deacetylation of chitin, a homopolymer of β-1,4-N-acetyl-D-glucosamine. Chitin is extracted predominantly from crab and shrimp exoskeletons and fungi [24][25][26]. The biological activity of CS is crucially associated with its degree of deacetylation and is directly related to the number of amino groups on the hydrocarbon backbone of CS [27,28]. This biopolymer is used in numerous technological applications, such as food and beverage technology, cosmetics, agriculture, wastewater treatment, and pharmaceutics [29,30]. In the last decades, numerous applications of CS have had beneficial repercussions in biomedical fields because of its biocompatibility, biodegradability, and antimicrobial and mucoadhesive properties. Recent studies have addressed bone healing, wound healing, drug delivery, tissue engineering, and biosensors [31][32][33].
As mentioned before, CL is the most plentiful protein in bone tissue. It promotes the adhesion and proliferation of osteoblasts and improves the biocompatibility of the coating [34,35]. This protein has been employed in several studies of composite coatings on biodegradable metals [36,37], titanium alloys [38][39][40], and stainless steel [41,42]. In addition, CL not only has angiogenesis-promoting properties [43,44], but CL fibrils may also be able to chelate calcium ions and thereby act as nucleation sites for CaP, stimulating the development of a coating close to natural bone [45].
The coating proposed in this work is aimed to inhibit corrosion phenomena and, exploiting the characteristics and peculiarities of the three components, increase the biocompatibility of the orthopedic prosthesis to extend its lifetime inside the human body. Another aspect to highlight is the deposition method adopted to coat the metal substrate. In particular, galvanic deposition was used because it is able to realize biocompatible coating [46][47][48][49][50][51]. The distinctive feature of this technique is that it does not require any external power supply. The galvanic contact between the working electrode and the sacrificial anode drives the whole process in an electrochemical cell. In this process, the difference in the electrochemical redox potential of the galvanic couple plays a crucial role in depositing coatings on the metallic substrate [52]. Additionally, galvanic deposition is a scalable and controllable process because it is based on the ratio of exposed areas between the anode and cathode. This consolidated and very versatile technology is also appropriate for producing numerous materials [53][54][55] in the nanostructured form [53][54][55][56][57][58][59][60][61][62][63][64]. In our other previous works, CaP-based, CS-based, and composite coatings were obtained through galvanic deposition on stainless steel [65][66][67][68][69]. In these works, we have demonstrated that, among other things, galvanic deposition is also able to produce coatings with good adhesion on the substrates and a lack of cytotoxicity.
In our previous work, preliminary results on the fabrication of a calcium phosphate-chitosan-collagen composite coating on AISI 304 were reported [70]. Here, the behavior of these coatings was studied in detail. Corrosion tests were carried out in simulated body fluid (SBF) emulating the human body environment. Physical-chemical characterizations of the coatings were carried out to investigate their morphology and chemical composition. Furthermore, the release of metal ions from the coating in SBF was studied. The results show that the obtained composite coatings can slow down the corrosive processes without causing any cytotoxic effects.
Materials
The composite coatings were fabricated on AISI 304 (UNS S30400; 0.025 wt.% C, 18.18 wt.% Cr, 8.03 wt.% Ni, 1.66 wt.% Mn, 0.31 wt.% Si, 0.031 wt.% P, 0.001 wt.% S, Fe at balance) in the form of bars (1.5 cm × 7 cm × 0.3 cm). Zn (sheets of 3 cm × 7 cm × 0.1 cm) was used as a sacrificial anode. Prior to carrying out galvanic deposition, the electrodes were mechanically pretreated. Initially, the metallic surfaces were degreased in an ultrasonic bath in pure acetone for 10 min. Afterward, mechanical polishing with abrasive papers (#150, #300, #800, #1200) was carried out. Finally, ultrasonic washing was conducted in deionized water and acetone three times, each lasting 5 min. After this pretreatment, the surface was delimited with an insulating lacquer to expose an active area of 1.13 cm² for the cathode and 27 cm² for the anode. The cathodic solution was obtained using calcium nitrate tetrahydrate (0.061 M), ammonium dihydrogen phosphate (0.036 M), sodium nitrate (0.1 M), and lactic acid (0.08 M). This solution was prepared at 40 °C under continuous stirring. After the solubilization of all the salts, 5 g L−1 of CS and 0.2 g L−1 of collagen (type I) were added. The anodic solution consisted of sodium chloride (1 M).
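The bath recipe above can be converted into weighed masses. A minimal sketch follows; the molar masses are standard values, and the 1 L volume is an assumption (the text does not state the bath volume):

```python
# Sketch: grams of each reagent needed for the cathodic bath described above.
# Molar masses (g/mol) are standard values; molarities are from the recipe.
MOLAR_MASS = {
    "Ca(NO3)2·4H2O": 236.15,
    "NH4H2PO4": 115.03,
    "NaNO3": 84.99,
    "lactic acid": 90.08,
}
BATH = {  # mol/L, from the text
    "Ca(NO3)2·4H2O": 0.061,
    "NH4H2PO4": 0.036,
    "NaNO3": 0.1,
    "lactic acid": 0.08,
}

def grams_needed(volume_l=1.0):
    """Mass (g) of each reagent for the given bath volume (L, assumed)."""
    return {s: BATH[s] * MOLAR_MASS[s] * volume_l for s in BATH}

for salt, g in grams_needed().items():
    print(f"{salt}: {g:.2f} g")
```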
Galvanic Deposition
Galvanic deposition was carried out in a two-compartment electrochemical cell connected via a saturated potassium chloride salt bridge. A scheme of the apparatus can be found in our previous works [67][68][69][71]. The electrodes were short-circuited through a copper wire. A fresh solution was used for each experiment. The galvanic deposition process was performed at 50 °C for 24 h using a heating chamber with natural convection and an uncontrolled internal atmosphere (Binder, mod. ED56, Tuttlingen, Germany). After deposition, the samples were washed with distilled water and left to air dry before characterization.
Morphology Analysis
The morphology of the coatings was examined using an FEG-ESEM microscope (model: QUANTA 200, FEI, Hillsboro, OR, USA) equipped with an energy dispersive spectroscopy (EDS) probe. EDS was performed in different areas of the sample to investigate its homogeneity. In the text, the average values of deposit composition were reported.
X-ray Diffraction Analysis
The crystallographic structures were studied by X-ray diffraction using a RIGAKU instrument (model: D-MAX 25600 HK, Tokyo, Japan). The analyses were carried out in the 2-theta range from 10° to 60° by means of copper Kα radiation (λ = 1.54 Å; setup conditions: tube voltage 40 kV, current 30 mA, scan speed 4° min−1, sampling 0.01°). The results of X-ray diffraction were compared with the ICDD database [72].
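The peak positions reported later can be turned into lattice d-spacings via Bragg's law, using the Cu Kα wavelength stated above. A minimal sketch:

```python
import math

WAVELENGTH = 1.54  # Å, Cu Kα, as stated in the XRD setup

def d_spacing(two_theta_deg, wavelength=WAVELENGTH, n=1):
    """Bragg's law: n·λ = 2·d·sin(θ); input is the 2-theta angle in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# Main reflections discussed in the Results section
for tt in (11.65, 25.87, 31.74):
    print(f"2θ = {tt:6.2f}° -> d = {d_spacing(tt):.2f} Å")
```

For instance, the brushite peak at 2θ = 11.65° corresponds to d ≈ 7.59 Å, and the main HA peak at 25.87° to d ≈ 3.44 Å.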
Raman Spectroscopy and FT-IR/ATR Analysis
The Raman spectra were obtained using a Renishaw (model: inVia Raman Microscope, Wotton-under-Edge, UK) spectrometer. The excitation was provided by the 532 nm line of a Nd:YAG laser, calibrated against the Raman peak of polycrystalline Si (520 cm−1). The Raman spectra were analyzed via comparison with the RRUFF database. FT-IR/ATR analyses were carried out using an FT-IR/NIR Spectrum 400 spectrophotometer (Perkin-Elmer Inc., Wellesley, MA, USA). The spectra were collected in the range of 4000-500 cm−1.
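As a side note on how Raman shifts relate to the 532 nm excitation line, a shift in cm−1 can be converted into the absolute wavelength of the Stokes-scattered light. A minimal sketch (not part of the authors' procedure):

```python
# Sketch: converting a Raman shift (cm^-1) into the absolute scattered
# wavelength for the 532 nm excitation line used in this setup.
LASER_NM = 532.0

def scattered_wavelength_nm(shift_cm1, laser_nm=LASER_NM):
    """Stokes line position: 1/λ_s = 1/λ_0 − Δν (wavenumbers in cm^-1)."""
    laser_cm1 = 1e7 / laser_nm          # absolute wavenumber of the laser
    return 1e7 / (laser_cm1 - shift_cm1)

# e.g. the HA ν1 phosphate band at 960 cm^-1
print(f"{scattered_wavelength_nm(960):.1f} nm")
```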
ICP-OES Analysis
To quantify the concentration of metal ions released from the samples after 21 days of aging in SBF at 37 ± 1 °C, inductively coupled plasma optical emission spectrometry (ICP-OES, PerkinElmer Optima 2100 DV, Waltham, MA, USA) was also employed. In particular, the concentrations of Fe, Ni, Cr, Ca, and P were quantified by ICP-OES. Prior to the sample analysis, a calibration line was obtained for each element using standard calibration solutions.
Corrosion Tests
The behavior of the coating against corrosion phenomena was studied by immersing the samples in an SBF solution, prepared according to the procedure reported in [66], for a period of 21 days at a temperature of 37 °C. SBF was prepared using MilliQ water (18 MΩ cm) produced with an AQUA Max system (JOUNGLIN, Basic 360 and Ultra 370, Gyeonggi-do, Korea). The MilliQ water was heated to 37 ± 1 °C under stirring at 200 rpm. Then, the salts were added in the order reported in Table S1 (Supplementary Materials). Separately, 11.93 g of HEPES (2-(4-(2-hydroxyethyl)-1-piperazinyl)-ethanesulfonic acid) was dissolved in 100 mL of MilliQ water at 37 °C and afterward mixed into the first solution. The pH was adjusted to 7.40 with 0.8 mL of 1.0 M NaOH. The corrosion tests consisted of open circuit potential (OCP) monitoring, potentiodynamic polarization (PP), and electrochemical impedance spectroscopy (EIS) in a conventional three-electrode cell, with a Pt wire as the counter electrode and an Ag/AgCl (3.0 M) electrode as the reference [65][66][67][68][69]. The corrosion potential (Ecorr) and corrosion current density (icorr) were calculated by extrapolation of the Tafel curves. The polarization measurements were performed with a scan rate of 0.166 mV s−1 in a potential range of ±150 mV with respect to the OCP value. EIS was carried out in the frequency range from 100 kHz to 0.1 Hz with an AC perturbation of 0.010 V. The impedance data were fitted using ZSimpWin software (Ametek, Berwyn, PA, USA) with an equivalent circuit (EC).
Cytotoxicity
For the cytotoxicity tests, the samples (1.5 cm × 3 cm × 0.3 cm) were first sterilized by soaking in a 70% ethanol bath for 24 h with UV light exposure. Each sample was then incubated with Dulbecco's Modified Eagle Medium (DMEM, Sigma Aldrich, St. Louis, MO, USA) at 37 °C for 24 h at a volume-to-surface-area ratio of 5 mL cm−2. Subsequently, the treated media were collected in a 50 mL Falcon tube to carry out the cytotoxicity analyses. MC3T3-E1 pre-osteoblastic cells, purchased from Sigma-Aldrich (ECACC), were cultured in DMEM supplemented with 10% fetal bovine serum, 1% glutamine, and 1% antibiotic at 37 °C in a 5% CO2 atmosphere. Cells were seeded into the wells of a 24-well culture plate at a concentration of 3 × 10³ cells/well and incubated with normal DMEM at 37 °C and 5% CO2. After 24 h, the medium was replaced with the treated medium. Cytotoxicity tests were conducted after 0, 1, 5, and 8 days of culture. Cell viability was assessed with the AlamarBlue cell viability reagent (Invitrogen, Waltham, MA, USA). Each well was incubated for 3 h with 500 µL of AlamarBlue reagent (10×) diluted 1:10 in DMEM. The resulting fluorescence was read on a plate reader at an excitation wavelength of 530/25 nm (peak excitation: 570 nm) and an emission wavelength of 590/35 nm (peak emission: 585 nm). Each experiment reported in this work was repeated at least three times.
The determination of the cell number was carried out with a standard curve, prepared by seeding a known number of cells (from 10³ up to 2 × 10⁵) into wells. After two hours, the wells were incubated at 37 °C with AlamarBlue following the procedure described above. The calibration curve was obtained by plotting the number of seeded cells as a function of the measured fluorescence.
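The calibration procedure above can be sketched as a linear fit that is then inverted to estimate cell counts from new fluorescence readings. The fluorescence values below are made-up placeholders, not the measured data:

```python
import numpy as np

# Sketch of the calibration described above: fit fluorescence vs. seeded cell
# number, then invert the line to estimate cell counts from new readings.
cells = np.array([1e3, 5e3, 1e4, 5e4, 1e5, 2e5])     # seeded cells, from the text
fluorescence = 50.0 + 0.012 * cells                  # hypothetical linear response

slope, intercept = np.polyfit(cells, fluorescence, 1)

def cells_from_fluorescence(f):
    """Invert the calibration line: cell number from a fluorescence reading."""
    return (f - intercept) / slope

print(cells_from_fluorescence(50.0 + 0.012 * 3e3))
```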
Galvanic Deposition
Immediately after the electrodes were short-circuited and immersed in their solutions, a series of reactions occurred at both electrodes. Specifically, in the anodic compartment, the dissolution of zinc occurred according to the following reaction (1):

Zn → Zn2+ + 2e−    (1)

The electrons generated by this anodic reaction moved to the working electrode, which operated as the cathode, where the reduction reactions took place. In particular, the electrogeneration-of-base reactions produce hydroxyl ions on the surface of the cathode. Nitrate ions, water molecules, and the oxygen dissolved in solution were involved according to reactions (2)-(4) [73][74][75][76][77][78]:

NO3− + H2O + 2e− → NO2− + 2OH−    (2)

2H2O + 2e− → H2 + 2OH−    (3)

O2 + 2H2O + 4e− → 4OH−    (4)

Since the formation of the composite coating consists of a co-deposition, the reactions that lead to the deposition of calcium phosphate and of the biopolymers must occur simultaneously. With regard to calcium phosphate, the deposition mechanism is governed by the equilibrium reactions of phosphate ion dissociation. In particular, the increase in pH at the electrode interface due to the electrogeneration-of-base reactions shifts the dissociation equilibrium of H2PO4− toward HPO42− (5):

H2PO4− + OH− → HPO42− + H2O    (5)

The formation of hydrogen phosphate ions leads to the precipitation of brushite (BS) (CaHPO4·2H2O) according to reaction (6):

Ca2+ + HPO42− + 2H2O → CaHPO4·2H2O    (6)

BS is an electrically insulating compound. Therefore, if a compact and uniform coating were deposited, the electrode surface would no longer be exposed to the electrolyte and, consequently, hydroxyl ions would not be produced at the interface. However, in our case, a non-uniform and porous layer of BS was formed, and the electrogeneration-of-base reactions were not hindered.
Consequently, the HPO42− ion continued to dissociate, forming the orthophosphate ion according to equilibrium (7):

HPO42− + OH− ⇌ PO43− + H2O    (7)

As soon as the pH reaches a value above 12, HA precipitation occurs following reaction (8):

10Ca2+ + 6PO43− + 2OH− → Ca10(PO4)6(OH)2    (8)

Regarding the biopolymers, the addition of lactic acid during the preparation of the cathodic solution causes the solubilization of the polymeric chains. This is due to the protonation of the amine groups present in the hydrocarbon backbone. The mechanism of chitosan deposition is likewise driven by the pH increase at the electrode/electrolyte interface. This increase leads to the precipitation of the polymer according to reaction (9):

CS-NH3+ + OH− → CS-NH2 + H2O    (9)

Collagen is characterized by an isoelectric point around a pH of 7.4. According to a study by Ling et al., the pH gradient at the electrode/electrolyte interface plays a key role in composite formation [79]. Basically, the increase in pH causes the generation of calcium phosphate crystals and, simultaneously, collagen fibrils assemble and mineralize near the cathode surface. Although the increase in pH gives the carboxyl groups a negative charge, these groups in fact act as nucleation points for the calcium phosphate crystals [80,81]. In the meantime, collagen fibers are incorporated within the coating. Therefore, this mechanism allows a composite structure to be obtained. According to Wang et al., the presence of chitosan might contribute to changing the isoelectric point of collagen, since interactions are established between the biopolymer chains [82]. In a more recent study, the same mechanism was proposed for the formation of a composite between calcium phosphates and proteins (collagen/BSA) by electrochemical deposition [83].
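The pH dependence of the phosphate speciation underlying reactions (5) and (7) can be sketched with the Henderson-Hasselbalch relation. The pKa values below are standard literature numbers (25 °C), not measurements from this study:

```python
# Sketch of the pH-driven phosphate speciation behind reactions (5) and (7).
# pKa values are standard literature numbers (25 °C), not from this study.
PKA2, PKA3 = 7.21, 12.37   # H2PO4- <-> HPO4^2-  and  HPO4^2- <-> PO4^3-

def fraction_deprotonated(ph, pka):
    """Fraction of the conjugate base in a single acid/base couple."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# At acidic pH, H2PO4- dominates; near pH 12 and above, PO4^3- appears,
# consistent with HA precipitating only once the interfacial pH exceeds ~12.
for ph in (5.0, 9.0, 12.5):
    print(ph, round(fraction_deprotonated(ph, PKA2), 3),
          round(fraction_deprotonated(ph, PKA3), 3))
```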
Morphological Analysis
The SEM images of the composite coating, with and without collagen, before and after aging in SBF are reported in Figure 1. The SEM images reveal that galvanic deposition allows the coating to be deposited on the entire metallic surface exposed to the cathodic solution. In Figure 1a-d, a massive deposition of CaP crystals can be observed after deposition. However, the presence of the biopolymers was not detected, since co-deposition creates an intimate structure between the CaP crystals and the polymeric macromolecules. The addition of CL does not contribute to a substantial modification of the structure. It is interesting to highlight the presence of circular macropores. This peculiarity is attributable to the formation of chitosan in synergy with the hydrogen evolution reaction (HER) [46,84,85] during the deposition. Specifically, the final effect is a porous coating, since the bubbles act as a dynamic template [86,87]. According to Mąkiewicz et al. [88], the high viscosity of the solution promotes the adhesion of hydrogen bubbles to the cathode surface during deposition. This phenomenon creates a barrier at the electrode/electrolyte interface that limits the deposition of the coating in that area. Nevertheless, as soon as a bubble reaches a critical size, it detaches, and deposition begins again in the previously occupied active area. In fact, although the holes appear hollow, their surface is covered by a thin deposited layer, as shown in the inset of Figure 1b. The coating morphology was also characterized after 21 days of aging in SBF. In previous studies [66,68], we observed that this time is sufficient to achieve stable behavior in calcium-phosphate-based coatings. In both cases, the deposit covered the metallic substrate, as shown in Figure 1e-h, with almost the same morphological characteristics described above. From a chemical composition point of view, the EDS data give interesting semi-quantitative information, as shown in Figure 2.
In Table 1, the Ca/P and Ca/Fe atomic ratios are reported. The first is useful to evaluate the coating composition, while the second offers qualitative information concerning its thickness. The data reported in Table 1 indicate that the coatings are stable and are constituted by a mixture of BS (Ca/P = 1) and HA (Ca/P = 1.59-1.86), according to the literature [89]. Fe atoms, coming from the steel substrate, were detected only in the EDS spectra of the as-prepared coatings (Figure 2a). Even though a BS phase was present in the as-deposited coating, after 21 days of aging a total conversion into HA was observed. Furthermore, Figure 2b shows the presence of additional atoms (Cl, K, Na, Mg) due to chloride salts or the formation of other substances incorporated within the coating [66,68]. Kumar et al. [90] demonstrated the transformation of a BS coating into HA during aging in SBF. In particular, they showed that a continuous dynamic process of dissolution/reprecipitation occurs. This is in agreement with the mechanism proposed by Nur et al. [91]. According to these authors, a reversible equilibrium is established between the BS and HA phases in SBF according to reaction (10):

10CaHPO4 + 2OH− ↔ Ca10(PO4)6(OH)2 + 4PO43− + 10H+ (pH < 12)    (10)

The disappearance of Fe in the coating after aging suggests a change in the coating thickness. This is due to the continuous coating dissolution/precipitation process that occurs in SBF, leading, as reported in the literature, to an increase in the coating thickness. In particular, new crystals of calcium phosphate coming from the SBF are formed and incorporated, along with other elements as discussed above, into the coating [92]. The increase in thickness is also confirmed by the XRD patterns, where, after aging, the diffraction peaks of the substrate are practically absent.

Figure 3 shows the XRD patterns of the coated samples. In addition, XRD analysis was also carried out on AISI 304 to emphasize the shielding of the substrate peaks due to the presence of the coatings. In Figure 3a, BS peaks were identified at 2-theta equal to 11.65°, 20.95°, 29.29°, and 30.54°. Furthermore, a peak at 2-theta equal to 25.87° relative to HA was observed. Unfortunately, it was not possible to attest to the presence of the biopolymers in the composite coating. Nevertheless, an increase in the HA peaks was observed in the CaP/CS/CL coating, where the main peak of HA (2-theta = 25.87°) was more intense with respect to the CaP/CS coating. After aging, the peaks of BS disappeared (Figure 3b), while new HA peaks emerged at 2-theta equal to 25.87°, 31.74°, 32.18°, 32.87°, and 34.045°. These results are in line with the equilibrium between the CaP compounds described above [90,91].
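The phase assignment from the EDS atomic ratios in Table 1 can be sketched as a nearest-stoichiometry check; the atomic percentages in the usage line are illustrative, not the measured values:

```python
# Sketch: interpreting EDS atomic percentages as in Table 1. Stoichiometric
# reference ratios: brushite CaHPO4·2H2O -> Ca/P = 1.0; hydroxyapatite
# Ca10(PO4)6(OH)2 -> Ca/P = 10/6 ≈ 1.67.
def ca_p_ratio(ca_at_pct, p_at_pct):
    """Ca/P atomic ratio from EDS atomic percentages."""
    return ca_at_pct / p_at_pct

def closest_phase(ratio):
    """Name of the stoichiometric CaP phase closest to the measured ratio."""
    phases = {"brushite": 1.0, "hydroxyapatite": 10 / 6}
    return min(phases, key=lambda name: abs(phases[name] - ratio))

print(closest_phase(ca_p_ratio(24.0, 15.0)))   # illustrative values, ratio 1.60
```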
Raman Spectroscopy and FT-IR/ATR Analysis
The Raman spectra are shown in Figure 4. For comparison with the composite coating, Raman analysis was also performed on the CaP sample without polymers. Through this comparison, it was possible to see the effect of the presence of the polymeric matrix within the coating structure [93][94][95]. BS is characterized by high splitting, and band shifts can be attributed to the protonated phosphate group. In particular, the stretching of the phosphate groups (ν1 P-O) can be observed at 985 cm−1 and 878 cm−1. The Raman modes at 1081 cm−1 and 1059 cm−1 are related to the stretching of the PO4 group (ν3 P-O). The vibrational modes at 379 cm−1 and 415 cm−1 refer to the bending of the HPO4 group (ν2 O-P-O). A less intense band was observed at 1121 cm−1, which can be attributed to the stretching of the HPO42− group (ν3 P-O). In addition, the bands related to the bending of PO4 (ν4 P-O: 593 cm−1) and to the stretching of the HPO42− group (ν4 O-P-O: 530 cm−1) were observed. In the CaP/CS/CL coating, the typical HA stretching mode (ν1: 960 cm−1) is more intense with respect to the BS ones [94,96,97]. Signals related to chitosan can be traced in the range between 1000 and 1500 cm−1 and are due to the stretching of the -CH2- groups of the polymer [98][99][100]. The presence of collagen was confirmed by the Raman modes at 1289 cm−1 (Amide III), at 1349 cm−1 related to bending (δCH), and at 1447 cm−1 and 2933 cm−1 related to the deformation of -CH2- and -CH3, respectively [101,102]. After 21 days of aging in SBF, the typical HA vibrational mode at 960 cm−1 was noted as the only crystalline phase (Figure 4), in agreement with the XRD patterns previously shown. Typical fluorescence interference, due to the polymer matrix of the coating, was also observed [103]. The absence of collagen peaks is due to the mineralization of collagen during aging in SBF [104][105][106].
The addition of chitosan revealed main absorption bands in the range 1700-1000 cm−1, such as the amide I C=O stretching (1627 cm−1) and N-H bending (1071 cm−1). In the range 3600-3000 cm−1, the typical broad band related to the -OH stretching of the hydroxyl groups of the polymer matrix is present [98,107]. The peaks related to the presence of collagen are the stretching ν(C-OC) at 1076 cm−1 and the bending δ(N-H) at 1198 cm−1 [108].

With regard to the inorganic component of the coating, a definite absorption peak was observed in all samples at 1648 cm−1, related to the bending of the H-O-H bond. The presence of the HA phase in the composites is confirmed by the very intense peak at 1040 cm−1, belonging to the asymmetric stretching of the phosphate group. Both the BS and HA phases are present in both coatings, in agreement with the results obtained by XRD and Raman [109][110][111][112].

The spectrum of the CaP/CS/CL sample aged in SBF solution at 37 °C for 3 weeks was also collected to analyze the effect of aging on the mineral phase of the coating. By comparing the CaP/CS/CL spectra before and after aging, the main changes can be related to a slight decrease in the typical brushite peaks associated with the stretching frequencies (νOH: 3540-3153 cm−1) and bending mode (δOH: 1642 cm−1). Furthermore, an increase in the peaks attributable to the HA phase, in particular at 870 cm−1 (νCO32−), can be observed [113]. This result is consistent with that obtained in our previous work [68].
Corrosion Tests
To scrutinize the protective action of the coating, electrochemical tests were performed in vitro using SBF solution at 37 °C. Each corrosion test involved a first step in which the OCP was monitored for 30 min. This operation is necessary not only for stabilizing the system but also because an abrupt spike in potential could be a symptom of chemical instability of the coating in SBF. In Figure S1 (Supplementary Materials), the results of the OCP measurements are reported. The OCP of uncoated AISI 304 was also added for comparison. Considering the entire aging period, it is possible to highlight that the composite coatings hold higher OCP values than bare steel. Thus, the deposit on the metallic surface ensures a barrier effect. Although irregular and porous morphologies were found in Figure 1, the protective ability of the coating was not affected. In fact, the OCP value remains almost constant, with a minor variation of no more than 10 mV over 30 min. Further confirmation comes from the polarization curves reported in Figure 6a,b. In Table 2, the values of Ecorr and icorr calculated by fitting of the Tafel curves are reported.
Table 2. Ecorr and icorr of the CaP/CS/CL and CaP/CS coatings obtained by extrapolation of the Tafel curves from Figure 6. For comparison, the Ecorr and icorr of the uncoated substrate are also reported. The mean standard deviation was ±2%.
The mean standard deviation was ±2%. In particular, in both coatings, a higher potential was observed during aging compared to the uncoated steel. As for CaP/CS/CL, shown in Figure 6a, all the curves remain higher than uncoated steel and shift toward nobler potentials. The same trend was noticed for both samples, even if the CaP/CS/CL sample revealed more positive values of Ecorr with respect to CaP/CS (Figure 6b). In Figure 6a, it can be observed that, from the beginning of aging in SBF, the polarization curves move within the range of 0-100 mV and the Ecorr values remain positive and higher than bare steel [114]. A slight decrease in Ecorr was observed on the 21st day of aging. In agreement with the literature data, these fluctuations in the icorr value are additional proof of the continuous evolution of the coating in SBF, where the BS/HA equilibrium was established [115,116]. A consequence of this dynamic development is also the change in coating thickness, as discussed above, leading to different corrosion resistance during aging in SBF. For CaP/CS, in Figure 6b, at the end of the 21st day of aging, an increase of approximately 200 mV from the beginning, with a slight decrease in icorr, was attested. From these data, it is possible to conclude that the composite coating CaP/CS/CL is more stable than CaP/CS. It is important to underline that the coatings cannot completely hinder corrosion phenomena, but they are able to decrease the rate of metal dissolution inside the human body.
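The Ecorr and icorr values discussed above are obtained by Tafel extrapolation of the polarization curves. The sketch below is a minimal numerical illustration of the idea, not the authors' fitting procedure: straight lines are fitted to log10|i| versus E on the anodic and cathodic Tafel branches, and their intersection gives Ecorr and icorr. The branch-selection window and the synthetic Tafel slopes are assumptions for illustration only.

```python
import numpy as np

def tafel_extrapolation(E, i, window=0.08):
    """Estimate Ecorr (V) and icorr (A/cm^2) by intersecting straight-line
    fits of log10|i| vs E on the anodic and cathodic Tafel branches."""
    E, i = np.asarray(E, float), np.asarray(i, float)
    e0 = E[np.argmin(np.abs(i))]           # rough Ecorr: potential of minimum |i|
    anodic = E > e0 + window               # points well past Ecorr (anodic branch)
    cathodic = E < e0 - window             # points well before Ecorr (cathodic branch)
    sa, ka = np.polyfit(E[anodic], np.log10(np.abs(i[anodic])), 1)
    sc, kc = np.polyfit(E[cathodic], np.log10(np.abs(i[cathodic])), 1)
    Ecorr = (kc - ka) / (sa - sc)          # intersection of the two fitted lines
    icorr = 10.0 ** (sa * Ecorr + ka)
    return Ecorr, icorr
```

On a synthetic Butler-Volmer-type curve with known Ecorr and icorr, the routine recovers both parameters closely, provided the window excludes the curved region near Ecorr.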
To evaluate the protective characteristics of the coating, EIS characterizations were performed on CaP/CS/CL samples, and Bode and Nyquist plots are reported in Figure 7. Impedance data were fitted using an equivalent circuit Rs(CPE1R1)(CPE2(R2(CPE3R3))), shown in Figure S2 (in Supplementary Materials). This model was proposed by Orazem and Tribollet [117], and the values are reported in Table S2 (in Supplementary Materials). Rs is related to the resistance of the solution. CPE1 and R1 are related to the outer layer of the coating in contact with the solution. CPE2 was inserted to model the capacitance of the inner layer of the coating near the substrate, while R2 gives information concerning the resistance of the pores. CPE3 and R3 describe the double-layer capacitance and the charge transfer resistance, respectively. This model fits systems with an outer layer thicker than the inner one and characterized by few pores. This morphology creates resistances related to mass transport and diffusive phenomena [118,119]. This assumption may be plausible based on the SEM images, where a compact coating in contact with the substrate was found inside the circular macropores. With regard to the bare steel, a simpler Rs(CPE3R3) circuit was used, which takes into account the resistance of the solution, the double-layer capacitance, and the charge transfer resistance. The equivalent circuit fits the system well, with a χ² value of the order of 10⁻⁴. The relative error of each parameter is less than 10%. During aging, the values change, and this is attributable to the continuous evolution of the coating, in line with the above results. The values showed an increase in overall impedance during the 3-week observation.
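The impedance of the Rs(CPE1R1)(CPE2(R2(CPE3R3))) circuit follows directly from series/parallel composition, with the constant phase element defined as Z = 1/(Q(jω)^n). The sketch below is illustrative only, with invented parameter values rather than the fitted values of Table S2; it demonstrates the limiting behaviour: Z → Rs at high frequency and Re(Z) → Rs + R1 + R2 + R3 as ω → 0.

```python
import numpy as np

def z_cpe(Q, n, w):
    """Constant phase element: Z = 1 / (Q * (j*w)**n)."""
    return 1.0 / (Q * (1j * w) ** n)

def z_circuit(w, Rs, Q1, n1, R1, Q2, n2, R2, Q3, n3, R3):
    """Impedance of the Rs(CPE1R1)(CPE2(R2(CPE3R3))) equivalent circuit."""
    par = lambda a, b: a * b / (a + b)        # parallel combination
    z_inner = R2 + par(z_cpe(Q3, n3, w), R3)  # pore resistance + double layer
    return Rs + par(z_cpe(Q1, n1, w), R1) + par(z_cpe(Q2, n2, w), z_inner)
```

At very low frequency every CPE blocks, so the real part tends to the sum of the resistances; at high frequency the CPEs short their branches, leaving only Rs.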
ICP-OES Analysis
A further confirmation of the protective effect of the coating was the quantification of metal ions released (Fe, Ni, Cr) into the SBF solution after 21 days of aging. This quantification was carried out by ICP-OES. As reported in Table 3, the concentration of metal ions is very low, below the thresholds dangerous to human health [120]. In addition, in line with our previous work [68], it can be noted that the concentration of calcium and phosphorus ions changes in SBF after aging. This is due to the dissolution and reprecipitation of CaP compounds in SBF [90,91] and is thus a further confirmation of the results discussed above.

Table 3. Concentration of ions in the SBF solution after 3 weeks of aging. For comparison, the concentration of Ca and P ions in the as-prepared SBF solution was reported (SBF measured). The mean standard deviation was 0.7%. (SBF calculated is the expected concentration determined from the quantity of the salts used to prepare the solution.)
Cytotoxicity
With regard to the cytotoxicity, an in vitro cellular test was carried out. The CaP/CS/CL sample was soaked in a standard medium for 24 h at an established volume-to-surface ratio according to the ISO standard [121]. MC3T3-E1 pre-osteoblastic cells were maintained in culture for seven days with this treated medium. The cellular growth curve is shown in Figure 8.
From the graphs, it can be appreciated that the number of cells increases over time from 3 × 10³ (seeding day) to approximately 4 × 10⁵ (8 days). These data indicate that physiological in vitro growth occurred. Therefore, the in vitro cell cytotoxicity assay revealed the non-cytotoxicity, and consequently the biocompatibility, of the CaP/CS/CL coated samples.
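As a rough back-of-the-envelope check (assuming simple exponential growth, which the measured growth curve only approximates), the reported increase from 3 × 10³ to roughly 4 × 10⁵ cells over 8 days corresponds to a population doubling time of about 1.1 days:

```python
import math

def doubling_time(n0, n1, days):
    """Doubling time td assuming exponential growth N(t) = n0 * 2**(t/td)."""
    return days * math.log(2) / math.log(n1 / n0)

# reported cell counts from the growth curve; exponential growth is assumed
td = doubling_time(3e3, 4e5, 8)   # roughly 1.1 days per doubling
```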
Conclusions
In this work, CaP/CS/CL composite coatings were fabricated on AISI 304 via galvanic deposition. The galvanic coupling between the working electrode and the sacrificial anode permits the co-deposition of CaP and biopolymers, creating a composite coating. SEM images show that the metallic substrate of AISI 304 was thoroughly covered by the deposit. X-ray diffraction reveals that the coating was a mixture of calcium phosphate compounds, in particular brushite and hydroxyapatite, attested by their characteristic diffraction peaks. Nevertheless, only hydroxyapatite was found at the end of the 3-week period of aging in the simulated body fluid, due to the total conversion of brushite into hydroxyapatite. The presence of biopolymers was revealed by Raman spectroscopy and FT-IR/ATR. Corrosion tests were executed for an aging period of 21 days in simulated body fluid at 37 °C. From these tests, it was observed that the corrosion potential moved toward higher values with respect to uncoated steel. Contemporaneously, the corrosion current density decreased, leading to a slow rate of corrosion of the substrate. In line with these results, EIS attested an increase of approximately one order of magnitude in the charge transfer resistance. A further confirmation of the protective action of the coating came from the ICP-OES analyses of the SBF solution after aging, where concentration values well below the threshold limit were found. In addition, the cytotoxicity test, carried out with MC3T3-E1 pre-osteoblastic cells, revealed that the CaP/CS/CL coated samples do not influence normal cellular growth and can be considered not cytotoxic and, consequently, suitable for biomedical applications. Further tests are underway to evaluate whether an increase in collagen concentration modifies the behavior of the coatings.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/polym14235108/s1, Figure S1: OCP curves of: (a) CaP/CS/CL and (b) CaP/CS coatings; Figure S2: Scheme of the Equivalent Circuit. Table S1: Composition of SBF. Table S2: Fitting parameters of impedance measurements of CaP/CS/CL coatings obtained by galvanic deposition during 3 weeks of aging in SBF solution.
Transcriptomic analysis supports collective endometrial cell migration in the pathogenesis of adenomyosis
Research question: Adenomyosis is a common uterine disorder of uncertain causes. Can transcriptomic analyses of the endometrium and myometrium reveal potential mechanisms underlying adenomyosis pathogenesis? Design: Transcriptomic profiles of eutopic endometrium and myometrium from women with and without diffuse adenomyosis and with symptomatic FIGO type 2–5 fibroids in the proliferative phase of the menstrual cycle were assessed using RNA sequencing and bioinformatic analysis. Differentially expressed genes (DEG) and potential pathways were validated by quantitative reverse transcription polymerase chain reaction, immunoblotting and Masson staining, using additional clinical samples. Results: Top biological processes in the endometrium of women with versus without adenomyosis, enriched from DEG, comprised inflammation, extracellular matrix (ECM) organization, collagen degradation and hyaluronan synthesis, which are key in cell migration and cell movement. Top biological processes enriched from DEG in the myometrium of women with versus without adenomyosis revealed ECM organization dysfunction, abnormal sensory pain perception and gamma aminobutyric acid (GABA) synaptic transmission. Dysregulation of prolactin signalling was also enriched in eutopic endometrium and in the myometrium of women with adenomyosis. Conclusions: Overall, our results support the invasive endometrium theory in the pathogenesis of adenomyosis, in which inflammation induces ECM remodelling resulting in a track for subsequent endometrial collective cell migration and onset of adenomyosis. Moreover, abnormal myometrial GABA synaptic transmission may contribute to dysmenorrhoea in women with adenomyosis and is a possible target for novel therapeutic development. Prolactin signalling abnormalities may serve as another opportunity for therapeutic intervention.
INTRODUCTION
Adenomyosis is a common, hormonally driven uterine disorder occurring in 8-27% of reproductive age women (Kissler et al., 2008). It is associated with uterine enlargement, heavy menstrual bleeding (HMB), chronic pelvic pain, infertility and miscarriage (Harada et al., 2016). The pathognomonic feature of adenomyosis is the abnormal, heterotopic location of endometrial epithelial cells and stromal fibroblasts in the myometrium, where they elicit hyperplasia and hypertrophy of surrounding smooth muscle cells (Zhai et al., 2020). The mechanisms underlying how the adenomyotic lesions develop are uncertain, although the endometrial compartment, the myometrial compartment, or both have been suggested as prime contributors.
One hypothesis involves enhanced invasion of the endometrial basalis through an injured or abnormal junctional zone into the myometrium (Zhai et al., 2020), via epithelial-to-mesenchymal transition (EMT) in early disease progression and collective cell migration in later invasion (Garcia-Solares et al., 2018). Junctional zone injury can be iatrogenic, e.g. caused by uterine surgery, or physiologic through microtissue injury and repair (TIAR) after each menstrual cycle (Leyendecker et al., 2009). Notably, adenomyosis lesions have been reported in the myometrium of women who lack functional endometrium, e.g. in those with Mayer-Rokitansky-Kuster-Hauser syndrome (Chun et al., 2013) or Asherman's syndrome, and so other mechanisms may also be operational (Hoo et al., 2016).
Functional abnormalities believed to contribute to the pathogenesis of adenomyosis include increased endometrial cell proliferation, high invasive capacity of endometrial stromal cells, epithelial-to-mesenchymal transition and aberrant TIAR induced by microtrauma and trauma at the endometrial-myometrial interface (Benagiano et al., 2012; Zhai et al., 2020). Eutopic endometrium (lining the uterus) and ectopic endometrium of adenomyosis lesions in the myometrium aberrantly display activation of interleukin 6 (IL-6) and ERK/MAPK signalling, although studies are limited (Xiang et al., 2019). The myometrium also contributes to the pathogenesis and pathophysiology of adenomyosis: increased uterine contractility, induced by overexpression of the oxytocin receptor in women with symptomatic adenomyosis, is associated with dysmenorrhoea, which is common in this disorder (Guo et al., 2013). To date, transcriptomic analyses of the myometrium in the pathogenesis of adenomyosis are lacking.
The aim of the present study was to investigate potential mechanisms underlying the pathogenesis and pathophysiology of adenomyosis, with a focus on the endometrium and myometrium, and to potentially identify druggable targets to control its associated symptoms. To this end, endometrial and myometrial transcriptomic signatures and associated biologic processes and signalling pathways were pursued, with the use of RNA-sequencing, in a well-defined hormonal milieu of women with and without diffuse adenomyosis. These analyses, along with target validation studies, identified biological processes and regulatory networks that support endometrial and myometrial dysfunction in adenomyosis and the theory of collective endometrial cell migration in the pathogenesis of this disorder.
Clinical samples
Endometrium and myometrium of women with adenomyosis and controls without adenomyosis were collected from hysterectomy specimens. Patients with adenomyosis were identified through clinical history and symptoms, and ultrasound, magnetic resonance imaging, or both. Histologic evaluation of hysterectomy specimens confirmed diagnosis, along with International Federation of Gynecology and Obstetrics (FIGO) type 2-5 uterine fibroids. Controls had undergone hysterectomy owing to symptomatic FIGO type 2-5 uterine fibroids, HMB, or both. Although it was not possible to collect myometrial tissue from normal controls, areas near uterine fibroids were avoided using immunohistochemistry (IHC) when selecting uterine tissue for RNA-sequencing, with the aim of minimizing the effect of uterine fibroids on the transcriptome data. Full thickness uterine specimens (including endometrium and myometrium) were collected and stored at −80°C. Endometrium and myometrium were dissected from the frozen full thickness tissue using a surgical blade and away from areas of fibroids. All participants (n = 16 cases; n = 15 controls) were in the proliferative phase of the menstrual cycle, confirmed by endometrial histology (Noyes et al., 1975). Participant clinical characteristics are presented in Supplementary Table 1. All participants were documented as not pregnant and had not received hormonal or gonadotrophin releasing hormone agonist (GnRHa) treatments for at least 3 months before tissue sampling. Of the 31 samples, six cases and five controls were used for RNA sequencing. The other 10 cases and 10 controls were used for validation using quantitative reverse transcription polymerase chain reaction (qRT-PCR). Of these, six cases and six controls were also used to validate protein using western blotting.
The clinical samples were collected from the Human Endometrial Tissue and DNA Bank at the University of California, San Francisco, under an approved human subject's protocol, which was initially approved in November 2010, with continuing review approval annually to date (IRB number 10-02786) and Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University (IRB number 2019122704) under an ongoing protocol, approved initially in 2019, after written informed consent was obtained from all participants.
RNA extraction and sequencing
Total RNA was extracted from the endometrium and myometrium separately in six cases and five controls using the NucleoSpin RNA isolation kit (Macherey-Nagel, Allentown, PA, USA). RNA quality was assessed using a bioanalyzer; RNA integrity numbers (RIN) of all RNA samples were over 7. RNA sequencing library preparation was carried out as described previously (Klohonatz et al., 2019). Briefly, the Illumina TruSeq RNA Library Prep Kit (Illumina, San Diego, CA, USA) was used to prepare the mRNA sequencing library. The quality and concentration of all libraries were analysed with an Agilent Bioanalyzer (Agilent, Santa Clara, CA, USA). The Illumina HiSeq 2500 sequencing system (Illumina, San Diego, CA, USA) was used for mRNA sequencing, and 150-bp paired-end FASTQ read files were generated. The quality of the FASTQ files was tested using FastQC (Ward et al., 2020). A raw count of reads per gene was obtained with STAR (Dobin et al., 2013). The data have been deposited in the NCBI GEO database (GSE190580). An R/Bioconductor package (v1.20.0) was used to assess differential expression between cases and controls. DEG were considered statistically significant when P < 0.05 and the log fold change was over 2.
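The significance criterion used above (P < 0.05 and log fold change over 2) amounts to a simple filter over the differential-expression results table. A minimal sketch follows; the gene names and values are invented for illustration, not taken from this study:

```python
def significant_degs(results, p_cut=0.05, lfc_cut=2.0):
    """Keep genes passing both the P-value and |log fold change| cut-offs.
    `results` is an iterable of (gene, log_fc, p_value) tuples."""
    return [gene for gene, log_fc, p in results
            if p < p_cut and abs(log_fc) > lfc_cut]

# hypothetical example rows, not real data from this study
rows = [("MMP1", 3.1, 0.002),    # up-regulated, significant
        ("IL1B", 2.4, 0.010),    # up-regulated, significant
        ("CD44", 0.4, 0.600),    # unchanged
        ("GALT", -2.6, 0.030)]   # down-regulated, significant
degs = significant_degs(rows)    # → ["MMP1", "IL1B", "GALT"]
```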
Enrichment analysis
DAVID Bioinformatics Resources 6.8 (https://david.ncifcrf.gov/) was used for clustering of DEG. Gene Ontology analysis was used to identify possible molecular functions and to visualize the potential biological translation of DEG. Kyoto Encyclopedia of Genes and Genomes (KEGG) was used to analyse the potential functions of these genes. The R package 'clusterProfiler' was used for Gene Ontology and KEGG pathway enrichment analyses.
Protein-protein interaction and network analysis among differentially expressed genes
Interactions and K-means clustering among DEG of the myometrium in adenomyosis versus controls were analysed using STRING (http://www.string-db.org/). Moreover, the protein-protein interactions (PPI) among DEG of the endometrium were also identified using STRING.
Masson's trichrome stain
Human endometrial tissues were fixed with 4% paraformaldehyde solution for 24 h and embedded in paraffin. The tissue was then cut into 5-μm thick sections and placed on glass slides, which were baked at 60°C for 1 h, routinely dewaxed, rinsed and stained with Masson trichrome staining. Samples were imaged by microscopy (Zeiss Axio Vert. A1, Oberkochen, Germany).
Real-time quantitative polymerase chain reaction
Total RNA from endometrium and myometrium tissues (10 cases and 10 controls) was extracted separately using an Animal Total RNA Isolation Kit (Foregene, Chengdu, China) and then reverse-transcribed into cDNA using PrimeScript RT Master Mix (Takara, Dalian, China) on a Bio-Rad C1000 Touch thermal cycler. The mRNA expression of target genes was detected using real-time quantitative polymerase chain reaction. Results were analysed using the ΔΔCt method. The ratio of a target gene to β-ACTIN expression was calculated and reported as the target mRNA level, as in Wara-aswapati et al. (2007). The primer sequences of targeted genes are presented in Supplementary Table 2.
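The ΔΔCt calculation normalizes each target gene first to the reference gene (β-ACTIN here) and then to the control group. A minimal sketch, with invented Ct values for illustration:

```python
def fold_change_ddct(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2**(-ΔΔCt) method."""
    d_ct_case = ct_target_case - ct_ref_case   # ΔCt in the case sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the control sample
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# hypothetical Ct values: the target amplifies 2 cycles earlier (relative to
# β-ACTIN) in the case sample, i.e. 4-fold higher expression
fold = fold_change_ddct(24.0, 18.0, 26.0, 18.0)   # → 4.0
```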
Statistical analysis
Results are presented as mean ± SEM or SD. Differences between women with versus without adenomyosis were analysed by unpaired Student's t-test using SPSS software (IBM, NY, USA). Statistical significance is shown as *P < 0.05, **P < 0.01, or ***P < 0.001.
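The unpaired Student's t statistic with pooled variance, as used for the group comparisons above, can be written out directly. This is a from-scratch sketch for clarity; in practice a statistics package (SPSS here, or scipy.stats.ttest_ind) would be used, and the P value would then be obtained from the t distribution with n_a + n_b − 2 degrees of freedom.

```python
import math

def unpaired_t(a, b):
    """Two-sample Student's t statistic (equal-variance, pooled)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```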
RESULTS
Endometrium RNA-sequencing-In the transcriptomic analysis of eutopic endometrium, 1014 DEG were identified in the comparison of six women with adenomyosis versus five controls (FIGURE 1A). The clusters identified by DAVID from the DEG focused on wound healing, inflammatory response and DNA binding (FIGURE 1B). Similarly, Gene Ontology enrichment revealed dysfunction of the inflammatory response, extracellular matrix disassembly and cell population proliferation in the endometrium of women with adenomyosis, whereas KEGG identified dysregulated inflammation-related signalling pathways (TNF, IL-17 and NF-kappa B signalling pathways) (FIGURE 1C and 1D).
Network analysis-Protein-protein interaction and network analyses were conducted to investigate the interaction between inflammation and ECM remodelling and to identify key molecules in these biological processes. The inflammatory response shares numerous DEG with ECM remodelling (ECM organization, ECM disassembly, collagen catabolic process and hyaluronan biosynthetic process), as well as with positive regulation of cell migration, in the analysis of endometrium of women with versus without adenomyosis (FIGURE 2A), indicating the close interactions between the inflammatory response and ECM remodelling. Moreover, PPI analysis revealed that inflammatory factors, especially tumour necrosis factor (TNF), interleukin 1 beta (IL1B) and chemokines, correlated with many genes in ECM degradation and cell migration, including matrix metalloproteinases (MMPs) and a disintegrin and metalloproteases (ADAMs) (FIGURE 2B). Therefore, inflammation-induced ECM remodelling and subsequently activated cell migration in the eutopic endometrium of women with adenomyosis are supported by the above analysis.
Several prominent biological processes were identified in the endometrium of women with versus without adenomyosis, including cell-cell adhesion and coupling, formation of a migration track through ECM remodelling, and the driving force of cell migration (TABLE 1). Persistent cell adhesion and abnormal expression of chemokine (CXC motif) ligand/receptor (CXCL/CXCR) signalling pathways, essential in collective cell invasion (Strieter et al., 2006), were also detected in the endometrium of women with adenomyosis (TABLE 1). Therefore, these processes involved in creating a track for cell migration and persistent cell adhesion support a role for collective endometrial cell migration, driven by CXCL/CXCR signalling, in the development of adenomyosis.
Validation-Key DEG of top enriched biological processes, especially inflammation and ECM remodelling, in the endometrium of women with adenomyosis were validated at the RNA and protein levels (FIGURE 3). IL1B, IL18 and TNF were all significantly increased in women with adenomyosis compared with controls (P = 0.002, P = 0.041 and P = 0.022, respectively) (FIGURE 3A). Moreover, the enzymes responsible for collagen degradation, MMP1, MMP8 and MMP13, and their natural inhibitor, tissue inhibitor of metallopeptidase 1 (TIMP1), also increased in the endometrium of women with adenomyosis (P = 0.002, P = 0.001, P = 0.008 and P = 0.0085, respectively), although changes in COL1A1, COL1A2 and COL3A1 mRNAs were not observed (FIGURE 3B and 3C). Hyaluronan is another critical component of the ECM, and the enzymes for hyaluronan synthesis, hyaluronan synthase HAS1, HAS2 and HAS3, were also highly expressed in the endometrium of women with adenomyosis (P = 0.016, P = 0.049 and P = 0.015, respectively) without changes in their receptor, CD44, a cell-surface glycoprotein involved in cell-cell interactions, cell adhesion and migration (FIGURE 3D). To verify key proteins involved in ECM remodelling and collagen catabolic process, western blot was carried out. Increased MMP1 and MMP13 were detected in the endometrium of women with versus without adenomyosis (P = 0.018 and P = 0.031) (FIGURE 3E). COL1A1 and COLIII protein immunoreactivity was significantly increased (P = 0.009 and P = 0.024) in the absence of significant changes in the corresponding mRNA in endometrium of women with adenomyosis (FIGURE 3E and 3F).
Myometrium
RNA-sequencing-Myometrial dysfunction may also contribute to the pathogenesis of adenomyosis. In the present study, 1906 DEG were identified in the transcriptomic analysis of the myometrium of women with versus without adenomyosis (FIGURE 4A). Further enrichment analysis found that the myometrial layer of adenomyosis patients also presented dysfunction of ECM organization and collagen catabolic processes, similar to the endometrium (FIGURE 4B-4D). Interestingly, abnormalities were found in sensory pain perception as well as gamma aminobutyric acid (GABA) synaptic transmission in the myometrium of women with adenomyosis (FIGURE 4C), suggesting a neuropathic nature for the chronic pelvic pain and dysmenorrhoea associated with this disorder.
Network analysis-From the network analysis, ECM remodelling and myometrial neural disorder interact closely with each other (FIGURE 5A). Moreover, K-means clustering of DEG involved in the top enriched Gene Ontology terms identified the associations and key molecules that function in the dysregulated biological processes in the myometrium of women with adenomyosis (FIGURE 5B). The analysis showed that ECM remodelling is mainly attributed to the green cluster, whereas neuropathic processes and the humoral immune response belong to the red cluster (FIGURE 5B). Also, CXCL8 may function as a mediator of both ECM remodelling and neuropathic dysfunction, owing to its extensive contact with the DEG in the green and red clusters (FIGURE 5B).
Validation-In addition, mRNA levels of CXCL8 were also significantly increased in the myometrium of women with adenomyosis compared with controls (P = 0.044) (FIGURE 5C), without significant changes in either IL1B or TNF. In contrast, MMP1 and MMP8 mRNAs were significantly increased in the myometrium of women with adenomyosis (P = 0.0005 and P = 0.021, respectively) (FIGURE 5D), similar to that observed in the endometrium of women with disease (FIGURE 3B). For the neuropathic processes, synaptic transmission related genes were validated, including decreased gamma-aminobutyric acid type A receptor subunit alpha2 (GABRA2) and increased neurotensin (NTS) and oxytocin receptor (OXTR), in the myometrium of women with versus without adenomyosis (P = 0.020, P = 0.009 and P = 0.022, respectively) (FIGURE 5E).
Endometrium and myometrium
In the present analysis, 115 DEG commonly dysregulated in both the endometrium and myometrium of women with versus without adenomyosis were identified. The top enriched Gene Ontology terms are collagen catabolic process, immune response and chemotaxis (TABLE 2). Comparison of genes and pathways commonly dysregulated in both the endometrium and myometrium of women with versus without adenomyosis revealed a role for prolactin (PRL) signalling, supporting a longstanding hypothesis for the involvement of PRL and its receptor (PRLR) in the pathogenesis and pathophysiology of this disorder (Mori et al., 1981). In the present study, the PRL signalling pathway was enriched in DEG of both the endometrium and myometrium of women with versus without adenomyosis (enrichment scores of 1.955 and 2.23, respectively). Five common DEG included SHC4, CCND1, GALT, SOCS5 and ELF5 (FIGURE 6A). In the present validation, CCND1 mRNA was significantly increased in both the eutopic endometrium (P = 0.044) and myometrium (P = 0.002) of women with adenomyosis (FIGURE 6B), consistent with a role for CCND1 in endometrial cell proliferation in women with adenomyosis. In addition, GALT mRNA expression was decreased in the eutopic endometrium (P = 0.014) of adenomyosis cases versus controls, without significant changes in the myometrium (P = 0.282) (FIGURE 6C).
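Identifying genes commonly dysregulated in both tissues, as above, corresponds to a straightforward set intersection of the two DEG lists. A minimal sketch with invented gene sets (the study itself found 115 common DEG):

```python
# hypothetical DEG sets for illustration; not the study's actual gene lists
endometrium_degs = {"MMP1", "MMP8", "IL1B", "CCND1", "GALT", "HAS2"}
myometrium_degs = {"MMP1", "MMP8", "CXCL8", "CCND1", "GALT", "OXTR"}

# genes dysregulated in both compartments
common = endometrium_degs & myometrium_degs
# → {"MMP1", "MMP8", "CCND1", "GALT"}
```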
Insights into pathogenesis: collective endometrial cell migration theory
In the endometrium of women with versus without adenomyosis, biological processes and functional analyses derived herein revealed inflammation-induced ECM remodelling and cell cohesion and coupling, forming a migration track and driving force for guided cell migration. Previous studies have demonstrated a role for inflammatory factors in promoting ECM remodelling and subsequent cell migration in tumours (Lee and Heur 2013;Wang et al., 2017), and the observations herein provide supporting molecular evidence for this phenomenon in the endometrium of women with adenomyosis using whole genome transcriptomics. A recent publication on single cell RNA sequencing of endometrium from a woman with adenomyosis versus women with uterine fibroids is consistent with our findings (Liu et al., 2021).
Cell movement ranges from uncoordinated ruffling of cell boundaries to migration of single cells to collective motions of cohesive cell groups (Thuroff et al., 2019). Cell migration, the basis of cell invasion, comprises migration of single cells to position themselves in tissues and collective migration wherein cells remain connected as they move, resulting in migrating cohorts (Friedl and Gilmour, 2009). For the latter, cells remain physically and functionally connected during movement; multicellular polarity and 'supracellular' organization of the actin cytoskeleton generate traction and protrusion force for migration. Also, moving cell groups structurally modify the tissue along the migration path, either by clearing the track or by causing secondary ECM modification (Friedl et al., 2004; Montell, 2008). Previous studies have suggested a potential role for collective cell migration in the invasion process of deep endometriotic lesions and the later phases of adenomyosis (Donnez et al., 2015; Garcia-Solares et al., 2018). No direct evidence to date, however, has clearly demonstrated collective cell migration in adenomyosis development. In our data, however, a striking triad emerged among the biological processes derived from DEG in the endometrium of women with versus without adenomyosis: cell cohesion and coupling, formation of a migration track, and guided cell migration. This supports collective migration of endometrial cells into the myometrium, resulting in the development of ectopic endometrial lesions in the myometrium of women with adenomyosis. Whether collective cell migration plays a role in the onset of adenomyosis needs to be further verified through IHC of E-cadherin, β-catenin, N-cadherin and other biomarkers, and in animal models.
Increased ECM organization and collagen catabolic processes were detected in the endometrium and myometrium of adenomyosis cases, although the pathogenesis of these changes remains unclear. One of the most common proposed causes is chronic injury and inflammation caused by hyperperistalsis of the junctional zone, which further leads to abundant myofibroblasts and collagen hyperplasia. Collective cell behaviour in response to mechanical injury is central to various regenerative and pathological processes (Jiang et al., 2020). Therefore, the trigger may be micro-level tissue injury and repair (TIAR) caused by hyperperistalsis (Leyendecker et al., 2009) or an iatrogenically injured endometrial-myometrial interface (junctional zone), involving local oestrogen signalling, inflammation and wound repair mechanisms. Importantly, women who have had a caesarean section or dilatation and curettage procedures have a higher risk of developing adenomyosis (Upson and Missmer, 2020), consistent with this hypothesis.
Regarding mechanisms underlying collective cell migration from the endometrium to the myometrium leading to adenomyosis, the present data support a role for CXCL/CXCR signalling, as the direction of migration along a track depends on the polarity of cell clusters and chemokines within the anatomic niche (Zhou et al., 2010). What regulates dysfunctionality of endometrial CXCL/CXCR signalling, the cell types and specific ligand/receptor pairs involved, and the specific roles of this signalling family in the pathogenesis of adenomyosis warrant further investigation. Notably, as collective cell movement is relevant for processes in morphogenesis, tissue repair and cancer invasion and metastasis, conserved mechanisms may be operational, as suggested by Garcia-Solares et al. (2018).
Endometrium and myometrium: prolactin and adenomyosis
A possible role for PRL in adenomyosis was derived initially from an experiment conducted over 40 years ago wherein hypophyseal transplantation into mouse uteri induced adenomyosis (Mori et al., 1981). Subsequently, infusion of PRL or administration of a dopamine agonist causing hyperprolactinemia resulted in adenomyosis in the mouse (Singtripop et al., 1991). More recently, higher serum levels of PRL in women with adenomyosis compared with those without disease have been reported (Sengupta et al., 2013). In the present study, several members of the PRL signalling pathway (FIGURE 6) were found to be dysregulated in the endometrium of women with adenomyosis; these genes are involved in cell proliferation, cell cycle progression and gluconeogenesis, processes that are important in the pathogenesis of adenomyosis. For example, SHC4 participates in PRL signalling and plays a role in cell proliferation, differentiation and survival (Ahmed and Prigent, 2017). In endometrial cancer cells, autocrine PRL expression stimulates cell proliferation, migration and invasion, and promotes tumour growth, local invasion and metastases, processes that are also relevant to adenomyosis pathogenesis.
Additionally, over-expression of PRL in the Ishikawa endometrial adenocarcinoma cell line increases cyclin D1 (CCND1) mRNA levels and enhances cell cycle progression (Ding et al., 2017). CCND1 is a key component of PRL signalling and may be a factor in endometrial cell proliferation and adenomyosis. Galactose-1-phosphate uridyl transferase (GALT) is a key enzyme in gluconeogenesis, which is inhibited by PRL/PRLRs via Foxo3a (Devi et al., 2009), and has also been found to be associated with adenomyosis (Goumenou et al., 2000). SOCS5, another common DEG identified in PRL signalling, is a member of the suppressor of cytokine signalling (SOCS) protein family with controversial tumour-promoting and tumour-suppressive roles in cancer. Zhang et al. (2019a) reported that SOCS5 overexpression promoted hepatic cancer cell migration and invasion in vitro by inactivating PI3K/Akt/mTOR-mediated autophagy. E74-like factor 5 (ELF5) also plays a key role in the processes of cell differentiation and apoptosis, whereas overexpression of ELF5 inhibits migration and invasion of ovarian cancer cells (Zhang et al., 2019b). Previously published studies have shown that several additional factors can affect the PRL pathway. For example, nuclear receptor (NR) 4A modulates decidualization of the endometrium by upregulating PRL via forkhead box O (FOXOA1) (Jiang et al., 2016). Notably, we found no changes in the expression of NR4A or FOXOA1 in our data, and this warrants further investigation. Overall, combining the published research with our results, local PRL signalling may contribute to dysfunction of the endometrium and myometrium in women with adenomyosis via SHC4, CCND1, GALT, SOCS5 and ELF5, with specific mechanisms awaiting further study.
Myometrium: pain pathways and heavy menstrual bleeding
In the present study, myometrial transcriptomic analysis revealed a possible neuropathic nature of dysmenorrhoea in women with adenomyosis. Dysmenorrhoea is a clinical hallmark of adenomyosis (Upson and Missmer, 2020). It has been postulated that myometrial hypercontractility, caused by high expression of oxytocin receptors and increased contractile amplitude of uterine smooth muscle cells in the myometrium of women with versus without adenomyosis, is responsible for the severe dysmenorrhoea associated with the disease (Nie et al., 2010). Inflammatory factors, such as IL-1β and corticotropin releasing hormone, play a role in pain associated with deep infiltrating endometriosis (Carrarelli et al., 2016), a disorder that is physiologically and histologically similar to adenomyosis. Accumulating data indicate that sensory nerve-derived neuropeptides, such as calcitonin gene-related peptide (CGRP), can accelerate the progression of endometriosis via their respective receptors, whereas adrenergic β2 receptor (ADRB2) agonists are also involved in facilitating lesion progression. More remarkably, lesional expression of ADRB2 correlated positively with the severity of dysmenorrhoea in women with endometriosis (Yan et al., 2019). Therefore, complex mechanisms, including mechanical movement, inflammatory factors and neuropeptides, likely play important regulatory roles in dysmenorrhoea in adenomyosis. In the present study, abnormal expression of spexin (SPX), cannabinoid receptor 2 (CNR2) and POU class 4 homeobox (POU4F3) in the myometrium of women with versus without adenomyosis may be involved in the sensory perception of pain, relevant to dysmenorrhoea (FIGURE 5A). GABA, a neurotransmitter involved in pain sensation, functions as an inhibitory synaptic transmitter (Yam et al., 2018).
GABRA2 is a member of the GABAA receptor family that mediates the inhibitory functions of GABA in the central nervous system and in peripheral tissues, including rat and human uterine myometrium and the smooth muscle vasculature of the endometrium (Human Protein Atlas) (Greenfield et al., 2002). The neurosteroid allopregnanolone, binding to the GABAA receptor, has been proposed to inhibit myometrial contractility, involving the π subunit (Greenfield et al., 2002). Dysregulation of GABA synaptic transmission in our in-silico analysis of myometrium from women with adenomyosis supports a local neuropathic disorder in the adenomyosis myometrium, likely involving enhanced myometrial contractility and pain, in addition to GABA's role in the central nervous system. The decreased expression of GABRA2 in the myometrium of women with adenomyosis, together with further definition of the subunits that confer tissue-specific expression, may provide potential targets for drug development and underpin future mechanistic studies aimed at minimizing pain associated with adenomyosis (Vannuccini et al., 2017).
Heavy menstrual bleeding (HMB) is a common symptom in patients with adenomyosis. Previous studies have indicated that mechanisms underlying HMB in adenomyosis involve neoangiogenesis, abnormal uterine contractility and high microvessel density (Harmsen et al., 2019). Events leading to increased proangiogenic factor expression, such as vascular endothelial growth factor, are triggered by TIAR, hypoxia and hormonal dysfunction. Therefore, HMB may result from both endometrial and myometrial pathology in women with adenomyosis. In the present study, most participants with adenomyosis and only three controls had HMB. The transcriptomic results highlighted increased OXTR in the myometrium and dysregulated ECM changes, collagen degradation and inflammation in the endometrium of women with adenomyosis. Overexpression of OXTR in adenomyosis-surrounding myometrium, coupled with vasopressin receptor (VP1αR) expression in blood vessels and myometrium, may contribute to altered microcirculation as well as increased uterine contractility (Mechsner et al., 2010). Collagen degradation and inflammation in the endometrium may also be involved in endometrial dysfunction and further HMB in adenomyosis, with molecular mechanisms awaiting further definition.
Strengths and challenges
The strength of the present study is that, to the best of our knowledge, it is the first comparison of endometrium and myometrium of women with and without diffuse adenomyosis at the transcriptomic level, with subsequent analyses of biological processes and signalling pathways. Moreover, all specimens were obtained in one phase of the menstrual cycle (proliferative phase), avoiding confounding of the data and their interpretation by different hormonal milieux across the cycle. Although the results indicate that ECM remodelling in myometrium is involved in the pathogenesis of adenomyosis, ECM degradation and abnormal expression of MMPs have also been detected in leiomyomas (Islam et al., 2018). Since the probability of co-occurrence of adenomyosis and uterine fibroids is up to 70% (Upson et al., 2020), it is difficult to find cases without uterine fibroids. Although tissue at the location of fibroids, identified by IHC, was avoided when selecting uterine tissue for RNA sequencing in both cases and controls, the coexistence of uterine fibroids in participants and controls recruited for this study is still considered a limitation. Moreover, our results still need to be validated in a larger sample and in future in-vivo animal models.

In conclusion, our results support abnormalities in the endometrium and myometrium of women who have adenomyosis compared with controls. The data strongly support the collective endometrial cell migration theory in the pathogenesis of adenomyosis, wherein inflammation induces ECM remodelling, creating a track for subsequent collective cell migration and onset of adenomyosis in the myometrium. Also, our results underscore the importance of PRL signalling in the endometrium and myometrium of women with versus without adenomyosis, providing an opportunity for developing targeted treatments for the disease.
Moreover, abnormal GABA synaptic transmission in the myometrium of women with disease also offers a novel target for innovation in the management of dysmenorrhoea and chronic pelvic pain in women with adenomyosis.
KEY MESSAGE
This study supports the invasive endometrium theory in the pathogenesis of adenomyosis, providing evidence for the vital role of endometrial collective cell migration. Abnormal myometrial GABA synaptic transmission and prolactin signalling abnormalities may contribute to dysmenorrhoea in women with adenomyosis and are possible targets for novel therapeutic development.

Figure legends (recovered fragments). Data are presented as means ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. (E) Protein levels of COL1A1, COLIII, MMP1 and MMP13 in the endometrium of adenomyosis patients and controls (n = 6 for each group). GAPDH was used as the reference control, and the band intensity of each target protein was expressed as a ratio to the intensity of GAPDH (Li et al., 2018). The P-values for COL1A1, COLIII, MMP1 and MMP13 in adenomyosis versus controls are 0.009, 0.024, 0.018 and 0.031, respectively. (F) Masson stain of eutopic endometrium in adenomyosis patients versus controls. Potential target genes of prolactin (PRL) signalling pathways contributing to the dysfunction of both eutopic endometrium and myometrium in women with adenomyosis: (A) Venn diagram of the differentially expressed genes (DEG) enriched in PRL signalling pathways in eutopic endometrium and myometrium; five common target genes were identified (SHC4, CCND1, GALT, SOCS5 and ELF5). (B) Expression of CCND1 mRNA in the eutopic endometrium (P = 0.044) and myometrium (P = 0.002) of women with and without adenomyosis (n = 10 for each group). (C) Expression of GALT mRNA in the eutopic endometrium (P = 0.014) and myometrium (P = 0.282) of women with and without adenomyosis (n = 10 for each group).
Zhai et al.
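The Western blot quantification described in the figure legend (each target protein's band intensity expressed as a ratio to GAPDH, then compared between groups) can be sketched as below. This is a minimal illustration only: all intensity values are invented, and a Welch's t-test stands in for whatever statistical test the authors actually used.

```python
# Hypothetical densitometry sketch: normalize target band intensities to
# GAPDH per sample, then compare adenomyosis vs control groups.
# All numbers are invented for illustration.
from statistics import mean
from math import sqrt

def normalize(target, gapdh):
    """Ratio of target band intensity to GAPDH intensity, per sample."""
    return [t / g for t, g in zip(target, gapdh)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variance)."""
    va = sum((x - mean(a)) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (len(b) - 1)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Invented densitometry readings, n = 6 per group as in the legend.
col1a1_adeno = [1.8, 2.1, 1.9, 2.3, 2.0, 1.7]
gapdh_adeno  = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1]
col1a1_ctrl  = [1.0, 0.9, 1.1, 1.2, 0.8, 1.0]
gapdh_ctrl   = [1.0, 1.0, 1.1, 0.9, 1.0, 1.0]

ratios_adeno = normalize(col1a1_adeno, gapdh_adeno)
ratios_ctrl  = normalize(col1a1_ctrl, gapdh_ctrl)
t = welch_t(ratios_adeno, ratios_ctrl)
print(round(t, 2))
```

In practice the degrees of freedom and P-value would come from a statistics package (e.g. scipy.stats.ttest_ind with equal_var=False); the sketch only shows the normalization-then-compare logic.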
Osteopathic Medicine and Primary Care (Open Access)

Educating Primary Care Clinicians about Health Disparities
Racial and ethnic health disparities inarguably exist in the United States. It is important to educate primary care clinicians on this topic because they are well positioned to help reduce health disparities. This article presents the evidence that disparities exist, describes how clinicians contribute to them, and outlines what primary care clinicians can do to reduce disparities in their practice. Clinicians can address health disparities by receiving and providing cross-cultural education, communicating effectively with patients, and practicing evidence-based medicine. The changes suggested herein will have an impact on the current state of health of our nation.
Background
The U.S. racial and ethnic minority population will grow from 28% in 1998 to nearly 40% in 2030 [1]. According to the Institute of Medicine (IOM), health disparities inarguably exist among racial and ethnic minorities [2]. It is important to address health disparities because consequences include poorer health, increased suffering, and higher mortality [2]. Many racial and ethnic minorities have higher mortality rates from cancer, diabetes, and cardiovascular disease [3]. African Americans have a higher cancer mortality rate (243.1 vs. 193.9 per 100,000, respectively) and twice the cardiovascular mortality rate compared to white Americans [4,5]. Among Hispanics, the diabetes death rate ranges from 47-172 per 100,000 depending on nationality (Cuban, Mexican, Puerto Rican, etc.), more than twice the rate of white Americans (23 per 100,000) [4]. Furthermore, Hispanic women have the highest cervical cancer incidence rate [6].
Health disparities have a financial toll as well. The higher burden of disease affects the health of the nation as a whole. Poorer health requires increased expenditure, especially when complications arise from uncontrolled or undetected disease. For example, African American women are more likely to have late-stage breast cancer at the time of diagnosis, more often requiring intensive treatment and hospitalization, and leading to more disability [7]. Loss of individual productivity also contributes to national health care costs, impacting all individuals regardless of race or ethnicity.
Despite concerted efforts to address and eliminate health disparities, many complicated, interrelated factors still need to be overcome. According to the IOM report, Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care, health disparities occur at different levels, including health care systems and their administration, clinicians and their practices, and patients themselves [2]. At the clinical level, there are several factors that may contribute to racial and ethnic health inequity [2]. Clinicians, patients, and the clinical encounter all impact health disparities. For example, a person's interaction with the clinician may lead to non-adherence, distrust, and misunderstandings that lead to poor health. Therefore, primary care clinicians have an important role and the ability to decrease health disparities [8,9].
The purpose of this paper is to expose primary care clinicians to the current state of health inequality and to describe how they may positively impact health disparities in their practice.
How are health disparities and primary care related?
There are a variety of factors that lead to disparities in care, such as access to care, socioeconomic position, and social factors. In addition, there is evidence that clinic interactions (front desk, medical assistant, etc.) and clinician-patient encounters may lead to health disparities [2,10-12].
Primary care is the gateway to accessible health care in the United States, especially since the growth of managed care. Primary care has been defined as the provision of integrated, accessible health care services by clinicians who are accountable for addressing a large majority of health care needs, developing a sustained partnership with patients, and practicing in the context of family and community [13,14]. The importance of receiving quality care from primary care clinicians is reflected in a recent review [15]. First, health is better in areas with more primary care clinicians. Better health is characterized by lower rates of mortality, improved health outcomes, and increased lifespan. Second, people who identify a primary care clinician as their usual source of care have better health outcomes as well. Third, the characteristics of primary care are associated with better health. These characteristics are first-contact access for each need; long-term person-focused care; comprehensive care; and coordinated care [15]. However, primary care access is inequitable, and factors associated with the clinical encounter are related to various health inequalities which interact at different levels [12,16]. Minorities have reported poorer care compared to whites in several domains of care, such as communication, trust, accessibility to clinics, and continuity of care [17,18].
Evidence and potential sources of health disparities
Factors that contribute to health disparities can be divided into two sets. The first set involves the operation of healthcare systems and the environment in which they operate. These factors affect access to care. Health insurance has been the most studied factor that affects access to health care. There are about 39.2 million uninsured people in the country, and minorities comprise more than 60% of that population [19]. Availability of services also affects access. Whites are the group with the highest percentage of a usual source of care, while Hispanics are the group with the lowest percentage [19].
Evidence exists of the differences in the quality of care that is received [2]. Three mechanisms by which healthcare disparities can occur at the clinical encounter are: 1) bias (or prejudice) against minorities; 2) greater clinical uncertainty when interacting with minority patients; and 3) beliefs (stereotypes) held by clinicians about the behavior or health of minorities [2].
Healthcare provider bias can occur unconsciously. Research has found that prejudicial attitudes still remain common in America [2], and that clinicians' diagnostic and treatment decisions may be influenced by the patients' race or ethnicity. For example, physicians were found to be less likely to recommend catheterization procedures to African American females compared to white males and females, and African American males [20]. Physicians were also found to rate African American patients as less intelligent, less educated, more likely to abuse drugs and alcohol, more likely to not follow medical advice, and less likely to participate in cardiac rehabilitation than white counterparts [21]. Although there are many factors influencing clinician decisions, subtle factors such as bias may have an effect on the patients and their health outcomes. Primary care clinicians need to become aware of unconscious and unintentional actions or decisions in order to make changes in the way they provide care.
Clinical uncertainty occurs when clinicians make decisions about the severity of an illness based on prior beliefs or experience [2]. These prior beliefs and experiences will differ depending on the age, gender, socioeconomic status, race, and ethnicity of the patient. If the clinician does not have the information needed to make a diagnostic decision (for example, if the clinician has difficulty understanding the symptoms), the clinician will be more likely to rely on prior beliefs and experiences to make diagnostic and treatment decisions. As a consequence, the patient's needs may not be met.
Stereotypes can be defined as categories that people use (sex, race, etc.) to process and recall information about others [22]. People then use the information in these categories to understand and simplify complex situations. Although explicit stereotyping is rarely seen these days, it still exists in more implicit and subtle ways. Even people who do not believe they are prejudiced often demonstrate implicit or unconscious bias or stereotypes.
Clinicians must become aware that they are not exempt from unintentional (or intentional) bias or discrimination when caring for patients. Most clinicians strongly refute the idea that they provide differential care to ethnic and racial minorities [2]. However, it is usually small recurrent unintentional acts during the clinician-patient encounter that may contribute to existing health disparities [2]. Awareness by the clinic staff and clinicians is one of many concerted efforts that are needed to reduce health disparities in this country.
Quality medical care is often influenced by system factors outside of the clinician's control, such as time restrictions, cost-containment pressures, insurance status and ability to pay. However, it is important for primary care clinicians to be vigilant and address these issues in order to provide equal and comprehensive medical care regardless of an individual's age, race, ethnicity, gender, and socioeconomic position [2].
What can primary care clinicians do to address health disparities?
There are several things that primary care clinicians can do in their practice to aid in national efforts to reduce health disparities. Clinicians can receive and provide cultural competence/cross-cultural education, learn how to communicate effectively with patients, and practice evidencebased medicine.
Cross-cultural education
Education about different cultures can be used to avoid stereotypes, bias, and clinical uncertainty. Students and clinicians may greatly benefit from cross-cultural education or training. However, clinicians should be aware that achieving cultural competence is a process, and does not happen from one day to another with a textbook, or as a quick fix. Cross et al (Table 1) developed a framework in which cultural competence occurs in a continuum and in six stages: 1) Cultural Destructiveness, 2) Cultural Incapacity, 3) Cultural Blindness, 4) Cultural Pre-competence, 5) Cultural Competency and 6) Cultural Proficiency [23]. An awareness of one's own position within the different stages is the first step to achieving full cultural competence.
The Office of Minority Health published the Culturally and Linguistically Appropriate Services Standards (CLAS) in 2000 [24]. One of the main themes of the standards is culturally competent care. Cultural competence is defined as a set of congruent behaviors, attitudes, and policies that come together in a system, agency, or among professionals that enables effective work in cross-cultural situations [24]. Culture refers to the patterns of behavior in humans that include language, thoughts, communication, actions, customs, beliefs, values, and institutions of race, ethnicity, religion, or social groups [25]. Culture not only refers to race, ethnicity, and religion, but also refers to gender, sexual orientation, age, disability, and socioeconomic status [24]. Educational programs should have a patient-centered focus, where the patient is the center of attention, rather than the patient's cultural group characteristics, or the disease itself.
However, many training programs use a categorical approach to teaching cultural competence by focusing on certain groups of people. Carrillo, Green, and Betancourt recommend an emphasis on the differences between individual patients, rather than groups, in cross-cultural curricula [26]. There are many different models of cultural competency/cross-cultural curricula currently in use in medical schools. Examples of such curricula and suggested readings on the topic are presented in Table 2 [26-29].
Communication
Many racial and ethnic minorities, especially limited English-speaking minorities, report poor communication with their clinicians and have more problems with different aspects of the clinician-patient relationship [30-32]. Many patients who experience poor communication are less likely to follow instructions, take medications, and follow up with tests and appointments, all leading to poorer health [33-35]. Thus, effective communication is another strategy primary care clinicians can use to reduce health disparities. Effective communication can be defined as using little medical jargon, speaking clearly, and ensuring the patient understands the given information [36]. Stereotypes can be avoided if the clinician is able to gather accurate information about whether the patient understands his or her condition. Kleinman and colleagues developed a set of interviewing questions to elicit how patients understand their condition [37]. These patient-centered questions are presented in Table 3 and can help clinicians understand and address the patient's ailments.

Table 1. The cultural competence continuum (adapted from Cross et al [25]).

Cultural Incapacity: This stage occurs when there is unintentional cultural destructiveness, bias, paternalism, ignorance, and/or fear.

Cultural Blindness: Involves a philosophy of being unbiased, treating all people the same, and a belief that culture, class, or color does not make a difference. People in this stage are well-intentioned; however, this view is still ethnocentric.

Cultural Pre-competence: Characterized by the realization of weaknesses and gaps when working with other cultures. There is a desire for inclusion, a commitment to civil rights, and a desire to implement training. However, there may be a danger of false accomplishment.

Cultural Competency: Characterized by an acceptance and respect for differences. There is a continual inquiry about other cultures and an expansion of knowledge.

Cultural Proficiency: The last stage, in which all cultures are held in high esteem and responsibility is taken for the constant development of new knowledge and approaches to interaction. This stage assumes responsibility to transfer skills and advocate cultural competence to others within a system or an organization.
Complex language can have a negative effect on successful communication between a clinician and patient. A report by the IOM found that the complex language that clinicians use to communicate with patients, either verbally or written, is a problem for many patients, not just recent immigrants or those with a low level of education [38]. Termed "health literacy," this important concept must be taken into account when communicating with patients.
Health literacy is defined by the National Library of Medicine and Healthy People 2010 as the "degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions" [38-41]. Many factors affect health literacy, such as the patient's level of education, cultural background, and native language. The clinician's ability to effectively and appropriately communicate with the intended audience is also important [38]. Even people with strong health literacy skills have difficulty understanding written information from clinicians, such as patient information sheets and prescription drug labeling [38]. If patients have difficulty understanding instructions given by a clinician, they may not be able to understand their health condition, may have difficulty with treatment decision making, and may not take their medications correctly [38]. A patient-centered approach, as shown in Table 3, where the patient's perspectives, values, beliefs, and behaviors are taken into account, may reduce these communication barriers.
Communication issues are substantially compounded when clinicians and patients do not speak the same language. Using trained interpreters is the best way to ensure that patients understand information that is given to them. If non-trained interpreters are used, such as family members or employees pulled from their regular jobs to interpret who are not aware of the potential problems that may arise, lost information, misunderstandings, and miscommunication may occur. This may result in patients not having their needs addressed, repeat clinic visits, unnecessary tests, or even misinterpretations regarding prescribed drugs. The Cross Cultural Health Care Program (CCHCP) developed guidelines to help clinicians work through an interpreter [42]. These guidelines state that the decision to use an interpreter is made whenever the clinician feels that language or cultural differences may cause a barrier to clear communication, or whenever a patient requests an interpreter. Choosing an interpreter may also be a challenge.
The CCHCP makes several suggestions as to how to choose an interpreter. First, make sure that the interpreter is fluent in both languages; testing may be needed. Second, make sure the interpreter is trained as an interpreter. The fact that a person is bilingual does not make her or him an interpreter; there are special skills involved. Third, do not use a family member. Family members often edit the patient's message, add their own opinions, and answer for the patient. Fourth, never use a child. This creates role reversal and power reversal, and it should not be the responsibility of a child to relay bad news to parents or family members.
The CCHCP also provides suggestions on how to work through an interpreter [43]. First, request interpretation of everything, and in the first person. Second, speak directly to the patient, not to the interpreter. Third, insist that everything you say is interpreted, as well as everything that the patient says, or that family members say. Fourth, be patient. Providing care through an interpreter often takes longer. However, this will avoid wasted time, misunderstandings, or unnecessary tests.
Some organizations or clinicians' offices may be too small to hire a full-time interpreter, or there may be barriers to hiring bilingual staff. In such cases, another option is the American Telephone and Telegraph (AT&T) language line [44]. The service may be used by a subscribed client or company, or by an unsubscribed individual for less frequent use. Although at first glance the price for this service may seem quite expensive (ranging from $2.20 per minute to $7.25 per minute), it becomes cost-efficient in the long run because clinicians will have a better understanding of the patients' symptoms, conditions, and lifestyles. Patients will also have a better understanding of their condition and their medications, and will be less likely to return due to misunderstandings.
Practicing Evidence-Based Medicine
The use of evidence-based medicine (EBM) is another method to reduce health disparities. According to the University of Toronto Center for Evidence-Based Medicine, EBM is the integration of clinically relevant best research evidence with clinical expertise and patient values [45]. The need for valid information, the inadequacy of current resources, and the lack of time to spend with the patient are some reasons why interest in EBM has increased in recent years [46]. EBM can also reduce clinician bias and stereotyping by ensuring that practice is based on one's expertise and the most current applicable evidence. Adherence to evidence-based guidelines allows clinicians to make decisions that reflect current research findings, avoiding conscious or unconscious decisions based on bias or stereotypes. However, there are many realities that must be considered. When serving low-income patients and/or individuals from underserved populations, resources may be severely limited. Utilizing the best evidence that fits the clinician's practice environment and special circumstances is recommended. For example, clinicians practicing in non-profit free clinics must make strategic and economic decisions when deciding what medication to prescribe, because medications are often out-of-pocket expenses for patients. Nonetheless, studies have shown that practicing EBM has economic advantages as well. Non-compliance with antihypertensive guidelines, through use of second-line medications over first-line medications (such as hydrochlorothiazide), was associated with potential increases in health care expenditures in the range of $2.6 billion to $3.2 billion in 1996 [47]. Numerous EBM resources are available on the Internet to allow primary care clinicians to keep abreast of EBM guidelines (Table 4). Use of EBM principles may potentially increase health equity among patients.
Although the recommendations provided may not be simple to implement, primary care clinicians can play a significant role in reducing health disparities through incremental changes. Education is the key to understanding patients' perspectives and providing a higher quality of care. Other steps that can be taken are conscious efforts to communicate with patients more clearly and to use trained interpreters when needed. Communication style, such as asking questions in a more caring manner or validating a patient's concern, may also have a positive impact on the health of patients. The use of EBM may be beneficial not only for the populations that experience health disparities but for the patient population as a whole, reducing costs and increasing equity. The sum of our small changes, taken together, will make a significant impact.
|
v3-fos-license
|
2023-02-21T14:53:49.312Z
|
2021-01-25T00:00:00.000
|
257045213
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-021-81794-4.pdf",
"pdf_hash": "7c2c7ad478448f1fa5821d0265452ac228fc71ae",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2591",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "7c2c7ad478448f1fa5821d0265452ac228fc71ae",
"year": 2021
}
|
pes2o/s2orc
|
Valproic acid influences the expression of genes implicated with hyperglycaemia-induced complement and coagulation pathways
Because the liver plays a major role in metabolic homeostasis and in the secretion of clotting factors and innate immune inflammatory proteins, there is interest in understanding the mechanisms of hepatic cell activation under hyperglycaemia and whether these can be attenuated pharmacologically. We have previously shown that hyperglycaemia stimulates major changes in chromatin organization and metabolism in hepatocytes, and that the histone deacetylase inhibitor valproic acid (VPA) is able to reverse some of these metabolic changes. In this study, we have used RNA-sequencing (RNA-seq) to investigate how VPA influences gene expression in hepatocytes. Interestingly, we observed that VPA attenuates hyperglycaemia-induced activation of complement and coagulation cascade genes. We also observe that many of the gene activation events coincide with changes in histone acetylation at the promoters of these genes, indicating that epigenetic regulation is involved in VPA action.
as glycogen during hyperglycaemia and releasing sugars during hypoglycaemia. Liver dysfunction, characterized by elevated hepatic glucose production during hyperglycaemia, is common in type-2 diabetes, and inhibiting this is a major mechanism of action of the widely prescribed glucose-lowering drug metformin 13. Thus, understanding the molecular mechanisms underpinning the hyperglycaemic activation of metabolism, coagulation, complement and other inflammatory pathways in hepatocytes could identify new therapies to reduce the burden of diabetic complications. At the interface between genetic and environmental factors, epigenetic mechanisms are proposed to play a major role in the development of metabolic disease, including diabetic complications 14,15. Previous reports have demonstrated that chromatin remodeling and histone acetylation are important mechanisms in diabetes development 16,17. The epigenetic component of metabolic and inflammatory disorders has recently come to attention, revealing epigenetic drugs as potential immunomodulatory agents. The recent discovery that histone deacetylase (HDAC) inhibitors (HDACi) can reduce the severity of inflammatory and autoimmune diseases, including diabetes, in several animal models has positioned them as alternative anti-inflammatory agents [18][19][20][21]. Their paradigmatic mode of action has been defined as increased histone acetylation of target genes, leading to higher gene expression; however, recent studies have shown a more diverse mechanism of gene regulation [22][23][24][25].
Valproic acid (VPA; IUPAC: 2-propylpentanoic acid), the most widely clinically prescribed HDACi, is a fatty acid with anticonvulsant properties used for the treatment of epilepsy and seizures 26. VPA inhibits class I (HDAC1, HDAC2, HDAC3, HDAC8) and class IIa (HDAC4, HDAC5, and HDAC7) HDACs, leading to an increase in the acetylation of histones H2, H3, and H4, which modifies the expression of associated genes 21. Recently, its use in different diseases has been investigated as a strategy for repurposing clinically approved drugs [27][28][29]. There are reports of VPA reducing blood glucose levels and fat deposition in the adipose tissue and liver of mice and rats 30,31, while class I 32,33 and class IIa 34 HDACs seem to be involved in the control of gluconeogenesis signaling and insulin production. VPA also reduces the microvascular complications of diabetes 35,36.
We have previously shown that treatment of HepG2 human hepatocytes with the HDACis trichostatin A (TSA) and VPA attenuated hepatic glucose production, although no significant difference was detected in global chromatin structure or the epigenetic landscape. Chromatin alterations promoted by HDACi under hyperglycaemia may be a function of differently regulated nuclear domains and genes rather than of global remodeling 17. Therefore, identification of genes influenced by HDAC inhibition is paramount to understanding its mechanisms of action and therapeutic targets in ameliorating the hyperglycaemic state 23. We hypothesised that hepatocytes undergo major gene expression alterations when exposed to a hyperglycaemic environment, as the liver is an organ of critical importance to carbohydrate metabolism. Furthermore, we hypothesised that VPA could attenuate some of the deleterious pathways promoted by hyperglycaemia by conferring changes to promoter histone acetylation.
In this study, HepG2 cells exposed to high glucose (HG) were stimulated with VPA. We performed high-throughput RNA-sequencing (RNA-seq) for transcriptome-wide analysis of genes and pathways responding to hyperglycaemia and VPA. We observe that genes influenced by VPA show altered H3K9ac at their promoters. This work identified that complement and coagulation pathways activated by hyperglycaemia were attenuated by HDAC inhibition.
Results
Hyperglycaemia regulates hepatocyte gene expression. In order to understand the effect of high glucose on genome-wide hepatic gene expression, RNA-seq was performed on HepG2 cells cultured under continuous low glucose (LG) or stimulated with 48 h high glucose (HG; 20 mM) in triplicate. After read alignment and gene expression quantification, differential expression analysis of genes and pathways was undertaken. Multidimensional scaling analysis measures the similarity of the samples and projects this in two dimensions. We observed that LG and HG samples clustered into distinct groups (Fig. 1A). Statistical analysis showed that HG treatment had a strong effect on HepG2 cells, with 4259 genes (26%) showing differential expression (FDR ≤ 0.05; Fig. 1B, red points). This effect on gene expression is greater than that reported previously for the high-glucose-treated THP-1 human monocytic cell line 37 and for skeletal muscle of diabetic Goto-Kakizaki rats compared with control Wistar rats 38, suggesting that hepatic cells are especially sensitive to alterations in glucose level. The top 50 differentially expressed genes by significance are shown in heatmap form (Fig. 1C). Some of the genes influenced by HG are also highlighted in Table 1.
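Differential expression above is called at FDR ≤ 0.05. As a minimal sketch of the Benjamini-Hochberg adjustment that underlies such FDR-controlled p values (the p-values below are hypothetical illustrations, not the study's data; the actual analysis used edgeR/limma, which perform this correction internally):

```python
def bh_fdr(pvalues):
    """Benjamini-Hochberg adjusted p-values (controls the false discovery rate)."""
    m = len(pvalues)
    # Indices of p-values sorted from smallest to largest
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end  # 1-based rank of this p-value
        adj = min(prev, pvalues[i] * m / rank)
        adjusted[i] = adj
        prev = adj
    return adjusted

# Hypothetical raw p-values for five genes
pvals = [0.001, 0.02, 0.03, 0.4, 0.9]
fdr = bh_fdr(pvals)
significant = [f <= 0.05 for f in fdr]  # the FDR <= 0.05 call used in the paper
```

The sketch only illustrates the thresholding logic; genome-scale pipelines additionally model counts and dispersion before producing the p-values.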
Gene Set Enrichment Analysis (GSEA) was used to identify pathways regulated by hyperglycaemia. Of 575 REACTOME gene sets considered, 34 were upregulated and 139 were down-regulated (FDR ≤ 0.05). The top 20 gene sets by significance in the up- and down-regulated directions are shown (Fig. 2A). Down-regulated gene sets included those associated with extracellular matrix interactions, chaperone function, the calnexin/calreticulin cycle, N-glycan trimming and peptide chain elongation (Fig. 2A), while gene sets upregulated in response to hyperglycaemia included cholesterol biosynthesis, the complement cascade and the fibrin clotting cascade (Fig. 2B-D). These findings show a pronounced, distinctive response of hepatocytes to hyperglycaemia.
VPA treatment influences the expression of hyperglycaemic response genes. Given that hyperglycaemia induces changes to the hepatocyte transcriptome and activates pathways relevant to cardiovascular health (such as cholesterol metabolism and the complement/clotting cascades), and that our previous work shows VPA attenuates hepatic function, we hypothesised that VPA might inhibit hyperglycaemic gene expression signatures. To resolve this, cells under LG and HG conditions for 48 h were exposed to 1.0 mM VPA for a further 12 h. The gene expression profiles were compared to the respective controls without VPA. Quantitative analysis of histone H3K9/14ac protein using the LI-COR Odyssey imaging system showed significant hyperacetylation in response to VPA (Fig. 3A). Multidimensional scaling analysis shows that samples cluster based on treatment group. Untreated samples (LG, HG) are clearly separated from VPA-treated samples (LGV, HGV), and normoglycemic samples (LG, LGV) are separated from hyperglycaemic ones (HG, HGV) (Fig. 3B). A smear plot shows that 7802 genes were altered in expression by VPA treatment under hyperglycaemia (Fig. 3C). This plot also shows that genes with initially low expression were upregulated after VPA treatment, whereas initially highly expressed genes were down-regulated. A heatmap of the top 50 genes by significance shows that the majority were upregulated (Fig. 3D), and differential gene expression was modulated by VPA under both normoglycaemic and hyperglycaemic conditions (Fig. 3E).
Next, we sought to identify gene sets altered by VPA in the context of high glucose. The top 20 gene sets by significance in the up- and down-regulated directions are shown (Fig. 4A). Upregulated gene sets included those related to neuronal function, including potassium channels, neurotransmitter receptors, and L1-type/ankyrin interactions. Down-regulated gene sets included the common pathway of fibrin clot formation, the complement cascade and genes involved in protein synthesis. Clotting and complement cascade genes were down-regulated by VPA under hyperglycaemic conditions (Fig. 4B,C). The regulation of all genes in response to glucose and VPA was visualised on a two-dimensional rank-rank plot (Fig. 4D). We observe that, overall, genes are distributed relatively evenly among the four quadrants. Using rank-rank visualisation of clotting and complement cascade genes, we observed coordinated upregulation of these genes with hyperglycaemia and attenuation by VPA (Fig. 4E,F). The FDR-corrected MANOVA p values for the two-dimensional association were 2.0E−4 and 1.5E−7 for the clotting and complement cascades, respectively.
To validate some of the differentially expressed genes from the RNA-seq findings, we cultivated HepG2 cells under hyperglycaemic conditions prior to treatment with VPA as described above, followed by quantitative reverse transcription PCR (RT-qPCR). The selected genes included those involved in the complement pathway (MASP2 and C3). Genes upregulated by hyperglycaemia according to the RNA-seq results, and confirmed by real-time PCR, were attenuated by VPA stimulation, in agreement with RNA-seq (Fig. 5). As proof of concept, we assessed the relative abundance of H3K9ac at the promoters of the MASP2 and C3 genes using chromatin immunoprecipitation (ChIP) and qPCR detection. We observe reduced H3K9ac in VPA-treated cells, and under hyperglycaemic conditions H3K9ac is partly attenuated, suggesting that hyperglycaemia-induced expression of complement and clotting genes could be regulated by histone acetylation.
Discussion
The metabolic syndrome and its associated cardiovascular complications are a major health burden. There is limited information on how hyperglycaemia influences gene regulation. Because the liver plays a major role in energy homeostasis, we hypothesised that hepatocytes show robust gene expression changes in response to hyperglycaemia, some of which could be deleterious to cardiovascular health. Furthermore, we hypothesised that HDAC inhibition via VPA could reverse or attenuate some of these gene pathways. We used high-throughput RNA sequencing (RNA-seq) for its unbiased ability to detect expressed genes with greater sensitivity and accuracy than gene expression microarrays. With appropriate bioinformatics tools, regulatory events at genes and pathways (sets of genes) can be pinpointed in a way that is more efficient than single-gene assays. These tools were applied to identify genes and pathways that respond to hyperglycaemia and/or VPA in hepatocytes.
The major gene sets upregulated by hyperglycaemia were related to cholesterol metabolism, DNA replication, and the complement and clotting cascades. The observation of elevated expression of clotting and complement factors is consistent with reports of these proteins being elevated in patients with diabetes. Interestingly, of these pathways, only the complement and clotting cascades were attenuated by VPA.
The complement system is central to innate immunity against microorganisms and is a modulator of inflammatory processes; it comprises a complex and tightly regulated group of proteins involving various soluble and surface-bound components. Depending on the activation trigger, the complement cascade follows one of three pathways: classical, lectin or alternative 39. Although these pathways differ in their mechanisms of target recognition, all converge on the activation of the central component C3. This process is followed by C5 cleavage and the assembly of the pore-like membrane attack complex (MAC). Important chemoattractants and inflammatory mediators are produced by the enzymatic cleavage of C3 and C5, which leads to the release of the anaphylatoxins C3a and C5a 40.
The coagulation system is another major blood-borne proteolytic cascade 41. Upon activation of the coagulation cascade, a sequential series of serine protease-mediated cleavage events occurs. Thrombin is activated from its zymogen prothrombin and then catalyzes the polymerization of fibrin by cleaving small peptides from its subunits. In this way, soluble fibrinogen is converted into insoluble fibrin, which allows clot formation 42. Thrombin also plays a key role in amplifying the cascade by feedback activation of coagulation factors 43. Other components, such as circulating red and white blood cells and platelets, are incorporated into the clot structure. In addition, factor XIIIa, which is also activated by thrombin, provides further structural stability by cross-linking fibrin 44. In this context, weak clots are more susceptible to fibrinolysis and bleeding, while resistant clots may promote thrombosis 42. The coagulation and complement cascades share a common evolutionary origin 41, and their interplay is highlighted by the presence of the complement proteins C3, C4, C5a and FB in thrombi 45. Similarly, the pro-coagulation enzymes thrombin and factors IXa, Xa and XI can activate the complement cascade 46. Moreover, MASP2, a component of lectin complement activation, is capable of cleaving the coagulation factors prothrombin (to thrombin), fibrinogen, factor XIII and thrombin-activatable fibrinolysis inhibitor in vitro 47,48.
Thus, understanding the crosstalk between these pathways has fundamental clinical implications in the context of diseases with an inflammatory and thrombotic pathogenesis, in which complement-coagulation interactions contribute to the development of complications 49 .
The liver, mainly hepatocytes, is responsible for the biosynthesis and secretion of the majority of complement and coagulation components. Furthermore, the promoter regions of these components are controlled by several common liver-specific transcription factors, such as the HNFs and C/EBP 50. Thomas and co-workers 51 compared the genome-wide binding of Fxr and Hnf4α in mouse liver and characterized their cooperative activity in binding to and activating target gene transcription. Genes bound by Fxr and Hnf4α are enriched in the complement and coagulation cascades, as well as in pathways related to drug metabolism. Furthermore, these transcription factors are involved in the regulation of gluconeogenesis and glycogenolysis genes 52,53. Thus, a common transcription factor network may control these immune and metabolic pathways.
The participation of complement in metabolism and metabolic disorders has recently received increasing scientific attention. Earlier studies demonstrate higher plasma C3 levels in diabetic patients compared to healthy individuals 6,54 . Increased complement gene expression has also been associated with adipocyte insulin resistance, waist circumference, and triglyceride levels 55,56 . MASP-1 and MASP-2 levels were significantly higher in children and adults with T1DM than in their respective control groups, whereas these proteins levels decreased when glycemic control improved 57 . In a recent study it was reported that in a murine model of diabetic nephropathy, genetic knock-out or pharmacological blockade of complement component 5a receptor 1 (C5ar1) conferred renal protection and attenuated disease-associated metabolic changes, further reinforcing the importance of the complement pathway 58 .
Metabolic syndrome, including diabetes mellitus, is associated with a procoagulant state, in which the clotting system is switched toward a prothrombotic state involving reduced fibrinolysis, increased plasmatic coagulation, and platelet hyperactivity 43,59,60. Intensive glycemic control with insulin reduces the impact of this procoagulant state by affecting components of the clotting system 60. Abnormalities in the coagulation and fibrinolytic systems may contribute to the development of cardiovascular complications in patients with metabolic syndrome 43, and, consistently, lowering of clotting factors is used in the treatment of acute cardiovascular syndromes 61.
This study has limitations. The experimental results, derived from transformed HepG2 cells, are informative but do not replace data derived from primary hepatic cells. Future work is proposed to examine the therapeutic benefit of VPA using pre-clinical models of transient and chronic hyperglycaemia 62,63. While HDAC inhibitors such as VPA are associated with changes in gene expression mediated in part by lysine acetylation of histone residues 64-70, more recent studies have shown dramatic genome-wide histone deacetylation associated with the transcription factors CBP and EP300, identified using ChIP-seq in primary vascular cells [23][24][25]71. Although the experimental results presented in this article suggest that lysine acetylation is associated with VPA attenuating MASP2 and C3 gene expression, we cannot rule out histone deacetylation of other complement genes. Furthermore, in addition to causing histone hyperacetylation, VPA has been shown to promote replication-independent loss of DNA methylation 72, consistent with elevated Tet2 DNA demethylase enzyme 73. This extends the functional role of HDAC inhibitors such as VPA beyond lysine acetylation and deacetylation to include DNA methylation altered by hyperglycaemia-mediated extracellular signalling 74.
Several reports in the medical literature demonstrate that patients with neurological conditions taking VPA exhibit greater blood loss during surgery, impaired clotting, and reduced concentrations of clotting factors [75][76][77][78]. Interestingly, emerging evidence points to the complement cascade as having a causal role in some seizure types. Microarray analysis identified complement cascade gene hyperactivation in brain tissue of epilepsy patients 79,80. Studies in mice identified complement component C3 as necessary for seizures associated with acute viral infection 81. Our findings are consistent with these reports and indicate that VPA-mediated reduction of circulating complement and coagulation factors is a result of specific changes in hepatic gene expression. These changes in gene expression appear to be regulated, at least in part, by the relative abundance of H3K9ac at their promoters, as observed here by ChIP-qPCR analysis. As a prime initiator and important modulator of immunological and inflammatory processes, the complement system has emerged as an attractive target for early and upstream pharmacological intervention in inflammatory diseases 82. In this context, repurposing clinically approved drugs such as VPA provides a time- and cost-effective alternative.
In conclusion, we could, for the first time, associate HDAC inhibition with modulation of complement and coagulation gene expression. We demonstrate that coagulation and complement cascade genes were upregulated by hyperglycaemia and that these changes can be attenuated by VPA through its ability to modulate histone acetylation. Future preclinical studies will resolve whether VPA can mitigate the complications of diabetes in vivo.
Materials and methods
Cell culture. HepG2 cells from ATCC at passage 9 were maintained in Dulbecco's modified Eagle's medium (DMEM) with basal glucose (5.5 mM) (Gibco, Carlsbad, USA) supplemented with 10% fetal bovine serum (GE Healthcare, Chicago, USA) and penicillin and streptomycin (Gibco) (working dilution: 100 IU and 100 μg/mL, respectively). Cells were cultivated for 48 h in normoglycemic (LG 5.5 mM) or hyperglycaemic (HG) medium, 85 using a map quality threshold of 10. Genes with an average of fewer than 10 reads per sample were omitted from downstream analysis. edgeR version 3.6.8 and limma version 3.20.9 were used to perform statistical analysis 86. False discovery rate-controlled p values (FDR) ≤ 0.05 were considered significant. Gene expression of pathways was analyzed with GSEA-P using the classic mode 87. A differential abundance score was obtained for each gene by dividing the sign of the fold change by the log10(p value). This score was used to rank genes from most up-regulated to most down-regulated, as described previously 23. Curated gene sets were downloaded from MSigDB 88. To understand the correlation between the effects of VPA and hyperglycaemia on global gene expression, we generated a rank-rank density plot of each detected gene. Genes were ranked as above and plotted using the filled contour plot function in R. Significance of two-dimensional enrichment of gene sets away from a uniform distribution was calculated with a MANOVA test of ranks in R, as described previously 89.
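The differential abundance score above, sign of the fold change divided by log10 of the p value, can be sketched as follows. The score is a literal transcription of the formula as stated in the methods; the gene names are taken from the paper, but the log fold changes and p values are hypothetical illustrations, and the real analysis used edgeR/limma outputs:

```python
import math

def rank_score(log_fc, p):
    """Differential abundance score as described in the methods:
    sign(fold change) / log10(p value)."""
    sign = 1.0 if log_fc > 0 else (-1.0 if log_fc < 0 else 0.0)
    return sign / math.log10(p)

# Hypothetical (logFC, p value) pairs for three genes
genes = {
    "C3": (2.1, 1e-6),      # upregulated, highly significant
    "MASP2": (1.4, 1e-3),   # upregulated, significant
    "GENE_X": (-0.8, 0.04), # hypothetical down-regulated gene
}

# Since log10(p) < 0 for p < 1, upregulated genes get negative scores and
# down-regulated genes positive ones; ascending sort places up before down.
ranked = sorted(genes, key=lambda g: rank_score(*genes[g]))
```

Note the sign convention: sorting this score ascending separates up- from down-regulated genes, which is what the rank-rank plot requires.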
Reverse transcription quantitative PCR. To validate some differentially expressed genes from the RNA-seq findings, we repeated the experiment using the same cell culture and treatment conditions, isolated total RNA using the RNeasy Mini Kit (Qiagen, Hilden, Germany) and prepared cDNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Waltham, MA). Real-time PCR was performed on an Applied Biosystems 7500 Real-Time PCR system following standard protocols, using TaqMan Gene Expression assays (Applied Biosystems) for the complement genes MASP2 (Hs00373722_m1) and C3 (Hs00163811_m1). Target gene expression was normalized to the expression of H3F3 (Hs02598544_g1). Relative quantification was achieved with the comparative 2^−ΔΔCt method as described previously 90.
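The comparative 2^−ΔΔCt calculation used for relative quantification can be sketched as follows. The Ct values are hypothetical, with H3F3 as the normalizing reference gene as in the study:

```python
def relative_quantity(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative 2^-ddCt: fold change of a target gene in a sample
    relative to a calibrator sample, normalized to a reference gene."""
    d_ct_sample = ct_target - ct_ref          # dCt in the treated sample
    d_ct_cal = ct_target_cal - ct_ref_cal     # dCt in the calibrator (control)
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: target gene in HG vs LG, normalized to H3F3
# dCt sample = 24 - 18 = 6; dCt calibrator = 26 - 18 = 8; ddCt = -2
fold = relative_quantity(ct_target=24.0, ct_ref=18.0,
                         ct_target_cal=26.0, ct_ref_cal=18.0)
```

A ddCt of −2 corresponds to a four-fold higher relative expression, since each Ct cycle represents one doubling of template.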
LI-COR Odyssey H3K9/14ac quantitation. Histones were isolated from HepG2 cells using the acid extraction method 91. Proteins were separated on 4-12% gradient SDS-PAGE gels and transferred onto PVDF membranes (Immobilon-FL; Millipore). Blots were probed with primary antibodies specific for acetyl-histone H3 (06-599; Millipore) or total H3 (14269; Cell Signaling Technology) overnight at 4 °C. Following incubation with primary antibodies, membranes were rinsed and probed with the appropriate mouse or rabbit secondary antibodies. Protein bands were visualized and quantified using the Odyssey CLx imaging system (LI-COR Biotechnology).
Chromatin immunoprecipitation (ChIP) qPCR. Chromatin immunoprecipitation was performed as previously described 16. Three independent 10-cm plates of HepG2 cells grown under the conditions described above (LG, LG VPA, HG, HG VPA) were used per immunoprecipitation. Cells were fixed with 1% formaldehyde in PBS for 10 min at room temperature and the reaction was quenched with glycine at a final concentration of 0.125 M. Sonicated chromatin was quality-checked, and H3K9ac-enriched DNA was immunoprecipitated overnight with anti-H3K9ac antibody (C5B11 rabbit mAb #9649, Cell Signaling Technology). Immunoprecipitated material was washed with a series of salt buffers and collected. The eluted DNA was subjected to qPCR using specific primers (Integrated DNA Technologies) and compared to inputs. The primers used in ChIP qPCR are as follows: MASP2 forward
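The comparison of immunoprecipitated DNA to inputs is commonly expressed as percent input; a minimal sketch under the assumption of a 1% input fraction (the Ct values and input fraction are hypothetical, not taken from the study):

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """ChIP-qPCR signal as a percentage of input chromatin.
    The input Ct is first adjusted for the fraction of chromatin saved
    as input (e.g. 1% -> subtract log2(100) cycles to represent 100%),
    then the IP signal is expressed relative to that adjusted input."""
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical Cts: H3K9ac IP at a promoter vs a 1% input aliquot
enrichment = percent_input(ct_ip=26.0, ct_input=24.0, input_fraction=0.01)
```

Percent-input values for treated vs untreated samples can then be compared directly, which is one common way to read out relative H3K9ac abundance at a promoter.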
|
v3-fos-license
|
2018-12-03T12:48:26.947Z
|
2014-05-22T00:00:00.000
|
54544026
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/85DC12544802.pdf",
"pdf_hash": "047262b89116a95131895e7d3f841ca4f0be7369",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2593",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "047262b89116a95131895e7d3f841ca4f0be7369",
"year": 2014
}
|
pes2o/s2orc
|
Response of selected sorghum (Sorghum bicolor L. Moench) germplasm to aluminium stress
Sorghum (Sorghum bicolor L. Moench) is an important food security crop in sub-Saharan Africa. Its production on acid soils is constrained by aluminium (Al) stress, which primarily interferes with root growth. Sorghum cultivation is widespread in Kenya, but there is limited knowledge of the response of Kenyan sorghum cultivars to aluminium stress. The aim of this study was to identify and morphologically characterise aluminium-tolerant sorghum accessions. The root growth of three hundred and eighty-nine sorghum accessions from local or international sources was assessed under 148 μM Al in soaked paper towels, and 99 of these were selected and further tested in solution. Ten selected accessions were grown out in the field, on un-limed (0 t/ha) or limed (4 t/ha) acid (pH 4.3) soils with high (27%) Al saturation, and their growth and grain yield were assessed. Although Al stress significantly (P ≤ 0.05) reduced root growth in most of the accessions, ten accessions (MCSRP5, MCSR 124, MCSR106, ICSR110, Real60, IS41764, MCSR15, IESV93042-SW, MCSRM45 and MCSRM79f) retained relatively high root growth and were classified as tolerant. The stress significantly (P ≤ 0.05) reduced seedling root and shoot dry matter in the Al-sensitive accessions. Plant growth and yield on un-limed soil were very poor, and liming increased grain yield by an average of 35%. Most of the Kenyan sorghums were sensitive to Al stress, but a few tolerant accessions were identified that could be used for further breeding for improved grain yield in high-aluminium soils.
2006) mainly because of poor agronomy, or abiotic and biotic stresses. Many of the soils used for sorghum cultivation in the tropics are acidic (pH < 5.5). Soil acidity is common in the tropics and subtropics because of the nature of the parent rocks and the high degree of weathering and base leaching that has occurred (Johnson, 1988). The greater proportion of potentially arable land worldwide is acidic (Von Uexküll and Mutert, 1995), and in Kenya acid soils cover up to 13% of the arable land (Kanyanjua et al., 2002).
Although aluminium (Al) is one of the most abundant mineral elements in soil, it occurs in insoluble or non-toxic oxide and hydroxide compounds at neutral or basic pH. However, these compounds become more soluble under acidic (pH < 5.5) conditions and release a variety of Al species, especially the trivalent aluminium ion (Al3+) and soluble hydroxides. Al3+ is toxic to plants and occurs both in solution and at cation exchange sites, where it is easily exchanged with other soluble cations. Acid soils in Kenya have between 8 and 61% Al saturation (Obura et al., 2010). Most plants are adversely affected if the soil contains more than 20% aluminium saturation.
The primary effect of Al stress is stunting of the roots (Rengel, 1996). The resulting restricted root system is inefficient in water and mineral absorption, making the plant more susceptible to water stress or mineral nutrient deficiency. The combined limitation on water and mineral nutrient absorption leads to poor plant development and low crop yield. However, aluminium-tolerant plants maintain high root growth and plant vigour under Al through exclusion of Al from the root symplasm or tolerance of high Al3+ concentrations in the symplasm (Kochian, 1995). The exclusion of Al from the root is achieved by releasing Al-chelating ligands such as organic acids. The organic acid exudates, secreted in significant amounts by tolerant genotypes, form Al-carboxylate complexes that are not taken up by plant roots. Al-tolerant sorghum genotypes have been shown to secrete relatively large quantities of citric, malic and trans-aconitic acids (Goncales et al., 2005). Although lime is conventionally applied to amend soil acidity and related stresses, the practice increases farming costs. Large quantities of lime (2 to 10 t/ha) are required to ameliorate the acidity and enhance crop growth. Moreover, sub-soil acidity is not effectively corrected by surface liming (Ernani et al., 2004) unless lime is applied in large quantities and mixed into the deeper soil layers. Therefore, the use of Al-tolerant crop cultivars in addition to lime application could greatly enhance yields in soils that have a high percentage of exchangeable aluminium.
Sorghum has significant genotypic variation in tolerance to Al stress (Caniato et al., 2007) that can be exploited to develop varieties with superior tolerance. However, although significant sorghum cultivation in Kenya occurs on acid soils of western Kenya (Obura, 2008; Kisinyo, 2011), there has been limited selection and breeding for Al-tolerant sorghum for this region. Moreover, the amount of yield loss occasioned by Al toxicity in Kenya is not known. The objectives of this study were to determine the level of tolerance in selected Kenyan sorghum lines and to identify Al-tolerant accessions under laboratory and field conditions, with specific reference to seedling root growth and grain yield.
MATERIALS AND METHODS
Three hundred and eighty-nine sorghum accessions comprising Kenyan landraces, commercial varieties, breeding lines, recombinant inbred lines (RILs) and Al-tolerant and -sensitive standard lines, hereinafter termed accessions, were pre-screened for tolerance to Al stress using moistened paper towels. The sorghum seeds were surface sterilized in 1% sodium hypochlorite for 8 min, rinsed with sterile distilled water, and germinated and grown at 26°C for 5 days between sterilized paper towels moistened with 10 ml of treatment solution (pH 4.0) at two levels of Al stress: 0.82 mM Al or without Al (control). The cellulose fibres in the paper bind Al3+, thus reducing the effective concentration; earlier studies had shown that 0.82 mM Al3+ in filter paper tests is equivalent to 148 μM Al in free solution (Tamas et al., 2006). The root length was measured and the root tolerance index (RTI) was calculated as the ratio of root length under Al stress to root length in the control. The RTI was used to group the accessions into tolerant or sensitive categories. After the pre-screening, a representative sample of 99 accessions (Table 1) that had been rated as tolerant, sensitive or intermediate was selected and subjected to Al stress in aerated nutrient solution (Magnavaca et al., 1987). Sterilized sorghum seeds were pre-germinated in the dark for 72 h at 25°C between sheets of sterilized paper towels moistened with sterile distilled water. Healthy seedlings with similar root size and form were grown in the nutrient solution without Al for 24 h to equilibrate. The initial length of the main root (IRL) was measured and recorded. Thereafter the seedlings were transferred individually into growth vials that were placed in holding plastic rafts and transferred to trays containing eight litres of nutrient solution without Al (control) or with 148 or 222 μM Al (Caniato et al., 2007).
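As a minimal sketch of the pre-screen grouping, assuming that RTI is the ratio of root length under Al stress to root length in the control (consistent with RTI > 1.0 indicating better growth under Al) and an assumed cut-off of 0.75, the classification step could look like this (variable names and example values are illustrative only):

```python
def rti(root_length_al, root_length_control):
    """Root tolerance index: Al-stressed root length relative to the control."""
    return root_length_al / root_length_control

def classify(index, cutoff=0.75):
    """Illustrative two-way grouping of the pre-screen (cut-off assumed)."""
    return "tolerant" if index > cutoff else "sensitive"

# Example: 6.3 cm mean root length under 0.82 mM Al vs. 7.0 cm in the control
index = rti(6.3, 7.0)   # 0.90
print(classify(index))  # tolerant
```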
The seedlings were grown in a plant growth chamber with gentle, continuous aeration for 120 h at 28°C with a 17/7 h photoperiod and a light intensity of 200 µmol m-2 s-1. The set-up was replicated five times. The length of the main root with branches in the control (RLBC) and in the Al treatment (RLBAl) was measured and recorded. The shoot and root dry weight (68°C for 48 h) of five representative sorghum accessions was determined and recorded.
The data were used to calculate seedling growth indices: net root length (NRL), percentage of response (% response), relative net root length (RNRL) and percentage of reduction in root branching (% RRB) (Magalhaes et al., 2004), thus:

NRL = FRL - IRL

where FRL is the final root length (in both Al-treated and control plants) and IRL is the initial root length. The response (%) was measured as:

% response = ((FRLC - FRLAl) / FRLC) × 100

where FRLC is the final root length in the control and FRLAl is the final root length in Al. RNRL was calculated as:

RNRL = (NRLAl / NRLC) × 100

where NRLAl is the net root length in Al and NRLC is the net root length in the control. The percent reduction in root branching was calculated as:

% RRB = ((RLBC - RLBAl) / RLBC) × 100

where % RRB is the percent reduction in root branching, RLBC is the length of root with branches in the control, and RLBAl is the length of root with branches in aluminium.
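The index definitions reduce to simple differences and ratios; a minimal sketch in Python (variable names are illustrative and the example measurements are hypothetical):

```python
def net_root_length(frl, irl):
    """NRL: final minus initial main-root length (cm)."""
    return frl - irl

def percent_response(frl_control, frl_al):
    """% response: Al-induced reduction in final root length."""
    return (frl_control - frl_al) / frl_control * 100

def rnrl(nrl_al, nrl_control):
    """RNRL: net root growth under Al as a percentage of the control."""
    return nrl_al / nrl_control * 100

def percent_rrb(rlb_control, rlb_al):
    """% RRB: percent reduction in root branching."""
    return (rlb_control - rlb_al) / rlb_control * 100

# Example: a seedling with a 2.0 cm initial root, reaching 9.0 cm in the
# control but only 4.1 cm under 148 uM Al
nrl_c = net_root_length(9.0, 2.0)    # 7.0 cm
nrl_al = net_root_length(4.1, 2.0)   # 2.1 cm
print(rnrl(nrl_al, nrl_c))           # about 30, i.e. the sensitive range (RNRL 18-49%)
```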
A sample of five of the accessions: MCSRP5 (Al-tolerant popular landrace); ICSR110 (Al-tolerant standard check); MCSR15 (Al-tolerant RIL); Seredo (Al-sensitive commercial variety) and MCSRL5 (Al-sensitive popular landrace) was used to evaluate the effect of Al on root and shoot dry weight. To show root injury caused by Al stress, the root tips of some lines were visualized and photographed using a microscope (Leica DMLB) fitted with a Leica DC 300 digital camera.
The accessions were grown out in plots in the field with or without lime in a split-plot design. Lime (21% calcium oxide) was applied and mixed with the top soil in one block 60 days before planting, at a rate equivalent to 4 t/ha. The plots were ploughed to a fine tilth. The seeds were hand-sown at a spacing of 60 cm between rows and 20 cm within rows in plots measuring 2 × 3 m, which translated into 83,333 plants per hectare. Both blocks received a uniform application of 75 kg/ha of diammonium phosphate (DAP) at sowing. The number of leaves and leaf area per plant were assessed at 50% flowering. The length and width of individual leaves per plant were measured using a metre ruler and leaf area was then calculated using the formula of Stickler et al. (1961). Grain yield and thousand-seed weight were assessed and recorded after harvest. All the data were subjected to analysis of variance (ANOVA) using SPSS®. Differences were adopted as significant at P ≤ 0.05. Means were separated using Tukey's honestly significant difference (HSD) test. The indices data were subjected to square-root transformation before statistical analysis.
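The per-plant leaf area computation can be sketched as follows. Note the shape factor is an assumption: 0.747 is the coefficient commonly attributed to Stickler et al. (1961) for sorghum, since the formula itself is not reproduced above.

```python
SHAPE_FACTOR = 0.747  # assumed sorghum leaf-shape coefficient (Stickler et al., 1961)

def leaf_area(length_cm, max_width_cm, factor=SHAPE_FACTOR):
    """Estimated area (cm^2) of a single leaf from length and maximum width."""
    return length_cm * max_width_cm * factor

def plant_leaf_area(leaves):
    """Total leaf area per plant from (length, width) measurement pairs."""
    return sum(leaf_area(l, w) for l, w in leaves)

# Example: three leaves measured on one plant at 50% flowering
print(round(plant_leaf_area([(60, 7.0), (55, 6.5), (48, 6.0)]), 1))  # 795.9 cm^2
```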
RESULTS
It was possible to grade the 389 sorghum accessions for aluminium tolerance using the RTIs of filter paper-grown seedlings. Fifty percent of the accessions had an RTI of more than 0.75, whereas the other half had an RTI of less than 0.75 (Figure 2). Some of the resistant accessions had better root growth (RTI > 1.0) under 148 μM Al than under the control.
In the nutrient solution, the net root length of most sorghum accessions was significantly (P ≤ 0.05) reduced by the 148 μM Al stress (Table 2). Percent response to Al corresponds to Al-induced reduction in root growth. Only 10 accessions (MCSRP5, MCSR124, MCSR106, ICSR110, Real60, IS41764, MCSR15, IESV93042-SW, MCSRM45 and MCSRM79f) had less than 30% root growth reduction in response to Al (RNRL > 70%), and were therefore classified as tolerant to Al stress. Twenty-five accessions expressed root growth reduction ranging between 35 and 50% (RNRL 50 to 65%), and were classified as moderately tolerant. Sixty-four accessions had between 51 and 82% root growth reduction (RNRL 18 to 49%) and were classified as sensitive to Al stress. The accessions that expressed more than 70% reduction in root growth (RNRL < 30%) were classified as highly sensitive; they included MCSRG2, MCSRM44, MCSRL5, MCSRN120, Hakika, MCSRN88 and MCSRM45b.
The relative effect of Al stress on root growth in representative sensitive and tolerant sorghum accessions is presented in Figure 3. Root growth in sensitive accessions was severely reduced by the stress, whereas that of tolerant accessions was only minimally affected. Figure 4 shows the appearance of root tips under bright-field microscope examination. Although the root tip morphology of the Al-resistant accessions was fairly normal, the root tips of Al-sensitive accessions developed surface lesions after 120 h of exposure to 148 μM Al.
Some accessions, such as MCSR124, MCSR15, MCSR17, MCSR60, MCSRJ3b, MCSRI19, ICSV112, Pato and MCSRM45b, had significantly longer roots than the rest of the accessions when grown without Al stress. However, only two accessions from this group, MCSR124 and MCSR15, maintained high root growth under Al stress. There was significant (P ≤ 0.05) variation in root branching both among the different sorghum accessions grown without Al stress and among those subjected to 148 μM Al stress (Table 2). Root branching was significantly reduced by the stress, with most accessions having a percent relative reduction in root branching of >50% (Table 2). However, some accessions, such as MCSR124, MCSR15, IESV93042-SW, MCSRN81, MCSRL6 and MCSRG2, had ≤50% relative reduction in root branching, whereas in some, root branches were initiated but failed to elongate. The roots of MCSRF-6, ICSB608, MCSRF-1 and MCSRN88 did not branch at all under Al stress.
Aluminium stress at 148 μM significantly (P ≤ 0.05) reduced root and shoot dry weight in MCSRL5, Seredo and MCSRP5, but not in ICSR110 and MCSR15 (Figure 5a and b). MCSR15 and MCSRP5 had the highest root and shoot dry weight, respectively, at 148 μM, whereas MCSRL5 and Seredo had the lowest root and shoot dry weight, respectively. At 222 μM Al, all the accessions had a significant (P ≤ 0.05) reduction in root and shoot dry weight.
Results on the effect of soil liming on plant growth in the field are presented in Table 3 and Figure 6. There were differences in vigour between sorghum plants grown in the limed and un-limed field plots at the early vegetative stages, with the crop in the limed plots showing higher vigour than that in the un-limed plots (Figure 6). Lime application did not cause a significant change in leaf area per plant in any of the sorghum accessions (Table 3). ICSV112 and MCSRM33 had the highest and the lowest total leaf area per plant, respectively, in un-limed soil. IS41764 had the highest, whereas MCSRM33 and Real60 had the lowest, total leaf area per plant in the limed soil. The number of leaves per plant was significantly (P ≤ 0.05) higher in limed soil than in non-limed soil in Macia, Real60 and MCSRL5, whereas lime application had no significant effect on the number of leaves in the rest of the accessions. In non-limed soil, MCSRL5 and MCSRM33 had the fewest leaves per plant, whereas IS41764 had the most.
In non-limed soils, MCSRM33 had the lowest grain yield per plant (21.2 g, equivalent to 1767 kg/ha), while Real60 had the highest grain yield per plant (47.9 g, equivalent to 3916 kg/ha) (Table 4). In limed soils, ICSR110 had the lowest grain yield (33.9 g, equivalent to 2825 kg/ha), while ICSV112 had the highest grain yield (equivalent to 4733 kg/ha) (Table 4).

[Table 4: Effect of liming (4 t/ha) on 1000-seed weight (g) and grain yield per plant in selected sorghum accessions. Values with similar letters within the column and row of the same attribute are not significantly different at P ≤ 0.05; means were separated using Tukey's HSD test. S.E. 0.8 and 7.6 for 1000-seed weight and total grain yield, respectively. Values in brackets are equivalent grain yield in kg/ha. I = percent increase in grain yield; C = classification based on solution culture assay for response to Al stress: HS = highly sensitive, MT = moderately tolerant, S = sensitive, T = tolerant.]
Lime application caused a significant increase in total grain yield per plant in ICSV112 and MCSRN61 (P ≤ 0.05). The increase in grain yield ranged from 24 to 46%, with ICSR110 and ICSV112 having the lowest and highest increase in grain yield, respectively. An average increase of 35% in overall grain yield was registered as a result of lime application. Similarly, the application of lime significantly increased the 1000-seed weight in all the sorghum accessions except ICSR110 (P ≤ 0.05; Table 4).
DISCUSSION
Differential response to Al stress was observed at 148 μM Al, where only 10% of the 389 accessions were tolerant. At 222 μM Al, root growth was severely restricted in all the sorghum accessions, which showed that this concentration was too high to differentiate sorghum response to Al stress. Therefore, screening for Al resistance in sorghum should be carried out at 148 μM Al. Aluminium concentrations of 148 μM and 222 μM correspond to 27 μM and 39 μM free Al ions (Al3+) (Magalhaes et al., 2004). These concentrations have previously been reported to reduce root growth in sorghum (Caniato et al., 2007). In this study, some of the accessions had inherently long roots in nutrient solution without Al. A few of these accessions were tolerant to Al stress, whereas most were sensitive. These accessions can be crossed with sorghums that had short roots but were tolerant to Al stress. A combination of long roots and Al tolerance is a good set of attributes for enhanced acquisition of nutrients and moisture in acid soils with high levels of Al, consequently improving growth, drought tolerance and grain production in such soils.
The most Al-sensitive accessions used in this study, which included MCSRG2, MCSRM44, MCSRL5, MCSRN120, Hakika, MCSRN88 and MCSRM45b, had stubby roots with brown colouration at 148 μM Al. The root tips had surface lesions due to injury caused by Al stress. Similar observations on root injury due to Al stress have been reported previously (Mossor-Pietraszewska et al., 1997). Root stunting is a consequence of Al-induced inhibition of root elongation, which is the most evident symptom of Al toxicity (Matsumoto, 2000). Aluminium stress has been reported to reduce cell wall extensibility in wheat roots, and this Al-induced change in the cell wall contributes to the inhibition of root growth (Ma et al., 2004). In addition, Al-induced inhibition of K+ uptake through blocking of the responsible channels would interfere with turgor-driven cell elongation (Liu and Luan, 2001).
Aluminium stress significantly reduced root branching in most sorghum accessions, with ninety-five percent of the accessions showing 50% or more reduction in root branching. The most sensitive accessions did not develop any lateral roots, while in some, root branches were initiated but failed to elongate, which is in line with previous reports (Roy et al., 1988). Differential elongation of root branches in response to aluminium stress was also reported in maize (Bushamuka and Zobel, 1998) and apparently is a common reaction of plant root systems to the stress.
Aluminium stress significantly reduced root and shoot dry matter especially in the Al-sensitive sorghum accessions. The Al tolerant accessions had higher average root and shoot dry matter than the susceptible accessions. Similar results have been reported in barley (Foy, 1996). Aluminium has been reported to interfere with uptake, transport and utilization of nutrients, especially Ca, Mg, P, N and K and reduce accumulation of dry matter (Nichol and Oliveira, 1995). Larger root systems are known to have a greater capacity for absorbing water and minerals, as they are able to explore a larger rhizosphere (Osmont et al., 2007).
The sorghum accessions grown on acid non-limed soil had lower above-ground growth and yield compared to those grown in limed soil. Some sorghum accessions that were Al-sensitive in solution culture were also severely affected by the stress in the field. Application of lime significantly increased total leaf area and the number of leaves per plant. High leaf area is important for interception of photosynthetically active radiation, which translates to enhanced rates of photosynthesis and consequently high biomass accumulation. It has been reported that high levels of Al inhibit leaf growth in soybean (Zhang et al., 2007). The significant increase in growth and production in the limed soil can be attributed to increased root growth and establishment, which translates to improved access to water and nutrients. Liming the acid soil raised the soil pH, as reported by Kisinyo (2011), and because the solubility of Al is highly pH-dependent, this could have lowered concentrations of exchangeable Al to negligible levels that did not limit sorghum growth.
Soil chemical factors that limit root growth in acid soils, such as aluminium, diminish crop production through a rapid inhibition of root growth that translates to a reduction in vigour and crop yields (Kochian et al., 2005). Plants grown in soils with high levels of aluminium have reduced root systems and exhibit a variety of nutrient-deficiency symptoms, with a consequent decrease in yield. Decreased above-ground plant growth in soil with a high percentage of Al saturation has been reported (Miller et al., 2009), accompanied by reduced uptake of P and N in the acidic soil. An Al-tolerant maize line had increased levels of mineral nutrients in roots and shoots compared with a sensitive inbred line when grown in an Al-treated nutrient solution (Giannakoula et al., 2008). Genotypic variation in nutrient uptake in the presence of toxic levels of aluminium has also been reported in sorghum (Baligar et al., 1993), where the Al-tolerant genotypes had higher nutrient uptake efficiency than the Al-sensitive genotypes.
An overall 35% reduction in sorghum grain yield was realized in non-limed soil, with the Al-sensitive accessions having greater reductions than the Al-tolerant accessions. In this regard, Gallardo et al. (1999) reported 50 and 30% reductions of grain yield in Al-sensitive and Al-resistant cultivars of barley, respectively, when grown in soil that contained high levels of exchangeable Al.
The Al-tolerant standard check ICSR110 registered low grain yields in non-limed soil but had the lowest response to lime application. Similar results have been reported in maize (Zea mays), where 'Cateto', one of the most Al-tolerant Brazilian lines, has been shown to be a low yielder and has been used as a source of genes for Al tolerance in maize breeding programmes. The Al-sensitive lines MCSRL5 and ICSV112 had relatively high yields but had low and moderate responses to lime, respectively. The yield of these accessions could be improved in acid Al-toxic soils by crossing with ICSR110, which had better root growth under Al stress conditions. Real60 and MCSRM45 registered high yields and were also tolerant to Al stress in solution culture, and therefore, in addition to ICSR110, are potential sources of Al tolerance genes in sorghum breeding programmes.
Conclusions
Al toxicity significantly reduced the development and elongation of main roots and root branches in aluminium-sensitive sorghum accessions. Only 10% of the sorghum accessions used in the study were tolerant. Al stress reduced root and shoot dry weight as well as plant growth and grain production under field conditions. Therefore, there is a need to disseminate the Al-tolerant lines to sorghum farmers for cultivation in areas where soil acidity and aluminium stress are known to occur. Future sorghum breeding programmes should include the identified superior sorghum accessions as donors of aluminium tolerance genes to the locally adapted sorghums cultivated in acid soils with high levels of Al.
The Ascended Confucius: Images of the Chinese Master in the Euro-American Esoteric Discourse
This article provides a diachronic panorama of the nineteenth- and twentieth-century Euro-American esoteric images of Confucius. After selected appraisals capturing the polyphony of nineteenth-century notions of Confucius, emphasis is given to spiritualist and Theosophical appropriations. Next, his soteriological elevation and the introduction of fellow Chinese Masters within the Ascended Masters context are explored in relation to the I AM Activity and, specifically, the post-Second World War groups The Bridge to Freedom (present-day The Bridge to Spiritual Freedom) and The Church Universal and Triumphant. Overall, this article traces the transformation of the esoteric Confucius trope, which substantially contributed to the wider public perception of Confucius and Confucianism.
Pokorny, Numen 71 (2024) 29-47

2011) in the People's Republic of China (PRC). The Master's metamorphosis from the Cultural Revolution's "arch-villain of the feudal past" (Murray 2015: 157) to the PRC's chief sociopolitical theoretician tenderly evoked by Xi Jinping (b. 1953) is a most recent, monumental episode in the ever-expanding hagiographical continuum of one Kong Qiu 孔丘 (trad. 551-479 BCE). This historical figure, in his guise as Kongzi 孔子 (Master Kong), personifies the ancient ru 儒 (gentle scholar-teachers) tradition, emically serving as its "crucial transmitter" upon which one of China's Three Teachings (sanjiao 三教) was effectively built: "Confucianism," a nineteenth-century neologism derived from the Jesuit missionary project of the late sixteenth and early seventeenth centuries. The Jesuit encounter proved enormously influential for the western reception of China and its religious heritage until modern times. Kongzi became an integral part of the Jesuit accommodation (Rule 1972; Mungello 1985). His favorable appropriation qua "Confucius" sparked a globally entangled negotiation of his credentials.1 This ongoing manufacturing of the Confucius trope (Jensen 1997) spread across many discourses with numerous parties involved, generating a plethora of profiles and identities in the process.
In the seventeenth and eighteenth centuries, the European imagination of China and Confucius embraced both Sinophile fascination and Sinophobic vilification (Dijkstra 2022: 266-268) as well as everything in between, with the Master's portrayal oscillating from a noble monotheizing savage and a moral icon of the Enlightenment to a cold and pagan fossil of sterile hierarchism and paralyzing ritualism. Confucius, the "notable philosopher … of most upright and incorrupt manners," was first introduced to a wider readership in 1599 through a second-hand travel account (DeLapp 2022: 75-76), the sixth volume of The Principal Navigations, Voyages, Traffiques and Discoveries of the English Nation by the prominent English writer and cleric Richard Hakluyt (1553-1616). Hakluyt never went to East Asia, nor did the vast majority of those engaging publicly with China, Confucianism, and Confucius in the centuries ahead. Meandering across the centuries, "their" Confucius was the multifaceted expression of an ever-growing intertextually linked corpus of writings, which at the core was indebted to the Jesuit interpretative paradigm - however qualitatively different its actual reception - garnished by a panoply of (factual or factitious) travelogues.
A watershed in the general history of religions, the nineteenth century witnessed a surging proliferation of source materials in translation and scholarly expositions. At this vibrant juncture of reception history, a distinct discursive prism came increasingly to be applied that significantly contributed to the shaping of the wider public perception of (East) Asia and its religiosities, including Confucianism (Pokorny and Winter, Forthcoming).
This article sheds light on the variegated portfolio of roles assigned to Confucius in the course of the wider Euro-American esoteric encounter with "him" and "his" tradition across some nearly two hundred years, tracing in particular Confucius's soteriological elevation.2 A focus is thereby put on the English-language context. The following section starts in the nineteenth century with a collage of appraisals of Confucius that clearly echo the dichotomous portrayals of the preceding centuries. It then briefly addresses the appropriation of Confucius in spiritualist and, specifically, Theosophical circles. Especially the latter's appreciation of the Chinese sage in conjunction with a newly defined concept rising in centrality, namely, that of spiritual Masters - that is, enlightened intermediaries between transcendence and immanence who gradually initiate their disciples to a perennial truth - put Confucius on a particular trajectory alongside other distinguished religious and philosophical personalities.3 It subsequently came into bloom within the Ascended Masters narrative concocted and saliently carried forward from the 1930s by the I AM Activity, which arose out of a marriage of Theosophy and New Thought. After the Second World War, the extended I AM Discourse held in store a great salvific transformation for Confucius, fleshing out most resonantly in The Bridge to Freedom and The Church Universal and Triumphant. Born in a schismatic fashion, these two represent pivotal movements within the New Age current, serving as a powerhouse for the still continuing expansion of the Ascended Masters universe today. The second part of this article examines Confucius's progression into a prime salvational figure therein while also touching on his Chinese peers.
Euro-American Esoteric Musings of the Long Nineteenth Century
Confucius was generally but a side note in the majority of nineteenth- and early twentieth-century esoteric discourse. If mentioned, he mostly appeared as the Great Chinese Philosopher or Sage, but at times also took other guises, such as "the reformer of the degenerate Bhuddism [sic], or Lamaism" (Rebold 1868: 367), "a firm believer in mesmerism" (Gee 1885: 11), a revelator and handler of ancient magic (Lévi 1860: 3, 410), and a failed God-sent redeemer of a decaying civilization (Nason 1880: 363-364). Drawing on the Flemish Jesuit Philippe Couplet's (1623-1693) reading of a passage from the Daoist Liezi 列子,4 the "Moses of China" (Yarker 1882: 85) was sporadically ranked among the prophetic voices presaging Jesus's arrival. As one English Freemason suggested, the misattribution of Confucius's alleged foresight to involve Buddha in lieu of Jesus had his disciples flock to the former, and China thus quickly "became celebrated for the practice of every impurity and abomination which characterized the most degraded nation of the heathen world" (Oliver 1829: 59-60). Spiritualists occasionally traced him as a fellow practitioner of the art, a "great purifier of the morals" (Home 1878: 25) reviving "primal knowledge" with his spiritualistic teachings (Howitt 1863: 298). For some, losing sight of his real message led Chinese society into "degradation … appear[ing] almost irremediable" (Home 1878: 26), whereas others viewed his authoritarianism and secularism as the very reason for "the decadence of all true grandeur of religious idea among so many millions of Chinese" (Kenealy 1878: lxvii). The eminent Danish-American Theosophical connoisseur of Chinese thought, Carl Henrik Andreas Bjerregaard (1845-1922), deemed Confucius's ceremonialism to be no less than "the bane of China" (Bjerregaard 1912: 96).
When directly compared in esoteric assessments, Laozi usually eclipsed Confucius. Austrian physician Joseph Ennemoser's (1787-1854) characterization in his Geschichte der Magie (History of Magic) is a case in point: whereas Confucius is at first introduced alongside Laozi as "the greatest mind of the Chinese Nation," he is subsequently degraded to lack his older peer's deep inwardness and pondering of the secrets of God and the world. Barren of enlightenment, Confucians would have facilitated spiritual idleness and the inactivity of the Chinese mind while fighting societal progress (Ennemoser 1844: 334-335). Similarly, the English spiritualist Emma Hardinge Britten (1823-1899) disparaged the Chinese sage's spiritual teachings as greatly inferior vis-à-vis those of "Lao-Kiun" (Hardinge Britten 1876: 92). Finally, writing for The Occult Review, German-born mysticism aficionada Regina Miriam Bloch (1888-1938) put it bluntly: "Naturally, one's heart goes out more to Lao-tsze than to Confucius. The latter was a great educative factor, but the former was both a poet and a mystic and altogether higher and finer" (Bloch 1923: 167).
However rarely, Confucius also came to be marshaled in séances, if only to confirm succinctly that the "doctrine of Christ is in the centre of our true heart" (Anonymous 1881: 499). The English Swedenborgian and spiritualist William Oxley (1823-1905) even elevated Confucius to a chief (albeit largely taciturn and unintelligible) agent of Christianity, the "Mighty Operating Angel" (Oxley 1877: 238). Not only did he utter English aphorisms or indulge in enigmaticness, but Confucius was later also to demonstrate both his oral Mandarin proficiency and his Chinese writing skills, as prominently displayed through the well-known mediums George Valiantine (1874-1947) and Mina "Margery" Crandon (1888-1941) in 1927 (New York) and 1928 (Boston), respectively (de Brath 1929).
More than half a century earlier, in 1866, the famous Boston medium Frances Ann Conant (alias J. H. Conant, 1831-1875) allegedly attested to Confucius (reincarnate) even a millenarian role. While in rapport with a spirit addressing French spiritist Allan Kardec's (1804-1869) theory of reincarnation, the spirit suddenly presaged that upon re-embodiment in circa 1868, Confucius "would … shed a great spiritual illumination among the Chinese" (von Langsdorff 1889: 270). "His" actual reincarnation, as it was conjectured in the Boston-based spiritualistic magazine Banner of Light in a December 1888 article, might indeed have been confirmed by an account of an approximately twenty-year-old "Duke Confucius … of Pekin" (Anonymous 1888: 4), published two months earlier in a Pennsylvanian newspaper, the Warren Mirror.5 The apparent new Confucius was in fact the Duke of Yangsheng 衍聖, Confucius's seventy-sixth direct descendent Kong Lingyi 孔令貽, who, in 1888, came from Qufu to Beijing for his own wedding and an audience with the Guangxu 光緒 Emperor (r. 1875-1908). With low sociopolitical visibility hardly going beyond his ancestral home, Kong turned out not to be Conant's "messianic" Confucius.
One American spiritualist, Marcenus Rodolphus Kilpatrick Wright (1830-?), although not in mediumistic conversation with Confucius or related spirits, offered one of the very first books on the Chinese Master printed in the United States, The Moral Aphorisms and Terseological Teachings of Confucius (1870).6 To him, Confucius was a "pungent maximist of unexceptionable character," whose unmatched "love of justice" made him into the "originator of the most astute civil and religious philosophy ever given to mankind" and thus "the REEDEMER of the Mongolian race" (Wright 1870: 7-8). Wright did not give his book an explicit spiritualistic bent. Indeed, he concluded that despite affirming the existence of "good and mischievous spirit-beings … [Confucius] refused to countenance their delivery to mankind as familiar messengers" (33). Yet, his Confucius homage circulated relatively well, leading to the publication of a second edition in 1900, which was sympathetically reviewed in the Free Thought Magazine (1900: 541). Wright's Terseological Teachings also found its way into the bookshelves of esotericists, such as those of the eminent American Rosicrucian Sylvester Clark Gould (1840-1909) and the founder of chiropractic Daniel David Palmer (1845-1913) (Albanese 2007: 406). That the published maxims of Confucius, within whom "the light of divine truth shone,"
were held dearly also by other giants of the long nineteenth-century esoteric universe, one finds in the American spiritualist Andrew Jackson Davis's (1826-1910) Arabula or The Divine Guest, which contains "The Gospel according to St. Confucius" (Davis 1867: 310-312), that is, a selection of aphorisms from Barnard's Moral Sayings of Confucius.
1.1 Theosophical Snippets

Confucius's transition from a sage to a highly evolved Master gained esoteric momentum through the Theosophical project. Foundress Helena Petrovna Blavatsky (1831-1891), at first dismissively depicting him as a "cold, practical," narrow-minded nationalist philosopher and explicitly endorsing Ennemoser's negative appraisal (Blavatsky 1875: 224), shortly thereafter reluctantly positioned Confucius as a second-rank "divine son of God" in Isis Unveiled (Blavatsky 1877: 159). In The Secret Doctrine, Blavatsky's Confucius, partially echoing French occultist Éliphas Lévi (1810-1875) (cf. Winter, Forthcoming), appears as "one of the greatest sages of the ancient world" and a "Fifth Round" practitioner of "ancient magic" (Blavatsky 1888: 162, 441). Blavatsky divided human spiritual evolution into seven cycles or "rounds" further partitioned into seven "root races" with seven "subraces" each. Confucius qua "fifth rounder," specifically mentioned alongside Plato, she deemed to be tens of thousands of years spiritually ahead of ordinary humans, who would represent the fifth (or "European") subrace of the fifth (or "Aryan") root race in the fourth round. Yet, Confucius still clearly lagged behind spiritually the "sixth rounders" Buddha and Christ. Eventually, Confucianism found a solemn place within Blavatsky's perennialism as a distinct expression of Theosophical Ethics (Blavatsky 1889: 48-49). Alongside Laozi (cf. Pokorny, Forthcoming), Confucius commonly became one of the "two greatest Chinese Theosophers" (Countess of Caithness 1887: 199) in the Theosophical tradition; his writings, like those of all other major traditions - such as the Kabbalah, the Vedas, and the Apocalypse - containing a "hidden doctrine," which was the "basis of Theosophy" (Anonymous 1882: 30).
Second-generation Theosophist mastermind and former Anglican clergyman Charles Webster Leadbeater (1847-1934) weaved Confucius into his messianic Maitreya narrative, if merely in a passing comment. Confucius (and Laozi) turned into a disciple of the then-World Teacher "Lord Buddha," who had initiated the former into arhatship, that is, bestowing the so-called Fourth Initiation and therefore freeing the spirit from the cycle of forced rebirth. When Buddha moved on to even grander duties as part of the triune Logos forming the "Occult Government of the world" (Leadbeater 1925: 303), his predecessor, Maitreya, sent Confucius (and Laozi) specifically "to incarnate in China" in order to propel the religious progress of humankind in East Asia (Leadbeater 1913: 216). However, Leadbeater's Confucius, relegated to the status of "arhat," was not part of the so-called Great White Brotherhood, the community of even higher initiated undying custodians of the "secret doctrine" and life givers of all religions.
"Great White Brotherhood" is a term not coined but widely popularized by Leadbeater; it was derived from Blavatsky's band of "mahatmas," which she seminally defined as individuals "who, by special training and education, [have] evolved those higher faculties and [have] attained that spiritual knowledge, which ordinary humans will acquire after passing through numberless series of re-incarnations" (Blavatsky 1884: 233). Leadbeater's Theosophical sister in arms Annie Besant (1847-1933) thus spoke of these Great Masters as representing the Great White Lodge (Besant 1894: 496). Around the same time, American Theosophist William Lincoln Garver (1867-1953) creatively novelized Theosophical ideas centering on "the true occult school, the White Brotherhood of the East … [or] Great White Brotherhood" (Garver 1894: 176, 187), also involving Confucius, giving the overall topos additional momentum. Subsequently, the Great White Brotherhood gradually turned into more common esoteric currency when chief Theosophists of the day - Alfred Percy Sinnett (1840-1921), Besant, and especially Leadbeater - picked it up to marshal it in their writings.
The Theosophical Confucius, however, generally retained a pale and rarely referenced profile: a "fine Statesmen as well as a great Sage" (Besant 1925: 131) but not a Great Master in his own right. Invoking Ennemoser and the early Blavatsky, Scottish Theosophist Violet Tweedale (1862-1936) therefore had the pragmatic Confucius, unsurprisingly, lack Laozi's "[deep] spirituality and direct divine inspiration" (Tweedale 1930: 85). An unexpected glimpse into what salvific course was allegedly lying ahead of Confucius in the New Age (current) - given his otherwise almost complete invisibility in her oeuvre - was offered by Neo-Theosophy's grand dame and dear friend of Tweedale, Alice Ann Bailey (1880-1949). A member of the Great White Brotherhood/Lodge, Confucius was to "incarnate in order to superintend the [millenarian] work" (Bailey [1925] 1999: 1080) of a New World Religion emerging from Bailey's very own Arcane School. Bailey's systematic teachings massively influenced the New Age (a term she had popularized) in general and the (post-Second World War) I AM discourses in particular. The conspicuous millenarian role of the Baileyian Confucius might have been the driving force behind his widespread promotion into the highest echelons of the Great White Brotherhood. For example, one of the most eminent self-styled Bailey disciples, Scottish New Ager Benjamin Creme (1922-2016) (Pokorny 2021), saw Confucius as one of the most spiritually elevated beings on the earth, far superior to even the likes of Jesus, Mohammed, Zoroaster, Hermes, and Laozi (Creme [1986] 1996: 373-393).
2 The Ascended Confucius and His Master

These and other writers were formative for the thought of American Guy Warren Ballard (alias Godfré Ray King; 1878-1939), the mediumistic founder of the I AM Activity together with his spouse and successor Edna (1886-1971). A student of the wider occult milieu with a special interest in Theosophy and New Thought, Ballard, in the 1930s, coined the term "Ascended Masters," around which he built his nationalism-steeped esoteric program that spawned a range of groups after the Second World War (Rudbøg 2013) and echoes widely in the New Age to this day. Ballard relocated the Great White Brotherhood's chief present-day stronghold from the "Far East" to Wyoming's Grand Teton Mountain (that is, the "Royal Teton"), thereby assigning to the United States utmost millenarian import. Yet, he retained a connection to the "(Far) East," which had pervaded the "Masters narrative" since Blavatsky, for the ancient Royal Teton Retreat was not led by the group's salvific favorite Saint Germain or an American Master but by a certain "Lanto" (King 1935: 246), an "oriental" Master, introduced by Ballard, whose credentials were not disclosed but who probably stemmed from China, deemed as the most honest nation in the world (Anonymous 1943: 15). Whereas there is no trace of Confucius in the early I AM Discourses, two Chinese Masters are occasionally mentioned, namely, "the Goddess of Mercy Quan Yin," that is, Guanyin 觀音,7 and one American-Chinese "Fun Wey [or, alternatively, Fun Way]," the alleged "embodiment of happiness" (Anonymous 1941: 14).
Pokorny
Lanto's key soteriological role in the I AM current notwithstanding, he remained mostly a passing note in the writings at large. However, his biography and function were to be expanded and elevated in line with the further schismatic evolution of the tradition, which also introduced "his disciple" Confucius among the top-tier Ascended Masters. I AM Messenger Geraldine Innocente (alias Thomas Printz, d. 1961) hived off her The Bridge to Freedom (today's The Bridge to Spiritual Freedom) in 1951, drawing on messages she allegedly received from two particularly powerful Ascended Masters, the Maha Chohan and El Morya, who, alongside Kuthumi, had famously featured in the early Theosophical Society. Innocente thus fused the I AM and the Theosophical Great White Brotherhoods, creatively populating the Ascended Masters hierarchical spectrum with old as well as entirely new Masters.8 The office of World Teacher, hitherto famously inscribed by Besant and Leadbeater onto Maitreya, for whom Krishnamurti (1895-1986) was meant to serve as a vessel (Wessinger 1988: 284-287), she passed on to Kuthumi and Jesus in 1956, having Lanto (who had "embodied" over millennia in China) succeed Kuthumi as so-called Lord of the Second Ray.9 According to Innocente (Printz 1958), Lanto's promotion entailed another key personnel change on July 4, 1958: the inauguration of his disciple Confucius as "Hierarch of the Rocky Mountain Retreat" or Temple of Precipitation (that is, Ballard's Royal Teton Retreat), the venue of humankind's first embodiment and paramount rallying site of the Ascended Masters where they are thought to gather semiannually to decide on humanity's further millenarian course. Innocente's knowledge regarding Confucius was largely informed by English Humanist Richard Dimsdale Stocker's (1877-1935) booklet Considerations from Confucius (1910),10 a prefaced selection of translated sayings, of which she quotes several passages that were taken in turn from
the works of British Sinologists James Legge (1815-1897) and Lionel Giles (1875-1958).11 Innocente's Confucius account is very brief, highlighting his belonging to the lineage of the (mythical) Yellow Emperor (Huangdi 黃帝), a pedigree first addressed by the German Jesuit Johann Adam Schall von Bell (1592-1666) and dismissed by Couplet (Meynard 2015: 73), but frequently adopted by later writers. While Innocente vested "Beloved Confucius" with a conspicuous role as manager of the Great White Brotherhood headquarters, she simultaneously underlined his retiring profile and still subordinate status vis-à-vis his "guru" and effective co-Hierarch Lanto (Printz 1959) - who later in the tradition even came to occupy an immediate superior rank qua Patriarch at the Temple itself - at whose feet he was instructed in the universal Law of which Confucianism is but a fraction (Printz 1961). Following Innocente's passing, personnel shifts among the Masters continued, with Lanto being promoted even further, rising to World Teacher in union with Kuthumi and Bailey's very own Master Djwal Khul; the latter was also to succeed Confucius as Hierarch.12 "Lord Confucius," in turn, had a quick interlude as Lord or Chohan of the Second Ray before virtually disappearing from the group's mythoscape.13 Whereas Confucius indeed sank into functional insignificance in The Bridge to Freedom in more recent decades, he lives on with a larger back story and more eminent visibility and popularity (as does Lanto) in The Church Universal and Triumphant (CUT). Reestablished in 1974 by Elizabeth Clare Prophet (née Wulf, 1939-2009), this movement evolved into the international esoteric lighthouse of the Ascended Masters trope. CUT's origins are rooted in the first wave of I AM splinter groups from some twenty years earlier when a The Bridge to Freedom member, Francis Ekey, branched off with her own group, The Lighthouse of Freedom. One of its new Messengers was Wisconsin-based former railway worker Mark L. Prophet (1918-1973), who like Blavatsky, Sinnett, and Innocente supposedly followed El Morya's direct call to take revelatory action. Toward the end of the 1950s, Prophet, again acting upon the Master's request, eventually severed ties with Ekey and founded his very own The Summit Lighthouse. Mark met his future spouse Elizabeth in 1961, guiding her alongside Masters El Morya and Saint Germain through a five-year Messenger training. Ultimately, Mark, like Ballard before him, became a revered Master in his own right upon his premature death in 1973. His wife took the organizational reins, renamed the group to Church Universal and Triumphant, and gradually turned it into the chief crucible of Ascended Masters storylines (Whitsel 2003: 19-42; Melton 1994), which to this day bring

8 One of these new Masters was "the God of Happiness Lord Ling," a Chinese Master who was previously embodied as Ānanda and Moses.
9 The Baileyian notion of Seven Rays refers to the division of reality into seven God-imparted energies with distinct attributes and colors. The Second Ray is conventionally connected to wisdom.
10 In fact, Innocente's brief biographical account of Confucius is largely a verbatim copy of portions of Stocker's (1910: 5-12) introduction.
11 Legge's translations of the "Sacred Books" of Confucianism (vols. 3 and 16 [1879], 27 and 28 [1885]) and Daoism (vols. 39 and 40 [1891]) in the Sacred Books of the East series (1879-1910), alongside his earlier (1861-1872; revised: 1893-1895) eight-volume The Chinese Classics, became the seminal reference for the late nineteenth- and twentieth-century (esoteric) reading of both Confucianism and Daoism.
12 Best-selling American Ascended Masters Messenger Joshua David Stone (1953-2005) even turned Confucius into an earlier incarnation of Djwal Khul (Stone 1995: 137), Stone's own chief conversation partner.
13 An office in which he was succeeded by a fellow (newly introduced) Chinese Master "Lady Soo Chee."
The CUT Confucius is more lasting and powerful than any of his esoteric alter egos of old, save for his quick detour as Chohan of the Second Ray in The Bridge to Freedom, a position persistently claimed by Lanto in CUT. Like the early The Bridge to Freedom Confucius, the CUT Confucius is the present-day Hierarch of the Royal Teton Retreat, an office he assumed on July 3, 1958, that is, one day prior to the date Innocente had revealed. Confucius is introduced as a "brilliant social, economic, political and moral philosopher" who facilitated China's rise to "one of the greatest civilizations of all time" (Prophet and Prophet 2003a: 61), a status the PRC has lost due to its energetic and moral perversion. Prophet's hagiography of Confucius reiterates the standard traditional account, highlighting his connection to the Duke of Zhou. Not only did Confucius serve as a clerk in a memorial temple of the latter, but he was also allegedly his direct disciple, for the duke was no other than Lanto, whom Confucius assisted in a previous embodiment formulating the "ideals for God-government" and spiritual cultivation, whose etheric patterns are most strikingly contained in the Yijing 易經 (Classic of Changes). Lanto's career reportedly commenced eons ago, becoming a "master of sages and philosophers" under one Lord Himalaya. He subsequently served as a High Priest of the Divine Mother (that is, God's feminine aspect) in the now lost continent of Lemuria/Mu before spending several lives in Atlantis, after which he was incarnated as the Yellow Emperor or Huangdi 黃帝, in whose guise he originated the Chinese civilization and established Daoism, some two thousand years before Laozi put down into writing its major tenets in his Daodejing (Prophet and Prophet [2003] 2018b: 114). Finally, he became the "guru" of Confucius and later his contemporary as the reigning monarch (most likely either King Jǐng 周景王 [r. 544-520] or King Jìng 周敬王 [r. 519-477]), more recently handing over to him the administration of the physical-etheric Ascended Masters headquarters, where Lanto still serves as a special instructor for "God-government." In fact, Lanto is viewed as one of many Masters teaching in this veritable Great White Brotherhood University, which incidentally also houses, among others, the Akashic Records. CUT deems the Chinese Masters the lynchpin of the global dissemination of wisdom. Unsurprisingly, their chief sponsor, the Archangel Jophiel, is an apparent Sinophile, who with an angelic partner works out of the vicinity of Lanzhou in the PRC, both being frequently joined by Innocente's Lord Ling.14 Wisdom as personified by Confucius is seen to be at the very heart of ancient Chinese culture. Accordingly, to call the Chinese the "yellow race" would be no coincidence, for yellow/golden represents the color of wisdom (Prophet and Prophet [2003] 2018b: 115). Yet, the Chinese refused to follow the "light of Confucius." "Had he been heeded, so China should never have fallen" (Prophet and Prophet [1986] 2018a: 241). Notably, because of China's degeneration, the Chinese Masters turned to the new spiritual hub on the earth, that is, the United States, and with them many other souls of ancient China were embodying there in recent decades. According to Confucius, the mission of these "quiet Buddhic souls" is to anchor familyism in America and from there, being the spiritual superpower, to the world at large. The CUT Confucius is nicknamed the "Champion of Families" and "Architect of Community Building," who alongside legions of reembodied wisdom-laden ancient Chinese souls promotes "learning as the means to God-awareness" (Prophet and Prophet 2003a: 465). Referencing Legge's translation,15 the Daxue 大學 (Great Learning), probably the quintessential Confucian writing, is presented as a formula for community building and spreading piousness and self-realization concentrically from the individual, the family, the society, to the world (Prophet and Prophet 2003b: 34-36). Whereas his mission failed in ancient China, his new home, the United States, greatly enables him to victoriously put into practice his teachings of divine government based on God-centered self-reliance and familyism. He acts as the loving and caring grandfather of the American people and, by extension, the whole of humankind. His ethereal presence and message, upon which the Ascended Masters disciples act, put the United States at the scientific and technological vanguard. He inspires the extraordinary practical skills of the American people and can be called upon for spiritual empowerment and guidance in daily life affairs. Drawing on Russian Theosophist Nicholas Roerich's (1874-1947) appraisal (Roerich 1929: 97), Confucius is given as one of the "law-givers of human welfare" and "justice of life" (Prophet 1984: 279-280), embodying the fusion of wisdom and practicality. Finally, it was reportedly Confucius who, in August 1977, revealed to Prophet her actual spiritual name "Guru Ma," thereby both affirming her status as the Great White Brotherhood's chief mouthpiece on earth and bestowing upon her the religious title and post-ascension (that is, posthumous) appellation.

14 Fun Wey. The Goddess of Mercy, Kuan Yin, for example, would reside in the Beijing area rendering the Chinese people distinctively gentle, family-oriented, and benevolent. The "Cosmic Christ and Planetary Buddha" Maitreya is occasionally found in his abode over Tianjin. And, most notably, Shamballa, the miraculous stronghold of the Lord of the World, indeed hovers over China's Gobi Desert. Built in Venusian architectural style, the city is now seen as home to the Buddha. Interestingly, Laozi does not receive particular attention in the CUT oeuvre.
15 One also encounters emic translations by a CUT member with Sinological training.
Concluding Remarks
This article diachronically traces major contributions to the Euro-American esoteric encounter with Confucius since the nineteenth century. It spotlights in what different ways, and informed by which internal and external sources, Confucius was appropriated, thereby co-shaping and amplifying multiple perceptions of the Chinese Master in the esoteric and public discourse. Particular attention is drawn to one especially influential strand in the overall reception process, namely, Confucius's wider soteriological appearance. That is, the varying appreciation and salvational functions he is credited with in the writings and reports of nineteenth- and especially twentieth-century esotericists beyond the taciturn standard portrait of Confucius qua Sage and Great Teacher. Indeed, almost always Confucius served as a part-time extra and not one of the dramatis personae. When being summoned before the curtain of the esoteric play, he usually remained there but for an instant, relatively tight-lipped, before retiring into oblivion. In these brief appearances, however, he occupied a range of roles that mostly presented him as an ancient renovator with a "sound message." Increasingly, toward the end of the nineteenth century, this notion of Confucius solidified. He became gradually more referenced, specifically by those pursuing the perennialist agenda in a global perspective, as was so powerfully enunciated and fostered by the Theosophical Society. Notwithstanding, Confucius remained an extra for the esoteric stage, oftentimes performing in a duet together with his "complementary partner" Laozi. Both were solicited when Chinese representatives were needed to add to a global gathering of interlocutors of ancient wisdom.
In particular, Theosophy's newly conceived Masters narrative could well make use of Confucius, and so it occasionally did. Against the backdrop of first- (Blavatsky, Sinnett) and second-generation (Besant, Leadbeater) Theosophists, Bailey's Arcane School in tandem with Ballard's I AM Activity elevated the Masters narrative to new heights from the 1920s and 1930s. From there an incredibly versatile and visible stream within twentieth- and twenty-first-century esotericism emerged. The post-Second World War Ascended Masters current, as most stridently encapsulated within The Bridge to Freedom and the CUT, gave both Confucius and other Chinese Masters, old and new, powers and storylines hitherto unknown. His "own" Master - Ballard's Lanto - aside, Confucius in particular climbed to the forefront of humankind's supposed conclave of divine-like spiritual teachers, the Great White Brotherhood, in many cases eclipsing the erstwhile esoteric popularity of his senior Laozi. The "Ascended Confucius" still wields the favorable attributes of his nineteenth-century occult alter ego - excelling pragmatism and superior moral wisdom - but more mightily so. In addition, he transcended time and space. No longer is he a sage limited to the Sinosphere who only lives on through his broadly misunderstood doctrinal legacy, but an immortal superhuman vested with salvific abilities he deploys for the sake of humankind's spiritual evolution at large or, upon invocation, directs straight to every disciple on his/her path of divine self-transformation. A journey across nearly two centuries of esoteric characterologies of Confucius exhibits an impressive hagiographical career, one indeed only surpassed by a few other time-honored exponents of the world's religions.
5 The account in turn was likely taken from a report in The Chinese Times from July 14, 1888.
6 Likely the first one was the twenty-page booklet Moral Sayings of Confucius, A Chinese Philosopher (1855) by one Cleveland-based L. E. Barnard, whose historical introduction (Barnard 1855: 3-10) is largely a verbatim copy of the Confucius entry in the 1784 edition of A New and General Biographical Dictionary (vol. IV, pp. 77-85). The seventy-four "moral sayings" (Barnard 1855: 11-20) are taken word-for-word from the article "The Moral Sayings, and Wise Maxims, of Confucius, A Chinese Philosopher" in the first issue of English freethinker Richard Carlile's (1790-1843) journal The Moralist, published ca. 1823. The aphorisms in turn were culled from the 1818 edition of The Life and Morals of Confucius, A Chinese Philosopher by Josephus Tela (perhaps the pseudonym of one Joseph Webb), an edited reprint of the 1691 partial English translation of Confucius Sinarum philosophus. Wright drew on Barnard's Moral Sayings of Confucius, rearranging and largely rephrasing the latter's seventy-four to arrive at his more resonating one hundred aphorisms. In his historical introduction, Wright likewise champions merging, elegantly and at times exaggeratingly reformulating existing accounts.
Occult fiction following in the footsteps of Garver's Brother of the Third Degree (1894), such as Maude Lesseuer Howard's Myriam and the Mystic Brotherhood (1912) and, especially, Baird T. Spalding's (1872-1953) series Life and Teaching of the Masters of the Far East (vols. 1 and 2: 1924/1927), helped to spread the Great Masters theme in the first decades of the twentieth century. Leadbeater and Bailey aside, a more specialized seminal treatment was also given by fellow English Theosophist Brian Ross (alias David Anrias) in his Through the Eyes of the Masters (1932). Neither Confucius nor China was of concern to Ross, whereas Spalding - while likewise ignoring Confucius - deemed China to be one of several "far eastern" abodes of the Masters (Spalding 1924: 3, 8).
|
v3-fos-license
|
2017-08-03T02:12:42.467Z
|
2016-12-03T00:00:00.000
|
6732194
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-016-0656-6",
"pdf_hash": "0bf34744644b05a971d36b96cc5b668a025dd50c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2600",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "0bf34744644b05a971d36b96cc5b668a025dd50c",
"year": 2016
}
|
pes2o/s2orc
|
Three novel bacteriophages isolated from the East African Rift Valley soda lakes
Background: Soda lakes are unique environments in terms of their physical characteristics and the biology they harbour. Although well studied with respect to their microbial composition, their viral compositions have not been, and consequently few bacteriophages that infect bacteria from haloalkaline environments have been described.

Methods: Bacteria were isolated from sediment samples of lakes Magadi and Shala. Three phages were isolated on two different Bacillus species and one Paracoccus species using agar overlays. The growth characteristics of each phage in its host were investigated and the genome sequences determined and analysed by comparison with known phages.

Results: Phage Shbh1 belongs to the family Myoviridae, while Mgbh1 and Shpa belong to the family Siphoviridae. Tetranucleotide usage frequencies and G+C content suggest that Shbh1 and Mgbh1 do not regularly infect, and have therefore not evolved with, the hosts they were isolated on here. Shbh1 was shown to be capable of infecting two different Bacillus species from the two different lakes, demonstrating its potentially broad host range. Comparative analysis of their genome sequences with known phages revealed that, although novel, Shbh1 does share substantial amino acid similarity with previously described Bacillus-infecting phages (Grass, phiNIT1 and phiAGATE) and belongs to the Bastille group, while Mgbh1 and Shpa are highly novel.

Conclusion: The addition of these phages to current databases should help with metagenome/metavirome annotation efforts. We describe a highly novel Paracoccus-infecting virus (Shpa) which, together with NgoΦ6 and vB_PmaS_IMEP1, is one of only three phages known to infect Paracoccus species, but it does not show similarity to these phages.

Electronic supplementary material: The online version of this article (doi:10.1186/s12985-016-0656-6) contains supplementary material, which is available to authorized users.
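The tetranucleotide-usage and G+C comparison mentioned in the abstract rests on two simple sequence statistics. The sketch below is a minimal illustration of how such profiles can be computed, not the authors' actual pipeline (no software is named in this excerpt); function names are ours.

```python
from itertools import product

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def tetranucleotide_freqs(seq: str) -> dict:
    """Relative frequency of each overlapping 4-mer (A/C/G/T only)."""
    seq = seq.upper()
    counts = {"".join(p): 0 for p in product("ACGT", repeat=4)}
    total = 0
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in counts:  # skip windows containing N or other ambiguity codes
            counts[kmer] += 1
            total += 1
    return {k: v / total for k, v in counts.items()}
```

A phage profile computed this way can then be compared with its host's profile (for example by Pearson correlation over the 256 frequencies); a low correlation is what motivates the conclusion that the phage has not coevolved with the isolation host.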
Background
Soda lakes are sodium carbonate (Na₂CO₃)-dominated environments with varying salinity and high pH values, usually between 9 and 11, but occasionally greater than pH 12 [1-3]. These lakes are found in arid and semiarid areas where high evaporation rates facilitate the accumulation of salts in local depressions, and due to the high buffering capacity of sodium carbonate, soda lakes are the only habitats that maintain stable high alkalinity [4-6]. The best studied lakes are those of the East African Rift Valley (EARV), which have been scientifically documented for many decades; two of these, Lake Shala (LS) and Lake Magadi (LM), are located in Ethiopia and Kenya, respectively [7-11]. The EARV lakes are situated in an environment of active volcanism and differ from other soda lakes in that surrounding hot springs supply water to the lake depressions, whereas others are supplied by the leaching of rainfall through the surface into the lake basins [3].
Soda lakes are the most biologically productive non-marine aquatic environments known [10, 12], and although the microbial composition of these lakes has been well studied [11, 13-16], little is known about their viral populations [17, 18]. It is now accepted that bacteriophages (phages) are the most abundant biological entities in most ecosystems, and soda lakes are no exception, with studies conducted on Mono Lake placing viral abundance at 10⁹ ml⁻¹, among the highest in natural aquatic environments [19]. Although much is known about phages that infect certain groups of microorganisms (Mycobacterium, Staphylococcus, Escherichia, Pseudomonas and Lactococcus), much work is needed to expand our knowledge of phages infecting other hosts [20, 21]. The impact that phages have on higher trophic levels is also becoming clearer, as elegantly demonstrated using a soda lake environment as a model [22, 23]. Although this study was a demonstration of the direct effect phage infection can have on a bacterial population and higher trophic structures, phages are also thought to shape populations through genetic exchange [24, 25] and by so doing affect the biogeochemical cycles present. The study of phages from these environments has also highlighted some unique viruses such as ΦCh1, an archaeal virus which carries both DNA and RNA [26]. Therefore, there is a need to better understand the viral composition in these, and other, environments.
Here, we describe phages that infect Bacillus and Paracoccus species. Several studies have highlighted the importance of these microbes in biogeochemical cycling in soda lakes [27]. The Firmicutes make up a substantial portion (11%) of the microbial community in Lonar Lake, and Bacillus species, together with Methylomicrobium and Methylophaga species, were shown to be the dominant methylotrophs in this lake [28]. They have also been shown to be responsible for metal speciation and mobilization of arsenic in Mono Lake [29], while some members of the Firmicutes, such as Bacillus alkalidiazotrophicus, play a role in the nitrogen cycle [30]. Rhodobacteraceae in turn were shown to be one of the dominant families in many soda lakes (Bogoria, Lonar, Zabuye and Kauhako), and the most diverse family in Ethiopian soda lakes [16]. In particular, Paracoccus species (family Rhodobacteraceae) have been identified as part of the denitrifying community [29]. Although several prophages have been identified in Paracoccus species' genomes, only two phages (NgoΦ6 and vB_PmaS_IMEP1) are known to infect Paracoccus species [31]. Thus, to better understand the diversity and biology of bacteriophages and their potential effects on their hosts, in particular from haloalkaline environments, we isolated and characterized three phages from EARV soda lakes, including a novel phage infecting a Paracoccus species.
Sampling, Bacterial isolation and culturing
Medium A (broth and agar) was used for bacterial isolation and culturing [32]. Medium A broth contained 1% glucose, 0.5% peptone, 0.5% yeast extract, 5% NaCl, 0.1% K₂HPO₄ and 0.02% MgSO₄·7H₂O. Medium A agar was prepared from medium A broth with the addition of 1.5% bacteriological agar. Soft agar used in the plaque assay contained 0.75% bacteriological agar. All components were dissolved in water and the medium was then adjusted to pH 9 using 10 N NaOH. Unless stated otherwise, all strains were cultured at 37°C. Bacteria were isolated for this study from soil sediments of both LM and LS stored at 4°C. Ten grams of sediment from each sample were suspended in 100 ml of medium A broth and diluted 10-fold with water. One hundred microlitre volumes of each dilution were spread on medium A agar plates, which were incubated for 24 h at 37°C. After visual inspection of bacterial isolates for varying colony morphologies, colonies were picked from each plate. Bacterial strains were stored in medium A broth containing 50% glycerol at -80°C until required. Isolates ERV9 and HS3, which are part of the IMBM strain collection, were previously isolated from LS sediment on medium A adjusted to pH 9. One gram of soil from each site was serially diluted in 1× PBS (phosphate-buffered saline) and dilutions (10⁻² to 10⁻⁷) were plated. Plates were incubated for 8-10 weeks at 37°C.
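The serial-dilution plating described above implies the standard back-calculation of viable counts from a countable plate. A minimal sketch (the function and parameter names are illustrative, not from the paper):

```python
def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float) -> float:
    """Back-calculate a viable count (CFU/ml) from one countable plate.

    colonies          -- colonies counted on the plate (ideally 30-300)
    dilution_factor   -- e.g. 1e-5 for the 10^-5 dilution tube
    plated_volume_ml  -- volume spread per plate (100 microlitres = 0.1 ml)
    """
    return colonies / (dilution_factor * plated_volume_ml)

# e.g. 150 colonies on the 10^-5 plate with 0.1 ml spread gives
# roughly 1.5e8 CFU/ml in the original suspension.
```

The same formula, with plaques in place of colonies, gives PFU/ml for the phage titrations used later in the Methods.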
Phage isolation and assays
Fifty grams of each soil sediment was mixed with 100 ml of medium A broth and incubated at 37°C on a shaking platform at 120 rpm for 24 h. Fifty millilitre aliquots were removed and centrifuged at 5000 × g for 15 min. The suspensions were filtered first through a 0.45 μm, followed by 0.22 μm syringe filter. The filtrates were used for phage-host infection test plaque assays [33] using two-layer agar plates. The soft agar layer contained 100 μl of mid-log cultures of the newly isolated bacteria mixed with 100 μl of filtrate. Plates were incubated at 37°C for 24 h. A single plaque was picked using a sterile 1 ml pipette tip and sub-cultured using the same host strain. This phage purification process was repeated 3 times. After phage purification, phage stocks were stored in medium A containing 50% glycerol at -80°C for long term storage.
One-step growth curves were determined as described by [34] with slight modification. Bacterial host strains were cultured overnight in 5 ml of medium A broth at 37°C at 120 rpm on a shaking platform. Two hundred microliters of each overnight culture was inoculated into 50 ml of medium A broth and incubated at 37°C at 120 rpm on a shaking platform until the cell density of the cultures reached approximately 1×10⁸ CFU/ml. One millilitre aliquots of each bacterial culture were mixed in microfuge tubes with phage at a multiplicity of infection (MOI) of 0.1 (MOI = plaque-forming units (PFU) of virus used for infection / number of cells), in triplicate, and incubated at 37°C at 120 rpm on a shaking platform for 10 min to allow the phage to adsorb to the bacterial host. Cells were centrifuged at 6000 × g for 10 min to remove the unadsorbed phage. Supernatants were removed and the pellets were resuspended in 1 ml of medium A broth. Fifty microliters of the resuspended cultures were transferred to 50 ml of medium A and mixed well. A one millilitre aliquot of each culture was transferred into a microfuge tube (time noted as T = 0) and the rest (±49 ml) of the triplicate cultures were incubated at 37°C with aeration (120 rpm) on a shaking platform. Samples were taken every 30 min for 6.5 h. Plaque-forming units (PFU) were determined by the plaque assay.
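The MOI definition above fixes how much phage stock such an infection needs. A minimal sketch of that arithmetic (names are illustrative; the paper does not report its stock titers):

```python
def phage_volume_for_moi(moi: float, cells_per_ml: float, culture_ml: float,
                         titer_pfu_per_ml: float) -> float:
    """Volume (ml) of phage stock needed to infect a culture at a target MOI.

    MOI = PFU added / number of cells, so PFU needed = moi * total cells.
    """
    total_cells = cells_per_ml * culture_ml
    pfu_needed = moi * total_cells
    return pfu_needed / titer_pfu_per_ml

# For a 1 ml aliquot at ~1e8 CFU/ml infected at MOI 0.1, 1e7 PFU are
# required; a hypothetical 1e9 PFU/ml stock would supply that in 0.01 ml.
```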
Phages were prepared for TEM visualization as previously described [35]. Phage lysates (10 ml) were centrifuged at 25,000 × g for 1 h in an Eppendorf 5417R centrifuge. Supernatants were discarded and the pellets were suspended in 1 ml of 0.1 M ammonium acetate solution, then incubated at 37°C at 120 rpm on a shaking platform for 16 h to allow the phage pellets to resuspend. The centrifugation and resuspension steps were repeated twice. Each phage suspension was resuspended in a final volume of 20 μl of 0.1 M ammonium acetate after the last washing step. TEM images were taken with an FEI Tecnai F20 field emission gun microscope operated at 200 kV at the University of Cape Town's Electron Microscopy Unit. Two microliters of each phage suspension were placed onto a carbon-coated copper grid (200 mesh), washed with distilled water and stained with 2% uranyl acetate. The samples were observed at 50,000× magnification.
DNA extraction, PCR and sequence analysis
Approximately 100 ml of phage lysate was filter sterilized using 0.45 μm followed by 0.22 μm syringe filters. This was followed by the addition of 7.5 ml of 20% (wt/vol) PEG8000 to every 30 ml of phage lysate and overnight storage at 4°C. Phage lysates were centrifuged at 13,000 × g for 30 min. The supernatants were discarded and the pellets resuspended in 1 ml of SM buffer. SM buffer was prepared using 20 ml of 5 M NaCl, 8.5 ml of 1 M MgSO4, 50 ml of Tris-HCl (pH 7.5) and 10 ml of 1% gelatin solution. A 5 μl volume of DNaseI at 1 mg/ml and 5 μl of RNaseA at 12.5 mg/ml were added to each 1 ml of phage suspension in SM buffer to remove bacterial DNA and RNA. The reactions were incubated at 37°C for 30 min. Following the addition of 10 μl of Proteinase K at 10 mg/ml and 20 μl of 20% SDS, the reactions were incubated at 55°C for 1 h to allow disruption of the phage capsids. Phenol/chloroform extraction was used to extract phage DNA. An equal volume of phenol:chloroform:isoamyl alcohol (25:24:1) was added to the supernatants, and the reactions were centrifuged at 13,000 × g for 5 min. The upper layer was transferred to new tubes, and the process of adding phenol:chloroform:isoamyl alcohol, centrifugation and removal of the upper layer was repeated. An equal volume of chloroform:isoamyl alcohol (24:1) was mixed with the upper layer containing the sample and centrifuged. The upper layer was transferred into 1.5 ml Eppendorf tubes. A 45 μl volume of 3 M sodium acetate (pH 5.2) and 500 μl of 100% isopropanol were added, and the solution was incubated overnight at -20°C to precipitate the DNA. The pellet was collected by centrifugation at 14,000 rpm for 20 min. The DNA pellet was washed twice using 1 ml of 70% ethanol, air dried at room temperature, resuspended in 30 μl of 1× TE buffer and stored at -20°C.
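The SM buffer recipe is a standard dilution calculation (C1V1 = C2V2). The sketch below assumes, purely for illustration, that the components are brought to a 1 L final volume, which the protocol does not state; the Tris-HCl and gelatin components are omitted because their stock strengths are not fully specified.

```python
# C1 * V1 = C2 * V2 dilution arithmetic for the SM buffer components.
# Assumption (not stated in the protocol): a final volume of 1000 ml.

def final_conc(stock_conc, stock_vol_ml, final_vol_ml=1000.0):
    """Concentration after dilution, in the same units as stock_conc."""
    return stock_conc * stock_vol_ml / final_vol_ml

nacl_M = final_conc(5.0, 20.0)   # 20 ml of 5 M NaCl
mgso4_M = final_conc(1.0, 8.5)   # 8.5 ml of 1 M MgSO4
print(nacl_M)    # → 0.1  (i.e. 100 mM NaCl)
print(mgso4_M)   # → 0.0085  (i.e. 8.5 mM MgSO4)
```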
The 16S rRNA gene was amplified using the universal bacterial primers E9F 5'-GAGTTTGATCCTGGCTCAG-3' [36] and U1510R 5'-GGTTACCTTGTTACGACTT-3' [37] to identify the bacterial isolates. The PCR mix included 5 μl of 10× DreamTaq buffer, 5 μl of 2 mM dNTPs, 2 μl each of 1 mM forward and reverse primers, 100 ng of DNA and 1.25 U of DreamTaq polymerase. Each reaction was adjusted to a final volume of 50 μl with nuclease-free water and amplified in an automated thermal cycler (Thermo Hybaid). The PCR conditions were: initial denaturation at 95°C for 3 min, followed by 35 cycles of denaturation at 95°C for 30 s, annealing at 55°C for 30 s and extension at 72°C for 1 min, with a final extension at 72°C for 10 min. DNA fragments of approximately 1500 bp were generated and visualised by electrophoresis on a 1% agarose gel.
Sanger sequencing was performed using an ABI PRISM® 377 automated DNA sequencer at the Central Analytical Facility of the University of Stellenbosch (South Africa). For 16S rRNA sequencing, primers E9F and U1510R were used. Next generation sequencing was performed using an Illumina MiSeq sequencer using the Nextera XT library preparation kit (Illumina) and a 10% phiX v3 spike as per the manufacturer's instructions (Preparation Guide, Part #15031942 Rev A May 2012) as well as the MiSeq Reagent kit V2 (500 cycle). One nanogram of uncloned, unamplified viral DNA was used to prepare one Nextera XT library with each phage barcoded for de-multiplexing after sequencing. Sequencing was performed at the Institute for Microbial Biotechnology and Metagenomics (IMBM), University of the Western Cape, Cape Town, South Africa. The raw reads were trimmed (bases with a Q-score less than 36 were trimmed from the 3'end) and de-multiplexed at the sequencing facility generating 2 × 250 bp reads, resulting in a set of paired (read pairs, forward and reverse) fastq files per phage.
Sequences were analysed using BioEdit version 7.0 and DNAMAN version 4.13. The NCBI database was used for analysis of DNA sequences and homology searches. The Basic Local Alignment Search Tool (BLAST) programme was used to determine sequence similarity and identity to known sequences in the GenBank database using software from the National Center for Biotechnology Information (www.ncbi.nlm.nih.gov/). De novo assembly of the phage genomes was performed using CLC Genomics Workbench version 6.5 (CLC bio, Denmark). Annotation of the phage genomes was performed by manual BLASTp searches of ORFs predicted by CLC, as well as BLASTx searches of regions in which no ORFs were predicted. The complete genome sequences of all three phages are available in the GenBank database under accession numbers KR072689 (Shpa), KU640380 (Shbh1) and KU665491 (Mgbh1).
The software program TETRA was used to perform tetranucleotide usage deviation analysis [38,39]. tRNA genes were predicted using the tRNAscan-SE program (http://tinyurl.com/snbk2). Direct repeats were identified using REPFIND (http://tinyurl.com/zkh2pnc) with a 15 bp minimum repeat length. Inverted repeats were identified using UGENE (http://ugene.unipro.ru) with a 20 bp minimum length and 80% similarity as search parameters. Codon usage data for P. denitrificans was obtained from http://tinyurl.com/zeqorau. Transmembrane regions were predicted using the TMHMM server (http://www.cbs.dtu.dk/services/TMHMM). Protein repeats were identified using the RADAR server (http://tinyurl.com/pcpves3). CRISPR sequences were identified through BLAST searches of the full-length genome sequences against the CRISPR database (http://crispr.upsud.fr). The phylogenetic tree for terminase sequences from various phages and bacteria was created using MEGA6 [40]. All positions with less than 95% site coverage were eliminated; that is, fewer than 5% alignment gaps, missing data and ambiguous bases were allowed at any position. There was a total of 239 positions in the final dataset.
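A simplified sketch of the tetranucleotide comparison idea: the TETRA tool correlates z-scores computed against a Markov-model expectation, whereas the toy version below correlates raw tetranucleotide frequencies of two sequences, which conveys the principle but is not the exact TETRA statistic.

```python
# Correlate tetranucleotide frequency vectors of two sequences.
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]  # all 256 tetramers

def tetra_freqs(seq):
    """Frequency of each of the 256 tetranucleotides in seq."""
    counts = dict.fromkeys(KMERS, 0)
    seq = seq.upper()
    for i in range(len(seq) - 3):
        k = seq[i:i + 4]
        if k in counts:  # skip windows containing ambiguous bases
            counts[k] += 1
    total = max(1, sum(counts.values()))
    return [counts[k] / total for k in KMERS]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Toy "phage" and "host" sequences; real analyses use whole genomes.
phage = "ATGCGTACGTTAGC" * 200
host = "ATGCGTACGTAAGC" * 200
print(round(pearson(tetra_freqs(phage), tetra_freqs(host)), 2))
```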
Isolation of phages and basic characterization
Seven bacterial isolates (Table 1) from the IMBM culture collection, previously isolated from LS and LM, were used to screen for phages. When the isolates were challenged with sediment from both lakes, clear plaques were produced on isolate HS3 with LS sediment, and isolate MGK1 showed clear plaque formation with sediment from both lakes. Following single plaque purification, three distinct phages were identified and named Mgbh1 (MGK1-LM sediment; clear plaques), Shbh1 (MGK1-LS sediment; clear plaques) and Shpa (HS3-LS sediment; clear plaques). The phages were tested for their ability to infect the seven bacterial isolates from both lakes. Mgbh1 and Shbh1 infected MGK1. Shbh1 could additionally infect isolate ERV9, forming turbid plaques with a characteristic "bulls eye" morphology, whereas Shpa could only infect HS3. This suggests that Shbh1 could be a broad-host range phage which promotes genetic exchange between hosts. The "bulls eye" plaques on ERV9 also suggest lysogeny in this host, whereas the phage may be lytic on MGK1.
Phage preparations from all four isolations were visualized by TEM. Morphologically Shbh1 belongs to the family Myoviridae while Mgbh1 and Shpa belong to the family Siphoviridae (Table 2).
One-step growth curve data showed that Mgbh1, Shpa and Shbh1 (with MGK1 as host) had large burst sizes, producing >800 particles per cell (Fig. 1 and Table 2). Shbh1 had a much smaller burst size when using ERV9 as host, which it likely lysogenizes; this could explain the reduced burst size. It is noteworthy that none of the phages displayed a particularly acute burst, but rather a burst drawn out over 2.5 to 3 h, which suggests that phage release from single infected cells could be asynchronous [41,42]. The turbid versus clear plaques of Shbh1 on the two different hosts could be the result of different transcription/translation rates, as the cultures grew at more or less the same rate according to their growth curves (not shown). This could be influenced by the growth conditions used (37°C and a fixed salt concentration and medium composition); as only one set of conditions was employed, the phage may behave differently in the two hosts.
Phage genomes

Genome composition and phylogeny
The phage genomes varied in size, ranging from 58.9 kb to 138 kb (Table 2). The phage genomes all display the modular arrangement well documented in other phages from these families (Fig. 2). Shpa shares little overall nucleotide similarity with any phage on the NCBInr database, with the highest identity being to small portions (38-394 bp at 75 to 92%) of Paracoccus, Rhodobacter and Rhizobium species' genomes. More distantly related is a section (±2.8 kb at 67% identity; 2576-5387 bp), encoding the major capsid protein and a protease, of the Silicibacter species TM1040 genome (CP000377.1). This region of the Silicibacter species genome encodes a putative prophage. The phage genomes appear to be compact, with only 9%, 12% and 13% non-coding regions in Shpa, Mgbh1 and Shbh1, respectively. Mgbh1 shares a region of ±3 kb (49269-52282 bp at 67% identity) with the Bacillus subtilis subspecies spizizenii W23 genome (CP002183.1) and much shorter sections (29-64 bp at 100%) with the B. halodurans C-125 genome sequence (BA000004.3). The ±3 kb area encodes the ribonucleotide reductase subunits, and the nucleotide similarity is likely due to the flanking genomic region coding for a prophage as well as the need to conserve RNR function and regulation [43]. Shbh1 does share significant nucleotide similarity with well-known Bacillus species Myovirus phages, namely SIOphi (70% identity over 32% of the genome), phiNIT1, phage Grass and, to a lesser extent, the more recently described phage phiAGATE [44,45]. Neither Mgbh1 nor Shbh1 appears closely related to the only other sequenced alkaliphilic Bacillus-infecting phage, BCJA1c [46], with only one open reading frame (ORF31 on Mgbh1) showing highest sequence identity to a homologue on that phage genome. The genome sequences of two Idiomarinaceae-infecting phages, Phi1M2-2 (NC_025471; 36844 bp) and 1 N2-2 (NC_025439; 34773 bp), isolated from LM, have been determined.
However, none of the phages described here show similarity at the nucleotide or amino acid level to these viruses. Nucleotide sequence alignment of Shbh1 with several of its closest relatives shows some conservation at the nucleotide level (Additional file 1: Figure S1). Notably, the first ±45000 bp shows little similarity to any of its relatives, with the exception of a block from ±10000 bp to ±17000 bp which contains a SpoIIIE homolog. It also shares some similarity to the ORFs encoding structural proteins in related phages. However, there are four regions that show little or no homology to the structural proteins of closely related phages: 75678-76596 bp, 82861-86954 bp, 91464-95056 bp and 98815-102490 bp. The first of these regions lies in the tail tape measure protein, and likely reflects the differences in tail length between Shbh1 and its relatives. The other three regions occur in a region coding for putative tail fiber proteins and two other tail proteins without defined function. These differences may reflect that a unique cellular feature is targeted by the phage for binding to the host, or may reflect adaptation to a haloalkaline environment.
The G + C content of Shpa is in the same range as that of its host (Paracoccus species G + C range 63.4-70.4%), while the G + C content of Shbh1 is slightly lower than that of B. halodurans (43.7%) or B. pseudofirmus (40.3%), as is often observed for phage-host pairs [47]. In the case of Mgbh1, the G + C content is slightly above that of its host, B. halodurans. On average, phages have a G + C content 4% lower than their hosts [47], with fewer examples of phages with a G + C content higher than their host. Deviation in G + C content between a phage and its host has been used to indicate that the phage does not regularly infect, or has not evolved in, a particular host, as G + C content tends to ameliorate over time between phages and their hosts [48,49]. This, together with the potential broad-host range nature of the phage, suggests that Shbh1 may not preferentially infect, in situ, the hosts on which it was isolated here. While other factors, such as the availability of a suitable attB site and the regulation of gene expression, will play a part in whether the phage is lytic or lysogenic on a particular host, it is worth noting that the G + C difference between Shbh1 and B. halodurans is greater than that between Shbh1 and B. pseudofirmus. This agrees with the observation of Rocha that lytic phages are often higher in AT content, relative to the host, than lysogenic phages. Tetranucleotide usage deviation (TUD) analysis gave a correlation coefficient of 0.64 when comparing Shbh1 to both host genomes (B. halodurans and B. pseudofirmus). The correlation coefficient was 0.65 when comparing Shbh1 and Mgbh1 to each other. The TUD value was 0.46 when comparing the Mgbh1 genome to that of MGK1. This lower TUD correlation, perhaps together with a slightly higher G + C content for Mgbh1, could suggest that the host used here may not be the host that this phage has regularly infected or co-evolved with for a long time.
TUD analysis gave a Pearson's correlation coefficient of 0.74 and 0.69 when comparing Shpa to the two sequenced Paracoccus genomes.
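The G + C comparison above rests on a simple calculation that can be sketched as:

```python
# Percentage G + C content of a DNA sequence.
def gc_content(seq):
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGCGCGCAT"))  # → 60.0
print(gc_content("ATAT"))        # → 0.0
```

With whole genome sequences as input, the same function yields the phage and host values compared in the text.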
The terminase large subunit identified on Mgbh1 shows highest similarity to proteins from Bacillus (phBC6A51) and Paenibacillus (Tripp) phages ( Fig. 3 and Additional file 2: Table S1) when performing a BLASTp search against only Caudovirales sequences on the NCBI database. However, it is most closely related to many terminase sequences found in genome sequences of a variety of Bacillus species when searching against the NCBInr database and Metavir classification identifies Bacillus subtilis phage Grass as the closest relative. This would suggest that Mgbh1 is related to Bacillus prophages rather than lytic phages which infect these hosts. It may also point to the large amount of information missing from current databases, and the biological relationship described should help to more accurately describe the phylogeny of many unknown phages.
A small ORF just upstream of the putative Mgbh1 TerL shows weak similarity to two terminase small subunits (TerS) on the NCBI database, using a BLASTp search against all classified and unclassified Caudovirales sequences, from Lactobacillus phages phiJL-1 and phi jlb1. It also has a helix-turn-helix motif, involved in DNA binding, similar to other terminase small subunits [50]. The protein also models well, when using the automated model building feature in SWISS-MODEL, with an rmsd value of 0.157 Å for Cα to 2AO9. 2AO9 is a homo 18-mer of a protein from B. cereus phage phBC6A51, which also lies directly upstream of its TerL. Taken together, this suggests that ORF13 can tentatively be assigned as the small subunit terminase of Mgbh1. The putative large terminase subunit (ORF98) from Shbh1 is most closely related to those from Bacillus myoviruses Grass, phiNIT1, Bcp1 and vB_BceM-Bc431v3 and clusters with phages from the B. cereus group (Fig. 3 and Additional file 3: Table S2). Metavir identified phiAGATE as the closest relative, with PHAST suggesting phage Grass as the closest relative. A small ORF (136 aa) is present directly upstream of ORF98 but could not be identified as the terminase small subunit based on BLASTp searches alone, showing how different these proteins are from their characterized homologues.

Fig. 3 A condensed (50% cutoff) neighbor-joining tree of 117 large terminase subunit amino acid sequences from tailed phages and bacteria. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches [70,71]. The names of the phages or bacterial sequences are shown at the right of each branch. Coloured boxes demarcate related groups of terminases with similar packaging strategies. Accession numbers for aligned sequences are: Bcp1 -YP_009031337; Bth -
The read coverage of Shbh1 indicates that the genome has long terminal repeats, indicated by a region of higher-than-average coverage (3234× versus a genome average of 1714×) from ±26700 bp to 30500 bp, while PAUSE3 analysis indicated two points of significant read build-up in this region, at bases 29638 and 30054.
When compared with the NCBInr database, Shpa's putative terminase large subunit (Fig. 3 and Additional file 4: Table S3) is most closely related to a terminase sequence of a prophage on an Oceanibulbus species genome (77% identity over 99% of the sequence) which, like Paracoccus, belongs to the family Rhodobacteraceae. For Shpa, BLASTp searches against the NCBInr database also returned hits to terminase-like sequences in non-Paracoccus bacterial genome sequences, rather than terminase-like sequences in prophages of Paracoccus species. Twenty-six Paracoccus genomes have been sequenced thus far, including a Paracoccus halophilus, but only one of their phages (vB_PmaS_IMEP1) has been sequenced. This suggests that the phage described here does not infect a wide range of Paracoccus species or has not encountered the hosts sequenced thus far and, to date, appears to be unique to this environment. The closest match when comparing it to viral sequences only is to the Tetraselmis viridis-infecting virus S20, a phycodnavirus (Fig. 3), and it also shows little similarity with the terminase from the only other described Paracoccus phage, vB_PmaS_IMEP1. S20 is thought to be a phage of bacteria which co-culture with algae [51].
Based on the analysis presented in Fig. 3, we predict that Shpa could have 3' overhanging cohesive ends, while the Shbh1 terminase groups with myoviruses for which a packaging mechanism has yet to be defined (SPO1, phage Grass and phiNIT1), and Mgbh1 likely uses a headful mechanism.
Structural proteins
Major capsid and portal proteins could be identified for all three phages, while head-to-tail joining proteins could be identified in Shpa and Mgbh1. Phage Shbh1 encodes putative tail fiber proteins (ORF126, 135 and 137) which contain repeated protein sequences similar to those identified in the long and short tail fiber proteins (gp34 and gp12) of phage T4 [52]. As Shbh1 is not too dissimilar from mesophilic Bacillus-infecting phages, its structural proteins may give insight into the adaptation of proteins in general, or specifically phage structural proteins, to high pH, high salt environments. Similarly, ORFs 28, 29 and 30 of Mgbh1 show repeated protein sequences, and likewise a 129 bp direct repeat was identified close to the C-terminus of ORF19 in Shpa that is responsible for a repeated amino acid sequence. ORF19 encodes a putative tail fiber protein which ends with the stop codon at 15621 bp. However, homology to a choice-of-anchor A domain protein (gp21 of Burkholderia phage phi644-2, YP_001111100.1, 1101 bp) starts immediately downstream (15622 bp) of the stop codon in reading frame 3 and continues until 16330 bp. It may be that read-through translation results in the full-length protein being produced.
Mobile genetic elements
An intact IS605 element (ORF53) was identified on Mgbh1, but it did not have an IS200 element associated with it. A 33 bp direct repeat was identified 90 bp upstream (39260-39293; 39326-39359) of the IS605 element on Mgbh1. The IS605 and IS200 elements identified on Shbh1 appear truncated and likely inactive. No repeats of significance could be identified in the immediate vicinity of the insertion sequence elements on Shbh1. The location of the IS elements suggests that the regions they interrupt (DNA replication and repair) tolerate disruption to a certain degree, possibly because these functions are complemented by host factors, and similarly suggests that regions related to the expression of phage structural components are more sensitive to interruption.
Nucleotide metabolism, replication and gene expression
Ribonucleotide reductases (RNR) were identified in both Mgbh1 and Shbh1. An activator protein could also be identified on Mgbh1 (ORF62; nrdI), but not on Shbh1. This suggests that the RNR encoded on Mgbh1 belongs to class Ib, while the one on Shbh1 belongs to class Ia. The separation of the two subunits of an RNR by a homing endonuclease, such as is identified on Shbh1, has been described before, although in this case it does not appear to interrupt the reading frame of either subunit [53]. No RNRs were detected on Shpa.
Mgbh1 and Shbh1 both encode thymidylate synthases. It has been demonstrated that TS1-type thymidylate synthases are unique to the Bastille group of Bacillus-infecting phages and may be used as a phylogenetic marker [54]. According to the identity cut-offs defined by Asare, the thymidylate synthase of Shbh1 (57% identity to that of Bastille and an E-value of 1 × 10^-111) demonstrates that it belongs to the Bastille group, whereas Mgbh1 does not. Shbh1, however, appears to lack a dihydrofolate reductase homolog, often found between two and six ORFs downstream of the thymidylate synthase in Bastille phages, which was identified as another marker gene for this group. It also does not encode a metal-dependent beta-lactamase homolog, as has been found for the other members of this group, but does encode two putative metal-dependent enzymes: ORF13 and ORF20, encoding a metallophosphoesterase and a metalloendopeptidase membrane protein, respectively. The Shbh1 SpoIIIE homolog (ORF25), another hallmark of this group of phages, is located eight and five ORFs downstream of these metallo-enzymes, respectively, an arrangement also observed in other Bastille phages.
Both Mgbh1 and Shbh1 encode DNA polymerases (ORF57 and ORF160), whereas Shpa does not. Both of these enzymes should have 3'-5' exonuclease activity; however, the DNA polymerase identified on Shbh1 also contains an N-terminal uracil DNA glycosylase (UDG) domain similar to that found on the Bacillus phage SPO1 DNA polymerase. Although characterized as part of DNA excision repair processes, the presence of this domain in DNA polymerases has been suggested to aid polymerase processivity [55]. ORFs likely involved with DNA replication of Shpa are ORFs 33, 34, 38, 45 and 46, which include a helicase, a primase, a single-stranded DNA binding protein and a protein containing a ParBc endonuclease-like domain [56].
GC skew analysis of Shbh1 shows a possible replication origin (global minimum in GC skew) and terminus (global maximum in GC skew) at ±60 kbp and 0/138 kbp, respectively (Additional file 5: Figure S2). An intricate inverted and direct repeat structure at the ±60 kbp site also suggests a replication or transcription regulation site at this position. The genome of Shbh1 is terminally redundant, with 336 bp direct terminal repeats found at the ends of the sequence. A second local maximum (±30 kbp) corresponds to an area where the direction of transcription changes. A putative replication origin and terminus were also identified for Shpa, at 0/38 kbp and ±21 kbp, respectively (Additional file 6: Figure S4). The GC skew plot for Mgbh1 is similar to those of genetic elements which replicate solely through a rolling circle mechanism, whereas those that go through both theta and rolling circle stages show defined minima [57] (Additional file 7: Figure S3). No repeats of significance could be identified for Mgbh1.
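The GC skew approach can be sketched as a cumulative (G - C)/(G + C) curve whose global minimum and maximum mark the putative origin and terminus. The synthetic sequence below (C-rich first half, G-rich second half) is for illustration only.

```python
# Cumulative GC skew: sum (G - C)/(G + C) over fixed windows. The global
# minimum approximates the replication origin, the maximum the terminus.

def cumulative_gc_skew(seq, window=1000):
    seq = seq.upper()
    skew, running = [], 0.0
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window]
        g, c = w.count("G"), w.count("C")
        running += (g - c) / (g + c) if (g + c) else 0.0
        skew.append(running)
    return skew

def origin_terminus(skew, window=1000):
    """Base positions of the curve's global minimum and maximum."""
    return skew.index(min(skew)) * window, skew.index(max(skew)) * window

# Synthetic genome: C-rich first half, G-rich second half.
demo = "C" * 5000 + "G" * 5000
print(origin_terminus(cumulative_gc_skew(demo)))  # → (4000, 9000)
```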
ORF17 on Mgbh1 does not have an identifiable start codon; however, the remainder of the protein appears to be related to phage proteins on the NCBI database, which are also described as partial proteins. It may be that ORF16 and ORF17 are co-translated. ORF27 and ORF45 on Mgbh1 may have earlier start sites (16911 bp and 33220 bp) and be produced through stutter or read-through translation. The Imm_39 domain-containing protein on Shbh1 has a stop codon in its reading frame, and the full-length protein may be produced as a result of read-through translation. Three 27 bp direct repeats were identified on Shpa from 17204 bp to 17284 bp, which may serve as a transcription regulation site.
No tRNAs could be identified for Shbh1 or Mgbh1. It has been suggested that one reason for the presence of (especially large numbers of) tRNAs on broad-host-range phage genomes is to compensate for different codon usage patterns in host bacteria. Shbh1 infects at least two different Bacillus species, yet harbours no tRNAs, suggesting very similar codon usage profiles in these hosts (9.8% mean difference in codon usage). Shpa encodes one tRNA (Trp; 37819-37889 bp). An analysis of the proteins encoded by Shpa shows that their tryptophan content is 1.3 times as high as that of proteins in P. denitrificans, giving a possible reason why this tRNA is retained on the phage genome [58].
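The codon usage comparison can be illustrated as follows. The exact metric behind the 9.8% figure is not specified in the text, so the sketch below uses a mean absolute difference between per-codon frequency tables, applied here to toy sequences rather than real genomes.

```python
# Mean absolute difference between per-codon frequency tables.
from itertools import product

CODONS = ["".join(p) for p in product("ACGT", repeat=3)]  # 64 codons

def codon_usage(cds):
    """Per-codon frequencies of an in-frame coding sequence."""
    counts = dict.fromkeys(CODONS, 0)
    cds = cds.upper()
    for i in range(0, len(cds) - 2, 3):
        codon = cds[i:i + 3]
        if codon in counts:  # skip codons with ambiguous bases
            counts[codon] += 1
    total = max(1, sum(counts.values()))
    return {c: counts[c] / total for c in CODONS}

def mean_usage_difference(a, b):
    """Mean absolute per-codon frequency difference, as a percentage."""
    ua, ub = codon_usage(a), codon_usage(b)
    return 100.0 * sum(abs(ua[c] - ub[c]) for c in CODONS) / len(CODONS)

print(mean_usage_difference("ATGAAA" * 10, "ATGAAA" * 10))  # → 0.0
```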
Lysis and Lysogeny
ORF107 on Shbh1 appears to be a cro-like regulator and identifies the area upstream of it, which includes inverted and direct repeats, as a region likely involved in control of the lysis/lysogeny balance. The "bulls eye" plaque morphology of Shbh1 on ERV9 is suggestive of the phage being lysogenic in this host; however, there was no easily identifiable integrase on Shbh1. No integrase-related proteins could be identified on Shpa either. There are, however, a resolvase (ORF152) and a recombinase (ORF164) on Shbh1, which may serve as integrase for the phage. A holin-like protein (ORF167) was identified on Shbh1, but it is separated from the endolysin-like protein (ORF94) by 77 kbp and appears to lack the dual start motif described for other holins [59]. This holin-like protein has two predicted transmembrane regions (24-46 aa; 56-75 aa), classifying it as a class II holin [60].
It is of interest that the terminase large subunit of Mgbh1 is most similar to sequences likely derived from lysogenic phages within the genome sequences of Bacillus species, but not to terminases of Bacillus-infecting phage genomes deposited in the NCBI database. Together with the observation that clear plaques are formed on the MGK1 host, this could suggest that the phage is a lytic version of a phage (or phages) that, more often than not, display a lysogenic lifestyle.
A 65 bp nucleotide sequence (29421-29486 bp) with 100% DNA identity to a sequence adjacent to the B. halodurans 5S rDNA was identified on Mgbh1 located inside ORF37 just upstream of the putative recombinase/integrase (ORF38). This may suggest that the phage inserts itself at this position on the B. halodurans chromosome when lysogenizing the host.
A holin could not be identified on Shpa, however a peptidoglycan recognition protein (PGRP) domain protein (ORF27) was identified, and it is known that endolysin-like proteins have such domains [61,62]. It also displays homology to lysozyme-like proteins likely involved in cell wall degradation, either during infection or cell lysis for release of progeny. Shpa seems to lack the genes associated with lysogeny (integrase, recombinases), and together with the observation of clear plaques on HS3 suggests that it too is lytic.
Phage signatures on currently available genomes and metagenomes
A BLASTn search of Mgbh1 against the CRISPR database identified three perfectly conserved spacer regions on the genome sequence of B. halodurans C-125, all part of the same CRISPR array (Table 3). This suggests that Mgbh1, or a very closely related phage, could have infected B. halodurans strains isolated from various regions around the world (C-125 from soil in Japan) [63]. Six other imperfectly matching spacers were also identified in the C-125 genome. The nucleotide sequence differences between these spacers and the phage genome sequence could represent adaptation of the phage to host resistance. It has been shown that even single nucleotide changes in proto-spacer regions incorporated by bacteria as CRISPR spacers can result in a phage becoming virulent again [64]. For Shbh1, two potential CRISPR spacers could be identified on the B. halodurans C-125 genome (Table 3). No CRISPR spacers could be identified for Shpa when searching current databases; this is likely due to there being few Paracoccus genomes on these databases (including the host species identified here) or could indicate that the phage is not widespread.
Two soda lake metagenome datasets (Kulunda Steppe, Russia and Soda Lake, California) are currently available, and these were investigated for the presence of the phages described here [65] (SRR3306837). From the Soda Lake metagenomic dataset, 111 reads mapped to the Shpa genome at 80% similarity and 60% length fraction, covering 2771 bp of the genome, and 34 reads mapped to Shbh1. This could indicate the presence of these phages, or related phages, in this environment. There were no sequences similar to either Mgbh1 or Shbh1 identified in the Kulunda Steppe metagenome; however, two contigs showed regions with nucleotide similarity to the terminase large subunit from Shpa (1613 bp; 68% identity with 97% query coverage). When a BLASTx search of these sequences was performed against the NCBInr database, the best hit was within a Rhodobacter sphaeroides genome. To further investigate whether or not the contigs represented phages or prophages, these were analysed by PHAST. Neither contig appears to contain more than the one ORF with similarity to the large terminase subunit, and neither appears to be a phage or prophage, making it unlikely that these represent related phages. Thus, Shpa may not be present in that environment.
Conclusion
Here we have described three novel phages isolated from halo-alkaline environments; as very few phages that infect alkaliphilic, and more specifically haloalkaliphilic, microorganisms have been described, these three phages are rather unique [46,66,67]. It is unknown exactly what role the host organisms identified here play in their respective environments; however, the presence of possibly lytic phages such as Shpa and Shbh1 could have a significant impact on their hosts, and therefore on the environment in which they find themselves. The impact that phages can have in natural settings is well understood, but their impact in anthropogenic settings, where they could either be the cause of a problem or a solution to it, is less clear. Paracoccus species have been isolated from a wide range of environments, including biofilters used in the treatment of waste gases from animal rendering plants, groundwater contaminated with dichloromethane, a sulfide-oxidizing, denitrifying fluidized-bed reactor, wastewater from semiconductor manufacturing processes, and in and around child care facilities [68,69]. One aspect of the study of phages and their interactions with their hosts is the potential for phage-derived products and novel genetic systems. These can take the form of phage-host pairs (phage display, genetic systems, phage therapy, food preservation), endolysins/lysozymes (phage therapy, biofouling control) and DNA/RNA polymerases and ligases for research, and they include the chance of discovering new interactions such as CRISPR systems, which have been developed into excellent genome editing tools [70]. The endolysin proteins of all three phages could be useful in treating fouling of filters by these or closely related organisms and may perform best in high pH/high salt environments.

(Table 3 footnote: the subscript sequence corresponds to the phage DNA sequence, while the superscript corresponds to the sequence of the bacterium.)
Although soda lakes make up a small percentage of aquatic environments, the microorganisms found in these environments can help us further delineate the limits at which life occurs, and the addition of these three phages to the databases will hopefully aid in this process.
Design and implementation of a 232.2 kWp rooftop on-grid solar power plant
The need for large amounts of electrical energy and the negative impact of carbon emissions have encouraged all countries to develop renewable energy sources and to reduce the use of fossil energy sources, which produce substantial carbon emissions. One promising source of renewable energy is solar energy, which can be converted into electricity through solar power plants. The Indonesian government has committed to increasing the renewable energy mix to 57% by 2035 and reducing carbon emissions by 29% by 2030. To support this commitment, the Widya Mandala Foundation has built solar power plants on all campuses of Widya Mandala Surabaya Catholic University. This article communicates the design and implementation of a 232.2 kWp rooftop, on-grid solar power plant located on one of these campuses. The plant was designed and implemented using 430 units of 540 Wp solar panels and 2 units of 110 kWac inverters, and is expected to produce 363.5 MWh of electricity per year. Based on the implementation results, the following output was obtained: from May to December 2022, 221.788 MWh of electricity was generated and 88.7 tons of CO2 were avoided; from January 2023 to May 2023, 145.400 MWh of electricity was produced and 58.2 tons of CO2 were avoided. The implementation of the 232.2 kWp solar power plant is a significant step toward environmentally friendly energy. It also contributes to carbon footprint reduction and supports sustainable development goals.
Introduction
Electrical energy is a vital basic need and contributes greatly to meeting human needs in this era of globalization. Without the fulfillment of electrical energy needs, human activities will be hampered, given the current development of information technology. Research and development, as well as the use of new and renewable energy, are currently strategic issues, especially in reducing the use of dwindling fossil energy, the carbon emission footprint, and long-term environmental impacts [1]. The rate at which solar energy falls on the earth's surface is 120 petawatts; this means that one day of this energy could meet the energy needs of the whole world for more than 20 years. This is, of course, a very large resource and energy potential to be developed [2]. Some of the obstacles encountered in the field include network flexibility, limited technology development, and incomplete and unclear pricing schemes [3]. Today the world is faced with the problem of increasing demand for electrical energy and the negative impact of carbon emissions from fossil-fuel power plants. Research and development in the field of solar power plants, as well as their techno-economic analysis, have been carried out by various agencies, local governments, and countries [4][5][6][7]. To overcome this problem, countries around the world have issued policies related to the use of renewable energy or environmental policies, for example, the state of Indonesia and the Vatican.
Indonesia follows international energy policies, namely reducing greenhouse gas emissions, transforming toward new renewable energy, and accelerating an economy based on green technology. The National Energy Council has drawn up a roadmap for the energy transition toward net zero emissions in 2060. The new renewable energy target will increase every year, starting at 23 percent in 2025 and reaching 66 percent in 2060. Meanwhile, the second encyclical of Pope Francis, Laudato Si', was released, which deplores environmental damage and global warming and invites all people around the world to take "integrated and immediate global action" [8].
Utilization of solar energy
One promising source of renewable energy is solar energy, which can be converted into electricity through solar power plants. Electricity from solar energy is environmentally friendly because its generation does not produce CO2 emissions.
The potential for solar energy in the world is enormous. In Indonesia, the potential for solar energy is around 4.8 kWh/m2, equivalent to 207.8 GWp, of which only around 148 MWp has been utilized [1].
Based on the technology, there are three main types of solar power plant systems, namely on-grid, hybrid, and off-grid. All three have advantages and disadvantages [2]. According to the method of placing solar modules, solar power plants are divided into: ground-based solar power plants, rooftop solar power plants (located on flat, pitched, and other types of roofs), facade solar power plants, BIPV solar power plants, solar carports, floating solar power plants, and mobile (or portable) solar power plants [9]. Installing a solar power plant requires an open space, such as a rooftop, so that the capture of solar radiation can be optimal [10].
In developed countries, the utilization of solar energy as electrical energy through solar power plants is already very large. There has been a lot of research on the implementation of rooftop-based, on-grid solar power plants, and the results have been widely published, including research reports [11][12][13][14][15][16][17]. From these reports it is known that the main components of rooftop-based, on-grid solar power plants are solar panel modules, on-grid inverters, PV generation meters, and net meters.
In building a solar power plant, the plant is generally first designed and simulated before it is actually built. The goal is to avoid wasted costs due to construction errors. A number of software packages have been reported to be used for design and simulation, among them PVsyst, PVGIS, PV Watt, PVForm, SolarPro, and PV-DesignPro [11,18].
Given the large number of buildings with large roofs in Indonesia, the potential for rooftop-based electricity is enormous. Many building owners in Indonesia have implemented on-grid, rooftop-based solar power plants, and there have been publications about on-grid rooftop solar power plants [10].
However, there are still many buildings that have not used their roofs as sites for solar power plants. On the other hand, the Indonesian government's program to accelerate the energy transition to environmentally friendly renewable energy needs support.
Solar power plant at Widya Mandala Surabaya Catholic University
The Widya Mandala Foundation is part of the Catholic Church, which is based in the Vatican, and is also one of the institutions belonging to the Indonesian nation. The Widya Mandala Foundation is called upon to participate in realizing the Vatican's call and Indonesian state policy as in [8]. For this reason, in 2021-2022 the Widya Mandala Foundation built on-grid rooftop solar power plants totaling more than 500 kWp spread across five campuses of Widya Mandala Surabaya Catholic University, namely the Dinoyo 42 campus, Surabaya; the Dinoyo 48 campus, Surabaya; the Kalijudan campus, Surabaya; the Madiun campus; and the Pakuwon City campus, Surabaya.
This article communicates the design and implementation of a 232.2 kWp rooftop, on-grid solar power plant located on the Widya Mandala Surabaya Catholic University Pakuwon City campus. The purpose is to present the results of the design and implementation, as well as the performance of the solar power plant. The next sections of this article explain the design and implementation method, performance measurement, and implementation results of the solar power plant.
Solar power plant performance parameters
Parameter indicators to determine the performance of an on-grid solar power plant system have been established by the International Energy Agency (IEA) [19].
According to the IEA, there are many parameters that reflect overall solar power plant performance, including the performance ratio, energy yield, resource yield, capacity utilization factor, inverter efficiency, and system efficiency. Meanwhile, energy output, system efficiency, reference yield, final yield, performance ratio, annual capacity factor, and CO2 emissions avoided are the performance elements used by [19] in the evaluation of an on-grid PV system in Casablanca, Morocco. The performance ratio (PR) is the ratio of the energy effectively produced (used) to the energy that would be produced if the system were continuously working at its nominal STC efficiency. The PR is defined in the standard IEC EN 61724 [20].
Design and implementation method
To build the solar power plant in the Widya Mandala Catholic University building, Pakuwon City campus, the following steps were taken: 1) building data collection, 2) design and simulation of the solar power plant with PVsyst software, 3) implementation, and 4) commissioning test.
Building data collection
To obtain the required building data as input for the design of an on-grid rooftop solar power plant, data were collected on the Pakuwon City campus building.
Based on the results of data collection, the Pakuwon City campus building has 10 floors including the roof, and the roof area is 4280 m2. The 1576 kVA of electrical power installed in the building comes from the grid utility, PT PLN. The building is located at latitude 7.27° S, longitude 112.81° E, at an altitude of 7 m, in the UTC+7 time zone. Global irradiation at the location, based on Meteonorm 8.0 (2010-2014), is 1916.4 kWh/m2/year.
Design and simulation with PVSyst
The solar power plant was designed and simulated using PVsyst V7.2.8 software, with the building data described in 2.1 as input. To determine the required number of solar panels, a solar panel module made by JA Solar, model JA M72-S30-540MR, with a power capacity of 540 Wp per unit, was used, while an inverter made by SMA, model Sunny Tripower Core 2, was used to determine the number of inverters. The panels were placed in two orientations: orientations 1 and 2 have tilt/azimuth angles of 10°/55° and 10°/25°, respectively.
Based on the data above, we obtained 430 PV modules with a total power of 232.2 kWp, 2 inverters with a total power of 220 kWac, and 8 arrays. The array arrangement, orientation, and connections to the inverters are shown in Table 1.
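The sizing arithmetic above is straightforward to reproduce; a minimal sketch using the module and inverter ratings quoted in the text:

```python
# PV array sizing check using the figures quoted above.
MODULE_WP = 540        # rated power per JA Solar module, Wp
N_MODULES = 430
INVERTER_KWAC = 110    # rated AC power per SMA inverter, kW
N_INVERTERS = 2

array_kwp = N_MODULES * MODULE_WP / 1000    # total DC capacity, kWp
total_kwac = N_INVERTERS * INVERTER_KWAC    # total AC capacity, kW
dc_ac_ratio = array_kwp / total_kwac        # modest DC oversizing of the inverters

print(array_kwp)              # 232.2
print(total_kwac)             # 220
print(round(dc_ac_ratio, 2))  # 1.06
```

The DC/AC ratio of about 1.06 is a common design choice, since the array rarely produces its full STC power in practice.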
Implementation
The results of the PVsyst 7.2.8 simulation were then implemented using the selected PV modules, inverters, and kWh meters. Figure 1 shows the layout of the PV module array of the 232.2 kWp on-grid rooftop solar power plant. The specifications of the other system components are explained in 2.3.1 to 2.3.3.
PV module
The system uses 430 units of the JA Solar 540 Wp PV module, made in China. The specifications of the module are shown in Table 2. The modules are placed on the rooftops of building A and building B of the Pakuwon City campus according to the configuration given in Table 1.
Inverter
The system uses 2 transformerless SMA Sunny Tripower Core 2 inverters, made in Germany. Each inverter is integrated with a PV generation meter. The specifications of the inverter are shown in Table 3.
On grid meter
The system uses an on-grid meter made by PT EDMI Manufacturing Indonesia, Cikarang, provided by the grid utility. The specifications of the meter are shown in Table 4. This kWh meter measures the amount of power imported from and exported to the utility grid by the solar power plant. In addition, the system uses aluminum extrusion material for the mounting structure, the Modbus communication protocol for the Monitoring and Control Unit, one AC combiner, DC-side cable, AC-side cable, grounding cable, and cable management.
Commissioning test
To verify that the on-grid solar power plant system met the installation requirements, a commissioning test was held. For this purpose, a list of the parts of the installation that needed to be checked was made.
The parts examined included the PV modules, on-grid inverters, Monitoring and Control Unit, AC combiner, DC-side cables, AC-side cables, grounding cables, and cable management. The parts measured included the PV module string outputs, inverter inputs and outputs, insulation resistance, and grounding resistance. The three-phase inverter output voltage (Voc) was found to be around 410-413 V, and the output current (Isc) around 118-120 A.
Performance measurement
The 232.2 kWp on-grid rooftop solar power plant has been in operation since May 11, 2022. To assess its performance, a number of measurements were taken. Data were collected for the period May 2022 to May 2023.
The tool used to measure and monitor the solar power plant is the SMA Energy Meter App, version 1.19.173R. With this tool, the performance of the solar power plant can be monitored and measured remotely every day. The application stores measurement data from the time the plant was commissioned until now.
Based on the collected data, the solar power plant's performance is analysed below.
Results and discussion
This section presents the measured data of the 232.2 kWp solar power plant collected from May to December 2022 and from January to May 2023. Based on these data, the solar power plant's performance is analysed using the energy produced, reference yield, final yield, performance ratio, annual capacity factor, energy balance, and CO2 emissions avoided as indicators.
Energy production
Figure 3 shows the energy produced every month from May 2022 to May 2023, together with the expected monthly energy production. The total energy produced from May to December 2022 was 221.788 MWh, while the energy produced from January to May 2023 was 145.400 MWh.
From Figure 3 it can be seen that the total yield is close to the expected yield. Factors that may influence the difference are the weather and variability in sunlight intensity.
Performance ratio
Based on the PR definition and the data shown in Figure 3, the PR of the solar power plant from May 2022 to May 2023 is 93%. This figure is higher than the PVsyst simulation output, which is 80.99%. A high PR indicates that the solar power plant is operating effectively and efficiently.
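The IEC EN 61724 performance-ratio calculation referenced in Section 1.3 can be sketched as follows. The inputs below are illustrative assumptions drawn from figures quoted elsewhere in the paper (the annual Meteonorm irradiation is used as the in-plane reference), not the exact monitored quantities behind the 93% figure:

```python
# Performance ratio per IEC EN 61724: PR = final yield / reference yield.
E_AC_KWH = 367_188      # AC energy delivered over the period, kWh (assumed)
P_STC_KWP = 232.2       # installed DC capacity at STC, kWp
H_POA_KWH_M2 = 1916.4   # in-plane irradiation over the period, kWh/m2 (assumed)
G_STC_KW_M2 = 1.0       # STC reference irradiance, kW/m2

final_yield = E_AC_KWH / P_STC_KWP              # specific yield, kWh/kWp
reference_yield = H_POA_KWH_M2 / G_STC_KW_M2    # equivalent full-sun hours
pr = final_yield / reference_yield
print(round(pr, 2))
```

The result depends strongly on the irradiation figure used as the reference, which is why monitored in-plane irradiance, rather than a climatological annual value, is normally required for a rigorous PR.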
Annual capacity factor
The capacity factor is defined as the actual electricity production divided by the maximum possible electricity output of a power plant over a period of time [10].
According to PVsyst, the nominal energy produced at an STC efficiency of 20.9% would be 427.6 MWh, while the total energy yield from May 2022 to May 2023 was 367.188 MWh. The annual capacity factor of the solar power plant from May 2022 to May 2023 is therefore 85.8%. This value is less than the performance ratio of the solar power plant. It indicates that the efficiency of the system in converting solar radiation into energy is high.
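The arithmetic behind the 85.8% figure can be sketched as follows. Note that this figure is the ratio of measured output to PVsyst's nominal STC energy; the conventional capacity factor per the definition above (actual output over rated power times elapsed hours) comes out much lower. The one-year window is an assumption for illustration:

```python
E_MWH = 367.188          # measured output, May 2022 - May 2023
E_NOMINAL_MWH = 427.6    # PVsyst nominal energy at STC for the period
P_KWP = 232.2            # installed DC capacity
HOURS = 8760             # assumed ~one-year window

ratio_vs_nominal = E_MWH / E_NOMINAL_MWH            # the paper's 85.8% figure
conventional_cf = E_MWH * 1000 / (P_KWP * HOURS)    # definition cited from [10]

print(round(100 * ratio_vs_nominal, 1))   # ~85.9
print(round(100 * conventional_cf, 1))    # ~18.1
```

A capacity factor in the 15-20% range is typical for fixed-tilt PV, since the plant produces nothing at night and only part of its rated power under most daytime conditions.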
Energy balance
The energy balance of an on-grid solar power plant shows the amount of energy produced by the solar power plant, the energy consumption, the energy supplied from the grid utility, and the energy fed into the grid utility.
Figure 4 shows the monthly energy balance of the on-grid solar power plant. From May 2022 until May 2023, the total energy generated by the solar power plant was 367.188 MWh, the energy supplied by the grid utility to the building was 1,525.788 MWh, the total energy consumption in the building was 1,877.661 MWh, and the energy fed into the grid was 14.372 MWh. The energy produced by the solar power plant that was directly consumed in the building was 351.884 MWh. These data show that the solar power plant plays a role in reducing dependence on the utility network by producing energy that can be used directly in the building. Even though some energy is sold to the grid, the total energy consumption in the building still exceeds the total energy produced by the solar power plant, so there is still dependence on supply from the utility grid.
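As a quick consistency check, the balance figures above should close to within losses and metering error; a minimal sketch (MWh, values from the text):

```python
generated   = 367.188    # solar plant output
from_grid   = 1525.788   # imported from the utility
consumed    = 1877.661   # total building consumption
fed_to_grid = 14.372     # exported to the utility

self_consumed = generated - fed_to_grid             # used directly in the building
residual = (self_consumed + from_grid) - consumed   # should be near zero

print(round(self_consumed, 3))   # ~352.8, close to the 351.884 quoted above
print(round(residual, 3))        # ~0.9 MWh imbalance from losses/metering
```

The small residual (under 0.1% of consumption) is consistent with cable losses and independent metering of the import, export, and generation channels.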
CO2 avoided
A fossil-fuel power plant produces 0.6 kg of CO2 emissions per 1 kWh of electricity generated. Solar power plants make it possible to avoid these emissions when they are used to generate the same amount of electricity [21].
Figure 5 shows the CO2 avoided by the on-grid solar power plant. The total CO2 avoided from May 2022 to May 2023 is 146.9 tons, or 0.63 tons per kWp installed. The data show that the amount of CO2 avoided increases or decreases in proportion to the amount of electrical energy produced by the solar power plant. This is in accordance with the principle that the more energy a solar power plant produces, the more CO2 is avoided due to the reduced use of fossil-fuel-based energy.
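The avoided-emissions estimate follows the simple per-kWh factor from [21]; a hedged sketch (the helper name is illustrative, and the monitoring app behind the figures reported above may apply a different grid emission factor):

```python
def co2_avoided_tonnes(energy_kwh, factor_kg_per_kwh=0.6):
    """Estimate CO2 avoided (tonnes) for a given solar generation, using the
    fossil-plant emission factor quoted from [21]."""
    return energy_kwh * factor_kg_per_kwh / 1000

# Example with an arbitrary 100 MWh of generation:
print(co2_avoided_tonnes(100_000))   # 60.0
```

In practice the factor should be the marginal emission intensity of the local grid mix, which varies by country and by year.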
Analysis of these data shows that the solar power plant plays an important role in reducing greenhouse gas emissions, especially CO2. In other words, the more energy the solar power plant produces, the greater its contribution to the environment and to efforts to reduce the impact of climate change. The high ratio of avoided CO2 per kWp also shows that the solar power plant is efficient in reducing carbon emissions.
Conclusion
This article has explained the design and implementation of a 232.2 kWp on-grid rooftop solar power plant located at the Widya Mandala Surabaya Catholic University Pakuwon City campus.
Fig. 1 .
Fig. 1. Layout of the PV module array of the 232.2 kWp rooftop, on-grid solar power plant located at the WMSCU Pakuwon City campus.
Fig. 5 .
Fig. 5. CO2 avoided monthly by the solar power plant from May 2022 until May 2023.
Table 1 .
Array arrangement, orientation and connection to the inverter.
Table 4 .
On grid KWh meter.
Insights into how development and life-history dynamics shape the evolution of venom
Venomous animals are a striking example of the convergent evolution of a complex trait. These animals have independently evolved an apparatus that synthesizes, stores, and secretes a mixture of toxic compounds to the target animal through the infliction of a wound. Among these distantly related animals, some can modulate and compartmentalize functionally distinct venoms related to predation and defense. A process to separate distinct venoms can occur within and across complex life cycles as well as more streamlined ontogenies, depending on their life-history requirements. Moreover, the morphological and cellular complexity of the venom apparatus likely facilitates the functional diversity of venom deployed within a given life stage. Intersexual variation of venoms has also evolved further contributing to the massive diversity of toxic compounds characterized in these animals. These changes in the biochemical phenotype of venom can directly affect the fitness of these animals, having important implications in their diet, behavior, and mating biology. In this review, we explore the current literature that is unraveling the temporal dynamics of the venom system that are required by these animals to meet their ecological functions. These recent findings have important consequences in understanding the evolution and development of a convergent complex trait and its organismal and ecological implications.
Introduction
Venom has fascinated humanity for thousands of years as fragile, small, and physically weak animals can deploy toxic cocktails that threaten the life of much larger and powerful animals, including humans [1]. This toxic mixture of chemicals is produced by one animal and is introduced via a wound infliction into another animal, causing upon its introduction an array of physiological and biochemical imbalances in the attacked animal [2]. The dominant proportion of these compounds found in venom is often proteinaceous and encoded by the animal's genome [3]. The genes encoding toxin peptides are incredibly diverse, with many even having a restricted distribution to a specific lineage [4,5]. Taken together with evidence that toxins regularly undergo rapid evolution under the strong influence of natural selection, venom has emerged as a model for extreme evolutionary trends and novelty [3,[6][7][8].
Changes in toxin expression may also combine to generate distinct venom profiles localized to specific tissues, life stages, or sexes that are essential for ecological functions, such as prey capture and defense [8][9][10]. The venom system itself is dynamic across the life history of venomous animals, undergoing both morphological and biochemical transitions that coincide with shifts in biotic interactions. Additional levels of complexity are also present with multiple different venom profiles capable of being produced within a given life stage.
Recently reviewed in Schendel et al. [10], the venom apparatus can contribute to the dynamic nature of venom deployed by animals through a process of modulation and compartmentalization of toxin expression. This process relies on morphological complexity that allows for the separation of venom among anatomically distinct venom glands, cellular spatial heterogeneity within a venom gland, or even being distributed throughout an organism through the decentralization of the entire venom system [10]. In concert, evidence of venom variation among males and females has also been reported, highlighting that the developmental processes related to sex determination and differentiation contribute to generating an animal's venom phenotype. Strikingly, this process for the variation of the venom system spatially, temporally, or intersexually has independently evolved multiple times among distantly related animals (Fig. 1).
These venom system dynamics are an important and novel link between venom, evolution, and development. Here we review evidence of the process involved in developing the venom system and explore evidence for the spatial, temporal, and intersexual variation in toxin expression. Further, we propose that the study of developmental aspects of venom systems and their evolution can now advance by linking to the discipline of evolutionary developmental biology (evo-devo).
Developmental dynamics of the venom system
The generation of a novel venom system requires substantial innovations, at the very minimum it requires the recruitment and evolution of cells that will produce toxins and a mechanism to inflict wounds and deliver venom via these wounds. Such evolutionary innovations at the cellular and morphological levels would always require vast changes at the molecular and genetic levels to enable them. Here we will review evidence of changes to the venom system both morphologically and biochemically across ontogeny.
Venom apparatus development
Snakes are among the most studied venomous animals, largely due to the significant adverse effect of their bites on human health (reviewed in Gutiérrez et al. [11]). While venom composition has been the subject of the majority of these studies [12], research investigating the development of their venom apparatus is attracting considerable attention from evolutionary and developmental biologists [8,10]. This system typically consists of a gland loaded with venom and delivered using specialized fangs [13][14][15]. These fangs are often located on the maxilla and are distinct from the other tooth-bearing bones [16]. Broadly, the snake fang phenotype is highly heterogeneous, differing in its location in the jaw as well as in various other characteristics, including tooth morphology that can be either grooved, hollow, or tubular [14,[17][18][19].
Fig. 1 The convergent evolution of the separation of venom composition in animals. Lineages with known venomous taxa are depicted in blue. Boxes on branches highlight the evolution of venomous lineages that exhibit venom heterogeneity among morphological (spatial) and cellular structures, life history (temporal), and sexes (intersexual). The spatial separation of venom is predicted according to Table 2 provided in Schendel et al. [10], given evidence of animals that have a morphologically complex venom apparatus and putative multifunctional toxin profiles.
Significant insights into the origin and evolution of the fang phenotype were revealed using developmental genetics. Specifically, Vonk et al. [14] performed in situ hybridization of the sonic hedgehog (SHH) gene on serial sections of snake embryos to reconstruct the development of snake fangs in three dimensions. The findings from this work revealed that front and rear fangs share striking similarities in their morphogenesis, both of which develop from the posterior end of the upper jaw. During front-fang development, ontogenetic allometry occurs which displaces the fang from its posterior developmental origin, transitioning to the anterior position in adults [14]. In contrast, rear-fanged snakes retain their posterior positioning, which develops from an independent posterior dental lamina [14]. This work, among others, provided support that front and rear fangs are homologous and likely evolved from a rear-fanged ancestor [14,[17][18][19]. The subsequent radiation of snakes led to multiple independent gains and losses of various fang phenotypes [19]. Recent work has highlighted that the evolution of the rear-fang phenotype in snakes is highly dynamic, exhibiting extreme heterogeneity compared to the front-fang phenotype, which appears to be much more stable [19]. The acquisition and evolution of venom in snakes have likely shaped fang morphology, specifically with fangs from colubrid snakes transitioning more anteriorly [19].
To date, our understanding of the development and evolution of the venom gland has remained largely unresolved in most venomous species. A recent review by Zancolli and Casewell [8] highlights that venomous lineages share a common trait in which they possess specialized epithelial cells that synthesize, store, and eventually secrete venom components. The collective organization of these cells can form a conspicuous venom gland that is found in most venomous animals [8]. In snakes, the venom gland forms during development from oral tissue, suggesting it is derived from the salivary glands [15]. However, an alternative hypothesis has been suggested that the venom gland is derived from the pancreas [15]. Evidence for this is supported by the expression of a microRNA (miR-375) in the venom gland of the king cobra that is also found in the pancreas of other vertebrates [20]. However, further evidence is needed to confirm this hypothesis, such as the co-option of the pancreatic gene-regulatory network to the venom gland. To gather such insights, functional assays and genomic studies investigating the mechanisms related to the development of the venom system are needed. This requires novel techniques and technologies that until now were not accessible.
Organoids are a revolutionary new technique that has been developed to enable the recapitulation of essential features, tissues, and organs into 3D biological structures [21]. This requires defined growth factor conditions from adult stem cells (ASCs). Advancements in the use of serum-free medium containing R-spondin, the BMP (bone morphogenic protein) inhibitor Noggin, and EGF (epidermal growth factor) enabled the growth of mouse intestinal ASCs into an epithelial organoid [22]. Following this breakthrough, additional R-spondin-based protocols have been implemented to recapitulate both healthy and diseased mammalian epithelia, including growing mammalian salivary gland organoids [23,24]. These insights led to researchers being able to recapitulate the snake venom gland as an organoid [25]. This was achieved by first dissociating snake venom glands and embedding them into basement membrane extract. The initial expansion of organoids was made possible by supplying a medium containing a "generic" mammalian organoid cocktail. Further expansion was induced using R-spondin, Noggin, EGF, the small molecule TGF (Transforming growth factor) beta inhibitor A83-01, PGE2 (Prostaglandin E2), and FGF10 (fibroblast growth factor 10). Strikingly, this "expansion medium" controls the same cellular signaling pathways that are required for mammalian epithelial organoids. This provides evidence that many of the same factors controlling the development of mammalian epithelium are also active in reptiles and were probably recruited as whole developmental modules into the venom system. Exploring whether these factors also control the development of other vertebrate venom glands remains to be tested. The development of the first venom gland organoid suggests that we are on the precipice of exciting breakthroughs in understanding the evolution and development of venom glands.
Temporal variability of toxin expression across ontogeny
The formation of the complete and functional venom apparatus allows for the utilization of venom to collectively function in ecological roles, such as predation and defense [26][27][28][29]. In some venomous animals, the formation of the venom system occurs at a juvenile stage that may have unique biotic interactions compared to adults. Here we will review evidence of venomous animals that have evolved the ability to express different toxins at the juvenile and adult life stage.
In multiple snake species, variations in venom composition have been associated with ontogeny [30]. This was first documented in detail by Mackessy in 1988 [31]. In this seminal work, the ontogenetic variation in venom composition was examined in rattlesnakes of various lengths [31]. In both Crotalus helleri and Crotalus oreganus, increased protease activity is positively correlated with size, and toxicity is more pronounced in juveniles [31]. The separation of venom pharmacology among ontogenetic changes may have evolved due to changes in diet requirements. This was supported by analysis of the gut contents of museum specimens of the Crotalus species, with lizards contributing to a major proportion of the diet in juvenile snakes, whereas mammals are the primary diet of adults [31]. The resulting juvenile rattlesnake venom composition is one of high toxicity and with low protease activity that efficiently targets lizards and small rodents. It is proposed that adult snakes that target larger mammals require protease activity to digest their prey effectively [31].
Similar patterns of ontogenetic variation have also been observed in other rattlesnake species. For example, the transcriptomes from Crotalus adamanteus adult and juvenile venom glands were sequenced from five populations, revealing that 12 of 59 toxin transcripts exhibit significant differential expression across ontogeny [32]. Of these 12 differentially expressed toxins, three and nine toxins were upregulated in juveniles and adults, respectively. While similar total levels of snake venom metalloproteinases were expressed in adults and juveniles, paralog-specific expression was observed to be restricted to ontogenetic stages [32]. In adults, specific paralogs of phospholipases A2 were upregulated, along with bradykinin-potentiating and C-type natriuretic peptides, nerve growth factor, and snake venom serine proteinases. Consistent with Mackessy [31], juvenile venom was also identified to be more toxic to small rodents [31,32]. This provides evidence that the pharmacological plasticity of venom may be driven by temporal changes in the expression of toxin-encoding genes. Reports from other species in the Crotalus genus show similar patterns. For example, the venom proteome of 6-week-old Crotalus simus is predominantly composed of neurotoxins, while the major adult venom components are snake venom metalloproteinases [28]. These ontogenetic differences in the production of toxins, generating phenotypically divergent profiles, result in adult and newborn venom being hemorrhagic and neurotoxic, respectively.
Conspicuous changes in venom composition across ontogeny are also observed in other snake species, including pit vipers (genus Bothrops) [33,34] and brown snakes (genus Pseudonaja) [35,36]. For example, Bothrops venoms showed differential toxicity and pharmacology between life stages, with newborn and juvenile venoms exhibiting higher lethality in mice than those of adults [33]. This is likely due to newborns and juveniles having increased hemorrhagic, edema-forming, and coagulant activities. In Bothrops jararaca, the newborn venom is highly lethal to chicks (Gallus gallus), whereas the adult venom has a slightly higher lethal activity in mice [34]. Ontogenetic changes in venom composition among species of brown snakes (Pseudonaja) have also been reported, revealing shifts in the functional activity of the venom profile during the transition from juveniles to adults [35,36]. For example, Cipriani et al. [35] revealed that many species of Pseudonaja transition from expressing non-coagulopathic venom as juveniles to coagulopathic venom as adults. Again, these ontogenetic shifts in venom activity correlate with dietary preference dynamics across life history, with most juvenile brown snakes preferring reptiles as prey and transitioning to become more generalized predators as adults [35,36]. Differences between young and adult snake venom profiles can also be found in distantly related snake species from the family Colubridae. For example, venom of the rear-fanged snake Boiga irregularis undergoes an ontogenetic shift in enzyme activities and toxicity, with younger snakes producing more toxic venoms with lower protease activities [37].
Venom gland organoids of the Cape coral snake (Aspidelaps lubricus cowlesi) have also revealed interesting insights into the temporal expression of toxins [25]. As the organoid is exposed to different cocktails of media, it first undergoes an expansion phase and then differentiates to generate mature and functional cell types [25]. While all other toxins increased their expression across these phases under the different media, the CRISP (cysteine-rich secretory protein) toxin showed the inverse pattern: its expression levels dropped during the transition from the expansion to the differentiation phase. These results hint at previously unreported temporal venom dynamics, with the CRISP toxin potentially being expressed during early life stages.
Evidence of venom composition changes between juveniles and adults has also been reported in the tarantula Phlogius crassipes [38]. The venom profiles of four ontogenetic stages of this species, defined by cephalothorax length, were examined using gel electrophoresis and mass spectrometry [38]. This revealed that some potential toxins are expressed only in a specific ontogenetic stage; however, the function of these toxins remains to be characterized. Whether this is unique to tarantulas among spiders remains to be tested. Similarly, ontogenetic differences in venom composition have been reported in the Brazilian spider Phoneutria nigriventer, with the venom profile shifting to become predominantly composed of low-molecular-weight proteins in adults [39]. This shift in venom composition likely contributes to adult venom having increased lethality in mice [39]. Ontogenetic differences in toxin expression have also been reported between dramatically different life stages, such as gametes, developing larvae, and adults, in addition to the more nuanced distinction between juveniles and adults.
Venom apparatus development across a complex life cycle
Venomous animals that undergo a complex life cycle rely on the coordination of the venom system with transitions in their life stages. Dynamic morphological and biochemical shifts must coincide with changes to their ecological requirements, such as predator-prey interactions. To date, this phenomenon has been explored in detail in Nematostella vectensis, which can complete its full life cycle in the lab [40,41].
In members of the phylum Cnidaria (corals, hydroids, sea anemones, and jellyfish), there is no centralized venom gland; instead, various types of cnidocytes ("stinging cells," a synapomorphy that typifies this phylum) have evolved. Cnidocytes harbor the cnidocyst, arguably the most morphologically complex organelle known to date, which is a harpoon-like structure that discharges at incredible speed and force, punctures the cuticle and/or epidermis of the stung animal, and delivers venom [42][43][44]. Numerous types of cnidocysts have been characterized, some of which have a restricted distribution among specific cnidarian lineages. For example, spirocysts are unique to Anthozoa (corals and sea anemones) and are used to entangle target animals using thread-like organelles. In contrast, nematocysts are organelles that serve as a microinjector to deliver the venom and have a much broader distribution in Cnidaria (reviewed in Kass-Simon and Scappaticci [45]). This suggests that nematocysts are likely the ancestral cnidocyst [42]. Studies on the model cnidarians N. vectensis (Anthozoa), Hydra magnipapillata, and Hydra vulgaris (Hydrozoa) have provided unparalleled insights into the development and functions of the venom apparatus components. For example, in Hydra, the maturation of cnidocytes occurs following their differentiation from interstitial cells (i-cells; for reviews see [46,47]). These i-cells are hydrozoan-specific progenitor cells found throughout the mid-gastric region of the ectoderm [46][47][48]. The specialized organelle, the cnidocyst, develops within a post-Golgi vesicle during differentiation from i-cell to cnidocyte [46,47]. The cnidocyst comprises multiple structural proteins that generate the tubule, harpoon, and capsule wall [42,47,49,50].
Many of these proteins are cnidarian-specific, such as minicollagens and nematogalectins, and are regulated through posttranscriptional and posttranslational modifications, such as alternative splicing and preprotein cleavage, respectively [42,47,49,50].
Additionally, recent studies have revealed insights into the development of the three different cnidocyte cell types characterized in N. vectensis. These include two types of nematocytes (basitrichous isorhizas and microbasic p-mastigophores) and spirocytes [51]. The distribution and density of cnidocytes in N. vectensis vary across tissues and development; for example, basitrichous isorhizas are found at high density as early as the planula stage, whereas spirocytes are found predominantly in tentacles after the primary polyp stage [51]. The development of cnidocytes in N. vectensis is driven by transcription factors such as SoxB2, which is expressed in a population of progenitor cells that can give rise to both neurons and cnidocytes [52], among others [53][54][55]. Interestingly, the homologous bilaterian SoxB genes are involved in neurogenesis as well [56], indicating that this role has been conserved for hundreds of millions of years. Further, the existence of a common progenitor cell of neurons and cnidocytes in multiple cnidarians, together with the recent finding that nematocyte neurotoxins can be recruited from neurons in N. vectensis and possibly other cnidarians [57], supports the notion that cnidocytes might be highly derived neurons [58,59].
Many other venomous invertebrates also develop across a complex life cycle, undergoing metamorphosis and transitioning from a larval form to a juvenile before eventually becoming an adult. To date, how the venom apparatus develops during these transitions remains an open question. One study that explored this process cultured the feeding larvae of Conus lividus and traced the development of the venom gland through serial histological sections of the dissected foregut during larval and metamorphic stages [60]. These results provide support for the hypothesis of homology between the venom gland and the mid-esophageal gland of other gastropods. The development of the venom gland may also differ depending on whether the cone snail feeds at the larval stage. Results suggest that the venom gland of Conus anemone, which has a non-feeding larval stage, may develop through a different process that involves the out-pocketing of the ventral glandular region of the foregut [61]. While different processes generate the venom glands of C. anemone and C. lividus, both species share similarities in the formation and accumulation of secretion granules within the presumptive venom gland prior to larval metamorphosis [41,42]. This suggests that these cone snails begin loading their venom gland before transitioning to juvenile snails. Whether this venom can already be deployed and injected remains to be elucidated. While histological and morphological assays are providing key insights into the development of the convoluted venom apparatus in cone snails, the molecular pathways that control this process have yet to be characterized.
In other invertebrates, the completion of the venom apparatus coincides with their feeding requirements. For example, DAPI staining of Strigamia maritima embryos revealed that the venom apparatus is likely formed during early postembryonic development [62]. This is consistent with evidence that, in the early stages after hatching, S. maritima is incapable of feeding using its forcipules, which are derived from a pair of walking legs. A similar pattern is seen in Scolopendra subspinipes mutilans [62,63], in which the venom gland is first observed eight days after the molt, during the transition from the postembryonic II to the fetus stage [64]. As the centipede continues to transition from the fetus stage, the preadult centipede develops well-formed forcipules (which are heavily sclerotized and fully functional) and a complete venom duct. At this stage, the centipede is capable of feeding, with the venom duct eventually detaching from the endocuticle of the exoskeleton.
Evidence of a developed venom apparatus at the larval stage of spiders has also been reported. In Phoneutria nigriventer, scanning electron microscopy revealed that at the larval stage preceding the spider's eclosion from the cocoon, the venom apparatus has already developed a bilaterally symmetrical pair of ducts, chelicerae, and venom glands that display their characteristic shape and are surrounded by a layer of muscle [65]. This suggests that the venom apparatus is completely formed at this early life stage. While this precedes the animal's ability to capture prey, the venom system may play a role in defense during this early life stage. As predation becomes necessary, the venom glands of P. nigriventer begin to transition internally to the prosoma of the adult [65]. Whether this transition helps mediate the spider's ability to use venom for prey capture and feeding remains to be tested.
Temporal variation of toxin expression across complex life cycles
N. vectensis was established in the last two decades as an important lab model in the field of evolutionary developmental biology [40,52,66,67]. During the life cycle of N. vectensis (Fig. 2), both males and females release gametes into the water via spawning [68]. Following fertilization, cleavage of the zygote begins, forming a blastula, and subsequent gastrulation is completed less than 24 h postfertilization (hpf). A planula larva emerges from the egg package at 48-72 hpf and starts swimming in the water. Six to seven days after fertilization, the planula settles on a soft substrate and starts to metamorphose into a primary polyp; sexual maturation takes about 4 months under lab conditions.
Toxins can be delivered by two different cell types: nematocytes, which develop as early as 48 hpf in the swimming planula [58], and gland cells, which are loaded with venom components and found even earlier, at the gastrula stage (Fig. 2a). At least four different types of gland cells have been identified across the life history of N. vectensis, from as early as the gastrula stage [70]. This diversity of gland cells is supported by recent single-cell RNA sequencing (scRNA-seq) revealing that multiple gland cell populations express different toxins at different developmental stages [71]. Temporal dynamics in toxin expression have also been investigated using experimental approaches for a few key toxin gene families [70][72][73][74].
Strikingly, Nv1 is the major component of the adult polyp's venom profile (Fig. 2b-d) yet is absent in the larval stages [69,73,75]. This toxin is produced in massive amounts, likely a consequence of the more than 10 highly conserved copies found in tandem in the genome [70,75]. The abundance of Nv1 specifically at the polyp stage is even more striking given that the multiple Nv1 loci are transcribed at all developmental stages of N. vectensis; however, proper splicing of these transcripts is restricted to the polyp stage [73]. This is achieved through intron retention in the early life stages, a posttranscriptional regulatory mechanism that restricts functional Nv1 synthesis to the polyp stage onward, keeping it absent from the embryo and planula stages [73]. The production of Nv1 coincides with the requirement to capture prey, while venom produced in earlier life stages is likely specialized for defensive purposes, as the sea anemone does not feed before the primary polyp stage. The specialization of venom profiles has been attributed, at least partially, to the molecular mechanism of gene duplication, which has resulted in the diversification of toxins with divergent temporal expression and target specificity [76].
The recent characterization of Nv1 paralogs has revealed a pattern of functional specialization divergent from other members of this gene family [76]. Specifically, Nv4 and Nv5 are expressed in early life stages, confirmed both quantitatively (nCounter and LC-MS/MS) and qualitatively (transgenesis and immunostaining). At the protein level, Nv4 and Nv5 have specialized to be lethal to zebrafish larvae but harmless to arthropods, whereas Nv1 is highly lethal to insects [73]. This pattern is supported by ecological studies in which natural fish predators avoid feeding on eggs and planulae of the anemone [70]. The evolution of the Nv1 gene family has ultimately led to the adult-specific expression of Nv1 coinciding with prey capture needs, and to Nv4 and Nv5 expression in early life stages required for specialized defensive functions. Other toxin-encoding genes, such as NvePtx1 and NEP3, have also been implicated in resistance to fish predators in the early life stages [70].
NvePtx1, a homolog of a known potassium channel-blocking toxin, is expressed dynamically across the life cycle of N. vectensis [70,74]. Quantitative and qualitative approaches at both the RNA and protein levels revealed that NvePTx1 is expressed in gland cells early in development and subsequently downregulated following the transition to the polyp stage [70]. Nematocytes are also used to deliver venom during the early life stages of N. vectensis. Specifically, NEP3 is expressed in nematocytes across development (Fig. 2e-g), starting as early as the planula stage [70]. While the spatial expression patterns of NvePtx1 and NEP3 are distinct, their expression in early life stages supports their utilization as defensive toxins [63]. These findings suggest that venomous animals with a complex life cycle that experience different ecological interactions may produce vastly different venoms in distinct life stages. Congruently, the temporal expression of toxins across complex life cycles has been reported in multiple diverse taxa, as reviewed here.
In the reef-building coral Acropora millepora, different members of the small cysteine-rich peptides (SCRiPs) neurotoxin gene family are upregulated at different developmental stages [77,78]. Specifically, this family of neurotoxins exhibits dynamic temporal expression, with SCRiP3 upregulated in the post-settlement stage, SCRiP2 upregulated in the pre-settlement stage, and SCRiP-like upregulated in the adult [77,78]. Furthermore, the expression of toxins in early life stages is present among other cnidarians [9]. This is evident with some pore-forming toxins expressed specifically in the embryo of Hydra vulgaris [79]. Evidence of ontogenetic differences in venom profiles has also been reported in cubozoans, with the Australian box jellyfish Carukia barnesi showing proteinaceous components of the venom extract having different molecular weights specific to immature and mature animals [80]. Furthermore, these differences correlate with changes in diet preference, with young and adult medusae preferring invertebrate and vertebrate prey, respectively. The finding of toxins expressed early in development in distantly related species across Cnidaria suggests that this process is conserved in this venomous phylum. Similar patterns observed in other venomous lineages suggest its convergent evolution.
(Fig. 2 legend: UE, unfertilized eggs; G, gastrula; P, planula; M, metamorphosis; PP, primary polyp; AP, adult polyp [70].)
Evidence of toxin expression across life stages has also been reported in cone snails [81]. In Conus victoriae, sequencing captured venom mRNA expression in embryos, revealing five novel O- and two α-conotoxin transcripts [81]. In addition to these novel toxins, the expression of a known adult toxin, Vc1.1, was also captured in the embryo. Functional assays revealed that the embryonic α-conotoxins have different neuronal nicotinic receptor targets, suggesting that they may have specialized functions or prey specificity [81]. Further systematic studies investigating the venom profile in early life stages are required to determine whether cone snail embryos and newly hatched juveniles synthesize defense-specific venom essential to deter predators, as observed in N. vectensis. In addition to dynamic toxin expression over time, some venomous animals can also generate distinct venom profiles spatially.
Heterogeneity and compartmentalization of toxin production and its impact in venom profiles
A major insight into the separation of venom within a given life stage was reported in the scorpion Parabuthus transvaalicus [82]. Upon stimulation, this scorpion initially secretes a prevenom cocktail that is transparent, with further stimulation resulting in a different secretion that is cloudy and white in color [82]. The components of these two distinct venom profiles vary in their combinations of salts and peptides. The prevenom is rich in potassium (K+) salts and contains some peptides that block voltage-gated K+ channels, resulting in local depolarization that ensures severe pain and toxicity in the target, which is essential for defense [82]. Venom secreted after the prevenom consists predominantly of peptides and proteins and is reported to elicit a less severe pain response, yet maintains high potency and lethality toward both mice and insects [82]. The separation of these two venoms suggests that the prevenom has evolved to be highly specialized for roles related to defense.
Multiple recent studies have reported a similar process for the separation of distinct venom profiles within a given life stage. Advances in molecular techniques are revealing that this separation of venom is driven by the compartmentalization of toxin expression at the gross organ and tissue levels. For example, Dutertre et al. [83] revealed that cone snails can dynamically transition their venom composition in response to predatory or defensive stimuli (Fig. 3a). The defensive stimulus induces the production of high levels of paralytic toxins that efficiently block neuromuscular receptors in vertebrates, while the predatory stimulus induces the production of distinct venom with a composition enriched in predatory-specific toxins that are mostly inactive in vertebrates [83]. Evidence supports that this envenomation strategy is an ecologically important trait, with a defense-specific venom conserved among worm-, mollusk-, and fish-hunting cone snails [83][84][85]. These distinct venom profiles are produced through regional heterogeneity in toxin expression. Specifically, the distal and proximal regions of the venom duct generate the predatory- and defensive-specific venoms, respectively [83].
The work by Post et al. [25] revealed a similar pattern of regional heterogeneity in toxin expression in the Cape coral snake, Aspidelaps lubricus cowlesi. This was achieved by dissecting its embryonic venom glands into proximal (located near the duct) and distal regions to generate region-specific organoids. Analysis of toxin expression in region-specific organoids by scRNA-seq identified that C-type lectins are enriched in the proximal organoids, whereas distal organoid cells predominantly produce Kunitz-type protease inhibitors and three-finger toxins [25]. This is consistent with previous work observing that, in the king cobra, C-type lectins are expressed in serous cells located in the proximal region of the accessory gland [20]. Whether this compartmentalization can be evoked through behavioral responses or specific stimuli is beyond the limits of organoid research. Indeed, such insights would require work in a more organismal context. Recent advancements in the in situ mapping of toxins in the venom gland may allow for such potential insights.
Using a novel mass spectrometry imaging (MSI) method, Hamilton et al. [86] revealed the spatial distribution of venom activity across the snake venom gland. The venom glands of the brown forest cobra (Naja subfulva) [87] are rich in enzymatically active phospholipases A2 (PLA2), and sections exposed to phospholipid substrates produced high-resolution maps of phospholipase activity and specificity [86]. This novel method supports the heterogeneous distribution of venom components, including PLA2s and three-finger toxins [86]. Intriguingly, the distributions of these venom components were non-overlapping: the posterior region of the gland, where three-finger toxins are abundant, showed limited PLA2 activity [86].
The assassin bug (Pristhesancus plagipennis) is also capable of modulating the composition of its venom in a context-dependent manner, similar to that observed in cone snails [88]. The assassin bug separates functionally distinct venoms through the compartmentalization of toxin expression in different anatomical regions (Fig. 3b). This is evident from its complex venom system consisting of three distinct glands: the anterior main gland (AMG), posterior main gland (PMG), and accessory gland (AG). Using a combination of transcriptomics and proteomics, it was revealed that AMG and PMG venom is generated following harassment and electrostimulation, respectively [88]. Specifically, the PMG-specific venom potently paralyzes and kills prey insects, while the AMG-specific venom, in contrast, does not paralyze prey insects, further supporting its use in defense [88]. While the assassin bug uses distinct glands for the separation of venom, recent evidence from multiple venomous lineages is revealing that this process may also occur at cellular resolution.
Heterogeneity among venom-secreting cells
The dynamic expression of toxins among cells within a venom gland likely provides the cellular complexity required to generate functionally distinct venom profiles. Cellular compartmentalization of snake venom has only recently been elucidated following the development of venom gland organoids and scRNA-seq [25]. Previous work exploring the cellular diversity of the snake venom gland characterized four morphologically distinct cell types, with only one being the dominant cell type used to secrete venom [89]. Analysis using scRNA-seq revealed that specific toxins were strongly enriched in distinct populations of cells [25]. This evidence of cellular heterogeneity in toxin expression suggests that the organ may be more complex than morphological characterization previously indicated.
In cnidarians, the toxin delivery system is a complex of non-linked cells, involving multiple different cell types distributed heterogeneously throughout the organism [42,45,69,90]. Cnidaria is the only phylum whose members all share a venomous ancestor, and its members are characterized by the presence of cnidocytes. These typifying cells are highly heterogeneous among cnidarians in their morphology and functions, which range from prey capture and defense to locomotion [45]. In N. vectensis, venom is also produced in gland cells, as was initially revealed by the localization of Nv1 to these specialized ectodermal cells [69]. Recent evidence reports the compartmentalization of venom components among these highly specialized cells.
Nematocyte heterogeneity has also been observed among tissues in Actinia tenebrosa, with differences coinciding with changes in the expression of toxin-encoding genes. Morphological structures in A. tenebrosa with a high density of nematocytes include tentacles, used in prey capture and defense; mesenteric filaments, used in digestion and killing of prey; and acrorhagi, used solely in intraspecific aggressive encounters [90,91]. Nematocysts found in the acrorhagi consist predominantly of holotrichs, whereas the tentacles and mesenteric filaments contain a higher proportion of basitrich nematocysts [92]. Acrorhagi are unique to the superfamily Actinioidea and produce a distinct venom profile compared to tentacles and mesenteric filaments [91,93]. Toxins with expression restricted to the acrorhagi include acrorhagins I and II, consistent with previous work that isolated these toxins from acrorhagi of the closely related species Actinia equina [94]. This provides a correlation between the morphological type of a nematocyst and the expression of specific toxins. While these toxins are lethal to crustaceans [94], given the ecological function of the morphological structure they are localized to, they might also have specialized action against sea anemones, specifically those of the genus Actinia.
Differences in the expression of toxins among tissues and populations of cells have also been reported in other sea anemone species [95]. For example, in Heteractis magnifica, different members of a single pore-forming toxin family were found in single cells isolated from within and among different morphological structures (tentacle and body column) [95]. The cellular compartmentalization of toxins is also found in other cnidarians. For example, a study in Hydra revealed that two members of a single pore-forming toxin family are expressed in two morphologically distinct types of nematocytes, a trend extended by a recent analysis of scRNA-seq data from this species [96,97].
Tissue-specific variation of toxin-encoding genes and venom-secreting cells has been reported in multiple sea anemone species [5,70,98], providing evidence for the cellular and biochemical complexity of their venom systems (Fig. 3c). This is clear in N. vectensis, with some members of the NEP3 family showing patterns of expression localized to distinct cells and areas of the organism [70]. For example, members of the NEP3 family (NEP3, NEP3-like, and NEP4) are expressed in nematocytes in the tentacles and outer body of N. vectensis [70]. In contrast, another NEP3 paralog, NEP8, is absent from the tentacles and outer body wall but specifically expressed in pharyngeal nematocytes, suggesting it is involved specifically in the paralysis of swallowed prey [70]. This supports the notion that different populations of nematocytes in different tissues express distinct venom components.
Further findings on the dynamic expression of toxins in N. vectensis provide evidence that the molecular diversity of nematocytes and gland cells exceeds their morphological diversity [70]. This is evident for the NEP3 toxin, which appears to be expressed in only a certain nematocyte subpopulation, even among neighboring nematocytes within the same tissue (Fig. 2e-g). Furthermore, potential differences among subpopulations of gland cells have also been reported in N. vectensis. Using in situ hybridization and transgenic animals (expressing a fluorescent reporter under the promoter of the toxin-encoding gene NvePTx1), it was revealed that at least two distinct types of ectodermal gland cells are present in the N. vectensis planula: one large and elongated, and another small and round. Congruently, Nv4 and Nv5, toxin paralogs of Nv1, are produced in different types of gland cells at this early life stage that also differ in size [76]. These findings are supported by scRNA-seq, which revealed multiple populations of gland cells in this species [71].
Interestingly, there seem to be significant lineage-specific differences in venom localization in sea anemones, suggesting these systems constantly evolve. For example, the Nv1 homolog Av2 (also called ATX-II), the major neurotoxic component of the venom of the snakelocks anemone Anemonia viridis (also called Anemonia sulcata; this might be a species complex), is expressed in both ectodermal gland cells and nematocytes [69,99]. This additional site of expression seems to be lineage-specific, as Anthopleura elegantissima, a species closely related to A. viridis, expresses its sodium channel modulator toxins only in gland cells, similarly to the distantly related N. vectensis [69]. Coincidentally, the lineage-specific expression of Av2 in nematocytes is correlated with a gene fusion event that resulted in the loci encoding this neurotoxin acquiring a new genomic sequence that may hold regulatory functions [100]. However, whether this novel sequence can truly drive nematocyte-specific expression in sea anemones requires further investigation.
Additional examples of lineage-specific changes in the localization of venom expression in sea anemones have been reported in a transcriptomic study [98]. Specifically, the same toxin family was reported to be expressed in different body regions in three different sea anemone species [98]. These findings suggest that toxins might shift their expression domains along with the evolution of sea anemone species, reflecting different ecological conditions and interactions. Ultimately, this modularity may allow fast evolution of the spatial regulation of toxin expression, as each module (cell type) can more easily change its content or location across evolution without affecting all venom components. This highlights a potential relationship between cellular complexity and the complexity of venom composition.
Intersexual variation of the venom system
Sexual dimorphism of the venom system is another example of venom variation within a given life stage. Distinct differences in morphology or behavior between sexes of the same species are widespread among animals, including those that are venomous. Sexual dimorphism occurs through the coordination of different signals related to sex determination and differentiation. Sex determination is the primary signal guiding the embryo to develop as either male or female [101][102][103]. Sexual differentiation occurs following subsequent signals that further direct the primary sex-determining signal toward the development of specific traits that are sexually dimorphic [104][105][106]. These signals can be either environmental or genetic [107][108][109][110]. The evolution and conservation of these signals are currently being resolved through comparative genomics and experimentally using developmental genetics. Research investigating the intersexual variation of the venom system can contribute by providing robust support for the fitness implications of these traits among sexes.
Sexual dimorphism of the venom apparatus
Variation in the venom system among sexes has been reported across various species, including mammals, snakes, spiders, scorpions, centipedes, fish, and sea anemones (Fig. 1). One of the most striking examples of intersexual variation of the venom system is observed in the platypus (Ornithorhynchus anatinus). In this animal, only males inject venom through spurs that are connected to venom glands [111][112][113]. The function of platypus envenomation is suggested to be highly specialized, being used predominantly during mating season in aggressive encounters with other males invading territory [111][112][113].
A striking example of sexual dimorphism of the venom system in invertebrates is present among aculeate hymenopterans (wasps, ants, and bees). The venom system of this hymenopteran group arose from modification of the ovipositor (the female reproductive organ, which had ancestral functions related to parasitism) into a devoted venom injection apparatus [114][115][116]. Subsequently, this venom system underwent functional diversification, acquiring roles related to both predation and defense [114][115][116]. This highlights that venom sexual dimorphism can impact an animal's capacity and strategy to defend against predators and capture prey.
Evidence of more subtle venom system sexual dimorphism has been reported in scorpions from the genus Centruroides. For example, females have overall larger bodies and shorter metasomas (tail segments implicated in venom delivery), while males have smaller bodies but larger metasomal segments [117,118]. Moreover, a combination of light and transmission electron microscopy revealed that the morphometrics and morphology of male and female telsons (stingers) and venom glands differ significantly [119]. These findings highlight that male telsons are larger both cross-sectionally and volumetrically. Cell-type variation was also observed between sexes [119], with females mostly having granule-filled cells, whereas males predominantly have cells containing dissolvable vesicles. The cell type found in males is hypothesized to contribute to the observed transparent venom, characterized as "prevenom" similar to that identified by Inceoglu et al. [82]. The intersexual variation in the visual qualities of the venom liquid is likely related to differences in toxin expression and venom composition.
Intersexual variation of toxin expression
The majority of studies investigating the sexual dimorphism of venom composition have been conducted in scorpions and spiders [39,[120][121][122][123][124][125][126][127]. For example, intersexual variation in venom yield and toxicity has been observed in the Venezuelan scorpion Tityus nororientalis [128]. Specifically, males were found to have a significantly higher venom yield (2.39 mg/individual) than female scorpions (0.98 mg/individual); however, female venom was significantly more toxic in mice. This difference in toxicity is correlated with variation in venom composition between sexes; however, the specific toxins responsible remain to be characterized [128].
The venom profile of the Hentz striped scorpion (Centruroides hentzi) revealed significant intersexual variation within and among populations [123]. Specifically, females contribute more significantly to the variation of venom between populations, whereas within-population venom variation is mostly driven by differences in the venom profiles of males [123]. This variation within and among populations is likely attributable in part to sex-specific venom differences, supporting the idea that selection acts on the venom profiles of male and female scorpions differently and contributes to the observable intraspecific variation in the venom of C. hentzi [123].
Understanding venom variation among sexes and how it relates to differences in ecological niche or courtship behavior is essential to understanding the biology of these venomous animals. Insights into this were explored in Hawaiian spiders from the genus Tetragnatha, which utilize different prey capture methods [122]. The analysis compared adult females, which spin orb webs, with adult males, which capture prey by wandering. In addition, other species in which both sexes capture prey by wandering were also investigated [122]. Unexpectedly, differences in venom composition between males and females were observed in the species in which both sexes capture prey by wandering [122]. This was evident in male venom consisting predominantly of high-molecular-weight components that were absent in females. In contrast, low-molecular-weight components dominate the venom profile of females. The functions related to the intersexual variation of venom composition may be attributed to differences in feeding ecology or behavior, as well as mating biology, such as sexual stimulation, nuptial gifts, and/or mate recognition [122]. Further evidence of intersexual venom variation in spiders is reported for the Australian Northern (Missulena pruinosa) and Eastern (Missulena bradleyi) mouse spiders [120]. In these spiders, females of both species have a greater venom yield. Additionally, differences in prey specificity of the venom were also reported, with only the venom of male M. bradleyi having vertebrate-specific toxicity. Sexual dimorphism of the venom system is also reported in venomous arthropods beyond arachnids.
The venom profile of the eastern bark centipede (Hemiscolopendra marginata) exhibits significant sexual dimorphism that is driven by sex-biased gene expression [129]. This sex-biased gene expression results in males having a greater abundance of ion channel-modulating toxins, whereas γ-glutamyl transferases and CAP toxins were the most abundantly expressed components of the female venom profile. This work by Nystrom et al. [129] was the first to characterize sexual dimorphism in centipede venom and may help explain more broadly the venom variation within and among centipede species [130].
Sexual dimorphism has also been observed in venomous vertebrates, such as fish and snakes [131]. For example, in the Cano toadfish, Thalassophryne maculosa, differences in biochemical properties and protein abundance were reported between sexes [132]. Similarly, in the Brazilian lancehead, Bothrops moojeni, differences in protein abundance and activity between sexes were also reported [133]. The intersexual variation of the venom system has also been described in Cnidaria, the oldest extant venomous lineage.
Among cnidarians, the intersexual variation of venom has only been reported in sea anemones. Specifically, it was revealed in N. vectensis that NvePTx1 has divergent expression profiles between sexes in adults [70]. While the expression of this toxin is otherwise restricted to early life stages, it is expressed again in adult females, localized to round structures in the mesenteries that are likely the ovaries where the eggs are formed [70,74]. Strikingly, this sexually dimorphic expression of NvePTx1 serves to maternally deposit this toxin into eggs during gametogenesis and sexual reproduction. The maternal deposition of toxins has ecological significance: N. vectensis eggs loaded with toxins are avoided by the killifish (Fundulus heteroclitus), a potential predator [63]. Congruently, Nv4 and Nv5 are also loaded into the egg through maternal deposition and share similar sex-biased gene expression in female mesenteric filaments [76].
A similar pattern is observed in Anemonia viridis, where two toxin transcripts (Av2 and Av7) are highly expressed in the oocyte-rich ovaries [100]. This suggests that these toxins are also maternally deposited in eggs. This is further supported by evidence of intronless copies of Av2 and Av7 integrated into the genome through retrotransposition [100]. The presence of these processed pseudogenes in the genome of somatic cells could only occur if the parental genes were spliced and expressed in gametes, gonads, or at an early embryonic stage [134]. Whether the maternal deposition of Av2 and Av7 protects the eggs of A. viridis remains to be tested. These examples of eggs being loaded with toxins can be described as transgenerational protection and provide striking evidence of how sexual dimorphism can directly affect fitness. This highlights how venom can be leveraged to understand how the genotype-to-phenotype map affects fitness. Concordantly, it allows direct testing of the fitness effects associated with intersexual and ontogenetic dynamics, which has critical implications for both evolutionary and developmental biology.
Ecological significance and consequence for venom variation within and across life-history stages
Selective forces acting on the temporal separation of venom may be attributed to venom yield limitations. While venom yield varies within and among taxa, there is a finite limit to the venom load that is deliverable. For venom to effectively manipulate the target animal's physiology, a minimum dose is required. Therefore, if venom load is finite and each component has a required proportion needed for an effect, combining all venom components into a single mixture may be disadvantageous. The separation of venom components is an elegant mechanism that would allow concentration optima while maintaining venom yield. This need to separate venom components due to limitations in venom yield may be more pronounced in earlier life stages, in which venom yield is likely significantly lower. For example, N. vectensis in its earlier life stages relies on specific venom components for defense but does not feed [70]. This metabolic limitation would greatly affect its ability to replenish its venom components. Furthermore, N. vectensis relies on both gland cells and nematocytes, the latter cell type being known to be single-use [45]. Triggering and firing the stinging organelles in these cells essentially destroys them, requiring the production of new nematocytes and greatly increasing the metabolic cost of replenishing this defensive venom system. In these early life stages, the cost of venom production would likely be significantly higher than in adults. Therefore, the superfluous expression of functionally unnecessary toxins could be highly detrimental to the fitness of the organism, with selection acting strongly on the temporal expression of functionally specialized toxins.
The selection modulating toxin expression across life history may also be driven by biochemical necessity. Toxins must be secreted following their synthesis, which demands that these peptides and proteins are soluble. Protein solubility is determined by concentration, conformation, and quaternary structure, among other factors [135][136][137]. Given that proteins have the capacity to convert into amyloid-like fibrils, such protein aggregation can generate insoluble proteins that cannot be secreted and can potentially cause cell death [137]. While aggregation can be a consequence of the overexpression of a protein, some proteins are also inherently more aggregation-prone due to their biochemical properties (such as a high beta-sheet conformation) [137,138]. Furthermore, the overexpression of cysteine-rich peptides is also associated with elevated rates of aggregation [139]. It is hypothesized that both spontaneous intermolecular and non-specific intramolecular disulfide bond formation among proteins present at high concentrations can lead to protein aggregation. In general, genes encoding aggregation-prone proteins are more likely to be harmful when overexpressed within a cell. This is significant as many neurotoxins found in venom are cysteine-rich [2,3,140]. Congruently, copy number variation associated with disease and dosage-sensitive genes provides context for the need to limit the overexpression of specific genes, as an increase in gene copy number is correlated with increased protein product [141]. To avoid potential aggregation, toxin expression must be tightly regulated among populations of cells to prevent catastrophic outcomes, such as cell death. Similar evolutionary constraints have been hypothesized to drive the birth and evolution of young genes, owing to their enrichment in intrinsically disordered structural domains that minimize protein aggregation [142].
While this remains to be resolved [143], strong selective forces may be acting on the translational dynamics of venom components among the cells to minimize protein aggregation.
In venomous taxa, novel morphological (venom apparatus) and genetic innovations (toxin genes) co-evolve to meet the ecological requirements of an organism. Understanding the steps that lead to the evolution of a complete venom system can give important insights into the evolution and development of novelty. Intriguingly, recent findings on the evolution of the venom system in blennies have provided insights into the evolutionary steps that lead to a complete venom system [144]. This work by Casewell et al. [144] revealed that enlarged canine teeth (fangs) originated at the base of the Nemophini radiation, enabling predatory feeding. Subsequently, the evolution of deep anterior grooves and their coupling to venom secretory tissue provided Meiacanthus spp. with toxic venom that they effectively utilize for defense.
In addition to understanding the evolution of novelty, the trajectories that lead to complexity are also being unraveled. Comparative analysis of multiple venomous centipede species from two diverse families provided insights into the evolution of cellular and biochemical complexity of the venom system. The three species (Thereuopoda longicornis, Scolopendra morsitans, and Ethmostigmus rubripes) were all found to have low-molecular-weight (<10 kDa) toxins at varying abundances among the secretory units in the venom gland [145]. These findings support the hypothesis of previous work that the centipede venom gland is a composite of semiautonomous subglands [145,146]. The heterogeneous toxin expression of different secretory units suggests the separation and specialization of a few, highly abundant venom components among subglands. The study by Undheim et al. [145] also revealed that the diversity of venom composition correlates with the venom gland's cellular complexity. Specifically, the T. longicornis gland contains ∼1,000 individual secretory units, whereas the venom glands of S. morsitans and E. rubripes contain 10- to 100-fold more secretory units. The venoms of S. morsitans and E. rubripes are also observed to be much more complex [145]. Potentially, the evolution of venom complexity may be driven by the combinatorial expression of toxin genes between diverse secretory units. It is plausible that the evolution of greater gland complexity through the amplification of secretory units facilitated the biochemical diversification of the centipede venom arsenal. Furthermore, individual peptide masses identified as toxins appear to be localized to distinct regions along the length of the venom gland. This observation is strikingly similar to that reported in cone snails [83]. However, the capacity of centipedes to compartmentalize venom components related to predation and defense remains to be tested.
The distribution of intersexual venom variation currently appears highly patchy among venomous lineages. While this may be due to the limited number of studies investigating this phenomenon, further work is required to determine whether it is in fact common among venomous animals. From an evo-devo perspective, this suggests that these animals convergently evolved gene regulatory networks capable of separating venom production between sexes, and determining whether these were recurrently co-opted from the same network might provide broader insights into the mechanisms that underlie convergent evolution of venom. Intersexual venom variation is largely attributed to functions related to mating behavior, such as sexual stimulation, nuptial gifts, and/or mate recognition, and to aggression among conspecifics [39,[120][121][122][123][124][125][126][127]. Progeny protection is another interesting function related to intersexual venom variation, such as eggs being loaded with venom by the mother, or males having a higher venom yield and potency for guarding eggs against predators and conspecifics. Furthermore, theoretical and empirical evidence suggests that males and females should be under selection for different dietary preferences and resource utilization that maximize their sex-specific fitness [147,148]. This is because the dietary requirements that maximize fertility may differ between the sexes. For example, in fruit flies, females benefit most from protein-rich foods, while males are more fertile when they eat carbohydrate-rich foods [149,150]. Because venom in many animals functions in prey capture, it directly affects the animal's diet. Differences in prey preference leading to sex-specific dietary requirements may therefore explain the intersexual venom variation observed in some lineages.
Future prospects and concluding remarks
Evolutionary developmental biology is a field that utilizes comparative biology approaches in order to understand the evolution of developmental processes [151,152]. The recent link between developmental processes and venom dynamics brings together venom research and evo-devo. Furthermore, the comparative approaches that are at the very core of evo-devo as a discipline are also highly relevant for studying how venom systems and venom dynamics evolve. Thus, these two fields, which until only a few years ago seemed completely detached from one another, are converging, and we propose that venom research, especially in evolutionary terms, can gain much from adopting the practices and mindset that typify evo-devo.
We believe that another shared feature bringing these two fields together is the "omics" (genomics, transcriptomics, and proteomics) revolution, which strongly affected all biological disciplines but truly transformed both evo-devo and venom research (especially the sub-discipline called "venomics"; see the review by Sunagar et al. [153]), bringing them closer from a technological point of view. A major commonality between the fields that made "omics" so valuable is their focus on non-model organisms, as "omics" approaches enabled the study of organisms that were previously difficult to study due to technological limitations. Indeed, a major bottleneck in studying venom systems from an evo-devo perspective is the restricted accessibility of many venomous animals at the early developmental stages of their lives, as well as the very limited toolbox available for the genetic manipulation of venomous animals. In this respect, several cnidarian species have been outliers, having been amenable to genetic manipulation for more than a decade [154][155][156][157][158][159][160]. However, other venomous species, such as the house spider or parasitic wasps, have recently become amenable to genetic manipulation, expanding the possibility of studying the developmental evolution of venom systems in non-cnidarian species [161][162][163][164]. Moreover, the genetic manipulation revolution of the last several years, in which new techniques based on the CRISPR/Cas9 system enable the genetic engineering of essentially any eukaryotic organism that can be grown or obtained as a zygote, could revolutionize this neglected aspect of studying venom system evolution [154-157, 159, 161, 165]. One notable example is Cas9-based mutagenesis of the honeybee, Apis mellifera [165], arguably the venomous animal of greatest economic and agricultural importance.
Even when such advanced genetic tools remain unavailable for many venomous species, the ability to compare venom systems at the morphological, biochemical, and genetic levels can be highly informative for understanding this evolutionary innovation in different lineages. Altogether, we believe that the fusion of venom research with the comparative frame of mind of evo-devo is an exciting development that can teach us about important aspects of venom evolution from a novel perspective, one lacking in a field that has traditionally focused on pharmacological and even translational aspects rather than on evolution or the temporal dimension, which can hide significant and fascinating biological complexities.
Identifying Factors to Facilitate the Implementation of Decision-Making Tools to Promote Self-Management of Chronic Diseases into Routine Healthcare Practice: A Qualitative Study
This study, as part of the COMPAR-EU project, utilized a mixed-methods approach involving 37 individual semi-structured interviews and one focus group with 7 participants to investigate the factors influencing the implementation and use of decision tools for self-management interventions (SMIs) in clinical practice. The interviews and focus group discussions were guided by a tailored interview and focus group guide developed based on the Tailored Implementation for Chronic Diseases (TICD) framework. The data were analyzed using directed qualitative content analysis, with a deductive coding system based on the TICD framework and an inductive coding process. A rapid analysis technique was employed to summarize and synthesize the findings. The study identified five main dimensions of factors and facilitators for implementation: decision tool factors, individual health professional factors, interaction factors, organizational factors, and social, political, and legal factors. The findings highlight the importance of structured implementation through SMI decision-support tools, emphasizing the need to understand their benefits, secure organizational resources, and gain political support for sustainable implementation. Overall, this study employed a systematic approach, combining qualitative methods and comprehensive analysis, to gain insights into the factors influencing the implementation of SMI decision-support tools in clinical practice.
Introduction
It is well-documented that the healthcare sector has a poor record for the adoption of innovations [1,2]. The healthcare sector is particularly slow in adopting Information and Communication Technologies, a feature typically ascribed to human and organizational factors [3]. Scholars of diffusion of innovation in healthcare have documented the inherent complexities in the spread and adoption of innovations, detailing both push and pull factors and exploring the role of evidence in driving professionals' adoption of innovation. The summary of this is clear: "scientific evidence is important but is not sufficient in itself to ensure that an innovation diffuses into practice" [4].
These consistent findings underline the fact that developing and sharing evidence-based decision tools is not in itself sufficient to ensure their adoption. There is a clear need to explore how their implementation can be ensured in practice. Healthcare settings are prone to implementation challenges, given the autonomy of the medical profession and complex hierarchical structures. Middle management has an important role in the implementation of healthcare innovations, which in turn is influenced by top managers [5]. Both midlevel and top management interact with the self-governing parallel structures of medical work. In addition, decision-making tools aiming at improving the patient journey across the whole patient pathway require the collaboration of, or at least alignment of, organizational processes between organizations such as hospitals and primary care centers. Previous studies highlighted the particular roles and perceptions of doctors, nurses, and managers in making decisions to adopt healthcare decision aids [6,7]. Awareness of the evidence of the potential impact on patient care and efficiency, as well as opportunities for system integration, are factors frequently identified. In addition to expected barriers such as costs, learning curves, IT integration, usability, and literacy requirements, studies also indicate that implementation is possible and can add substantial value to both patient care and managerial efficiency [8,9]. This is particularly the case in the context of self-management interventions, which extend beyond the actions of individuals or single organizations. Self-management interventions (SMIs) are supportive interventions aimed at increasing patients' skills and confidence in their ability to manage long-term conditions [10]. Self-management interventions can be characterized in relation to the intervention characteristics (e.g., support techniques, delivery methods, provider type, location, recipient), target population, expected
self-management behaviors (e.g., lifestyle behavior, clinical management, psychological management, social management, working with health or social care providers), and in relation to outcomes of SMIs (including empowerment, adherence, clinical outcomes, quality of life, perceptions/experiences, health care utilization, or costs) [11]. In the COMPAR-EU Project, we comprehensively assessed the evidence on self-management interventions and developed a series of decision aids and implementation tools (Box 1).
Given the pressure on healthcare systems through the rise of chronic diseases, which require effective self-management, the implementation of self-management interventions across the full patient pathway is of paramount importance. For SMIs, an implementation model which is suitable for primary care may not be appropriate for hospital settings, and a model focused purely on HCPs or on managers is too limited. A mixed approach is required, involving different information gathered from different types of organizations. This includes hospitals and community-based providers who provide care and support to patients with relevant chronic conditions.
With the aim of exploring how self-management decision tools can be implemented into routine healthcare settings, ensuring effective use of evidence on SMIs, this study investigated the implementation factors for a specific suite of SMI decision aids from the perspective of healthcare decision-makers and professionals in hospital settings and primary care.
Box 1. The COMPAR-EU Project [12]. COMPAR-EU is a multimethod, interdisciplinary project that contributes to bridging the gap between current knowledge and practice of self-management interventions (SMIs). COMPAR-EU aims to identify, compare, and rank the most effective and cost-effective self-management interventions (SMIs) for adults in Europe living with one of the four high-priority chronic conditions: type 2 diabetes mellitus (T2DM), obesity, chronic obstructive pulmonary disease (COPD), and heart failure. The project provides support for policymakers, guideline developers, and professionals to make informed decisions on the adoption of the most suitable self-management interventions through an IT platform featuring decision-making tools adapted to the needs of a wide range of end users (including researchers, patients, and industry). COMPAR-EU launched in January 2018 and was completed in December 2022, contributing the following outputs: (i) an externally validated taxonomy composed of 132 components, classified in four domains (intervention characteristics, expected patient (or carer) self-management behaviors, type of outcomes, and target population characteristics); (ii) Core Outcome Sets (COS) for each disease, including 16 outcomes for COPD, 16 for heart failure, 13 for T2DM, and 15 for obesity; (iii) extraction and descriptive results for each disease based on 698 studies for diabetes, 252 studies for COPD, 288 studies for heart failure, and 517 studies for obesity; (iv) comparative effectiveness analysis based on a series of pairwise meta-analyses, network meta-analyses (NMAs), and component NMAs (CNMAs) for all outcomes across all four diseases; (v) contextual analysis addressing information on equity, acceptability, and feasibility, and general information on contextual factors at the level of patients, professionals, their interaction, and the health care organization for those interested in implementation; (vi) cost-effectiveness conceptual models have been
created for each chronic condition, including risk factors or intermediate variables relevant for SMIs and final outcomes; (vii) business plans and a sustainability strategy developed based on a multiprong approach including qualitative interviews with managers and clinicians, a focus group with clinical representatives from EU countries, workshops with industry representatives, and a hackathon event. The majority of the COMPAR-EU end-products are available on the online COMPAR-EU platform: www.self-management.eu (accessed on 29 June 2023). Watch the introductory video about the decision aids: https://youtu.be/_nqy6s79ZcY (accessed on 29 June 2023)
Materials and Methods
To make the evidence of SMIs available and understandable for different stakeholders (clinicians, policymakers and researchers, and patients), the COMPAR-EU project developed an interactive platform including three types of decision-making tools based on the GRADE approach (Grading of Recommendations Assessment, Development, and Evaluation), a method to assess the certainty in evidence and strength of recommendations:
•
Interactive Summary of Findings tables (iSoF): these presentations will provide information in different formats about the quality of evidence and magnitude of relative and absolute effects for each of the core outcomes identified;
•
Evidence to Decision frameworks (EtD): using semiautomatic templates, interactive EtD frameworks will be completed for a number of priority questions that will take into account the magnitude of desirable and undesirable effects, stakeholder views on the importance of different outcomes, information on resource use and cost-effectiveness, impact on equity, and other aspects like acceptability or feasibility of the interventions.
The frameworks include draft recommendations that could be then applied or adapted to different settings;
•
Patient Decision Aids (PtDA) were developed in plain language for all selected situations identified in the previous phases of the study. The aids were produced in six languages (English, French, German, Spanish, Dutch, and Greek) and included evidence to guide decision-making toward patient needs.
Study Design
This study has an explorative qualitative mixed-methods design, using semi-structured interviews with decision-makers (DMs) and health care professionals (HCPs) from Germany and Spain and a focus group with DMs and HCPs from other COMPAR-EU countries. The design is based on a protocol developed for this specific study and published a priori in the Open Science Framework (OSF) [13]. This publication includes further background information on the rationale, design choices, sampling, and analytical strategy.
Setting, Sample, and Recruitment Process
The study was carried out between March 2022 and October 2022. We sampled interviewees from Germany and Spain for maximal contextual variation (with different health system organizational and purchasing contexts for SMI implementation).
As background work, we conducted a review of governance and accountability systems to identify organizational enablers. In this process, we identified Germany and Spain, among the countries participating in the project, as those with rather distinct governance and accountability systems (for example, in terms of health system financing, provider organization, physician payment systems, and patient registration), allowing us to investigate maximum variation with regard to SMI implementation factors. As it was not feasible to conduct this large number of interviews in all countries participating in the COMPAR-EU project, Germany and Spain were therefore chosen as settings for this study. The background to this assessment and details of the sampling approach are described in more detail in the Open Science Framework protocol [13].
Interviewees were sampled with regard to country, institution (hospital vs. primary care), experience with chronic care management, position (decision-maker vs. professional), and age and gender, and were recruited through a specialized agency. The focus group included seven participants from other European countries: the Netherlands (n = 2), Greece (n = 2), Belgium (n = 1), the Czech Republic (n = 1), and Portugal (n = 1). It comprised the same professional groups (HCPs, n = 4; DMs, n = 3) and settings as the interviews. Focus group participants were recruited internationally by the COMPAR-EU project partners through local contacts and existing panels of respondents. The focus group was planned to include five to seven participants (with ten invited per session, on the assumption that not everyone would attend), with a balanced mix of age and gender and a good command of English.
Data Collection and Data Management
We conducted interviews with German subjects in German, interviews with Spanish subjects in Spanish, and the multi-country focus group in English. Two researchers from the respective partners in Spain and Germany conducted the interviews. The focus group was conducted by a German partner. Both the interviews and the focus group were held online via Zoom. The participants received information about the project, a declaration of consent, and a 6-min video about the decision tools via email before the interview. In addition, all participants were shown a brief video about the three types of decision-making tools developed within the COMPAR-EU project: Interactive Summary of Findings tables (iSoF), Evidence to Decision frameworks (EtD), and Patient Decision Aids (PtDA), illustrating the use of these tools on the COMPAR-EU web platform.
An interview guide was developed, including open-ended questions (Supplementary Material Document S1). The content and structure were guided by the Tailored Implementation for Chronic Diseases (TICD) framework and a realist review [14]. The interview guide was divided into ten parts, framed by introductory and concluding questions. The guide concluded with questions about the most important implementation factors for decision tools as well as the future need for their use in the healthcare system. As translating an interview guide into different languages is considered a difficult task because interviewers from different countries may have different views and experiences [15], the international research team met several times to adapt and translate the interview guide to ensure the cultural relevance of the questions and a common understanding between both teams. We report our methodological approach according to the COREQ checklist [16].
Qualitative Content Analysis
A qualitative directed content analysis (QCA) based on the work of Hsieh and Shannon [17] and Gale et al. [18] was conducted. We deductively developed a coding system based on the TICD framework [14]. The coding system was inductively refined by including codes emerging from the interviews. Data analysis was conducted in the local language, and the results were translated into English and reported back to both research teams. Each research team chose an appropriate analysis tool: the German team used the MAXQDA2020 analysis software, while the Spanish team applied the NVivo 20 software (as licenses for these tools were available to the project partners). The results of the analysis were discussed in regular team meetings with anchor examples from both countries. Anchor examples from each country were translated into English and compiled in an Excel document. Each research team was responsible for the quality of the translation.
To achieve a structured approach to the analysis, the researchers followed a guideline of 16 steps for directed QCA developed by Assarroudi et al. [19]. The 16 steps are a synthesis of the methods suggested by Hsieh and Shannon [17], Elo and Kyngäs [20], Zhang and Wildemuth [21], and Mayring [22]. The steps are divided into three phases: 1. the preparation phase, 2. the organization phase, and 3. the reporting phase (Figure 1). In the first phase, the interview guide was developed, and the interviews were conducted and transcribed by edited verbatim transcription, i.e., word-by-word transcription edited for readability and clarity.
In the second phase, we followed three coding cycles. First coding cycle: both research teams pretested the deductive initial coding system by analyzing two interviews independently. Each team discussed the new inductive codes on its own and set some coding rules. After that, the teams from Spain and Germany discussed which inductive codes should be included and agreed on general coding rules for further analysis. Second coding cycle: both research teams pretested the extended coding system by analyzing two more interviews independently, i.e., interviews other than those analyzed in the first cycle. Each team discussed the new inductive codes on its own and checked intercoder reliability. After the discussion, the researchers coded the remaining interviews and highlighted those quotes that did not match any code in the coding system. Again, the teams discussed new codes. At this stage, we specified anchor examples for each code. Third coding cycle: each transcript was revisited in an iterative third cycle, and the new and existing codes were applied until no new themes or concepts emerged. At this stage, we checked the results for consistency.
In the third phase, we summarized the main message of each code based on all quotes assigned to it. For each code, we also extracted two representative quotes, i.e., one from DMs and one from HCPs. The selected quotes were translated into English.
The focus group was conducted to contextualize the results from the interviews in Germany and Spain with a broader panel including participants from other countries. The focus group data were not analyzed at the same level of detail; rather, headline findings were summarized, with a focus on whether views emerged in the focus group that diverged from the interview findings.
Results
Our analysis included a total of 37 interviews, of which 20 were held in Germany and 17 in Spain. The interviewees were divided into two groups: HCPs and DMs. A broad spectrum of different healthcare organizations was represented in the sample, the majority being hospitals (n = 19, 51%). Male (n = 19, 51%) and female (n = 18, 49%) participants were evenly matched. Their age ranged from 32 to 65 years, and their healthcare work experience from 5 years to more than 30 years. The average duration of the interviews was 51 min, with a range of 36-65 min (see Table 1). From this analysis, we developed a coding system with five dimensions, 17 subdimensions, 50 codes, and 21 subcodes. In total, 1591 text segments were assigned to the coding system. The key findings are structured along the five main dimensions of our coding system: 1. factors of decision tools; 2. individual healthcare professional factors; 3. factors of interaction; 4. organizational factors; and 5. sociopolitical and legal factors.
Use of Evidence
The participants pointed out that reliance on evidence is inevitable in clinical practice. While most DMs admitted to searching for scientific evidence only on demand for certain patients, HCPs stated that it is imperative to stay up to date on evidence-based medicine before making therapy decisions. To this end, they attend clinical sessions and reviews of scientific evidence using clinical guidelines and publications. Most of the participants described access to scientific evidence as an ongoing and easy process: interviewees used databases of scientific societies, high-impact journals, online libraries, clinical trials of the pharmaceutical industry, and training from corporate websites as sources. Some DMs stated that access to suitable evidence can be challenging, as it involves a lot of research and is very time-consuming. German HCPs pointed out that they use their own and their colleagues' experiences from team discussions to stay informed about new evidence.
Existing Patients and Target Group of Patients
DMs reported that most patients in primary care suffer from chronic diseases, whereas in hospitals, only a quarter of patients are chronically ill. All interviewees pointed out that the use of SMI decision tools is especially suitable for younger rather than older patients, because younger patients have a greater affinity for technology, are more familiar with the internet, and have better access to it. According to the interviewed DMs, decision tools are especially suitable for the following patient categories: chronically ill patients with low comorbidity, patients with only one main diagnosis, introverted patients, patients with language barriers, patients between 30 and 50 years old, patients with a higher educational level, and those who want to take action and improve their own well-being. In contrast, the use of decision tools would be less suitable for patients with low income, lower educational levels, less internet literacy and access, patients above 65, and those with insufficient health literacy.
Use of Decision Tools
Many interviewees were not familiar with SMI decision tools at the time of the interview: "I don't really know of any decision-making aids from my everyday life that would go in that direction." (HCP 19; hospital; Germany; 7) While the German participants mentioned that they distribute flyers with treatment and therapy options to their patients and refer them to self-help groups, the Spanish participants use peer groups, motivational interviewing, patient empowerment, or quality-of-life questionnaires to involve their patients in therapy decisions. One German DM from primary care referred to the decision aid Arriba, and another noted that he uses TheraKey Diabetes from BERLIN-CHEMIE and a self-developed decision tool.
According to the participants, decision tools would have to show a clear improvement in patient care and in the achievement of goals for patients and clinic staff: "Prove that ultimately significant improvement in patient care and improvement in goal achievement, that's point one for me, for ultimately putting that in." (DM 15; primary care; Germany; 99) Interviewees believed that the tools were suitable for primary care in chronic diseases, patient empowerment, patient well-being, improving patient health, discovering the best interventions, and sharing these interventions with patients. In addition, HCPs confirmed that decision tools help empower patients and clinicians to improve follow-ups with their patients.
Regarding the technical usability of the COMPAR-EU tools, some participants mentioned that the design needs to be friendly, intuitive (i.e., easy to use), and time-efficient. They demanded that the tools also be accessible to patients who are not digitally savvy.
Most of the participants stated that the SMI decision tools were appropriate for use in primary care. It was also mentioned that university outpatient clinics could implement these decision tools. A few participants said that decision tools could also be applied in hospitals and support physicians in reviewing the evidence on self-management tools in a structured way to support patients at discharge, offering an opportunity for integrated care: "I think leadership must be shared in this moment. I mean, in the hospital you have the head of a service, or the one who knows the most about that disease, which are units, but the patient comes from primary care, and that is, it's been my mantra for many years. We are here to help primary care and collaborate with them because they are the ones responsible for the patients." (DM 11; hospital; Spain; 247)
Knowledge and Skills
Participants' perceptions of their own knowledge regarding decision tools correlated closely with their varying experiences in their field of work, their engagement in their professional bodies, and their leadership responsibility within their healthcare institution. Participants who described themselves as highly engaged reported having experience with self-management programs or decision tools: "Oh, you know, I'm a chamber chairman in the district and my hobby is continuing education, continuing education of my colleagues [...]. So I'm relatively fit, I get a lot of input." (DM 12; primary care; Germany; 7) DMs and HCPs reported being aware of self-management measures for chronic conditions. While German DMs stated that they conduct shared decision-making (SDM) based on their experience and medical guidelines, German HCPs stated that they use similar tools in so-called "Disease Management Programmes". Spanish HCPs pointed out that they are aware of existing decision tools but do not use them in their daily practice.
Participants' perceptions of their own practice reflected their opinion that chronic conditions are best managed long-term through interventions that involve the patients themselves. Explaining all treatment steps and continuously providing patients with scientifically proven information were perceived as essential. However, HCPs believed that patients often simply accept what the doctor suggests. They expressed doubts that patients could use decision aids properly.
In order to use decision tools efficiently, the majority of both DMs and HCPs claimed that it is important for clinical staff to understand the principle of decision tools and the meaning of self-management measures in detail, to know the needs of the patients, and to be able to explain them convincingly: "So first of all, they have to be so confident that they know these decision-making tools and how to use them, whatever. That they can communicate that." (HCP 14; primary care; Germany; 87) In addition, it was mentioned that motivation, personal responsibility, sensitivity towards the patients, and affinity for technology are essential skills for the use of the tools.
Cognitions and Attitudes
DMs and HCPs saw decision tools for SMIs as an innovation. They stated that innovation means change, new working processes, and often resistance from HCPs: "I think it would be an innovation. So, it's nothing that you can't imagine as a doctor or as a patient. Such a tool is actually obvious, but although it is an obvious measure, I don't know of any directly comparable one that is in daily use. And in this respect it is something new." (DM 17; hospital; Germany; 43) The tools should, therefore, convince the users, bring real evidence-based value, significantly help in therapy decisions, and save time in the process.
Participants considered effectiveness and perceived benefit in the workflow of decision tools to be success factors for implementation. While some participants did not see the added value of decision aids at the time of the interview, both German and Spanish interviewees expected decision tools to become more important in the future.
German and Spanish DMs emphasized that their intention and motivation to use decision tools were based on the effect on patient outcomes. Treatment successes and recognizable progress would increase the motivation to continue: "In other words, feeling supported and having a script for how to do things helps. Because, at the same time, it structures the intervention. And in that way it can serve to evaluate you, to evaluate how things work. I think it's interesting and well, come on, it's something I always believe in. It is a methodology in which I like to work like this. Have, well, a process and see how, next step, next step, evaluation and see how it works." (HCP 4; hospital; Spain; 296) Some participants experienced uncertainty about the efficacy in clinical practice because they assumed patients might not follow their advice. Some interviewees had already used decision aids, but these had failed or did not lead to success.
Professional Behavior
According to most DMs, structured preparation of the doctor-patient discussion was perceived as essential for success. They believed that professional behavior consists of explaining the available options to the patient without overwhelming them. HCPs would assume a consultative role in which they decide together with patients which treatment steps to follow next. They confirmed that the patient's condition determines what they can discuss with the patient and entrust them to do. Some of the HCPs would also involve family members or caregivers.
Regarding their capacity to plan change by using decision tools, some respondents indicated that they have the time to use decision aids with patients because the responsibility lies with the patient, and clinicians should only be companions on the patient's healing journey. Others, however, were very critical of the capacity for decision support in healthcare institutions such as hospitals and primary care centers, arguing that there is no time and no financial incentive for it. Some interviewed DMs were also self-critical: they mentioned that they sometimes had no time to assess the current evidence and that decision-aid implementation projects had failed (DM 12; primary care; Germany; 9). Additionally, some criticized themselves for not having engaged with the aims of decision tools thoroughly enough. Some of the interviewed HCPs perceived the need to communicate with many different teams and characters in healthcare facilities as a challenge.
Interaction with Patients
Most interviewees stated that it is important to consider patient needs and characteristics (such as socioeconomic status, level of education, language skills, and access to digital devices) when implementing SMI decision tools.
Patients' beliefs and knowledge can influence the way these tools are used. Patients need to be motivated, for example, by setting achievable goals and by sharing positive experiences in group patient training: "[...] of course, people have to be a little bit interested in their own health. And be willing to change something. Because that also means a bit of work for them to register and take care of it. And yes, if they are not motivated, then it will be difficult. But I think that if they realize that they can change something and have a positive influence on the disease, then that is of course motivation enough." (HCP 20; hospital; Germany; 31) One of the crucial factors in the successful implementation of decision tools is patient preparation. Patients can be prepared before a consultation by providing an email link to the tools, or after the consultation by having a nurse or medical assistant explain the tools: "[...] it would be great if you could invite them [patients] directly to a small training session, for example. Or simply distribute flyers where videos explaining the procedure can be found." (HCP 20; hospital; Germany; 66-69) At the same time, there are roles and responsibilities carried out directly by the patients: patients need to actively ask questions, take responsibility for the self-management of their condition, and show compliance with their treatment.
Professional Interaction
Both DMs and HCPs stated that, when it comes to professional interaction within the medical team, communication, information sharing, and the exchange of experience need to be maintained through regular team meetings. One participant also mentioned that the HCPs implementing decision tools need to agree on fixed rules that they want to follow.
Within the team, members need to develop a shared understanding of the benefits of the decision tools. All participants agreed that this could be achieved by showing the added value of the tools, emphasizing their evidence-based aspects, demonstrating the positive experiences of other organizations, and highlighting patient benefits: "Our own colleagues from other hospitals, or another region, should explain to us the benefits that the tool brings. I think that that is the strategy we should follow. First, explain the purpose of the tool, then, have the experience of another place where we can see the health results that have been achieved with help of these tools. Show us the experience of patients that are using the tools [...]." (DM 8; hospital and primary care; Spain; 270) Additionally, several DMs from Spain suggested that healthcare organizations could establish an interdisciplinary group comprising administrative staff, social workers, physicians, and nurses that could oversee implementation and provide relevant support in implementing the tools.
Another important factor reported by the participants is building enthusiasm and support among the team members. Team members need to be involved in decisions about what kind of decision tools will be implemented in their organizations as well as in the processes of implementing them. Further, enthusiasm can be increased by providing training and by emphasizing that decision tools might reduce the team members' workload.
Referral processes need to be maintained between HCPs and other team members, i.e., communication and coordination between different professionals (physicians, nurses, nutritionists, and psychologists) within the same organization as well as between different care levels (primary and secondary care). Referral processes between patients and HCPs can be maintained if patients are informed and guided by HCPs throughout the whole treatment. HCPs should monitor how patients feel about self-management and make sure they are still happy with the intervention.
Both groups agreed that physicians and nurses should be informed about and engaged in the development of decision tools when implementing them into the clinical workflow. Most HCPs stated that information and engagement regarding decision tools need to be maintained right from the beginning of the treatment. However, some Spanish HCPs also emphasized that they would prefer to be informed only once the methods of distributing and using the decision tools had already been tailored to clinical workflows. Here, the top management plays a key role: "I think medical directors are those who need to know the most. For his medical background, they are in contact with the heads of service, they know all the scientific commissions that depend on the medical direction." Participants pointed out that decision tools need to be presented to patients as early as possible during diagnosis. In Germany, a portion of DMs mentioned that it is mandatory for hospitals to inform patients about self-management when they are sent home and treated as outpatients. Here, decision tools could provide support.
More generally, participants stated that the processes of using decision tools need to be centralized and bundled beyond individual organizations so that digital interoperability at intersections with other systems can be achieved. This helps avoid unnecessarily working with several different tools, and it could help streamline daily processes.
Roles and Responsibilities
Various team members assume different roles and responsibilities when implementing decision tools (especially patient decision tools), e.g., decision-makers, physicians, nurses, and administrative staff. DMs have both the responsibility of presenting the tools to those who will implement them in their organization and a broader leadership role. Physicians were seen by participants as those responsible for identifying patients who can benefit from the use of decision tools, checking the evidence provided with the tools, and answering open questions arising when patients use the tools. They can also coordinate tasks within the team. Some HCPs in Germany stated that, in hospitals, ward physicians are more likely to support decision tools, whereas senior physicians and physicians at the middle management level are less interested in being involved in implementing innovations. In contrast, young physicians are more willing to implement the tools, as reported by Spanish HCPs. Nevertheless, both the Spanish and German participant groups agreed that the general practitioner plays a very important role, as he or she often has a much closer relationship with patients than other physicians and can monitor patients in everyday life.
The tool introduction can be delegated to nurses or administrative staff. Both groups can explain the tools and guide patients through them before or after the consultation, send patients a link to the tools, and upload and update the results of decision tools in the patient information system. However, administrative staff and nurses must not give patients medical advice about self-management interventions. Other team members to whom the task of tool introduction could be delegated are nutritionists, study nurses, data managers, social workers, psychologists, and cultural mediators.
All in all, both HCPs and DMs mentioned that the implementation process is a shared responsibility of the whole team, which should agree together on whether and which tools will be considered.
Incentives and Resources
For the successful implementation of SMI decision tools, the interviewees mentioned three important types of resources that are needed but not always available in reality: time, financial, and personnel resources. Both DMs and HCPs pointed out that patients need training on decision tools, which might be very time-consuming. However, there is only a limited amount of time in clinical consultations: "Often it only takes place between door and door due to time constraints. But if we had a little more time in the clinic to really have another discharge discussion with the patient, so to speak. That would also be a good moment to refer to such a program." (DM 17; hospital; Germany; 9) Spanish DMs claimed that organizational change is needed to provide time resources for properly introducing decision tools to patients.
Referring to personnel resources, some participants claimed that there is a lack of personnel even though the staffing limit per patient was raised. Most participants confirmed the need for leadership in the implementation of decision tools.
German participants believed that the use of decision tools is highly dependent on financial resources: if sufficient financial resources were available, the use of decision-support tools could be supported. Some Spanish DMs complained that it is difficult to obtain financial resources, and some Spanish HCPs did not see the possibility of implementing such tools in primary care at the current time because of financial difficulties in their institutions.
"I'm going to be very sincere; I think we are in a critical moment in primary care in all of Spain. I mean, right now we are time wasting, we have very few tools and very little time to tend to patients. [...] It doesn't have to do, maybe, with what you are asking, but you want to evaluate a strategy, where probably the system starts to break in a few years and we will do what we did 40 years ago, which is visit the patients for a few minutes and not do any of the prevention and health promotion." (HCP 10; primary care; Spain; 165) Participants had different opinions regarding financial incentives and disincentives. Some DMs argued that financial incentives can have a positive and motivational effect on the successful implementation of decision tools, as organizations need to receive reimbursement for the time spent implementing them. However, others believed that financial incentives would have little or no effect because clinicians should focus on patient health; in addition, if there were bonus payments, clinicians would have to pay higher taxes on them. Most of the interviewed HCPs were convinced that financial incentives, e.g., a bonus payment or voucher, could be offered to increase the use of decision tools and help clinicians feel professionally fulfilled. They expected cost savings from implementing decision tools in routine healthcare practice.
German DMs explained that, in the German healthcare system, the use of decision tools is not included in the reimbursement plan for primary care or hospitals. There is also no compensation for the prescription of SMIs to patients. Some suggested using decision tools in special units like diabetic clinics, where they could be part of a complex treatment and reimbursement would be made through daily flat rates rather than diagnosis-related group (DRG) payment rates. Additionally, new centers could be established that focus on treatment and follow-up questions regarding the self-management of chronic patients, similar to telemedicine heart failure centers in Germany. Others argued that, in the primary care reimbursement model (EBM system), it is possible to include a new EBM code (fixed flat rate) for consultations including decision tools, or to include bonus payments. The use of decision tools could be part of "Disease Management Programmes" or other new treatment programs for self-management and prevention that could be established in cooperation with health insurance companies. Regarding the hospital reimbursement model (DRG system), interviewees suggested including the use of decision tools in the DRG rate by increasing the case mix or by introducing a new operation and procedure code (OPS in Germany). They highlighted that the reimbursement model affects the likelihood of using decision tools because HCPs' activities are determined by payments rather than by the time spent on patients or how meaningful the activities are for patients.
"And I say the other quite brutal key in medicine is reimbursement, so whether that is paid in some form or whatever. Whether that's somehow times-, whether that's reimbursed in some form so to speak yes. That is certainly something that would always be a trigger or a driving effect." (DM 18; primary care; Germany; 36) Most of the participants perceived that non-financial incentives for the use of decision tools could be created if a major improvement in patient care could be achieved or if clinicians saved time in patient consultations by using decision tools. Another non-financial incentive might be a certificate for using decision tools. DMs noted that not only the empowerment of patients but also that of clinicians is very important. HCPs regarded improved collaboration between clinicians and new tasks and responsibilities as an incentive, as these could help them develop their careers further.
Participants emphasized the need for interoperability with other systems or applications that measure healthcare outcomes (blood glucose, blood pressure, weight). Some argued that decision tools should be integrated into the provider's information system and that the system should have a uniform interface. Furthermore, some German DMs emphasized the importance of homogeneity of the tools: instead of offering many different tools to providers, there should be one tool for all types of patients. DMs requested various technical features for the use of decision tools, such as accessibility by phone, integrated videos about SMIs, and a simple interface and navigation. HCPs believed that online consultation and a hotline for technical questions should be offered. All participants pointed out that the implementation of decision tools should happen with the patient in mind, and decision tools should be without content-based gaps for patients.
The participants emphasized the importance of providing continuous training to all professionals involved in the use of tools. They claimed that a continuing education system is required for all team members involved, such as nurses, medical assistants, physicians, or data managers: "I would present and do it, for example, as part of cardiology or internal medicine training events, quality circles, local congresses. So that's how medical innovations get into use." (DM 18; primary care; Germany; 62) DMs and HCPs presented different opinions about the impact of the use of decision tools on other healthcare institutions. DMs held the view that information could be noted in the doctor's letter and thus give other healthcare facilities an idea of the SMI status of the patient. HCPs hypothesized that this might save time.
Capacity of Organizational Change
DMs believe that the use of decision tools in everyday clinical practice is a question of authority and the associated power of persuasion. Further, they claimed that authority and persuasiveness would have an impact on the subsequent use of the tool by other team members. They emphasized that the use of new tools is more efficient if it is implemented and presented by opinion leaders. DMs stated that leaders need to believe in the project, promote it, and incentivize adherence to decision tools by showing that the change is for the better.
"The middle management, which we call supervisors, have to always be in the know of anything that is being implemented, which doesn't mean that they are the ones who take leadership in these tools, because a head of service or a middle manager, do have a very wide vision, and they have a lot of knowledge in management and activities management, and numbers, but that doesn't always go hand in hand with leadership regarding implementation of new things." (DM 12; hospital; Spain; 313) Several HCPs pointed out that new rules, regulations, and technical requirements would have to be created for the implementation of decision tools. DMs referred to the COVID-19 pandemic, during which regulations, rules, and guidelines multiplied. Some feared that decision tools would be another major bureaucratic hurdle. Furthermore, some Spanish DMs criticized that there are overregulated health services with many workers and very bureaucratic and rigid management and coordination systems that hinder the optimal execution of regulations, standards, and policies.
Many HCPs mentioned that decision tools have a high priority because they could reduce the great time pressure in the context of more efficient work. On the other hand, DMs pointed out that decision tools cannot adapt to the clinical workload, and the time pressure in hospitals and practices is so high that they do not fit the required new processes: "No, I don't think it's a high priority for now. Let's just say that we have enough to deal with the normal challenges of everyday life. In this respect, it always has to be critically questioned." (DM 6; primary care; Germany; 39) According to the participants, monitoring and feedback play an important role in the successful implementation of decision tools. They highlighted the importance of continuous short- and medium-term measurement of patient outcomes to prove that the tool is useful and worth continuing to use. In terms of assistance for organizational change, interviewees emphasized that external support is needed in addition to internal support from the users of decision tools. German participants suggested that external support can be offered by pharmaceutical, management, or insurance companies, whereas Spanish participants focused on national or autonomic healthcare systems and also perceived patient organizations and initiatives as important assistance in initiating organizational change for the use of decision tools.
3.5. Social, Political, and Legal Factors

3.5.1. Economic Constraints on the Healthcare Budget

DMs in Germany emphasized that there is no budget for the implementation of decision tools, but the economic pressure in the healthcare system may force budget allocation to self-management tools in the future: "If [implementation of decision tools] demands costs, then that's ultimately the responsibility of the healthcare system to implement that. In my opinion, the problem is that the healthcare system requests a lot of actions, but it is not accordingly supported." (DM 15; primary care, Germany, 67) Similarly, DMs in Spain mentioned that there are a lot of activities that are required by the public healthcare system but cause economic pressures. This means that although organizations could save some money for the implementation of decision tools, there are a lot of competing activities to which they need to allocate the budget.
Contracts
In Germany, a number of DMs argued that decision tools could be included in the contracts of so-called "Disease Management Programmes" accepted by the Ministry of Health throughout Germany. They could also be included in contracts between specific insurance companies and providers. Having several contracts with various insurance companies, however, may have a negative impact on successful implementation. While German participants saw problems with multiple contracts, in Spain, DMs worried about the open tendering process. If a provider requests a budget, e.g., for the implementation of decision support tools, public procurement law requires an open tender procedure, regardless of the budget amount. This is often a very administration-heavy and time-consuming process.
Legislation, Legal Issues, and Data Protection Policy
Currently, the use of self-management decision tools is not included in the German Social Code (SGB V). In order to integrate decision tools into the healthcare setting, the state/Ministry of Health needs to present them as a prescribed overarching statutory concept and request that HCPs use these tools: "The system is learning; artificial intelligence will certainly lead to them getting smarter. The databases will become larger. When we finally have electronic patient records, that will certainly be supported institutionally, perhaps also in our country. I think that evidence is coming from this area." (DM 9; primary care; Germany, 19) While cooperation with commercial companies may bring about further legal issues that require clarification, a generally legally accepted procedure for the use of decision tools might be easy to implement and provide a degree of security. Both DMs and HCPs in Germany stated that data protection policy could lead to difficulties in implementation, as it entails much discussion unless specifically prescribed by a regulating authority. Spanish participants did not comment on legislation, legal issues, or data protection policy.
Influential People and Organizations
DMs mentioned several influential organizations that should be involved when implementing decision tools: hospitals; larger medical clinics that have successfully implemented decision tools; societies and associations of experts; the Ministry of Health; relevant healthcare authorities; the Federal Institute for Drugs and Medical Devices; and insurance companies and patient organizations. One German DM stated, however, that the inclusion of pharmaceutical companies might complicate the implementation because they might have more interest in economic aspects.
Healthcare System
In both Germany and Spain, DMs mentioned that the use of decision tools is not considered during the performance evaluation of healthcare systems. The German DMs think that the use of decision tools could be one of the factors used to measure performance within the healthcare system. The use of decision tools could be measured, for instance, through patient satisfaction surveys or simply by asking patients whether they were referred to decision tools in both hospitals and primary care practices.
In Germany, DMs argued that the use of decision tools is currently less consistent with the recommended ways of working in the healthcare system, as digitalization is progressing slowly and the current systems are not yet designed for patient interaction of that kind. Spanish DMs saw this differently. They noted that the approach to implementation of decision tools is aligned with the implementation strategies of other initiatives that are currently in place. Most of the participants reported that decision tools should be prioritized in the healthcare system because patients should take more responsibility for their own health and be more involved in SDM with their HCPs. Another incentive for digital innovation is the economic pressure created by unnecessary hospital admissions and consultations and the bottleneck among clinicians: "Based on the introduction of the DIGA [Digital healthcare applications], the interest for digital applications in healthcare will ultimately increase. And I think that in a few years, that's going to be a help tool, especially for patients with increasing medical needs, and the shortages in medical care [...]. The tool means for patients a kind of shared decision which helps them to get their treatment or achieve their goal." (DM 15; primary care; Germany, 100-102)

3.5.6. Social Changes and Paradigm Shift

Digital tools supporting SDM between clinicians and patients align with future visions about the healthcare system, and as such, decision tools could lead to social change. They could empower patients in their own healthcare autonomy: "[...] there is still a paternalist attitude from health professionals towards patients. Patients follow physicians' and nurses' advice. I believe the step forward regarding patients' participation must be undertaken." (DM 8; hospital and primary care; Spain; 297) In addition, better use of data and methods such as artificial intelligence will make these tools more efficient and precise and save time and resources for both patients and doctors.
Perspectives of Managers vs. Health Care Professionals
Overall, both DMs and HCPs addressed similar themes in relation to the five dimensions of the coding system and were broadly in agreement on the key barriers and facilitators for implementation, in particular in relation to awareness and training of professionals, decision aids as an innovation factor, the need for patient preparation, and allowing sufficient time to address the output of the decision aid with the patient. Differences emerged on various points, such as the appraisal of the underlying scientific evidence of decision tools, where HCPs demonstrated a higher level of familiarity compared with DMs. In terms of the effects of using decision tools, HCPs appeared to be more concerned with short-term efficacy, whereas DMs demonstrated more interest in longer-term outcomes and positive side effects on efficiency. In this context, DMs also reflected more often on the role of the organization (hospital vs. primary care) leading the implementation of the tool. Finally, different views were put forward regarding the use of financial incentives: whereas HCPs provided a mixed assessment acknowledging both potential advantages and disadvantages, DMs were overall positive about opportunities to link the adoption and implementation of decision tools to financial incentives.
Contextualization of the Results by the Focus Group
The focus group participants emphasized that, as a first step in the successful implementation of decision tools, it is important to identify a group of patients for whom decision tools are most interesting based on their willingness to change something about their condition. In that way, decision tools are introduced first to the patient group with the greatest likelihood of using them. Only then should decision tools be introduced for all other patients. Further, participants reported that using financial incentives, including the use of decision tools in medical guidelines, and offering certification for those who successfully implement them in their settings can help increase the use of decision tools amongst HCPs. The funders involved in the implementation process can differ between countries based on their healthcare system. While involving insurance companies from the beginning seems to be the most effective approach in insurance-based schemes, in public schemes, new tools are often piloted first, and organizations then apply for financial support. Another implementation factor was the early training of medical students. When medical students are trained to involve patients in SDM, such tools are more likely to be implemented (Table 2).

[Table 2 fragment: the use of decision tools could best be implemented in clinical guidelines (top-down approach) and/or in accreditation systems. Social, political, and legal factors are highly relevant because chronic conditions cause an increased burden on the population, and patients need to be empowered to take an active role as the experts on their own health (clinicians being the experts in medical support); with better use of data and better methods (such as AI), decision tools will also improve in usability and precision.]
Discussion
This analysis shows that decision tools, such as the three tools developed as part of the COMPAR-EU project, can support the use of evidence about SMIs in healthcare practice. The implementation of self-management decision tools represents a digital innovation that stimulates and requires change and rethinking processes at different levels: individual, organizational, and system. The lack of use of SMIs and decision tools in practice is due not only to limited resources in different healthcare settings but also to limited knowledge about the effectiveness of these interventions and tools. The use of new tools is more efficient when they are introduced and presented by opinion leaders. Digital innovations such as decision support require organizational resources such as time, personnel, and budget on the one hand and the right financial and non-financial incentives on the other. This depends not only on the resources and incentives provided internally but also on the support of stakeholders such as management or insurance companies and the healthcare system itself. Furthermore, the implementation of self-management decision tools can increase the autonomy of patients in therapy decisions and thus contribute to the socially and politically promoted paradigm shift in the doctor-patient relationship.
Based on our results, there are, in general, no major differences between implementation factors in hospitals and in primary and secondary care. Major deviations were also not identified when comparing interview and focus group results. These exploratory results provide a further understanding of the facilitators of the implementation of self-management decision tools into healthcare practice.
This study builds on and aligns with other bodies of work examining the implementation of decision tools such as patient decision aids or other evidence-based tools [23][24][25][26][27]. Implementation is unlikely to take place if HCPs are not aware of the use of decision tools. Training HCPs to deliver decision tools is essential. HCPs need to recognize the added value and proven effectiveness of such tools (improving patients' quality of life and supporting decision-making) before using them in clinical practice [28]. Involving the whole team, including physicians, nurses, administrative staff, and middle and top management, in the implementation of decision tools and conducting regular meetings to exchange experiences is also often referred to in other studies [8]. Our study confirms this point and builds on it by showing that delegating a portion of the tasks to nurses and administrative staff (e.g., leading a group training for patients) can increase the responsibility and attractiveness of the profession.
Tol-Geerdink et al. [27] found that almost all patients would accept decision tools being introduced on the day of their diagnosis. This fits well with the results of our study, which encourage the use of decision tools right from the beginning, either through online tools or involving groups, to reduce the time required by physicians to address questions [26]. Other studies illustrate that lack of time is one of the most frequently cited barriers to engaging in SDM [23,26]. At the same time, it was also shown that the use of decision tools can save time when HCPs hand out the tools to patients to use at home [26,27]. Less complex decision tools provided in different versions based on the health literacy and knowledge of patients can be used [11,14,25].
Our study illustrates that financial incentives for organizations might help implement decision tools for self-management. However, the literature shows mixed results on whether financial incentives have an impact on the behavior change of HCPs in providing self-management [23]. In addition, financial incentives might only achieve a short-term change [26]. Nevertheless, financial incentives can be impactful if introduced at a sufficiently large size and in the whole system simultaneously [24].
External factors like national guidelines or regulations can support the implementation of SMI decision tools [11]. The emergence of national governance and guidelines is already seen as an important driver elsewhere. For instance, there are several NICE guidelines in the United Kingdom that recommend SDM supported by decision aids [28,29] and guidelines urging the use of SDM for prostate cancer in the Netherlands [30]. These might provide support for the implementation of SMI decision tools in the future.
Our study emphasizes that individual HCPs need to be aware of the added value of decision tools for both patients and HCPs. HCPs further need to prepare patients for the use of decision tools and encourage them to share their preferences about SMIs in medical consultations. Healthcare organizations need to show that the use of decision tools is one of their priorities. They need to provide a structure with time, financial, and personal support for teams to implement decision tools, including avoiding competing activities being carried out at the same time. They also need to train teams to give them confidence in using decision tools, let opinion leaders explain their added value to their teams, and motivate them about their use. Governments, payers, and policymakers play an important role in providing incentives (financial or non-financial) and incorporating the use of self-management decision tools and the SDM approach in national guidelines and in already existing structures or programs for chronic patients. Additionally, they need to integrate working with self-management decision tools into the performance measurement of healthcare systems. They should support further development of such tools by encouraging data collection and the use of artificial intelligence. In that way, these tools can become continuously more efficient and achieve time and cost savings. Decision tool developers need to ensure that tools are accessible to patients with low health literacy but also provide opportunities for patients who want to learn more about SMIs. They also need to consider the different levels of digital affinity of various groups of users and verify the interoperability of decision tools with other systems.
The 37 interviews evaluated in this study provided a suitable amount and quality of data, as the interviewees addressed all interview questions in an open, detailed, and focused manner. Since the hospitals, secondary care, and primary care practices were of different sizes and organizational structures, and came from two different European countries and cultures, the answers to some questions differed in terms of positive or negative perceptions. After coding eight interviews, there were no more adjustments to the category system. We reached saturation with the current sample; however, increasing the number of interviews further might have led to some additional data enriching the current findings. A weakness of the current study is that the seven participants of the focus group were recruited internationally, and the group discussions were held in English; hence, non-native participants might have had inhibitions about participating in the debate or weaknesses in expressing themselves. Furthermore, the focus group was not transcribed and analyzed at the same level of rigor as the interviews. However, its purpose was to contextualize the interview findings rather than to provide detailed accounts of focus group participants' views on the subject matter. In that view, the focus group successfully validated the identified implementation factors from the semi-structured interviews.
Conclusions
The aim of this study was to identify factors of successful implementation of self-management decision tools in routine healthcare settings. This study identified the main facilitators that can guide those who are willing to implement decision tools in their organizations. The results of this study can be used to develop business plans focusing on evidence-based decision tools, thus ensuring research exploitation. In the future, different versions of business plans need to be adapted if there are differences applicable to different health systems and provider types. The results will not only contribute to the development of an implementation strategy for decision tools but also increase the empirical evidence about the use and transferability of innovations in health information systems.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/healthcare11172397/s1, Document S1: Interview guideline. Institutional Review Board Statement: The project coordinator (Avedis Donabedian Research Institute) requested the overall ethical approval for the project from our local Clinical Research Ethics Committee (CEIC) (the University Institute for Primary Care Research-IDIAP Jordi Gol). Ethical approval was granted in March 2018. All participants were informed in advance and at the beginning of the interview about the aim of the research project and asked for their written consent to participate. The participants were reassured about confidentiality and anonymity. The authors of this paper have certified that they comply with the principles of ethical publishing and the principles of the Declaration of Helsinki.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
"Well, for me, I think it's easy, because of my clinical experience and years of work, you discard what you know does not have the strength of evidence and go to the consensus or recommendation system. [...] And well, I know the sources of evidence to use." (HCP 14; hospital; Spain; Row 202) "[...] they can explain to us what they want to do, why, what situation we are in and what we hope to achieve with it." (DM 11; hospital; Spain; 312)
Funding:
This work was supported by European Union's Horizon 2020 research and innovation programme under grant agreement No 754936.
Table 2. Results of the focus group.
Wall‐following – Phylogenetic context of an enhanced behaviour in stygomorphic Sinocyclocheilus (Cypriniformes: Cyprinidae) cavefishes
Abstract With 75 known species, the freshwater fish genus Sinocyclocheilus is the largest cavefish radiation in the world and shows multiple adaptations for cave-dwelling (stygomorphic adaptations), which include a range of traits such as eye degeneration (normal-eyed, micro-eyed and eyeless), depigmentation of skin, and in some species, the presence of "horns". Their behavioural adaptations to subterranean environments, however, are poorly understood. Wall-following (WF) behaviour, where an organism remains in close contact with the boundary demarcating its habitat when in the dark, is a peculiar behaviour observed in a wide range of animals and is enhanced in cave dwellers. Hence, we hypothesise that wall-following is also present in Sinocyclocheilus, possibly enhanced in eyeless species compared to eye-bearing (normal-/micro-eyed) species. Using 13 species representative of the Sinocyclocheilus radiation and eye morphs, we designed a series of assays, based on pre-existing methods for Astyanax mexicanus behavioural experiments, to examine wall-following behaviour under three conditions. Our results indicate that eyeless species exhibit significantly enhanced intensities of WF compared to normal-eyed species, with micro-eyed forms demonstrating intermediate intensities in WF distance. Using an mtDNA-based dated phylogeny (chronogram with four clades A–D), we traced the degree of WF of these forms to outline common patterns. We show that the intensity of WF behaviour is higher in the subterranean clades compared to clades dominated by normal-eyed free-living species. We also found that eyeless species are highly sensitive to vibrations, whereas normal-eyed species are the least sensitive. Since WF behaviour is present to some degree in all Sinocyclocheilus species, and given that these fishes evolved in the late Miocene, we identify this behaviour as being ancestral, with WF enhancement related to cave occupation.
Results from this diversification‐scale study of cavefish behaviour suggest that enhanced wall‐following behaviour may be a convergent trait across all stygomorphic lineages.
INTRODUCTION
Vertebrate lineages have evolved sensory systems and associated behaviours in order to adapt to new environments such as subterranean habitats. To occupy caves, species became adapted to the low availability of resources such as light, oxygen concentration and nutrients, leading to stygomorphic adaptations, including elongated appendages, lowered metabolism, specialised sensory systems, loss of eyes and pigmentation (Chen et al., 2020; Jeffery, 2019; Li et al., 2020; Ma, Gore, et al., 2020; Yoshizawa et al., 2012). A prominent stygomorphic convergent feature of cavefishes is the degeneration of eyes, compensated for by enhancements to the mechanosensory organs such as the neuromast lateral line system (Borowsky, 2013; Chen, Mao, et al., 2022; Ma, Herzog, et al., 2020).
A prominent swimming behaviour of cavefish is wall-following (WF, a form of thigmotaxis), where the fish senses the walls or boundaries of its cave environment in the absence of visual cues (Patton et al., 2010; Sharma et al., 2009). Although thigmotaxis has been reported in non-cavernicolous organisms introduced into a dark environment, this behaviour is putatively enhanced in cave dwellers (Niemiller & Soares, 2015; Norton, 2012; Sharma et al., 2009).
Wall-following behaviour has previously been observed in freshwater fish such as Astyanax mexicanus, Gasterosteus aculeatus and Danio rerio (Ginnaw et al., 2020; Johnson & Hamilton, 2017; Patton et al., 2010). A lion's share of this work has been on A. mexicanus, where some populations are cave-dwelling and exhibit distinct adaptations for cave life. Some eyeless populations are capable of moving through complex environments without colliding with objects, and their larvae prefer a frontal approach using their head (Lloyd et al., 2018). Hence, cavefish resort to continuous swimming in order to constantly receive information from the environment (Holbrook & de Perera, 2013; Windsor et al., 2008).
Once cavefish detect a cave wall, they continue following the wall (Patton et al., 2010), which suggests that wall-following in cavefish is both spontaneous and continuous. This behaviour is enhanced in cavefish compared to other animals. Past studies have proposed wall-following behaviour as a strategy for foraging and spatial exploration (Sharma et al., 2009). In their perpetually dark environments, cavefish have evolved better short-range senses (hydrodynamic imaging ability) (Hassan, 1985; Windsor, 2014), such as tactile sensing using the anterior part of the body (Sharma et al., 2009) and frequent use of mouth suction to generate a suction flow for non-visual navigation (Holzman et al., 2014). This implies that wall-following might entail complex functions such as spatial orientation, seeking protection or refuge, and obstacle avoidance.
Distinguishing between stationary and moving objects is of vital importance for cavefish due to their limited sensory perception over short distances within restricted cave environments. A study in A. mexicanus showed differences in WF behaviours under different boundary stimulations, observing that eyeless morphs swim nearly parallel to the wall compared to sighted morphs. Under varying light conditions, eyeless morphs expressed a closer swimming distance to the wall while reaching a higher swimming speed compared to sighted morphs (Sharma et al., 2009). However, the evolution of wall-following behaviour (WF) in response to vision loss (morphological change) remains poorly understood.
With 75 species, the genus Sinocyclocheilus (Cyprinidae, Barbinae) represents the largest cavefish radiation in the world (Jiang et al., 2019; Mao et al., 2021). These species show substantial morphological variability and inhabit suitable habitats of the massive 62,000 km² south-western karstic landscape of China (Jiang et al., 2019; Romero et al., 2009; Xiao et al., 2005; Zhao & Zhang, 2009). They are phylogenetically well known, with four major clades (A–D), with clades B and C harbouring mostly the stygomorphic forms and clades A and D containing predominantly the surface-dwelling forms (Mao et al., 2021).
They represent an emerging model system for evolutionary novelty and show multiple adaptations for subterranean life. For instance, they demonstrate varying degrees of eye degeneration, from normal-eyed to micro-eyed and eyeless (Meng et al., 2013; Zhao et al., 2021), loss of pigmentation (Li et al., 2020; Luo et al., 2023), absence of circadian rhythms, slow metabolism (Yang et al., 2016; Zheng, 2017) and a horn-like structure on the head in some species, the function of which is not clear (He et al., 2013). Studies on their natural history suggest that eyeless cavefish are less active than normal-eyed species.
For instance, S. grahami (normal-eyed, surface-dwelling) swims faster and farther, swimming at a speed 2–3 times greater than S. anshuiensis (Zheng, 2017). The neuromast system in Sinocyclocheilus has been shown to be asymmetric, is correlated with the degree of eye degeneration, and is pronounced in the eyeless forms (Chen, Mao, et al., 2022).
Furthermore, for several species, WF behaviour is also thought to be associated with neuromast asymmetry, with eyeless forms having the strongest WF behaviours (Chen, Li, & Madhava, 2022). Deeper exploration is needed to understand the phylogenetic context and elucidate the range of stimuli influencing WF behaviour. Furthermore, a study of Sinocyclocheilus showed eyeless morphs being attracted to different stimuli (Chen, Li, & Madhava, 2022). Given the large number of species, the genus Sinocyclocheilus offers an ideal system for an in-depth analysis of behaviour across a cavefish radiation.
Keywords: animal tracking, evolutionary convergence, exploratory behaviour, phylogeny, stygobitic, wall-following

Taxonomy classification: Behavioural ecology

Despite being an emerging multi-species model system, a radiation-scale understanding of Sinocyclocheilus cavefishes' swimming behaviour is still lacking. Here, we investigate the swimming behaviour of Sinocyclocheilus species in a phylogenetic context.
The species considered represent the three main habitat types and the three main eye-related morphologies: normal-eyed (surface water bodies, surface-dwelling habit), micro-eyed (cave-associated habitats, stygophilic habit) and eyeless (cave habitat, stygobitic habit), in the context of Mao et al. (2021). Given that WF behaviour is arguably enhanced in A. mexicanus cavefish populations and is potentially affected by the sensory organs (e.g. neuromasts in the lateral line system), we hypothesise that in Sinocyclocheilus species, WF is a shared, derived trait correlated with visual acuity, a characteristic for which we use eye morphs as a proxy. Hence, we predict that eyeless species will show the greatest intensity of WF behaviour, followed by micro-eyed and normal-eyed species, respectively. Furthermore, we predict that various (stable/vibrative) stimuli will elicit distinct responses correlated with the extent of eye degeneration.
| Fish collection and maintenance
All experimental fishes (13 Sinocyclocheilus species, with three individuals from each species, N = 39) were adults (standard length, SL: mean ± SD = 8.40 ± 1.28 cm), collected from Yunnan and Guizhou Provinces and the Guangxi Zhuang Autonomous Region of China between December 2017 and September 2020 (Table A1, Figure A1). Fish smaller than 8 cm were maintained in a centralised zebrafish aquarium system and housed in 1 L BPA-free plastic tanks, with each tank receiving separate water delivery and drainage. Fish larger than 8 cm were maintained in groups in four large aquariums (90 × 50 × 50 cm, 300 L; 150 × 80 × 80 cm, 1000 L capacity) equipped with dedicated filtration and purification equipment. Fish were fed daily on Shenyangkangcai™ fish food, consisting of shrimp, squid, spring fish and seaweed. We classified the species according to gross eye morphology as follows: Normal-eyed: S. guilinensis, S. zhenfengensis, S. longibarbatus, S. macrophthalmus, S. oxycephalus, S. purpureus, S. maitianheensis; Micro-eyed: S. mashanensis, S. microphthalmus, S. bicornutus, S. multipunctatus; and Eyeless: S. tianlinensis, S. tianeensis.
| Experimental equipment and video recording
For all assays, an individual was tested in a 45 × 28 × 28 cm rectangular assay arena. We used the aquarium system water (pH 7.0-8.0, conductivity 150-300 S/m, temperature 19 ± 1°C, dissolved oxygen 8.5 mg/L), with water quality simulating natural conditions as closely as possible. The water was kept shallow (10-15 cm, depending on the size of the fish) to reduce vertical excursions by the fish and to minimise depth-of-field errors in the movement-tracking system (as explained below). To minimise stress associated with differences in water properties, we changed the system water in the tank after each assay. The fish were allowed a minimum of 10 min to acclimate and recover from the transfer process. Subsequently, the infrared illumination and digital video camera were activated. An infrared camera (Canon XF405) was set up about 1 m above the tank.
An auxiliary infrared light source (850 nm; HongGuang, HG-IR1206, GangDong) uniformly illuminated the assay arena. We used a 4 Mbps (YCC 4:2:0, 25p, 1280 × 720) system setting to capture video under infrared light (Figure 1a). The experimental design also followed the methods outlined by Chen, Mao, et al. (2022). Because we could not visually track the cavefish in complete darkness, and to enhance the precision of software tracking, we adjusted our experimental settings. Noting that species with eyes ceased movement in total-darkness (0 lux) environments, all assays were conducted in a quiet, nearly dark room (1.7-5 lux). We repeated each trial 3-10 times for each Sinocyclocheilus species under identical conditions. After the conclusion of the experimental trials, we checked the results and removed videos in which cavefishes were completely stationary or that could not be analysed due to limitations of the software. The individual fish used in the model were designated as a random effect in the analysis (see Section 2). Except when gravid, Sinocyclocheilus cavefishes do not show sexual dimorphism (Zhao & Zhang, 2009); together with their rarity, this meant we did not consider sex-related behavioural differences. All individuals survived the experimentation.
| Wall-following assays
Since wall-following behavioural assays are established for A. mexicanus (Patton et al., 2010;Sharma et al., 2009;Windsor et al., 2008), we followed these in our study.We defined wall-following behaviour as swimming along the wall within a distance of 0.5 SL, and this area was called the near-wall belt (range of wall-following, Figure 1b).
When a fish travelled a minimum distance of 2 SL within the near-wall belt, we recorded it as wall-following behaviour. We used EthoVision XT v.15 (Noldus IT, Wageningen, Netherlands) to track the swimming trajectories (Figure 1e,g,i). Standard length (SL) and pectoral fin length (PFL) were measured using FIJI (https://imagej.nih.gov) (Schneider et al., 2012). Swimming speeds of less than 0.2 cm/s were treated as immobility (resting) in the EthoVision XT analysis. We generated data for the following indicators in the near-wall belt: WF-Distance (the distance fish swam within the wall-following range), WF-Frequency (the frequency at which fish swam into the wall-following range), WF-Time (the time that fish spent wall-following), WF-Resting Time (the resting time during one assay), WF-Speed (the average swimming speed when wall-following) and WF-Max Speed (the maximum speed reached during one assay; Table A1; abbreviations summarised in Table A2).
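The indicators defined above can be computed directly from a tracked trajectory. The sketch below is our own illustrative Python version, not the authors' EthoVision XT pipeline; the tank size, 0.5-SL near-wall belt, 25 fps frame rate and 0.2 cm/s resting threshold follow the text, while function and variable names are ours (for brevity the sketch omits the 2-SL minimum-travel rule used to qualify a bout as wall-following).

```python
import math

TANK_W, TANK_H = 45.0, 28.0       # arena footprint in cm (45 x 28 cm)
FPS = 25                          # video frame rate (25p)

def wf_metrics(track, sl):
    """track: list of (x, y) positions in cm, one per frame; sl in cm."""
    belt = 0.5 * sl               # near-wall belt width = 0.5 SL

    def near_wall(p):
        x, y = p
        return (x < belt or x > TANK_W - belt or
                y < belt or y > TANK_H - belt)

    wf_dist = wf_time = resting = 0.0
    wf_freq = 0
    speeds = []
    inside_prev = False
    for p0, p1 in zip(track, track[1:]):
        step = math.dist(p0, p1)           # cm moved this frame
        speed = step * FPS                 # instantaneous speed, cm/s
        inside = near_wall(p0) and near_wall(p1)
        if inside:
            wf_dist += step
            wf_time += 1 / FPS
            speeds.append(speed)
            if speed < 0.2:                # < 0.2 cm/s counts as resting
                resting += 1 / FPS
            if not inside_prev:            # a new entry into the belt
                wf_freq += 1
        inside_prev = inside
    return {
        "WF-Distance (SL)": wf_dist / sl,
        "WF-Time (s)": wf_time,
        "WF-Frequency": wf_freq,
        "WF-Speed (SL/s)": (sum(speeds) / len(speeds) / sl) if speeds else 0.0,
        "WF-Max Speed (SL/s)": (max(speeds) / sl) if speeds else 0.0,
        "WF-Resting Time (s)": resting,
    }
```

A straight run along one wall (e.g. 8 cm at constant speed for an 8 cm fish) yields WF-Distance of 1 SL and a single belt entry, matching the SL-normalised units used in the Results.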
| Quantification of reaction to stimuli
To understand whether wall-following is a fixed behaviour or whether it could be affected by various stimuli, we performed assays under three conditions: one without stimulation, one with a novel landmark and one with a vibration attraction (VA) setting. First, unimpeded forward motion along the wall was observed for 10 min to determine whether WF behaviour occurred (Video S1). Second, we tested responses to a novel landmark for 5 min by placing a dark opaque cylinder (diameter = 5 cm) in the centre of the rectangular arena, following the methods outlined in Burt de Perera and Braithwaite (2005) and Lloyd et al. (2018) (Video S2). Third, we assayed vibration attraction stimulation for 3 min, following the methods outlined in Jiang et al. (2019) and Fernandes et al. (2018) (Video S3). Vibrations were produced with an aeration pump (Jialu, LT-201S™, ZheJiang) working at 40-50 Hz placed in the centre of the arena. The 10 × 16 cm rectangular area around the stimulation was considered the stimulation range (Figure 1c). We measured the following indicators of fish swimming within this range: S-Frequency (the frequency at which fish swam into the stimulation range), S-Time (the time fish spent in the stimulation range), S-Speed (the average swimming speed) and S-Max Speed (the maximum speed reached during an assay; Table A2).
To distinguish how various stimuli affect the behaviour of Sinocyclocheilus cavefish, we also measured two additional indicators, quantified using FIJI: the angle of approach (approaching angle) and the distance of approach (approaching distance) to the stimulation. The approaching distance was defined as the shortest distance between the edge of the fish's body and the stimulation (Figure 1d). The approaching angle was defined as the angle between a line extending down the midline of the fish and a line extending from the fish to the centre of the stimulation, following protocols established by Lloyd et al. (2018). In each assay, we recorded only the first three approaches. The numbers 1 to 3 represent the order of approaches within a given assay; for instance, "Angle 1" and "Distance 1" indicate the first time the fish was attracted by the stimulation.
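Given snout and tail coordinates from the tracked frames, the two approach indicators reduce to simple vector geometry. The following is a hedged sketch under our own naming conventions (the authors measured these manually in FIJI): the midline heading is the tail-to-snout vector, the angle is taken against the snout-to-stimulus vector, and the distance subtracts the stimulus radius to approximate the gap to the stimulus edge.

```python
import math

def approach_indicators(head, tail, stim_centre, stim_radius):
    """head/tail: (x, y) of fish snout and tail tip in cm.

    Returns (approaching angle in degrees, approaching distance in cm).
    """
    hx, hy = head
    tx, ty = tail
    sx, sy = stim_centre
    mx, my = hx - tx, hy - ty          # midline heading (tail -> snout)
    vx, vy = sx - hx, sy - hy          # snout -> stimulus centre
    dot = mx * vx + my * vy
    norm = math.hypot(mx, my) * math.hypot(vx, vy)
    # clamp to avoid domain errors from floating-point rounding
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    # shortest gap from snout to the stimulus edge (0 if touching)
    distance = max(0.0, math.hypot(vx, vy) - stim_radius)
    return angle, distance
```

A fish heading straight at the cylinder gives an angle of 0°; a fish swimming perpendicular to the line toward the stimulus gives 90°, the boundary the Results use to call an approach "narrow".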
| Analysing the behaviour in an evolutionary context
Previous studies have not provided definitive explanations as to whether wall-following behaviour is an adaptation to cave environments or is associated with ancestral cave preferences (Patton et al., 2010; Sharma et al., 2009). Hence, we further analysed the evolution of wall-following behaviour from a phylogenetic perspective. We obtained two mtDNA fragments (Cytb and ND4) for the 13 Sinocyclocheilus species and 5 outgroup species (Cyprinus carpio, Puntius ticto, Labeo batesii, Gymnocypris przewalskii and Gymnocypris eckloni) from GenBank (Table A1), and edited and aligned the sequences.

FIGURE 1 (continued) (d) Approaching angle and distance used for the stimulation-approaching analysis. (e, g, i) Representative swimming trajectories for the three forms, Sinocyclocheilus tianlinensis (Eyeless, stygobitic, fusiform), S. bicornutus (Micro-eyed, stygophilic, fusiform) and S. macrophthalmus (Normal-eyed, surface, compressiform), over 10 min in the no-stimulation assay, shown relative to the near-wall belt. The colours represent the duration the fish spent at each pixel, with low wavelengths (red) indicating greater and high wavelengths (blue) indicating lower times spent at each pixel. Eyeless and Micro-eyed species maintain wall-following for a longer duration than the Normal-eyed species. Trajectory charts were created using the EthoVision XT software. (f, h, j) The three species for which the behaviour is depicted here.
Given that interspecific trait variation is often confounded by phylogenetic autocorrelation, traditional statistical methods can be subject to biases such as elevated false-positive rates, thus requiring specific methods such as phylogenetic generalised least squares (PGLS) regression analysis (Garamszegi, 2014). We tested for phylogenetic signal in the different traits (WF-Distance, WF-Speed, SE diameter (eye trait; Table 1, Tables A2 and A3), WF-Frequency, WF-Time and WF-Max Speed) using Pagel's λ parameter in the package "caper" (Freckleton et al., 2002; Orme et al., 2013). Values of λ near 1 denote a stronger phylogenetic signal (Blomberg et al., 2003; Pagel, 1999). Given that there was no evidence of phylogenetic signal in the tested traits (λ = 0.000; Table A3), we were justified in using non-phylogenetic statistical methods in all remaining analyses.
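The λ estimation itself is conceptually simple: off-diagonal entries of the Brownian-motion covariance matrix implied by the tree are multiplied by λ (λ = 0 erases shared history, λ = 1 keeps the tree untouched), and λ is chosen to maximise the GLS likelihood of the trait. The authors used the R package "caper"; the numerical sketch below is our own illustrative Python version, not the published pipeline.

```python
import numpy as np

def lambda_loglik(trait, C, lam):
    """Profile log-likelihood of Pagel's lambda for one trait.

    C is the phylogenetic (Brownian-motion) covariance matrix; the
    lambda transform scales off-diagonal covariances, keeping tip
    variances intact.
    """
    n = len(trait)
    V = C * lam
    np.fill_diagonal(V, np.diag(C))            # restore tip variances
    Vi = np.linalg.inv(V)
    one = np.ones(n)
    mu = (one @ Vi @ trait) / (one @ Vi @ one)  # GLS estimate of the mean
    r = trait - mu
    sig2 = (r @ Vi @ r) / n                     # ML rate estimate
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (n * np.log(2 * np.pi * sig2) + logdet + n)

def fit_lambda(trait, C, grid=np.linspace(0.0, 1.0, 101)):
    """Grid-search ML estimate of lambda on [0, 1]."""
    lls = [lambda_loglik(trait, C, lam) for lam in grid]
    return grid[int(np.argmax(lls))]
```

With clade-structured data (close relatives with similar trait values) the fitted λ sits near 1; with trait values unrelated to the tree it collapses toward 0, which is the situation (λ = 0.000) the authors report for their WF traits.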
We included the individual fish as a random effect to account for the repeated measures of each fish. To account for individual differences, WF-Distance and WF-Speed were expressed in terms of SL, and WF-Time was calculated as a percentage of testing time (%). We formulated generalised linear mixed models (GLMMs) to understand the effects and coefficients of associations in wall-following behaviour and stimulation-approaching behaviour (two separate models) across the 13 Sinocyclocheilus species, using the R package "lme4" (Bates et al., 2014; Nakagawa et al., 2017). As fixed independent variables, we used fish morphology (eye-morphs, body shape, SL and PFL), the wall-following measurements (WF-Frequency, WF-Time, WF-Resting Time, WF-Speed and WF-Max Speed) under the 10, 5 and 3 min assays, and the stimulation-range behaviour measurements (S-Frequency, S-Time, S-Speed and S-Max Speed). WF-Distance was set as the response variable, and testing time was used as a random independent variable. For the model analysing approaching behaviour, we used approaching angle/distance as independent variables and stimulation as the response variable. Since the data were over-dispersed, we selected negative binomial distributions for the final models (Boswell, 1979).
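The switch to a negative binomial family rests on an overdispersion diagnostic. A minimal version of that check (our own simplification, not the authors' exact procedure) compares the sample variance of the counts with their mean: for a Poisson family the two should be roughly equal, so a ratio well above 1 argues for a negative binomial.

```python
def dispersion_ratio(counts):
    """Variance-to-mean ratio; values much greater than 1 indicate
    over-dispersion relative to a Poisson model."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean
```

For example, frequency-like counts with a heavy tail such as `[2, 3, 1, 40, 2, 55, 3, 1]` give a ratio far above 1, whereas perfectly uniform counts give 0.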
We used the package "MuMIn" to assess the AICc (Bartoń, 2019; Burnham & Anderson, 2002). We reported the fully averaged results of models within a ΔAICc threshold of 2. If the 95% confidence interval of a parameter did not include zero, we considered it an important factor in explaining the model (Di Stefano, 2004; Nakagawa & Cuthill, 2007).
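The model-selection arithmetic behind the "MuMIn" step can be written out directly. AICc adds a small-sample correction to AIC, candidate models within a ΔAICc of 2 of the best model are retained, and Akaike weights give the relative support used for full model averaging. The formulas are standard (Burnham & Anderson, 2002); the function names below are ours.

```python
import math

def aicc(loglik, k, n):
    """Small-sample corrected AIC for a model with k parameters fit
    to n observations."""
    aic = -2 * loglik + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(aicc_values):
    """Relative support for each candidate model, summing to 1."""
    best = min(aicc_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aicc_values]
    total = sum(rel)
    return [r / total for r in rel]
```

Two models two AICc units apart carry weights of roughly 0.73 and 0.27, which is why ΔAICc = 2 is the conventional cut-off for "competitive" models.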
| RESULTS
All Sinocyclocheilus cavefishes displayed a stereotypic thigmotactic response to the new environment, revealed by an initial preference for following the walls of the tank. Individual species varied in their responses to unfamiliar environments, for example through sudden bouts of motionlessness or significantly faster swimming. These variations possibly highlight species-specific adaptations and behavioural strategies that come into play when encountering novel surroundings. However, the nature of wall-following behaviour differed between species, depending on the eye-morphs.
| Variables correlated with wall-following behaviour
WF-Frequency, WF-Time, S-Frequency, WF-Max Speed and S-Speed were the important variables affecting WF-Distance in Sinocyclocheilus species (Figure 2a, Table 1). Our results suggested that standard length (SL) and WF-Resting Time were negatively correlated with WF-Distance, while WF-Frequency, WF-Max Speed, WF-Time, S-Frequency and S-Speed were positively correlated with WF-Distance (Figure 2a). Our WF-Distance and stimulation models had marginal R² values of 0.67 and 0.15, respectively. We checked for spatial autocorrelation and found none. Only the second approaching distance (Distance 2) influenced stimulation negatively (Figure 2b, Table 1). We found that all Sinocyclocheilus approached the stimulation at a narrow angle (<90°; Table A4). However, the approaching distance was significantly greater in surface fish than in cavefish (mean ± SD: Eyeless = 0.72 ± 0.91; Micro-eyed = 1.26 ± 1.07; Normal-eyed = 1.35 ± 1.08 SL), which suggests an ability to detect unknown objects at longer distances in Normal-eyed species.
| Wall-following ability is related to eye-morphs in Sinocyclocheilus
We found that wall-following behaviour was ubiquitous across Sinocyclocheilus cavefishes, but with clear patterns associated with the eye-morphs. Among the species studied here, all except S. purpureus spent more than 50% of the time following the wall (Figure 4, Time). The Normal-eyed group spent the least WF-Time, and the eye-regressed groups spent longer wall-following (mean ± SD: Eyeless = 69.92 ± 21.76%, Micro-eyed = 72.49 ± 14.58%, Normal-eyed = 54.42 ± 24.39%; Kruskal-Wallis test: H2 = 27.55, p < 0.001; Table A5). The average WF-Distance was longest in the Eyeless group, shortest in the Normal-eyed group, and intermediate in the Micro-eyed group (mean ± SD: Eyeless 190.18 ± 128.74, Micro-eyed 156.20 ± 98.79, Normal-eyed 134.35 ± 105.14 SL; Kruskal-Wallis test: H2 = 9.21, p < 0.05; Figure 3).
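The group comparisons above rely on the Kruskal-Wallis test. For reference, the H statistic reduces to a rank computation; the compact implementation below is ours, for illustration, with ties handled by midranks and no tie correction (adequate for continuous measurements such as WF-Time).

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic across two or more groups of values."""
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    # assign a midrank to each distinct value (handles ties)
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2      # mean of ranks i+1 .. j
        i = j
    h = 0.0
    for g in groups:
        r = sum(rank[v] for v in g)            # rank sum for this group
        h += r * r / len(g)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)
```

Under the null hypothesis H follows approximately a chi-squared distribution with (number of groups − 1) degrees of freedom, which is how the p-values for the three eye-morph groups are obtained.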
FIGURE 2 The effect of wall-following measurement parameters, with 95% confidence intervals, on the wall-following distance of the assayed Sinocyclocheilus species (a) and the results of approaching angle/distance related to stimulation (b).
TABLE 1 The results of the models showing the variables that influenced WF-Distance and the approaching angle/distance related to stimulation. Note: Coefficients were calculated using the averaged model estimates (full-averaging technique), together with the associated adjusted SE for each coefficient. The 95% confidence intervals (CI) of variables that do not bound 0 are shown in bold. The numbers 1 to 3 represent the order of approaching angle and distance to the stimulation.
Our results show that WF behaviour in Sinocyclocheilus is enhanced with eye degeneration, with the greatest enhancement in the Eyeless forms.
| Phylogenetic context of wall-following
The maximum credibility tree of Sinocyclocheilus shows four major clades (A-D), as previously reported by Zhao and Zhang (2009) and Mao et al. (2021) (Table A6). Clade B also showed the fastest WF-Speed, the longest WF-Time and the highest WF-Frequency compared with the other clades (Table A6).
| DISCUSSION
Sinocyclocheilus diversification began with the advent of the polar ice caps and the rain-shadow effect of the Himalayas, as a result of which the Guangxi, Guizhou and Yunnan regions became drier in the late Miocene (Mao et al., 2021). This provided time for diversification across this vast karstic landscape, where species acquired, to different degrees and depending on the habitat, many stygomorphic traits. Our study contributes to our understanding of this unfolding of events by revealing wall-following behaviour as a key evolutionary adaptation within this genus. This behaviour, which also appears in unrelated lineages such as A. mexicanus cavefish, paves the way for a deeper exploration of the forces driving this convergent evolution of a stygomorphic behaviour. Therefore, it is crucial to consider the variability within and among clades and the influence of ecological and evolutionary factors on wall-following behaviour.
Our analysis showed that wall-following intensity was highest in Eyeless species and lowest in Normal-eyed species. We mainly evaluated three aspects of wall-following intensity in the Sinocyclocheilus genus: time, speed and distance, with distance being the most important, as distance is a product of speed and time. However, we also considered speed and time independently, for they can explain more subtle aspects of behaviour (Hoke et al., 2012). We noted the enhancement in wall-following ability going from Normal-eyed to Eyeless species. Eyeless species exhibited the most enhanced wall-following behaviour (highest speed, longest distance and more time spent in WF behaviour); however, this was not consistently supported across all measured variables (such as WF-Time and WF-Max Speed), even though our analyses (Table A5) indicated a statistically non-significant gradient from Eyeless to Normal-eyed species for these variables. In our analysis, we retained extreme results, not excluding outliers, as these data points could reveal interesting insights in future in-depth analyses involving greater taxon sampling. Some of the variation we observed might be attributed to individual differences among species. For instance, the Micro-eyed species S. microphthalmus exhibited variability comparable to some Eyeless species, denoting the complexity and diversity within the studied group. Moreover, the term "enhance" does not necessarily imply a linear relationship between the degree of eye development and the magnitude of wall-following behaviour. It reflects, instead, a behavioural divergence potentially influenced by various factors including sensory capabilities, ecological pressures and phylogenetic history. As such, our data do not strongly support the proposition that Micro-eyed species represent an "in-between" state in terms of wall-following behaviour variables, and this requires further scrutiny.

FIGURE 3 The results of wall-following distance of 13 Sinocyclocheilus species across three tests: 10 min with no stimulation, 5 min with novel stimulation and 3 min with vibration attraction behaviour (VA). Different colours on boxes indicate the Eyeless, Micro-eyed and Normal-eyed morphs. The wall-following distance in Eyeless groups was significantly higher than in Normal-eyed groups. Wall-following distances are expressed in terms of standard length (SL). Details of the statistical analyses are available in Table A5.
The experiments were conducted under near-dark conditions, as observing and recording cavefish behaviour proved challenging in zero-lux conditions (due to limitations of the fish-tracking software).
Additionally, while caring for the fish, it was noted that in absolute darkness, the Eyed cavefish ceased movement, becoming immobile; this was the reason for removing these videos following initial trials.
Interestingly, most of the cave entrances from which these cavefish were sampled were not in complete darkness, though some species were captured from completely dark environments well within caves (Zhao & Zhang, 2009). Therefore, the light values applied in our experiments corresponded with the natural low-light conditions observed in the field.
The cave habitats are generally considered to be low in terms of resources and predation pressure (Ajemian et al., 2015; Niemiller & Soares, 2015; Romero et al., 2009). Past studies on cavefish pointed out that swimming behaviour is important in the exploration of habitats in A. mexicanus (Teyke, 1985). In addition, wall-following behaviour is thought to be an extension of swimming for exploration (Sharma et al., 2009). The faster swimming speed of cavefish enables them to acquire more information via the amplitude of water flow and allows them to explore the cave environment continuously (Teyke, 1988). Our findings on wall-following behaviour are also suggestive of different strategies for resource utilisation and risk avoidance. Continuous swimming behaviour associated with wall-following, as seen in Eyeless species, may have evolved to optimise resource utilisation in the resource-poor, predator-scarce cave environment. Continuous wall-following has already been shown in A. mexicanus, where it has been explained in terms of exploratory spatial awareness (Patton et al., 2010; Yoshizawa, 2015). Hence, continuous swimming associated with wall-following may have evolved to enable fish to acquire more information via the amplitude of water flow and to explore their surroundings continuously, optimising resource utilisation in the near absence of predation. On the contrary, the slower WF-Speed in Normal-eyed species might reflect a defensive strategy to reduce the risk of injury. This dual role of wall-following behaviour in exploration and defence underscores its ecological and evolutionary significance. In other groups, it has been shown that narrow, fixed wall-following routes could serve as escape routes should threats arise (Ajemian et al., 2015; Ginnaw et al., 2020; Sharma, 2008), while preserving energy to accelerate rapidly when the need arises. In fact, past studies found that some other genera of cavefish might need to cope with predation pressure in
cave habitats. For example, cave mollies (Poecilia mexicana) were more susceptible to predator attacks within the cave, even in a resource-rich habitat sustained by chemoautotrophic primary productivity (Horstkotte et al., 2010; Tobler, 2009; Tobler et al., 2007). Eyeless species approached the stimulation at a narrow angle, while surface-living species showed a greater approaching distance than cavefish. This could indicate an ability in Normal-eyed species to detect unknown objects earlier, at a longer distance, reflecting differences in sensory capabilities and risk-assessment strategies among the species. This spatial exploratory behaviour might also be associated with cavefishes' enhanced olfactory and lateral line systems (Chen, Mao, et al., 2022; Fernandes et al., 2018; Kasumyan & Marusov, 2018; Lloyd et al., 2018). Past studies show that in the genus Sinocyclocheilus, eyed species have more neuromasts than eyeless species, which has been interpreted as reflecting the importance of non-visual sensory expansion for survival in darkness (Chen, Mao, et al., 2022). The lateral line system has evolved to enhance sensitivity to water flow and sensory-dependent behaviours for finding food more efficiently in A. mexicanus cavefish (Espinasa et al., 2023; Lunsford et al., 2022). However, the relative contributions of visual and non-visual sensory organs still need to be further explored.
Studies on wall-following behaviour in other lineages highlight different functional explanations, such as risk avoidance in three-spined sticklebacks (Ginnaw et al., 2020), whereas in the Somalian cavefish Phreatichthys andruzzii it is seen as an exploratory strategy (Sovrano et al., 2018). Once a behaviour undergoes adaptation to cave environments, it is more likely to persist under relaxed selection pressures, potentially explaining the observed variations in functional explanations across different lineages (Hoke et al., 2012).
TABLE A4 The results of approaching angle and distance among the three eye-morphs.

TABLE A6 (Continued)
FIGURE 1 Diagram of the experimental apparatus, schematic diagram of the measurements and representative trajectories of three species in the 10 min assay. (a) Diagram of the experimental tank and equipment. (b) Vertical view of the wall-following assay; the wall-following range is shown as the grey area, with a width of 0.5 SL. (c) The stimulation range of the novel landmark and VA assays, shown in yellow.
However, from Clade C to D, where normally sighted species predominate, wall-following behaviours decreased (shortest WF-Distance: S. longibarbatus, 99.53 ± 74.35 SL; shortest WF-Time: S. purpureus, 44.71 ± 20.52%; lowest WF-Speed: S. longibarbatus, 0.26 ± 0.10 SL/s; Figure 4). One species was an outlier to this pattern: S. macrophthalmus (Normal-eyed, stygobitic, Clade C) had a WF-Distance as great as those of the Eyeless cavefishes.
FIGURE 4 Bayesian inference tree derived from the concatenated mtDNA data, with the mean results calculated from the 10 min assay. Node values indicate clade posterior probability. Clades A-D are represented by four colours: Clade A, red; Clade B, blue; Clade C, yellow; and Clade D, purple. The numbers represent the distance (SL), time (%) and speed (SL/s) of wall-following behaviour, together with pictures of each species.
Our results suggest enhanced wall-following behaviour in stygomorphic species, particularly those in Clade B, even though the patterns are not uniformly clear across all variables or clades. The intricate interplay of genetics, developmental plasticity, ecology and environment possibly shapes the evolution of behavioural traits in these species. The lack of strong phylogenetic correlation in the tested eye-morphs and behavioural traits in our study may be due to inadequate taxon sampling and the inclusion of species from only one of the clades that contain Eyeless species (Clade B). Due to rarity and sampling-related problems, we could not analyse the wall-following behaviour of stygomorphic species from Clade D, such as S. anophthalmus. However, species within Clade B, namely S. tianlinensis and S. tianeensis, share a common ancestor; this shared ancestry is suggestive of the evolution of intense wall-following behaviour in eyeless species due to phylogenetic inertia. Moreover, the evolutionary convergence in the intensity of wall-following behaviour in unrelated lineages, as observed in our study, supports the idea that this behaviour has evolved in response to similar selective pressures. The prevalence of wall-following to various degrees across the phylogeny suggests that the trait is ancient. Due to the rarity of Sinocyclocheilus fish and their inaccessibility in deep caves and caverns, it took us 2 years to gather 13 species and a limited number of individuals for the behavioural assay (three individuals per species). We also had to develop techniques to keep them alive. Given these circumstances, our study included a limited representation of Eyeless and Micro-eyed Sinocyclocheilus species from specific clades. We plan to address this in future studies by expanding our research to encompass a wider range of data from different clades and conducting deeper analyses. Finally, our study provides a complex, multi-faceted picture of wall-following behaviour in Sinocyclocheilus species and its relationship with eye
morphology and phylogenetic clades. The low intraspecific variability of wall-following suggests that this behaviour is fixed for this genus. Comparable results from other cavefish lineages, such as A. mexicanus cavefishes, show similar wall-following swimming patterns (Patton et al., 2010). The evolutionary convergence of wall-following supports the idea that this behaviour has evolved in response to similar selective pressures in evolutionarily unrelated lineages. The prevalence of wall-following, to various degrees across the phylogeny, suggests that the trait is ancient and shared in Sinocyclocheilus cavefishes. The insights gained from such research can shed light on the broader patterns and processes of evolution in cave-dwelling organisms. Future research should continue to explore these relationships, taking into account the inherent variability and complexity of behavioural traits in these fascinating taxa.

| CONCLUSIONS

Our diversification-scale behavioural assays show that Sinocyclocheilus have wall-following behaviours associated with cave-dwelling propensity. The Eyeless species showed the highest intensity of wall-following behaviour and the Normal-eyed species the least, with Micro-eyed forms in between. Our study confirmed that wall-following is correlated with multiple factors, especially wall-following frequency, time and eye-morphs. Though determining the exact function of wall-following needs further experimentation, we suggest that wall-following facilitates protection and foraging in Eyeless forms, and defence in eyed species. We found that wall-following is enhanced in Clades B and C (regressed-eyed species) but reduced in Clades A and D (Normal-eyed species). However, our results do not show a phylogenetic correlation of wall-following behaviour, possibly due to inadequate taxon sampling. The convergence of wall-following with A.
mexicanus cavefish suggests that this behaviour is an adaptation in response to the selective regimes of subterranean environments. Our work will also form the foundation for further cave-related behavioural work on this emerging multi-species evolutionary model system.

Conceptualization (lead); data curation (lead); formal analysis (lead); investigation (lead); methodology (lead); resources (equal); validation (equal); writing - original draft (lead); writing - review and editing (lead). Wen-Zhang Dai: Formal analysis (equal); investigation (equal); methodology (equal); validation (equal); writing - review and editing (equal). Xiang-Lin Li: Data curation (equal); formal analysis (equal); investigation (equal); validation (equal); writing - review and editing (equal). Ting-Ru Mao:
Each p-value adjustment method: Bonferroni. Boldface indicates parameters with significant differences.
Phylogenetic signal estimated by maximum likelihood with lambda forced to be 0.
Independent Repression of a GC-rich Housekeeping Gene by Sp1 and MAZ Involves the Same cis-Elements*
The transcription factors Sp1 and MAZ (Myc-associated zinc finger protein) contain several zinc finger motifs, and each functions as both a positive and a negative regulator of gene expression. In this study, we characterized the extremely GC-rich promoter of the human gene for MAZ, which is known as a housekeeping gene. Unique symmetrical motifs in the promoter region (nucleotides −383 to −334) were essential for the expression of the gene for MAZ, whereas an upstream silencer element (nucleotides −784 to −612) was found to act in a position-dependent but orientation-independent manner. Sp1 and MAZ bound to the same cis-elements in the GC-rich promoter, apparently sharing DNA-binding sites. The relative extent of binding of Sp1 and MAZ to these cis-elements corresponded to the extent of negative regulation of the expression of the gene for MAZ in various lines of cells. Furthermore, novel repressive domains in both Sp1 (amino acids 622–788) and MAZ (amino acids 127–292) were identified. Suppression by Sp1 and suppression by MAZ were independent phenomena; histone deacetylases were involved in the autorepression by MAZ itself, whereas DNA methyltransferase 1 was associated with suppression by Sp1. Our results indicate that both deacetylation and methylation might be involved in the regulation of expression of a single gene via the actions of different zinc finger proteins that bind to the same cis-elements.
Regulation of the expression of many genes is mediated by the binding of transcription factors to cis-elements in their promoter regions. The promoter regions of many eukaryotic genes contain GC-rich sequences (1) and some of the most widely distributed promoter elements are GC boxes and related motifs. The zinc finger proteins Sp1 and MAZ 1 (Myc-associated zinc finger protein) are transcription factors that bind to GCrich sequences, namely GGGCGG and GGGAGGG, respectively, to activate the expression of various target genes.
Sp1 was originally characterized as a ubiquitous transcription factor, consisting of 778 amino acids, that recognized GC-rich sequences in the early promoter of simian virus 40 (2,3). The DNA-binding domain of Sp1 consists of three contiguous C2H2-type zinc fingers (4). The amino-terminal region contains two serine-and threonine-rich domains and two glutamine-rich domains, which are essential for transcriptional activity (5). The carboxyl-terminal domain of Sp1 is involved in synergistic activation and interactions with other transcription factors. Sp1 is considered to be a constitutively expressed transcription factor and has been implicated in the regulation of a wide variety of housekeeping genes, tissue-specific genes, and genes involved in the regulation of growth (6). Sp1 is a phosphorylated (7) and highly glycosylated protein (8). It interacts with many factors, such as the TATA box-binding protein, which is a major component of the general transcription machinery, and the TATA box-binding protein-associated factors dTAFII110 (9), hTAFII130 (10), and hTAFII55 (11). Other proteins, such as transcription factor YY1 (12,13), E2F (14,15), and p300 (16,17), have also been reported to associate with Sp1. Sp1-null mice embryos exhibited severely retarded growth and died within 10 days (18), after displaying a wide range of abnormalities. Some of the embryos appeared as an undifferentiated mass of cells, whereas others had all the typical hallmarks of early embryogenesis, such as a developing heart, eyes, optic vesicles, somites, erythroid cells, and extra-embryonic tissues (18). Thus, it is likely that Sp1 is essential for the differentiation of embryonal stem cells after day 10 of development.
MAZ was first identified as a transcription factor that bound to a GA box (GGGAGGG) at the ME1a1 site of the c-myc promoter and to the CT element of the c-myc gene (19-21). It is a zinc finger protein with six C2H2-type zinc fingers at the carboxyl terminus, a proline-rich region, and three alanine repeats. It is expressed ubiquitously, albeit at different levels in different human tissues (22). It can regulate the expression of numerous genes, such as c-myc (19,20,23,24), genes for insulin I and II (25), the gene for CD4 (26), the gene for the serotonin receptor (27), and the gene for nitric-oxide synthase (28). MAZ might be involved in the termination of transcription by interrupting elongation by RNA polymerase II (29).
The promoter region of the gene for MAZ has features typical of the promoter of a housekeeping gene, namely a high G+C content, a high frequency of CpG (where p stands for "phosphoric residue") dinucleotides, the absence of canonical TATA boxes, and multiple sites for initiation of transcription (30,31). Moreover, the gene is ubiquitously expressed in human tissues (22).
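The promoter features noted above (high G+C content, frequent CpG dinucleotides) can be computed directly from sequence. The following sketch is illustrative only; the example string is a hypothetical GC-rich stretch, not the actual MAZ promoter sequence.

```python
# Illustrative sketch: G+C content and observed/expected CpG ratio,
# the two sequence features used above to characterize a CpG-rich
# housekeeping-type promoter. Example sequence is hypothetical.

def gc_content(seq: str) -> float:
    """Fraction of G or C bases in the sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def cpg_obs_exp(seq: str) -> float:
    """Observed/expected CpG ratio: CpG count * length / (C count * G count)."""
    seq = seq.upper()
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0:
        return 0.0
    return seq.count("CG") * len(seq) / (c * g)

if __name__ == "__main__":
    promoter = "GGGCGGGGCGGGAGGGCGCGGGGCGGCCGC"  # hypothetical GC-rich stretch
    print(f"G+C content: {gc_content(promoter):.1%}")
    print(f"CpG obs/exp: {cpg_obs_exp(promoter):.2f}")
```

A promoter with a G+C content near 88%, as reported here for MAZ, would score far above the genome-wide background on both measures.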
We have attempted to clarify some aspects of the relationship between the factors that bind to GC-rich cis-elements and the promoters of housekeeping genes with a high G+C content. A previous study showed that Sp1 binds to GC-rich DNA sequences in nucleosomes (32). Moreover, the large coactivator complex known as CRSP (cofactor required for activation of Sp1) stimulates Sp1-mediated transcription (33). Both Sp1 and MAZ can exert positive and negative control over the expression of target genes. Thus, regulation by individual DNA-binding factors seems to be coordinated via recruitment of other factors that participate in the regulated expression of target genes and via recognition of the modification of nucleotide sequences, for example, by methylation or demethylation and acetylation or deacetylation (34-37). The binding affinities of transcription factors for individual target sequences are likely to be essential parameters in the regulation of gene expression, together with the recruitment of related factors.
We demonstrate here a possible mechanism for regulation of the expression of the human gene for MAZ. The mechanism involves the recruitment of different repressors by two different DNA-binding factors, Sp1 and MAZ, that interact with the same cis-elements. Our results indicate that deacetylation and methylation might be involved in the regulation of a single gene via the binding of different zinc finger proteins.
MATERIALS AND METHODS
Plasmids-A series of DNA fragments from the MAZ promoter was excised with appropriate restriction enzymes. Each fragment was filled in and inserted into the HindIII site of pSV00CAT (38), via a HindIII linker, to generate pMAZCAT1, pMAZCAT2, pMAZCAT3, pMAZCAT4, and pMAZCAT5, respectively. Internal deletion mutants of the MAZ promoter were created by amplification by the polymerase chain reaction, ligation of the appropriate DNA fragments, and insertion into the HindIII site of pSV00CAT to generate pMAZCAT2-d, pMAZCAT3-wt, pMAZCAT3-ΔI, pMAZCAT3-ΔII, and pMAZCAT3-ΔIII, respectively. Mutant forms of pMAZCAT3 were further generated by mutation of dinucleotides (AA to GG; TT to GG (see Fig. 2)) to generate a series of mutants, pMAZCAT3-f1 to pMAZCAT3-f11. Mutations in the putative Sp1-binding sites and putative MAZ-binding sites in pMAZCAT3-wt were generated by converting the GC-rich motif GGGCGG to GGTTGG and the GC-rich motif GGGAGGG to GGTATGG (39,40). Amplification by polymerase chain reaction and ligation into the HindIII site of pSV00CAT generated pMAZCAT3-m1 to pMAZCAT3-m8. pCMV-MAZ was constructed as described previously (22). pCMV-Sp1 and pCMV-DNMT1 were provided by R. Chiu and R. Raenish, respectively.
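The motif conversions described above (GGGCGG to GGTTGG; GGGAGGG to GGTATGG) amount to simple substitutions at the sequence level. The following is a minimal sketch of that logic; the example fragment is hypothetical, and the actual mutants were of course built by PCR-based mutagenesis, not in silico.

```python
# Sketch of the binding-site mutagenesis logic: convert every consensus
# Sp1 box (GGGCGG) to GGTTGG and every consensus MAZ box (GGGAGGG) to
# GGTATGG. The example fragment below is hypothetical.

MUTATIONS = {
    "GGGAGGG": "GGTATGG",  # MAZ box first: its GGG runs can overlap an Sp1 box
    "GGGCGG": "GGTTGG",    # Sp1 box
}

def mutate_sites(seq: str) -> str:
    """Replace all consensus Sp1/MAZ boxes with their mutated versions."""
    for wild_type, mutant in MUTATIONS.items():
        seq = seq.replace(wild_type, mutant)
    return seq

if __name__ == "__main__":
    region = "AAGGGCGGTTGGGAGGGCC"  # hypothetical promoter fragment
    print(mutate_sites(region))
```

Applying the MAZ-box substitution first is deliberate: where the two consensus motifs share a GGG run, replacing the Sp1 box first could destroy an overlapping MAZ box before it is mutated.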
Cell Culture, Transfection, and Assay of Chloramphenicol Acetyltransferase (CAT) Activity-HeLa cells, 293 cells, and NIH3T3 cells were grown in Dulbecco's modified Eagle's medium that had been supplemented with 10% fetal bovine serum (Life Technologies, Inc.). NCI-H460 cells were grown in RPMI 1640 medium that had been supplemented with 10% fetal bovine serum. Cells were treated with trichostatin A (TSA) at a final concentration of 100 ng/ml and with 5-azacytidine at a final concentration of 1 mM. Cells were transfected with plasmid DNA using the FuGENE 6 transfection reagent (Roche Molecular Biochemicals) according to the protocol provided by the manufacturer. All plasmids were purified by ultracentrifugation before transfection, as described previously (41). Assays of CAT activity were performed as described elsewhere (21).
Gel Shift Assay-DNA probes were radiolabeled at their 5′-ends with polynucleotide kinase (New England BioLabs, Inc., Beverly, MA) and [γ-32P]ATP. The DNA probes designated SM, M, and S corresponded to DNA fragments between nt −313 and −284, nt −232 and −216, and nt −151 and −137, respectively. The binding reaction was performed in 30 μl of a buffer that contained 20 mM Tris-HCl (pH 7.5), 2 mM MgCl2, 0.5 mM EDTA, 10% glycerol, 0.5 mM dithiothreitol, 25 mM NaCl, 1 μg of poly(dI-dC), and an extract of HeLa cells or purified glutathione S-transferase (GST) fusion proteins. Reactions were incubated at 4°C for 40 min after addition of the labeled DNA probe. The incubation was continued for 30 min at room temperature after the addition of appropriate antibodies. Products of reactions were loaded onto a 5% non-denaturing polyacrylamide gel in 0.5× TBE buffer (1× TBE: 45 mM Tris-borate, 1 mM EDTA). Electrophoresis was performed at 100 V for 4-6 h at 4°C.
Immunoprecipitation and Assay of Histone Deacetylase (HDAC) Activity-HeLa cells were cultured with or without TSA (100 ng/ml) for 48 h, and then proteins in cell extracts were immunoprecipitated with antibodies specific for HDACs (Santa Cruz Biotechnology, Santa Cruz, CA) or DNA methyltransferase 1 (DNMT1) (New England BioLabs, Inc.). Cell extracts were subjected to assays of HDAC activity using a histone deacetylase assay kit (Upstate Biotechnology, Lake Placid, NY) in accordance with the instructions from the manufacturer.
RESULTS
Unique Symmetric Elements in the Minimal MAZ Promoter Are Essential for Transcriptional Activity-The promoter region of the human gene for MAZ has an extremely high G+C content, namely 88.4%. Various GC-rich elements are present in the promoter region, including consensus Sp1-binding sites (GGGCGG) and consensus MAZ-binding sites (GGGAGGG) (Fig. 1). Some of these sites overlap one another. Assays of CAT activity using constructs with various deletions in the MAZ promoter demonstrated that the minimal promoter activity was localized between nt −383 and +259 (Fig. 2A). Internal deletion of the region between nt −383 and −248 (pMAZCAT2-d) resulted in a decrease in promoter activity. This result suggested that the region from nt −383 to −248 might be critical for minimal promoter activity. We next attempted to identify the elements that were essential for minimal promoter activity. Promoter activity was reduced with the construct that lacked the region between nt −383 and −334, whereas constructs with internal deletion of the region between nt −334 and −279 or between nt −279 and −248 did not have reduced promoter activity (Fig. 2B). Thus, the region from nt −383 to −334 was essential for the promoter activity. Four symmetric elements, namely two CAAC and two CTTC elements, were present in this region (Fig. 2C). CAAC and CTTC elements have also been found in other promoters, such as the promoter of the gene for the α-myosin heavy chain (42), the gene for hydroxymethylbilane synthase (43), the gene for myosin light chain 2 (44), and the gene for lactoferrin (45). Some of these sites have been shown to contribute to the activation of promoter activity. We used a series of CAT constructs with mutations in these elements to investigate whether these putative elements might activate the promoter of the gene for MAZ (Fig. 2D). The results of CAT assays demonstrated that promoter activity was reduced when each of the four elements was mutated (pMAZCAT3-f1, pMAZCAT3-f2, pMAZCAT3-f3, and pMAZCAT3-f4).
The promoter activity was reduced still further when two of the four elements were mutated simultaneously (pMAZCAT3-f5, pMAZCAT3-f6, pMAZCAT3-f7, pMAZCAT3-f8, pMAZCAT3-f9, and pMAZCAT3-f10). The CAT activity fell to about 20% of that of the wild type when all of the four elements were mutated (pMAZCAT3-f11). These results indicated that the four symmetric elements were essential for the activity of the MAZ promoter.
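The normalization used throughout these reporter assays (activities expressed relative to a reference construct taken as 1.0, averaged over at least three experiments with the standard deviation reported) can be sketched as follows; the raw activity values below are invented for illustration.

```python
# Minimal sketch of the reporter-assay normalization: raw CAT activities
# (hypothetical numbers) from replicate transfections are divided by the
# mean activity of the reference construct, then summarized as mean +/- SD.
from statistics import mean, stdev

def relative_activity(raw, reference_mean):
    """Mean and standard deviation of activities relative to the reference."""
    values = [x / reference_mean for x in raw]
    return mean(values), stdev(values)

if __name__ == "__main__":
    pMAZCAT3_wt = [100.0, 110.0, 90.0]   # hypothetical raw activities (reference)
    pMAZCAT3_f11 = [22.0, 18.0, 20.0]    # hypothetical: all four elements mutated
    ref = mean(pMAZCAT3_wt)
    m, s = relative_activity(pMAZCAT3_f11, ref)
    print(f"relative activity: {m:.2f} +/- {s:.2f}")
```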
Sp1 and MAZ Recognize the Same cis-Elements in the MAZ Promoter-To determine whether Sp1 and/or MAZ could bind to the various putative binding sites for both Sp1 and MAZ, we performed gel shift assays using extracts of HeLa cells and DNA probes derived from the MAZ promoter. The SM probe, nt −313 to −284, contained one putative Sp1-binding site and one putative MAZ-binding site, and these two sites partially overlapped. The M probe, nt −232 to −216, contained one putative MAZ-binding site; and the S probe, nt −153 to −137, contained one putative Sp1-binding site (Fig. 3A). We detected two prominent DNA-protein complexes with the SM probe that contained the overlapping binding sites for Sp1 and MAZ. The rapidly migrating band was more intense than the slowly migrating band (Fig. 3B, left panel). The retarded bands corresponding to B1 and B2 were shifted even further upon addition of antibodies against Sp1 and MAZ (Fig. 3B, lanes 2 and 4). Control antibodies did not affect the mobility of the DNA-protein complexes (Fig. 3B, lane 6). These results indicated that both Sp1 and MAZ specifically recognized the overlapping sites in the same cis-element. To examine the DNA binding specificity of Sp1 and MAZ, we used purified GST-Sp1 and GST-MAZ fusion proteins in the assays. A DNA-protein complex was detected using GST-Sp1, and a supershifted band was detected in the presence of antibodies specific for Sp1 but not in the presence of antibodies specific for MAZ. Similarly, a DNA-protein complex was detected using GST-MAZ, and a supershifted band was detected in the presence of antibodies to MAZ but not in the presence of antibodies to Sp1 (Fig. 3B, lanes 7-12). Supershifted bands were also detected in the presence of antibodies against MAZ or Sp1 when we used the M probe or the S probe (Fig. 3C, lanes 2, 4, 8, and 10). The results indicated that both Sp1 and MAZ interacted with the putative MAZ-binding sites and the putative Sp1-binding sites in all three probes that we used.
We also examined other GC-rich cis-elements in the MAZ promoter for recognition by Sp1 and MAZ. We found that Sp1 and MAZ bound to the same GC-rich cis-elements in other regions of the MAZ promoter (data not shown). Thus, it was clear that Sp1 and MAZ bound to the same GC-rich cis-elements in the MAZ promoter.
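Locating the consensus boxes and testing whether an Sp1 site and a MAZ site overlap, as in the SM probe, can be sketched as below. The probe sequence here is hypothetical; a lookahead pattern is used so overlapping occurrences of the same motif are not missed.

```python
# Sketch: find consensus Sp1 (GGGCGG) and MAZ (GGGAGGG) boxes in a sequence
# and report pairs whose intervals overlap. The probe sequence is hypothetical.
import re

def find_sites(seq: str, motif: str):
    """Half-open (start, end) intervals of all occurrences, overlaps included."""
    # A zero-width lookahead lets re.finditer report overlapping matches.
    return [(m.start(), m.start() + len(motif))
            for m in re.finditer(f"(?={motif})", seq)]

def overlapping_pairs(seq: str):
    """All (Sp1 interval, MAZ interval) pairs that overlap on the sequence."""
    sp1 = find_sites(seq, "GGGCGG")
    maz = find_sites(seq, "GGGAGGG")
    return [(a, b) for a in sp1 for b in maz if a[0] < b[1] and b[0] < a[1]]

if __name__ == "__main__":
    probe = "TTGGGCGGGAGGGTT"  # hypothetical probe with a shared GGG run
    print(overlapping_pairs(probe))
```

In the hypothetical probe above, the two boxes share an internal GGG run, so a single short element presents both consensus sites at once, which is the situation the gel shift experiments with the SM probe address.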
Both Sp1 and MAZ Repress the Activity of the MAZ Promoter through the Various cis-Elements-We next focused on the effects of Sp1 and MAZ on the transactivation of the gene for MAZ. The reporter constructs were used to transfect HeLa cells in the presence or absence of an Sp1 or a MAZ expression vector. The promoter activity was inhibited significantly in the presence of ectopically expressed Sp1 (Fig. 4A) and MAZ (Fig. 4B), whereas ectopic expression of Sp1 and of MAZ had no effect on the transcription of pRSVCAT (19), the control plasmid. These results indicated that both Sp1 and MAZ repressed transcription of the gene for MAZ.
FIG. 2. A symmetric element in the minimal MAZ promoter is essential for the promoter activity of the gene for MAZ. A, the minimal basal promoter of the gene for MAZ was located in the region between nt −383 and +259. A summary is shown of the human MAZ-CAT deletion constructs and corresponding CAT activities. Numbering is relative to the major site of initiation of transcription (+1). CAT, gene for CAT. CAT fusion plasmids were used to transfect HeLa cells, and CAT activity was measured as described under "Materials and Methods." Promoter activities of MAZ-CAT fusion genes are expressed relative to the activity of pMAZCAT1, which was taken arbitrarily as 1.0. All values in this and other figures are the averages of results from at least three experiments, and the standard deviation for each value is indicated. B, the region between nt −383 and −334 is critical for the promoter activity of the MAZ gene. Promoter activities of MAZ-CAT fusion genes are expressed relative to the activity of pMAZCAT3, which was taken arbitrarily as 1.0. C, unique symmetric elements were present in the region between nt −383 and −334. Boxes indicate four dinucleotide repeats. D, the symmetric elements are essential for the promoter activity of the gene for MAZ. The wild type CAAC and CTTC elements and the mutated element CGGC were examined for their effects on the promoter activity of the gene for MAZ. Promoter activities of MAZ-CAT fusion genes are expressed relative to the activity of pMAZCAT3, which was taken arbitrarily as 1.0.
The binding sites for Sp1 and MAZ in the minimal promoter region (−303 to +3) were mutated in an attempt to identify the cis-elements that were involved in the negative regulation (Fig. 5). In the presence of ectopically expressed Sp1 or MAZ, repression of the MAZ promoter was detected in the presence of mutations in the region between nt −383 and −248 (pMAZCAT3-m1 and pMAZCAT3-m4).
We then mutated other sites (pMAZCAT3-m2, pMAZCAT3-m3, pMAZCAT3-m5, pMAZCAT3-m6, and pMAZCAT3-m7), and again we observed reduced transcriptional activity. All the mutated constructs mentioned above contained wild type binding sites for Sp1 and/or MAZ. Thus, those binding sites for Sp1 and/or MAZ might still have been active in the negative regulation of the MAZ promoter. Our results indicated that most, if not all, of the binding sites for Sp1 and MAZ were involved in negative regulation of the expression of the gene for MAZ. This possibility was confirmed by studies of promoter activity with pMAZCAT3-m8, in which all the binding sites for both Sp1 and MAZ had been mutated. No repression by Sp1 or by MAZ was observed with pMAZCAT3-m8. These results strongly suggested that repression by Sp1 and/or MAZ was mediated by the DNA-binding sites for Sp1 and MAZ and that most or all of these sites were involved in the repressive activity.
Independent Repression by Sp1 and by MAZ Is Mediated by Their Respective Repression Domains-Sp1 and MAZ repressed the expression of the gene for MAZ by binding to the same cis-elements. Therefore, we next asked whether repression by Sp1 and by MAZ might be related. The results of a yeast two-hybrid assay and immunoprecipitation-Western blotting analysis showed that Sp1 did not interact with MAZ (data not shown), indicating that repression by Sp1 and repression by MAZ were independent.
A series of Sp1 and MAZ expression plasmids was constructed to identify the domains responsible for repression. These plasmids were used to cotransfect HeLa cells in combination with the reporter construct (pMAZCAT1), and then CAT assays were performed (Fig. 6, A and B). Repression of CAT activity was observed with constructs that did not encode domains in the amino-terminal region of Sp1 (amino acid positions 1-503; Fig. 6A). These results suggested that the amino-terminal region of Sp1 was not involved in repression. The promoter activity was released from repression when Sp1 without the carboxyl-terminal region (Δ622-C) was expressed. Moreover, repression of the promoter activity was diminished when Sp1 was expressed without the zinc finger domain (Δ531-605) that is essential for binding to DNA. Taken together, the results indicated that the carboxyl-terminal region of Sp1 (amino acids 622-778) was responsible for the repression and that the repression was also dependent on the DNA binding ability of Sp1.
The promoter activity was partially repressed when MAZ without amino acids 54-195 (Δ54-195) was expressed, and the activity of the promoter was completely released from repression when MAZ without amino acids 127-292 (Δ127-292) was expressed. Thus, it appeared that the region between amino acids 127 and 292 was responsible for repression of the expression of the gene for MAZ. Moreover, repression of the promoter was reduced when a mutant form of MAZ was expressed without the five zinc fingers in the carboxyl-terminal region (Δ317-441), which were essential for DNA binding activity. Taken together, the results demonstrated that amino acids 127-292 of MAZ were responsible for autorepression and that autorepression was also dependent on the DNA binding activity of MAZ. We concluded that independent repression by Sp1 and by MAZ was mediated by the repression domains of each protein and that the DNA binding activities of these zinc finger proteins were also essential for repression.
Recruitment of Histone Deacetylases by MAZ-HDACs are known to act as repressors in the regulation of the expression of many genes. We attempted to determine whether histone deacetylases might be involved in repression of the gene for MAZ. HeLa cells were transfected with pCMV-MAZ or pCMV-HDAC1 in the presence and absence of TSA, a specific inhibitor of histone deacetylases. We then monitored the CAT activity due to a reporter plasmid, pMAZCAT1, with which the cells had been cotransfected. Ectopic expression of HDAC1 repressed the activity of the MAZ promoter, and such repression was overcome in the presence of TSA (Fig. 7A), indicating that histone deacetylases might be involved in repression by MAZ. This possibility was confirmed by measurement of the HDAC activity of proteins that were recruited by MAZ. The HDAC activity of a MAZ-specific immunoprecipitate was more than five times higher than that of the complex that was immunoprecipitated by the control IgG, and the activity of the former complex was repressed in the presence of TSA (Fig. 7B). We performed immunoprecipitation and Western blotting analysis using nuclear extracts from HeLa cells to determine whether histone deacetylases were included in the complex of proteins recruited by MAZ. Western blotting analysis indicated the presence of MAZ in immunoprecipitates of extracts obtained with antibodies specific for HDAC1, HDAC2, and HDAC3 (Fig. 7C). The proteins in the same extracts were also immunoprecipitated by antibodies specific for MAZ, and all three kinds of histone deacetylase were detected (Fig. 7C). These results implied that MAZ recruited proteins that included HDAC1, HDAC2, and HDAC3.
Association of DNMT1 with Repression by Sp1-HeLa cells were transfected with pCMV-Sp1 and pMAZCAT1 in the presence or absence of pCMV-HDAC1 and TSA, respectively. The results of CAT assays revealed that repression by Sp1 was insensitive to TSA and that ectopic expression of HDAC1 had no effect on repression by Sp1 (Fig. 8A), suggesting that repression by Sp1 might be HDAC-independent. Methylation is known to be important in the regulation of gene expression. Thus, we examined whether methylation might be involved in repression by Sp1. HeLa cells were transfected with pCMV-Sp1 (or just with the reporter) in the presence or absence of 5-azacytidine, a specific inhibitor of methylation. Repression by Sp1 was released in the presence of 5-azacytidine using the wild type reporter but not the mutant reporter (pMAZCAT3-m8) and the control reporter, pRSVCAT (Fig. 8B). Furthermore, the forced expression of DNMT1 enhanced the repression of transcription by Sp1, whereas treatment with 5-azacytidine reversed the repression due to Sp1 and DNMT1 (Fig. 8C). We performed immunoprecipitation and Western blotting analysis using nuclear extracts from HeLa cells, and the results showed that DNMT1 was present in immunoprecipitates of Sp1 and vice versa (Fig. 8D). Taken together, these results suggest that DNMT1 might be involved in the repression mediated by Sp1.
DISCUSSION
The promoter regions of human housekeeping genes are usually GC-rich, and, by definition, these genes are expressed ubiquitously, as is, for example, the human gene for MAZ (22,30). Many GC-rich cis-elements can be found in the promoters of housekeeping genes, and they might be expected to regulate the transcription of various genes. In this study, we analyzed the GC-rich promoter of the human gene for MAZ in an attempt to identify the role of GC-rich cis-elements in the regulation of transcription of this gene.
The minimal MAZ promoter was located between nt −383 and +259 (Fig. 2A). We showed that a 135-base pair sequence, from nt −383 to −248 in the minimal promoter region of the gene for MAZ, was associated with the promoter activity. Further studies indicated that the region from nt −383 to −334 was critical for the promoter activity (Fig. 2B). The G+C content of this region is relatively low, and there are two CAAC elements and two CTTC elements within this region (Fig. 2C). The region containing these four elements is 33 base pairs long, with an average G+C content of only 49%, the lowest G+C content in the extremely GC-rich promoter region of the gene for MAZ. It has been reported that, in some promoters, the proximal upstream region is extremely GC-rich, whereas the distal region is AT-rich (46-49). It has also been reported that a stretch of GC-rich sequences is followed by AT-rich sequences in some promoters (49). A specific cis-element in the promoter region of the c-myc gene is localized in an AT-rich domain that is flanked by GC-rich sequences (49). The cited studies suggest that relatively AT-rich elements in extremely GC-rich sequences might be recognition sites for transcription factors that are associated with the initiation of transcription (49). To determine whether these motifs are critical for the activity of the promoter of the gene for MAZ, we examined a series of constructs with mutations in these motifs. As shown in Fig. 2D, two symmetric CAAC elements and two symmetric CTTC elements were required for basal transcriptional activity, and the contribution of each element to the total transcriptional activity was lower than that of all the elements together.
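A sliding-window G+C scan of the kind that would expose such a relatively AT-rich island inside an otherwise GC-rich promoter can be sketched as follows; the sequence below is synthetic and the 10-bp window is arbitrary.

```python
# Sketch: sliding-window G+C scan to locate the window with the lowest
# G+C fraction, i.e. the most AT-rich island, in a GC-rich sequence.
# The example sequence is synthetic, with an AT-richer core.

def gc_windows(seq: str, win: int):
    """(start, G+C fraction) for every window of length `win`."""
    seq = seq.upper()
    return [
        (i, (seq[i:i + win].count("G") + seq[i:i + win].count("C")) / win)
        for i in range(len(seq) - win + 1)
    ]

def at_richest_window(seq: str, win: int):
    """Start position and G+C fraction of the window with the lowest G+C."""
    return min(gc_windows(seq, win), key=lambda t: t[1])

if __name__ == "__main__":
    promoter = "GCGGGGCGGC" + "CAACTTCAAC" + "GGGCGGGGCG"  # synthetic example
    print(at_richest_window(promoter, 10))
```

On the synthetic example, the minimum-G+C window falls over the CAAC/CTTC-containing core, mirroring the way the 33-bp, 49% G+C element stands out against the 88.4% G+C bulk of the MAZ promoter.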
Both CAAC and CTTC elements have been found in the promoters of other genes, such as the gene for the ␣-myosin heavy chain, the gene for hydroxymethylbilane synthase, the gene for myosin light chain 2, and the gene for lactoferrin, and some of these sites have been shown to be important for promoter activity (42)(43)(44)(45). The factors that bind to the CAAC and/or CTTC elements in the minimal MAZ promoter remain to be identified.
FIG. 8. A, repression by Sp1 was independent of HDAC1. Transfections and CAT assays were performed using HeLa cells in the presence and absence of pCMV-Sp1 and pCMV-HDAC1. The cells were harvested after a 48-h treatment with TSA. Promoter activities of MAZ-CAT fusion genes are expressed relative to the activity of pMAZCAT3-wt in the absence of pCMV-Sp1, which was taken arbitrarily as 1.0. B, repression by Sp1 was sensitive to 5-azacytidine (5-aza). HeLa cells were transfected with pMAZCAT3-wt, pMAZCAT3-m8, and the control, pRSVCAT, and then stable clones were treated with 5-azacytidine for 72 h before assays of CAT activity. C, HeLa cells transfected with pMAZCAT3-wt were cotransfected with pCMV-Sp1 or pCMV-DNMT1 and incubated with or without 5-azacytidine for 72 h before assays of CAT activity. D, immunoprecipitation-Western blotting analysis. Proteins in extracts of HeLa cells were immunoprecipitated with antibodies against Sp1 or DNMT1, and then Western blotting analysis was performed. Mouse IgG was used as the negative control.
The consensus sequence of MAZ-binding sites is very similar to that of Sp1-binding sites. The GC-rich minimal promoter of the gene for MAZ contains multiple binding sites for Sp1 and MAZ. We found that Sp1 bound to consensus Sp1-binding sites as well as to consensus MAZ-binding sites. Similarly, MAZ bound to the consensus binding sites for both MAZ and Sp1 (Fig. 3). The results of our gel shift assays indicated that both Sp1 and MAZ recognized the same cis-elements in the MAZ promoter. It has been reported that a GC-rich motif in the c-myc promoter region is a high-affinity binding site for both MAZ and Sp1 (50). It has also been reported that Sp1 binds to a series of GC-rich nucleotide sequences as well as to the consensus Sp1-binding site (51). The fact that MAZ and Sp1 shared binding sites indicates that the regulatory activity of some GC-rich cis-elements is consistent with cooperative interactions by multiple transcription factors, such as zinc finger proteins, with the same or overlapping cis-elements.
The binding of both Sp1 and MAZ to the same cis-elements in the promoter region of the gene for MAZ might regulate transcription of the gene. Both Sp1 and MAZ suppressed transcription from the MAZ promoter (Fig. 4). There are seven consensus binding sites for Sp1 and nine consensus binding sites for MAZ in the minimal promoter region of the gene for MAZ. We tried to identify the cis-elements that are involved in repression of the transcription of the gene for MAZ, and we found that the extent of repression by Sp1 and by MAZ was reduced only when all of the consensus binding sites for Sp1 and MAZ had been mutated (pMAZCAT3-m8; Fig. 5). However, we did not detect enhanced expression of the mutated construct (pMAZCAT3-m8), as compared with that of the wild type construct, even when the possible consensus GC-rich motifs in the promoter region of the MAZ gene were mutated (Fig. 5). We do not know the exact reason why the overexpression of Sp1 or MAZ still affected the expression of pMAZCAT3-m8. One possible explanation is that weak binding sites for Sp1 or MAZ, possibly among the other GC-rich sequences in the promoter region, are still present in pMAZCAT3-m8 and remain functional, accounting for the residual repression. In fact, it has been reported that Sp1 and MAZ bind other GC-rich elements besides the consensus motifs (27,51). Further studies are required for the identification of other motifs for Sp1 and MAZ. Alternatively, we cannot rule out the possibility that the overexpression of Sp1 and MAZ might titrate coactivators or general transcription factors and result in repression of the MAZ promoter. Further studies will be required to answer these questions.
The activity of the promoter was repressed when any of the wild type binding sites remained (pMAZCAT3-m1 to pMAZCAT3-m7; Fig. 5), indicating that almost all of the cis-elements were involved in repression. Autorepression of the gene for MAZ by MAZ itself also indicates that negative feedback might possibly be involved in the control of the expression of housekeeping genes. Both the multiple GC-rich cis-elements and the upstream silencer element were involved in the negative regulation of the gene for MAZ, indicating that suppression of transcription of this gene is the major basal regulatory mechanism that controls its expression.
Both Sp1 and MAZ repressed the activity of the gene for MAZ through binding to the same cis-elements. We tried to determine whether the repression by Sp1 and by MAZ might be linked, but the results failed to reveal any interaction between Sp1 and MAZ (data not shown). Repression by Sp1 and repression by MAZ were independent phenomena, even though both involved the same GC-rich cis-elements. We identified novel repressive domains in both Sp1 and MAZ. The carboxyl-terminal region of Sp1 (amino acids 622-778) and amino acids 127-292 of MAZ were responsible for the respective repressive activities (Fig. 6, A and B). Moreover, repression was also dependent on the zinc finger domains of both Sp1 and MAZ, which were essential for binding to DNA (Fig. 6, A and B). It is possible that Sp1 and MAZ might bind to cis-elements through their zinc finger motifs, recruiting other factors through their repression domains.
Histone deacetylases act negatively to regulate the expression of many genes (52-54). Therefore, we examined whether histone deacetylases might be involved in repression of the gene for MAZ. HeLa cells were transfected with pCMV-Sp1 or pCMV-MAZ in the presence and absence of TSA, a specific inhibitor of histone deacetylases. Only repression by MAZ was released in the presence of TSA, whereas the repression by Sp1 was insensitive to treatment with TSA (Figs. 7A and 8A). Thus, it appears that histone deacetylases are involved in repression by MAZ. We confirmed this possibility by measuring HDAC activity of immunoprecipitated complexes that contained MAZ. The HDAC activity of complexes was about five times higher than that of control immunoprecipitates, and the HDAC activity of the former complexes was repressed in the presence of TSA (Fig. 7B). Immunoprecipitation and Western blotting analysis using nuclear extracts from HeLa cells indicated that MAZ could recruit proteins that included HDAC1, HDAC2, and HDAC3 to form a multiple protein complex (Fig. 7C).
We found that the action of Sp1 was insensitive to TSA and that HDAC1 had no effect on repression by Sp1 (Fig. 8A). Thus, repression mediated by Sp1 appeared to be independent of HDACs. It has been reported that methylation plays an important role in the suppression of transcription, and the interaction of Sp1 with MeCP2 has also been reported (55). We examined whether methylation might be involved in repression of the gene for MAZ and found that repression by Sp1 was sensitive to 5-azacytidine, a specific inhibitor of methylation (Fig. 8B). The ectopic expression of DNMT1 enhanced repression by Sp1, whereas 5-azacytidine reversed the repression induced by Sp1 and DNMT1 (Fig. 8C). Furthermore, it is highly possible that DNMT1 is recruited by Sp1 (Fig. 8D). Therefore, DNMT1 appeared to play a role in the repression mediated by Sp1.
We have demonstrated here a possible mechanism for the down-regulation and autorepression of a human housekeeping gene, namely the gene for MAZ, through the recruitment of different repressors by two different DNA-binding proteins, Sp1 and MAZ, which interact with the same cis-elements (Fig. 9). Our data suggest that different levels of suppression of the transcription of this housekeeping gene might be responsible for the different levels of expression of the gene in different tissues. Moreover, deacetylation and methylation appear to play distinct roles in the regulation of a single gene, namely the human gene for MAZ, in a process that is mediated by different DNA-binding transcription factors.
Deregulation of Transcriptional Enhancers in Cancer
Simple Summary
One of the major challenges in cancer treatment is the dynamic adaptation of tumour cells to cancer therapies. In this regard, tumour cells can modify their response to environmental cues without altering their DNA sequence. This cell plasticity enables cells to undergo morphological and functional changes, for example, during the process of tumour metastasis or when acquiring resistance to cancer therapies. Central to cell plasticity are the dynamic changes in gene expression that are controlled by a set of molecular switches called enhancers. Enhancers are DNA elements that determine when, where and to what extent genes should be switched on and off. Thus, defects in enhancer function can disrupt the gene expression program and can lead to tumour formation. Here, we review how enhancers control the activity of cancer-associated genes and how defects in these regulatory elements contribute to cell plasticity in cancer. Understanding enhancer (de)regulation can provide new strategies for modulating cell plasticity in tumour cells and can open new research avenues for cancer therapy.
Abstract
Epigenetic regulations can shape a cell's identity by reversible modifications of the chromatin that ultimately control gene expression in response to internal and external cues. In this review, we first discuss the concept of cell plasticity in cancer, a process that is directly controlled by epigenetic mechanisms, with a particular focus on transcriptional enhancers as the cornerstone of epigenetic regulation. In the second part, we discuss mechanisms of enhancer deregulation in adult stem cells and epithelial-to-mesenchymal transition (EMT), as two paradigms of cell plasticity that are dependent on epigenetic regulation and serve as major sources of tumour heterogeneity.
Finally, we review how genetic variations at enhancers and their epigenetic modifiers contribute to tumourigenesis, and we highlight examples of cancer drugs that target epigenetic modifications at enhancers.
Tumour Plasticity and Heterogeneity
Intratumour heterogeneity is one of the main features of cancer. It entails the presence of phenotypically and functionally distinct subpopulations of cells that can affect processes such as tumour invasion, metastasis or therapy resistance [1]. The impact of tumour heterogeneity is more evident when genetic markers are used as indicators of therapy regimens. For example, in oestrogen/progesterone-positive breast tumours, hormonal castration causes a primary shrinkage of the tumour mass; however, in many cases, recurrence is seen due to the presence of hormone-refractory cells in the primary tumour [2][3][4][5]. The classic view of tumour heterogeneity is based on the "clonal evolution" theory of cancer, which suggests that intratumour heterogeneity is caused by the accumulation of genetic mutations in either the tumour bulk or its surrounding stroma, followed by the selection of clones that gain survival advantages. However, intratumour heterogeneity may also arise from processes that induce cell-state transitions without changing the genetic landscape.
Such cell plasticity is directly controlled by epigenetic mechanisms and can provide opportunities to reverse the tumour cell behaviour, for example, via differentiation-inducing therapies.
One of the processes that underlies tumour heterogeneity is the cellular differentiation of tumour cells that harbour a stem cell capacity. These cancer stem cells (CSCs) generate committed cells with a spectrum of phenotypes that all share the same genetic background [6,7], making CSCs one of the main culprits of tumour heterogeneity [8,9]. Recent studies suggest that stemness can be considered a "biological state" that cells can enter or exit, indicating a robust cell plasticity within tumours [10,11]. Here, tumour cells can gain a stem-like or differentiated phenotype depending on intrinsic genetic triggers or external environmental cues. For example, in luminal breast tumours, in which the majority of cells have epithelial characteristics, a small fraction of cells expresses mesenchymal/basal markers (e.g., CD44) and resembles normal mammary stem cells. Of note, a homogenous population of luminal tumour cells, sorted based on low levels of CD44 (CD44^low), can regenerate the tumour bulk that contains CD44^high cells (constituting 10% of the tumour bulk). Thus, luminal breast tumour cells can undergo de-differentiation to obtain a more basal CD44^high phenotype, suggesting a strong plasticity of cells within the tumour [11].
EMT is another major mechanism fuelling tumour plasticity [12][13][14]. EMT is a process of cell-state transition in which cells lose their apicobasal polarity and gain a mesenchymal-like phenotype. In cancer, this transition includes a spectrum of states and can result in divergent clusters of hybrid cells showing a mixture of traits from the two ends of the epithelial-mesenchymal spectrum [15,16]. Recent findings indicate that this 'hybrid state' has a transcriptional profile with similarities to cancer stem cells and is governed by the activation of EMT-inducing factors such as SNAIL and of stemness maintenance pathways, such as canonical WNT signalling. A common feature of CSCs and cells in the hybrid state is their high tumourigenicity and stemness potential, making EMT a major contributor to cancer [17][18][19]. Interestingly, forcing cells into acquiring a fully differentiated epithelial or mesenchymal phenotype leads to a drastic drop in tumourigenicity. For example, the constitutive overexpression of the mesenchymal master regulator ZEB1 in breast cancer cells with a hybrid-state phenotype induces a full mesenchymal profile and a decrease in tumourigenicity that is accompanied by a switch to non-canonical WNT signalling [17].
Another main source of tumour heterogeneity is the spatial organisation of tumour cells and their interactions with the tumour stroma. The sub-cellular distribution of β-catenin is a classic example of tumour heterogeneity affected by environmental cues. The nuclear enrichment of β-catenin (a known sign of active WNT signalling) is particularly visible at the invasive front of APC-mutant colorectal tumours, whereas the proliferating cells of the tumour bulk show a more membranous β-catenin localisation [20]. This observation is partly due to the gradient of growth factors and cytokines secreted by the tumour microenvironment. The uneven diffusion of these signalling proteins (such as HGF and WNT ligands) can divide the tumour into different foci. Here, cells with a mesenchymal characteristic that show nuclear distribution of β-catenin are located at the tumour periphery and are in close contact with the tumour stroma [21,22].
In addition to phenotypic variation, our understanding of the extent of tumour heterogeneity has tremendously increased by using technologies that reveal features of tumour cells at the single cell level. One such approach is based on investigating the gene expression profiles by high-resolution single cell RNA-seq (scRNA-seq) analysis [23]. For example, a recent study of a BRCA1-null model of breast cancer indicates that tumour cells with a similar genetic background can cluster into different subpopulations that have distinct gene expression profiles. In this regard, the upregulation of cell cycle regulators (e.g., BIRC5, TYMS and MKI67) is only observed in a cluster of highly proliferative cells and not in the cell cluster that exhibits a progenitor-like phenotype (with the expression of the basal cell markers such as KRT14, IGFBP5, WNT10A). These observations suggest that cancer therapies that target cell proliferation may be less effective in eradicating the quiescent progenitor-like populations in these tumours [24][25][26].
Enhancers, the Epigenetic Playground
Given the strong cell plasticity observed within tumours, a question that emerges is how these reversible cell-state transitions are controlled at the epigenetic level. In this section, we discuss transcriptional enhancers, which are among the main sites of epigenetic regulation. Enhancers are cis-regulatory elements (CREs) that function as information routers connecting upstream signalling pathways to downstream genes [27]. It is suggested that enhancers regulate their target gene expression regardless of distance and orientation. Enhancers interact with transcription factors (TFs) that induce chromatin accessibility at these sites, and are often decorated with various histone marks (Table 1) that are assessed by chromatin immunoprecipitation (ChIP)-based assays. For instance, H3K4me1 (mono-methylation of lysine 4 at histone 3) is a general marker of poised and active enhancers, H3K27ac (acetylation of lysine 27 at histone 3) is mainly associated with active enhancers, while H3K4me3 (tri-methylation of lysine 4 at histone 3) is enriched at active promoters and H3K27me3 (tri-methylation of lysine 27 at histone 3) denotes poised or repressed enhancers (Figure 1) [28][29][30][31]. Some enhancers are also transcribed and give rise to non-coding RNAs known as enhancer RNAs (eRNAs) that can be used to assess enhancer activity [32][33][34]. Therefore, the profile of active enhancers ensures the spatiotemporal expression of target genes to sustain cell identity. The pattern of distribution, combination and sequence degeneration of TF binding sites (TFBSs) controls the regulatory output of enhancers [35]; thus, the deregulation of signalling effector TFs and transcription co-factors, and the disruption of DNA sequence, can affect enhancer function downstream of cell signalling (Figure 1).
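The mark-to-state correspondence summarised above can be sketched as a simple lookup. This is a deliberate simplification assuming hypothetical binary present/absent mark calls; real chromatin-state callers (e.g., ChromHMM) learn states probabilistically from many marks:

```python
# Minimal sketch: classify a region's chromatin state from binary histone-mark
# calls, following the simplified correspondence described in the text.
# Real tools (e.g., ChromHMM) infer states probabilistically from many marks.

def chromatin_state(marks):
    """marks: set of histone modifications called present at a region."""
    if "H3K4me3" in marks and "H3K27ac" in marks:
        return "active promoter"
    if "H3K4me1" in marks and "H3K27ac" in marks:
        return "active enhancer"
    if "H3K4me1" in marks and "H3K27me3" in marks:
        return "poised enhancer"
    if "H3K4me1" in marks:
        return "primed enhancer"
    if "H3K27me3" in marks:
        return "repressed"
    return "unmarked"

print(chromatin_state({"H3K4me1", "H3K27ac"}))   # active enhancer
print(chromatin_state({"H3K4me1", "H3K27me3"}))  # poised enhancer
```

The rule order matters: H3K4me3 takes precedence so that promoter-like regions are not mislabelled as enhancers when both marks co-occur.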
Using the above-mentioned features, an experimental approach based on a combination of ChIP-seq for histone modifications and TF binding, RNA-seq for tracking transcriptional activity, and DNaseI-seq or ATAC-seq for mapping chromatin accessibility can be used to identify functional enhancers. Furthermore, by scrutinising the DNA sequence at enhancer sites, for example, using motif search at open chromatin regions decorated by H3K27ac and H3K4me1, the cell-state-specific TFs can be annotated and further validated by immunoprecipitation approaches.
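The multi-assay intersection described above can be illustrated with a minimal sketch that nominates putative active enhancers as accessible regions carrying H3K27ac and H3K4me1 while excluding H3K4me3-marked, promoter-like regions. The interval data and the pure-Python overlap test are illustrative assumptions; genome-wide analyses would use dedicated tools such as bedtools:

```python
# Minimal sketch of the multi-assay intersection described in the text:
# nominate putative active enhancers as regions that are accessible (ATAC-seq)
# and carry H3K27ac + H3K4me1, while excluding H3K4me3-high promoter regions.
# Intervals are (start, end) tuples on one chromosome; real pipelines operate
# genome-wide with tools such as bedtools.

def overlaps(region, peaks):
    """True if region overlaps any peak (half-open interval logic)."""
    s, e = region
    return any(s < pe and ps < e for ps, pe in peaks)

def candidate_enhancers(atac, h3k27ac, h3k4me1, h3k4me3):
    return [r for r in atac
            if overlaps(r, h3k27ac)
            and overlaps(r, h3k4me1)
            and not overlaps(r, h3k4me3)]

atac    = [(100, 300), (500, 700), (900, 1100)]
h3k27ac = [(150, 350), (950, 1150)]
h3k4me1 = [(120, 280)]
h3k4me3 = [(900, 1000)]   # promoter-like region, excluded

print(candidate_enhancers(atac, h3k27ac, h3k4me1, h3k4me3))  # [(100, 300)]
```

Here (500, 700) is dropped for lacking both enhancer marks, and (900, 1100) is dropped because it coincides with an H3K4me3 promoter-like peak.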
For instance, applying motif discovery at RAS-dependent open chromatin regions shows the enrichment of AP-1 and Stat92E at enhancers that gain activity downstream of RAS signalling. In this case, the oncogenic recruitment of Stat92E to regulatory elements is deemed necessary for tumourigenesis and can be further validated by ChIP and mutagenesis experiments [54][55][56][57].
The exact sequence of events that leads to enhancer activation is not well established; however, pioneering factors and lineage-defining TFs (LDTFs) can access condensed chromatin regions and act as the initial step of enhancer activation. These TFs recruit epigenetic modifiers, e.g., the histone acetyltransferases CBP/P300 and/or the mono-methyltransferases MLL3/MLL4, to deposit H3K27ac and H3K4me1 at enhancers. These active enhancer marks are then recognised by epigenetic readers such as BRD4 that recruit transcriptional coactivators, the Mediator complex and the RNA polymerase II transcription machinery to maintain the expression of target genes [58,59].
In order to activate transcription, enhancers need to interact with their target promoters. These interactions are mediated by proteins such as Cohesin that enable chromatin looping. Data from high-resolution chromatin mapping show that the network of enhancer-promoter (EP) interactions changes in tumour cells and contributes to cancer progression. These alterations can be caused by the deregulation of proteins involved in chromatin organisation and looping (e.g., BRCA1 in luminal breast cells [60] or Cohesin in leukaemia [61]), epigenetic modifiers that control enhancer activity (e.g., EZH2 [62]), or structural genetic variants that bypass enhancer-flanking insulators, leading to enhancer hijacking [63]. Oncogenic TFs can also contribute to establishing specific EP interactions in tumours. For example, in prostate cancer, cancer-specific EP interactions involve enhancers that are enriched for and activated by oncogenic TFs such as FOXA2. These FOXA2-dependent enhancers engage in EP interactions that are specific to tumour cells and influence the expression of key oncogenes, such as the Androgen Receptor (AR) and DLX1 in prostate cancer [64]. Chromatin looping and EP interactions are also among the targets of cancer therapy. For example, in endocrine-resistant oestrogen receptor (ER)-positive breast tumours, inducing global DNA hypomethylation can overcome therapy resistance. This treatment demethylates and activates the ER-responsive enhancers that in turn establish new interactions with the promoters of tumour suppressor genes, leading to their activation and the suppression of tumour growth [65].
New advances in single cell technologies reveal the heterogeneity of enhancer activity within tumours. For example, mapping chromatin accessibility by scATAC-seq across glioblastoma cells not only confirmed the presence of stem-like cells within the tumours, but also revealed further diversity within the CSC population. Here, all CSCs share a fraction of active regions corresponding to genes involved in self-renewal and tumourigenicity; however, a fraction of accessible chromatin sites also showed diversity between CSCs. These sites include motifs for factors that affect invasion (FOXD1 and ALDH1A3), response to immune signalling (SP1) and neural commitment (OLIG2, AHR). Combined with scRNA-seq, these findings confirm the heterogeneity even within the tumour-initiating CSCs and highlight the potential challenges in targeted therapies [66,67].
The activity of enhancers can also be regulated by DNA methylation, which is deposited at cytosine residues and, based on the genomic location and the co-occurrence of other epigenetic marks, can have a positive or negative impact on transcription. Generally, CpG methylation at promoters and enhancers is followed by inactivating histone methylation marks and the formation of a condensed chromatin state [68], whereas gene-body methylation shows a positive correlation with transcription [69]. Global hypomethylation, e.g., via mutation in DNMTs, can potentiate the ectopic expression of oncogenes [52]. For example, studies in lymphoma indicate that the hypomethylated fraction of a genome usually harbours genes and CREs related to proliferation, differentiation, and negative regulators of the P53 pathway [70]. However, elevated levels of DNMTs can also contribute to tumourigenesis through the inactivation of tumour suppressor genes [53]. In breast cancer, for instance, the upregulation of DNMTs is necessary for tumour progression; here, the cancer stem cell subpopulation relies on DNMT1 to hypermethylate and suppress ISL1, which functions as a negative regulator of self-renewal in mammary stem cells and plays a tumour suppressor role in breast cancer [71,72].
Recent studies indicate that DNA methylation of cis-regulatory elements is highly heterogeneous within the tumour bulk. Spatial sampling of breast cancer cell populations revealed a divergent profile of DNA methylation across the tumour that is mainly detected at genes, such as GSTP1, FOXC1, ABCB1, PTEN, and TGM2, that contribute to drug resistance. This heterogeneity of DNA methylation is further increased after cancer therapy, in which a specific cluster of stem-like cells repopulates the tumour mass. Heterogeneity in DNA methylation is also observed in genes that regulate stem cell quiescence (e.g., SOX9, ALDH1L1, WNT5A and HOPX) and that are hypomethylated in a small fraction of CSCs [73]. Multi-region sampling in prostate tumours also detected a differential pattern of methylation at distal regulatory elements of tumour suppressor genes such as PTEN, TP53, and GSTP1. Moreover, heterogeneous DNA methylation was also observed at AR-responsive enhancers across the tumour bulk, generating clusters of cells with different sensitivity to androgen exposure. This heterogeneity could fuel later clonal evolution and hormone resistance in tumours [74].
Enhancer Dynamics and Adult Stem Cell Differentiation
A common route to tumourigenesis is epigenetic deregulation, particularly in adult stem and progenitor cells that are the cornerstone of tissue homeostasis (Figure 1). Haematopoiesis is one of the main paradigms for studying stemness in normal homeostasis and cancer, given the well-established cellular hierarchy in the haematopoietic system [75][76][77]. During haematopoiesis, haematopoietic stem cells (HSCs) are found in a relatively quiescent state, whereas blood cell repopulation is mainly driven by proliferative progenitors. In cancer, however, either the normal resident HSCs gain proliferative features, or the progenitor cells go through a de-differentiation process. In line with this, the re-activation of the epigenetic repertoire of progenitors and stem cells has been observed in cancer. For instance, HMGN1, a DNA-binding protein that regulates chromatin accessibility, serves as one of the main modulators of chromatin architecture in HSCs and myeloid progenitors, and is commonly amplified in myeloid malignancies. In myeloid progenitors, HMGN1 is crucial for cell-state maintenance, as it regulates chromatin accessibility and H3K27ac deposition by P300/CBP at HOX loci. Overexpressing HMGN1 in progenitor cells impairs their differentiation, promotes their proliferation, and causes a global increase in chromatin accessibility. The overexpression of HMGN1 therefore leads to the upregulation of oncogenes and loss of lineage-specifying regulators, such as C/EBPα, resulting in an expression profile similar to that of leukaemia stem cells [78][79][80]. The contribution of deregulated enhancers to neoplastic transformation is more evident in aging HSCs that accumulate mutations in epigenetic regulators. During aging, enhancers that control the differentiation, homeostasis, and apoptosis of myeloid/erythroid cells lose H3K4me1 and H3K27ac marking.
This loss of active epigenetic marks represses the expression of tumour suppressor genes, such as KLF6, BCL6, and RUNX3, leading to an increased susceptibility to cancer. Not surprisingly, the same repression pattern is also observed in cancer stem cells. Moreover, regulatory elements associated with potential oncogenes, such as GATA2, GFI1B, and EGR1, also gain active histone marks in aging HSCs [47]. Thus, the aging-related changes in the enhancer landscape appear to mirror some of the oncogenic alterations observed in CSCs, hinting at the notion that aging stem cells might have an increased chance of neoplastic transformation.
Switching between enhancers in different developmental states is another strategy that cells use to tailor their gene expression profiles. In haematopoietic stem and progenitor cells, alternative elements that engage in enhancer switching have different compositions of TF binding sites that include various differentiation regulators (such as MYB, FLI1, LMO2, and RUNX1) and signalling-dependent TFs. This unique array of TF-binding sites and the differential expression of TFs determines the state-specificity and the level of enhancer activity. Interestingly, the deregulation of these TFs is often detected in transformed cells, leading to enhancer switching and the usage of elements with more oncogenic impacts. In line with this, the expression of GATA2 and MYC in haematopoietic malignancies is controlled by the re-establishment of a series of enhancers that normally function in HSCs, further highlighting the contribution of stem-state-specific enhancers to cancer [81][82][83].
The intestine is another tissue with rapid turnover and dynamic cell plasticity. Transcriptional analysis of the stem-like subpopulation of colorectal tumours indicates its similarity with LGR5+ stem cells residing in the intestinal crypt. Thus, understanding the epigenetic mechanisms controlling plasticity in normal stem cells can shed light on the contribution of stem cells to tumourigenesis and intratumour heterogeneity [84,85]. Under normal homeostasis, the highly proliferative LGR5+ stem cells support the continuous turnover of the intestinal epithelium. Upon injury, and when the stem cell population is depleted, other reserve stem cells (such as Bmi1+, Hopx+, or Lrig1+ cells) that normally have a low proliferation rate can replenish the LGR5+ stem cells. In addition, more committed secretory and enterocyte (absorptive) progenitors can also de-differentiate to LGR5+ stem cells to support intestinal regeneration upon injury [85][86][87][88]. The profound cell plasticity in the intestine suggests the dynamic reprogramming of enhancers during stem cell (de)differentiation. Using H3K4me2, which marks active and poised enhancers, a striking similarity was observed between LGR5+ stem cells, secretory progenitors, and absorptive progenitor cells. These findings indicate that many enhancers that are active in progenitor cells are already primed in LGR5+ stem cells. A large number of these enhancers also show the H3K27ac active mark in stem cells and progenitor cells, indicating a broadly permissive chromatin among intestinal crypt progenitors [89]. In the tumour context, the dynamics of histone modifications are vital to the CSC pool and are usually controlled downstream of stemness regulatory pathways. For example, it is known that LGR5+ intestinal cancer stem cells are dependent on canonical WNT signalling. In these cells, nuclear β-catenin needs to cooperate with the MLL1 methyltransferase for the activation of WNT-responsive elements.
MLL1 antagonizes the deposition of repressive H3K27me3 by PRC2 and marks the β-catenin-bound regions with the activating H3K4me3 mark, which controls genes such as LGR5, SMOC2 and IGFBP4 that are needed for maintaining stemness identity [45].
The analysis of DNA methylation in intestinal stem cells and their differentiated progenies also demonstrates that only a few promoters and enhancers change their DNA methylation status during stem cell differentiation. Thus, in contrast to ESCs and haematopoietic stem cells, intestinal stem cell differentiation does not require DNA methylation for the stable lock of gene expression. In addition, many differentiation-associated genes already show a hypomethylated status in intestinal stem cells, indicating an epigenetic priming of stem cells [90]. In contrast to normal homeostasis, alteration in DNA methylation is one of the earliest changes in neoplastic LGR5+ intestinal stem cells downstream of oncogenic APC mutation. Although the pattern of DNA methylation does not change drastically during normal differentiation, APC-knockout stem cells acquire a distinct methylation landscape. The impaired methylome is mainly observed in intergenic and intronic regions, affects genes that control stem cell self-renewal and attenuates stem cell differentiation. This impeded commitment to differentiation results in the accumulation of LGR5+ intestinal stem cells in tissues after the loss of APC and hyperactivity of WNT signalling. In fact, suppressing de novo methyltransferases in APC-negative intestinal organoids could sensitize the LGR5+ stem cells to differentiation stimuli. These data confirm the importance of DNMT activity in transformed stem cells and its contribution to tumourigenesis. Aside from hindering stem cell differentiation, deregulated DNA methylation can also lead to the reactivation of transposable elements (TEs). Evoked by hypomethylation, the transposition of TEs predisposes cells to genomic instability, which contributes to cancer progression by increasing the level of chromosomal aberrations and genomic gain at oncogenes [91,92].
Enhancer Dynamics and EMT
EMT is one of the main mechanisms fuelling tumour plasticity and heterogeneity and entails dynamic changes in transcriptional and epigenetic landscapes (Figure 1) [12,93]. Studying EMT in normal and transformed cells indicates a dependency on global epigenetic reprogramming that starts with extracellular cytokines or the induction of EMT master regulators (e.g., TWIST and SNAIL). DNA methylation is one of the waves of epigenetic changes that is pivotal for the EMT cell-state transition, as suppressing DNMTs (e.g., by chemical inhibition of their functions) impairs EMT, even when the EMT-inducing signal persists [93][94][95]. The reprogramming of the DNA methylome starts shortly after EMT induction (e.g., by TGFB) and lasts until the cells commit to a mesenchymal state. In transitioning cells, hypermethylation occurs at CpG-containing regulatory regions that associate with epithelial identity and cell cycle progression, resulting in chromatin condensation at these sites. Histone modifications are another layer of epigenetic regulation that dynamically changes in response to an elevated level of EMT master regulators such as SNAIL [48,96,97]. SNAIL mainly functions as a transcriptional repressor and modulates the loss of H3K27ac and H3K4me3, as well as the enrichment of H3K27me3, by recruiting the corresponding chromatin modifiers to the promoters of epithelial markers. However, later in the transition, SNAIL induces positive histone marks (H3K4me3 and H3K4me1) on the promoters of mesenchymal genes to support the mesenchymal commitment [48].
Among different extracellular signals, TGFB is one of the most potent inducers of EMT and triggers a global epigenetic reprogramming in target cells. Several signalling cascades function downstream of TGFB to transmit the signal to target genes. For example, in mammary epithelial cells, ERK signalling plays a crucial role downstream of TGFB in regulating H3K27ac deposition at enhancers. In this regard, enhancers activated by TGFB are highly enriched for TFs such as GABPA, JUN, RUNX1, and ATF3 that function downstream of ERK signalling and have prominent roles in EMT. Furthermore, when ERK signalling is inhibited, cells treated with TGFB fail to express early EMT regulators such as HMGA2, ITGA2, and TGFBR1 due to a lack of H3K27ac deposition at the corresponding enhancers. Thus, ERK signalling is necessary for acquiring H3K27 acetylation at enhancers and the induction of the EMT process [98,99]. The TGFB-induced EMT is not always conveyed through canonical TFs (e.g., SNAIL or ZEB1). In alveolar carcinoma cells, an unconventional trio of ETS2, HNF4A and JUNB is overexpressed in the E/M hybrid phase of EMT and is necessary for the formation of TGFB-induced super enhancers. These super enhancers control the expression of mesenchymal markers such as FOXP1 and CDH2 in transitioning cells. The synergistic effect of ETS2, HNF4A and JUNB is crucial for enhancer activity, as their suppression impairs the EMT process through loss of enhancer activity [100].
The flow of information from the extracellular EMT inducers to chromatin is not well understood. However, several master regulators of EMT, such as ZEB1 and SNAIL, have been shown to control downstream epigenetic modifiers. For example, ZEB1 increases H3K4me3 deposition at EMT-related regulatory elements by controlling the histone methyltransferase SETD1B. This regulatory pathway is also involved in colorectal cancer (CRC), in which increased levels of ZEB1 and SETD1B correlate with high tumour invasion and poor prognosis [46]. Hijacking epigenetic modifiers by EMT regulators is not restricted to epigenetic activators; a case in point is ZEB1, which can suppress epithelial markers (e.g., E-cadherin) via interaction with HDAC1 [101], the BRG1 chromatin remodeller [102] and DNMT [103]. Thus, the combinatorial effects of altered epigenetic activators and repressors set the transcriptional stage during the EMT process [104]. The EMT master regulators can also control the binding pattern of epithelial TFs at regulatory elements. In CRC, for instance, SNAIL disrupts the activity of epithelial-specific enhancers through the transcriptional repression of FOXA1. The FOXA pioneering factors are integral to epithelial homeostasis and their chromatin binding is crucial for commissioning the epithelial-specific program. Thus, a reduction in FOXA1 at enhancers of key epithelial genes (such as CDH1, EPHB3 and CDX2) can lead to decreased H3K4me1 and H3K27ac at these sites and to transcriptional changes that favour the mesenchymal transition [105,106].
In CRC, SNAIL also modulates the transcriptional program of WNT signalling in favour of EMT by changing the balance of available WNT effectors and suppressing some of the EMT inhibitors downstream of WNT signalling. Although WNT signalling is known to act as a positive contributor to EMT, it also regulates the homeostasis of the normal intestinal epithelium. In this context, some WNT-responsive regulatory elements enhance the expression of epithelial genes and can negatively impact EMT. One of these WNT-responsive factors is EPHB2, a tumour suppressor gene that controls the distribution and organisation of proliferative epithelial cells in normal crypts. The positive regulation by WNT is exerted through β-catenin/TCF7L2 binding at the EPHB2 enhancer. Upon EMT, SNAIL upregulates another WNT effector, LEF1, which competes with TCF7L2 for binding to β-catenin at the EPHB2 enhancer. The β-catenin/LEF1 complex decommissions the EPHB2 enhancer and abrogates its responsiveness to active WNT signalling during EMT [107,108].
EMT regulation can be cell-type specific, and the downstream epigenetic changes can vary depending on the EMT stimulus. For example, EMT induction in renal, alveolar and breast immortalised cells reveals that the changes in histone methylation (at H3K27, H3K4 and H3K9) are cell-type specific. This difference in epigenetic regulation is also observed when different EMT stimuli, such as TGFB/TNFα or EGF, are used. This context-specific epigenetic regulation is associated with different transcriptional outputs. For instance, TGFB treatment causes the downregulation of E-cadherin in all investigated cell lines, whereas the gain of mesenchymal markers such as Vimentin is detected in some cells. Nevertheless, a fraction of regulatory elements is similarly decorated with histone marks in all conditions, regardless of the source of EMT induction. Interestingly, these universal EMT elements control genes that function in extracellular matrix degradation (e.g., ADAM and MMP9) rather than EMT-related transcription factors [109,110].
Mutations Affecting the Enhancer Sequence
About 324 million genetic variants are known in the human genome, with an estimated 5 million sites differing between each individual genome and the reference sequence. These genetic variants mainly comprise single nucleotide polymorphisms (SNPs) or structural variants, including duplications, insertions, inversions, and translocations. A large fraction of genetic variants are low-penetrance risk factors that occur in the non-coding genome, including enhancers, promoters, insulators and ncRNAs. Genome-wide association studies (GWASs) indicate that these risk loci are overrepresented at, among others, TF binding sites [111] and cell-type-specific regulatory elements, suggesting that many of these risk loci are likely to affect enhancer function. Furthermore, mutations at enhancers can affect epigenetic marking in cancer. Recent studies based on whole genome sequencing (WGS) analysis indicate that active TF binding sites exhibit a higher mutation frequency than closed chromatin regions, due to the inhibition of DNA repair at these sites [112].
In general, genetic variants can be divided into somatic mutations, which are not passed to the next generation, and germline variants, which are inherited from parents and are present in germ cells. Although most somatic genetic variants in cancer occur in the non-coding genome, recent studies from the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) demonstrate that the number of driver mutations at enhancers is much lower than that of coding somatic mutations. Identifying driver mutations is usually based on the high recurrence of the variants in cancer or a strong functional impact on tumourigenesis. To identify the impact of non-coding genetic variants, the PCAWG consortium performed WGS in 38 tumour types, covering 2658 tumours and matched non-tumour samples. Based on this approach, 13% (785 out of 5913) of all identified driver point mutations were detected in non-coding regions. On average, 4.6 driver mutations per tumour are detected in coding and non-coding DNA, including 2.6 driver point mutations in coding and 1.2 driver mutations in non-coding genomes. Mutations at the TERT promoter represented the most frequent non-coding driver mutations and comprised ~one third (237 out of 785) of all non-coding DNA mutations. These mutations activate TERT expression and result in increased telomere length in somatic cells, which is associated with tumourigenesis. Among other non-coding mutations that affect tumourigenesis are point mutations at enhancers of FOXA1 in prostate cancer [27] or a mutation at the ADGRG6 enhancer in bladder cancer [113]. Somatic mutations can also form novel TF binding sites that mark the regions as putative oncogenic enhancers (Figure 2A). This gain of enhancer activity is reported in acute lymphoid leukaemia (ALL), in which a point mutation generated a MYB binding site. This ectopic binding site leads to MYB binding and a consequent increase in H3K27ac, generating an oncogenic super enhancer that induces TAL1 expression [114].
However, despite the larger fraction of somatic variants occurring in non-coding regions, only a small number of these mutations appear to be driver mutations when compared to the coding-region drivers [115].
In addition to point mutations, somatic structural rearrangements (deletions, duplications, inversions, or translocations) can also affect enhancer function by disrupting enhancer-promoter interactions or by translocating enhancers into the proximity of the target gene, consequently impacting gene expression (Figure 2B). Based on PCAWG data, structural variants can significantly influence gene expression. In general, many structural variants significantly increased gene expression, as was observed for the TERT, MDM2, CDK4, ERBB2, CD274, PDCD1LG2, and IGF2 loci. However, genetic rearrangements of nearby genes do not always increase the number of interacting enhancers but often lead to a closer proximity of enhancers to target promoters [116].
Somatic structural variants may also disrupt the boundaries of insulated genomic domains and can lead to deregulated gene expression (Figure 2C). Most enhancer-promoter interactions occur within topologically associating domains (TADs) [117] that insulate the regulatory activity of enhancers from neighbouring domains. In this regard, structural variants can lead to the fusion of TADs, the duplication of TAD boundaries, or complex rearrangements such as TAD inversions. The PCAWG found that structural variants affect TAD boundaries in 5.0%, 8.5%, and 12.8% of all deletions, inversions, and duplications, respectively. Such examples have been observed in a TAD boundary deletion near the WNT4 locus in lymphoma and the SLC22A2 locus in breast cancer. However, PCAWG results indicate that structural variation at TAD boundaries does not strongly affect the expression of nearby genes; in only 14% of cases does deletion of a TAD boundary result in a significant expression change of nearby genes [116].
Mutations at Enhancer Regulators
It is important to note that enhancers operate as platforms for the epigenetic machinery. Thus, in addition to genetic variations in cis-regulatory elements, mutations in enhancer-associated proteins can also influence cell state, tumour formation and progression, via altered epi-decoration of enhancers [118,119].
Post-translational modifications at H3K27 are imperative for defining the activity of promoters and enhancers, and variations that influence these modifications can lead to altered transcription. This has been observed for KDM6 (also known as UTX1) and EZH2, which are the main demethylase and methyl-transferase of H3K27, respectively; mutations in these factors are frequently detected in cancer and can change the methylation pattern at H3K27 [49,50,120]. For example, in bladder cancer, which has one of the highest frequencies of KDM6A mutations, KDM6 inactivation leads to increased H3K27 methylation at regulatory elements that control tumour suppressor genes such as IGFBP3. IGFBP3 is a known pro-apoptotic factor, and its suppression leads to aberrant cell cycle progression and tumourigenesis [50]. KDM6 also functions as a component of the COMPASS-like complex, which demarcates enhancers by depositing activating H3K4 methylation marks. Thus, in addition to controlling the expression of tumour suppressor genes, the loss of KDM6 can lead to the irregular activity of enhancers that induce the expression of oncogenes such as KRAS and RUNX3 [121].
A precisely regulated pattern of H3K27 acetylation is pivotal for the maintenance of cell homeostasis, as mutations in acetyl transferases are frequently reported in primary [36] and relapsed tumours [37]. Inactivating mutations, ranging from point mutations to deletions, usually affect the HAT catalytic domain and can lead to a decline in histone acetylation at regulatory elements. This decrease in histone acetylation is usually observed at enhancers and promoters that control the transcription of tumour suppressor genes. For instance, the decreased activity of P300/CBP in keratinocytes results in a drastic drop in the expression of MIG6, a negative regulator of MAPK/ERK signalling, and promotes cell proliferation and tumourigenesis [38,42,122]. In some cases, other orthologous proteins can compensate for the defective HAT [39]. For example, CBP-deficient tumour cells depend on P300 to sustain H3K27ac not only at homeostatic genes but also for the oncogenic expression of MYC [39]. Of note, HAT compensation by other orthologues cannot entirely recapitulate the regulatory functions of the defective counterpart, as different HATs engage in distinct protein interactions. Histone acetyl transferases can also acquire point mutations outside the core catalytic domain that can subsequently alter protein-protein interactions [41]. Falling into this category is the P300 S89A mutation, which disrupts its interaction with β-catenin in the intestinal epithelium. Impaired WNT/P300/β-catenin signalling results in the downregulation of genes involved in differentiation, metabolism, and cell-cell interaction. Mouse models carrying the S89A mutation are highly sensitive to intestinal insults and are predisposed to cancer formation [40].
The other side of the histone acetylation balance is controlled by histone deacetylases such as HDAC1. A multi-omics study in liposarcoma shows that HDAC1 is mutated in 8.5% of primary tumours. Mechanistically, HDAC1 inactivation leads to the deregulated expression of lineage-defining TFs such as C/EBPα, enhancing the undifferentiated state of tumour cells [123]. In another case, CRC cells that carry a frame-shift mutation in the HDAC2 gene become refractory to anti-proliferative drugs. This loss of HDAC2 correlates with increased histone acetylation at regulatory elements that control various pathways involved in cell proliferation [124,125].
Considering the importance of H3K27 in accepting various activating or inhibitory modifications, mutations that occur at this histone residue can also directly impact epigenetic regulation. Oncogenic missense mutations in H3K27 (e.g., H3K27M) have been reported in several cancers, especially early-onset gliomas. This dominant-negative mutation, which usually affects only one of the H3-coding genes, can affect the expression of oncogenes and tumour suppressors, and in some tumours (such as low-grade glioma) contributes to poor prognosis and overall survival [126]. Mechanistically, di- and tri-methyl deposition on H3 is reduced due to the stalling of PRC2 over H3K27M. Although the mutant histone residue does not impair the recruitment of PRC2, it inhibits the spread of methylation along regulatory elements. Genes that are affected by this methylation impairment are among stemness and lineage-defining regulators that do not require a high level of expression to induce oncogenic transformation [127,128]. In addition, inefficient and aberrant methylation at regulatory elements can interfere with the chromatin landscape and can promote the activation of enhancers. Thus, the presence of this onco-histone results in heterotypic nucleosome formation with reduced methylation and an elevated level of acetylation at the wild-type H3K27. This altered landscape of H3K27 modifications ultimately affects MAPK and Rho-associated GTPase signalling, which, along with the deregulation of lineage-specific genes, contributes to glioma formation [121].
Epigenetic Cancer Drugs that Target Enhancers
Chemicals that target the epigenome are among the front liners of cancer therapies. Such therapies include approaches that aim at restoring the expression of tumour suppressor genes by resolving the repressed chromatin state at corresponding regulatory sites. Inhibiting DNMTs and the PRC complex (counteracting DNA and histone methylation) are among such approaches (Figure 3A). In this regard, the cytosine analogue 5-aza-2′-deoxycytidine has been one of the first clinically approved epi-drugs that could successfully eradicate tumour cells by inactivating the DNMTs [129]. DNMTs can also be inhibited by naturally occurring compounds such as Shikonin and the non-nucleoside inhibitor RG108, both of which are known to restore the expression of tumour suppressor genes such as PTEN [130]. As methylated DNA can serve as a substrate for further histone methylation [68], targeting the histone methyl transferases serves as another approach for reactivating tumour suppressor genes. For example, EPZ-6438 (an EZH2 inhibitor that represses PRC2 activity) inhibits the proliferation of cancer cells in haematological malignancies [51,131,132] and induces apoptosis by restoring the expression of pro-apoptotic factors (such as FBXO32) [133].
In addition to histone methylation, aberrant histone acetylation is also a major influencer of suppressor gene and oncogene expression (Figure 3B). Therefore, enhancer reprogramming via targeting histone acetylation provides a promising therapeutic strategy in cancer. One such strategy is based on the upregulation of HDACs or the inhibition of HAT activity, leading to a loss of acetylation at CREs associated with oncogenes [134]. Such examples include the use of small molecules, such as Comp 5, which induces SIRT1 catalytic activity, resulting in the deacetylation of H3 in glioma, or the use of HAT inhibitors such as CCS1357 for targeting P300/CBP in prostate cancer, leading to the downregulation of AR and MYC [43,135]. The combined inhibition of epi-factors, e.g., P300/CBP (by GNE-781) and BRD4 (by OTX015), has also been applied for decommissioning oncogenic enhancers (e.g., MYC enhancers) and can increase anti-tumour efficacy [136,137]. Targeting tumour-associated super enhancers also provides a promising epigenetic therapy, as addiction to oncogenic enhancers can be observed in many tumours [138]. Another strategy for targeting histone acetylation is based on using HDAC inhibitors to impede global histone deacetylation, particularly at enhancers associated with tumour suppressor genes (Figure 3B). Such examples include the use of SAHA, an HDAC inhibitor that restores the expression of tumour suppressor genes contributing to autophagy, apoptosis, and G2/M arrest in cancer cells [139].
However, the effects of HDAC inhibitors are not always fully predictable due to the complex interactions between epigenetic regulators. For instance, HDAC inhibitors, such as largazole, induce global hyperacetylation across the genome, most notably at poised enhancers. Here, a low dose of largazole increases H3K27ac and H3K9ac levels and enhances transcriptional activity, as anticipated. Surprisingly, treating cells with higher doses of HDACi strips acetylation from H3K27 residues at enhancers and super enhancers, leading to the repression of associated genes, such as MYC and AP-1, by halting the RNA polymerase II complex. Consequently, this unexpected global enhancer decommissioning drastically affects the proliferation of transformed cells, as they are more reliant on super enhancer-regulated pathways [44].
Another approach for modulating oncogenic enhancers is to inhibit the commissioning TFs (or signalling effectors) alongside the general epigenetic modifiers [140]. For instance, the simultaneous targeting of BRD4 (by BETi) and oncogenic pathways, such as WNT and MAPK, in CRC inhibits the oncogenic expression of MYC and decreases the risk of tumour resistance [141]. For this aim, it is important to identify the dependency on key TFs by interrogating the landscape of active enhancers in tumour cells [142,143]. Profiling active enhancers was recently applied in meningioma, and it could not only stratify different tumour subtypes, but also proposed druggable enhancers and their dependencies on upstream signalling pathways [142]. The efficacy of co-inhibition strategies is more evident in tumours with hormonal dependencies. For instance, the co-inhibition of PRC2 and glucocorticoid receptor (GR) in lymphoma could drastically halt tumour proliferation by harnessing the GR addiction of oncogenic enhancers [144,145]. A similar approach applies for managing ER-α-induced enhancers in breast and endometrial tumours, where inhibiting epigenetic factors enhances the anti-tumour efficacy of hormone therapy [146,147]. However, considering the ever-changing repertoire of epigenetic regulators in cancer, there is always a risk of developing resistance. Tackling this problem requires holistic analysis to map the activity of enhancers and their associated factors through the course of tumour treatment and cancer progression.
Conclusions
In this review, we have discussed the epigenetic deregulation of transcriptional enhancers and how it fuels cell plasticity in cancer. We discussed the spectrum of enhancer reprogramming during EMT and lineage differentiation of adult stem cells and highlighted examples of how these cell-state transitions underlie cell heterogeneity in tumours. As enhancers connect upstream signalling to the downstream gene expression program, investigating the enhancer profiles in tumours can reveal the key chromatin factors that control the transcriptional programs in tumour cells. Furthermore, we covered findings of the PCAWG project on how structural variations that disrupt the sequence or the positioning of enhancers (in relation to their target genes) influence the activity of enhancers in cancer. New advances in high-resolution imaging and single-cell analysis (e.g., by a combination of scRNA-seq, scATAC-seq, and low-input/single-cell ChIP-seq and DNA methylation analysis) will provide further insights into the heterogeneity of enhancer activity in different tumour subpopulations. Furthermore, the role of cancer-associated polymorphisms and structural variants at enhancers remains largely unknown; new approaches using the CRISPR-Cas9 system for (epi)genetic editing will provide new methods for testing the functional contribution of non-coding variants in cancer. Finally, how tumour cells adapt their epigenome to acquire resistance to cancer therapies remains an important area for further investigation. Understanding this epigenetic adaptation will provide opportunities for using epigenetic drugs that target enhancers to modulate cell plasticity within the tumour (e.g., by inducing differentiation in stem-like tumour cells).
Conflicts of Interest:
The authors declare no conflict of interest.
Endoscopic Removal of Granular Cell Tumors of Stomach: Case Report and Review of Literature
Gastrointestinal granular cell tumors (GCTs), usually benign, soft-tissue tumors, are thought to arise from Schwann cells and may occur at many sites. Only 5-7% of these lesions are detected in the gastrointestinal tract. Histologically, a GCT is composed of sheets or nests of plump round or polygonal cells having abundant, slightly amphophilic granular cytoplasm with centrally located uniform pyknotic nuclei, and immunohistochemical staining for S-100 protein supports the proposed derivation from Schwann cells. In this study, we report a case of a solitary GCT of the stomach that was completely removed by endoscopic submucosal resection.
Introduction
Granular cell tumors (GCTs) are uncommon, usually benign, soft-tissue tumors rarely seen in clinical practice. In the past, they were called granular cell myoblastoma because of a suspected muscle origin. Almost always benign, GCTs are found in patients of all ages with equal frequency in both sexes, and there appears to be a relatively greater prevalence in blacks than in whites [1]. Although almost any organ may be involved, 70-80% of GCTs appear as small asymptomatic masses in the skin, subcutaneous tissue, or mouth, particularly on the tongue [1]. The onset of this tumor in the gastrointestinal (GI) tract is rare. Almost 8% of all GCTs occur in the GI tract, and the most common locations are the esophagus and large intestine [2]. We report a case of a woman with a solitary GCT of the stomach, incidentally found at upper GI endoscopy, which was completely removed endoscopically.
Case Report
A 54-year-old Nigerian woman was referred to our department for further evaluation of abdominal pain. She had no remarkable past medical history and no history of alcohol consumption, smoking or drug use. She underwent endoscopic examination during a routine checkup. Upper GI endoscopy was performed and revealed a small submucosal lesion of about 1.2 × 1 cm in diameter, located on the lesser curvature of the gastric antrum. The esophagus, duodenum and the remaining parts of the stomach were normal. Upon hospitalization, physical examination and biochemical parameters were completely normal. Endosonography demonstrated a homogeneous, hypoechoic, clearly demarcated 1.2 × 1 cm mass, which was confined to the submucosal layer and above the muscularis propria (Fig. 1). It was challenging to confirm the diagnosis. Because the tumor was relatively small in size, it was considered for endoscopic resection.
Manuscript accepted for publication December 19, 2013

Hypertonic saline-epinephrine solution was injected to separate the tumor from the muscularis propria layer and to prevent bleeding. The tumor was lifted with placement of elastic bands over the tissue to produce mechanical compression over the lower end of the tumor (Fig. 2), then cut electrically with an inserted high-frequency snare, and submucosal resection was done. No post-procedural complications, such as bleeding or perforation, occurred. The excised specimen showed complete removal of the lesion. Cross-sections showed the tumor to be well-defined, homogeneous, solid and of yellowish color.
In the resected specimen, the tumor measured 1.2 × 0.6 × 0.5 cm in diameter. Histologic appearance showed submucosa to contain a lesion which is circumscribed and composed of nets and fascicles of cells with abundant granular cytoplasm and vesicular nuclei. Wisp of collagen is seen intersecting the lesion (Fig. 3). The granules were positive for periodic acid-Schiff stain, and also were immunoreactive to NSE and S-100 (Fig. 4). The diagnosis of GCT was made. The post procedure recovery was uneventful. She remained asymptomatic and no recurrent disease was observed after a 1-year follow-up.
Discussion
GCTs were defined for the first time by Abrikosoff in 1926. GCTs occur as intramural lesions throughout the GI tract.
It has become obvious that they may occur at many sites, although they most frequently affect the skin or subcutaneous tissues of the chest and upper extremities, tongue, breast and female genital organs, and only rarely the GI tract [3]. At least half of the reported patients were black.
In addition, in half of cases, gastric GCT proved to be associated with a synchronous esophageal localization and was rarely associated with other benign or malignant gastric diseases. Similar to our case, only three cases have been reported in the world literature in which gastric GCT showed neither multiple locations nor association with other gastric lesions [4].
The tumor presents as a small nodule or plaque with grayish-white to yellow color endoscopically and usually not greater than 2 cm that originate from the deep mucosa or submucosa [3,5]. On cut section, GCTs are pale, yellow-tan or yellow-gray. The cells are of Schwann cell origin, rounded, polygonal or spindled, and have a small/rounded nucleus [6].
Histologically, GCT is composed of sheets or nests of plump round or polygonal cells having abundant slightly amphophilic granular cytoplasm with small, round, centrally located uniform pyknotic nuclei [7]. Immunohistochemical staining for S-100 protein supports the proposed origin of the tumor from Schwann cells and myelin proteins [7,8]. GCTs show immunoreactivity also for vimentin, NSE, CD68 and CD57 [9,10]. Recently, Parfitt et al [11] demonstrated expression of an intermediate filament protein called nestin (found normally in neuroectodermal stem cells and early skeletal muscle) in GCTs, some of which were located in the esophagus [12]. Nestin might be regarded as a useful marker for identifying GCTs.
There is controversy concerning the histogenesis of GCTs, and thus several synonyms have been used to describe this entity. Myoblasts, Schwann cells, histiocytes, perineural fibroblasts and undifferentiated mesenchymal cells have been postulated as the origin of the tumor [13], while theories of a non-neoplastic nature of the lesion, resulting from trauma, a degenerative process, or a storage disorder involving histiocytes, have also been considered. However, recent studies support a peripheral nerve-related cell of origin for the majority of these tumors, based on the findings of cytoplasmic granules with numerous membrane-bound vacuoles containing myelin-like tubules and "angulate bodies" found between granular cells, which show a close relation with pre-existent axons at the ultrastructural level [13]. The expression of nestin in GCTs suggests that these tumors may arise from a common multipotential stem cell in the GI tract, which has the capability to differentiate along both interstitial cells of Cajal and peripheral nerve pathways [14].
GCTs are generally benign neoplasms, and the malignancy rate is estimated to be 1 to 3% of all lesions [13]. There are reports of cases that have recurred or metastasized despite having a benign histologic appearance [13]. Features of malignant GCTs include local recurrence, large size (> 4 cm), rapid growth, invasion of adjacent organs and involvement of multiple layers of the GI tract [3,14]. Histologic features of malignant GCTs include necrosis, spindling, vesicular nuclei with large nucleoli, a high nucleocytoplasmic ratio, cellular pleomorphism and increased mitotic activity [15,16].
Endoscopic ultrasonography has recently been used more frequently for determining the depth of tumor invasion in the GI wall, and may also be useful to evaluate GI tract submucosal tumors [2]. On EUS, GCTs usually arise in the lamina propria or deep mucosa layers of the GI tract, are usually less than 3 cm, hypoechoic, mildly inhomogeneous, and have smooth margins if benign. They are usually slightly more echogenic than leiomyomas [17,18]. Tada et al [14] stressed that the treatment of choice for GCT should be determined by EUS findings; the tumor is amenable to endoscopic treatment when EUS shows that the tumor is localized in the submucosa and has not invaded the muscularis propria. If the tumor is initially separate from the muscularis propria, the distance between tumor and muscularis propria can be increased by injecting the solution and lifting the lesion, after which removal can be carried out more safely and completely.
Yasuda et al [19] used saline injection to increase the distance between the tumor and the muscularis propria when the tumor is attached to it. Saline injection combined with band ligation at the lower end of the tumor can be useful for complete removal of the tumor from its base. The tumor is drawn into the banding device with suction, and then the rubber band is placed around the tumor. Ligation takes place when the suction is applied over the tumor. The tissue should be appropriate for suction; in fibrotic and hard mucosal tissue, although suction is successful, the band may later release from the site.
In summary, EUS is very helpful in evaluating GCTs to achieve a tissue diagnosis and to evaluate for possible resection of the tumor. Saline injection along with band ligation in the base of the tumor helps to do the complete removal of tumor without complications. Major surgical resection is probably unnecessary when a small submucosal tumor is detected in the stomach of a patient. When the excised tissue reveals findings of malignancy, further surgical intervention ought to be considered.
The Need to Grow, Learn and Develop – How Does Management Affect Motivation for Professional Development?
This article argues that knowledge management and social recognition are important for organisational learning and professional self-esteem in academic libraries. An anonymous survey was issued in 2016 to investigate how library staff's self-esteem is affected by how they experience their management's view and overview of their knowledge. The need for what Axel Honneth refers to as social recognition will also be discussed as an important part of how professional self-esteem and work satisfaction are experienced, and further how this affects motivation to participate in professional development.
Introduction
Richard Branson's famous words "Train people well enough so they can leave, treat them well enough so they don't want to" are a good summary of how knowledge management should be carried out. Motivated employees who dare to think in new and innovative ways can be challenging to manage but are highly valuable to their company. In the changing world of information science and librarianship, the question of "adapt or die" is more relevant than ever. The need for 'learning on demand' and constant professional development is becoming increasingly important for knowledge workers, making it even more important for their leaders to focus on knowledge management.
Knowledge management is a relatively new field of research, but an important one to knowledge organisations such as academic libraries. Worldwide, academic libraries are aiming to provide students and academic staff with information literacy skills to help them produce new knowledge and enhance their existing knowledge. Yet little attention has been given to librarians' own skills and to providing a plan for their professional development. IFLA has formulated the standard for "Continuing Professional Development: Principles and Best Practices" (IFLA, 2016), but few libraries seem to have an explicit focus on this yet. According to Townley (2001), Islam, Agarwal and Ikeda (2014) and Daland (2015), much more attention could be dedicated to the area of knowledge management in libraries. Librarians are knowledge workers, and a natural assumption would be that their professional self-esteem and job satisfaction are affected by whether or not they have the possibility for professional development. Axel Honneth argues that the need for social recognition is also important. This would indicate that being recognised by management for one's skills and competencies would help motivation and professional self-esteem. This article aims to investigate how management's attitude towards professional development affects librarians' motivation to participate in such activities and, further, how their self-esteem and perception of professional capability are affected. Lines will be drawn to the philosopher Axel Honneth and his theories of social recognition (Anderson & Honneth, 2005; Honneth, 2008, 2012).
Methodology
A quantitative study of how knowledge is viewed, experienced and prioritised in Norwegian academic libraries is the basis for this article. An anonymous survey was conducted using the survey programme SurveyXact in the spring of 2016. The main goal of the survey was to map and understand how Norwegian librarians had experienced the transition to a new library management system. The questionnaire also included more general questions on how management and staff viewed competencies and knowledge in their libraries. The questions were mainly closed, but some open commentary fields were included to catch any other comments or opinions the respondents may have wanted to share. Descriptive statistics will be used to shed light on and investigate the research questions.
Staff in Norwegian academic libraries amount to 1,637 full-time equivalents (Statistics Norway, 2016). In total, 499 respondents completed the whole survey, and a further 127 gave some answers but did not complete the entire survey. This will be reflected in the N value in the graphs. The response of 626 library professionals makes for a response rate of 38%.
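The quoted response rate follows directly from these figures (a minimal check; all counts are those given above):

```python
# Survey figures given in the text
full_time_equivalents = 1637  # staff in Norwegian academic libraries (Statistics Norway, 2016)
complete_responses = 499      # respondents who completed the whole survey
partial_responses = 127       # respondents who gave some answers only

total_respondents = complete_responses + partial_responses
response_rate = total_respondents / full_time_equivalents

print(f"{total_respondents} respondents out of {full_time_equivalents} FTEs -> {response_rate:.0%}")
```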
There are some challenges to a quantitative survey issued to several academic libraries of different sizes and work cultures, and this will have to be addressed in the analysis of the data. To keep the survey anonymous, a link to the questionnaire was issued without linking the respondents to their answers. This may mean that some respondents have answered the questionnaire several times. Also, one must consider that those who chose to respond may be the ones with strong opinions, whether positive or negative.
Knowledge Management as Theoretical Framework
Knowledge management can be defined as "The creation and subsequent management of an environment which encourages knowledge to be created, shared, learnt, enhanced, and organized for the benefit of the organization and its customers" (Sarrafzadeh, Martin, & Hazeri, 2006, p. 624). It is a relatively new area of research, emerging over the past twenty years. Knowledge management is a task that should be engaged in first and foremost by management. Management is responsible for strategic planning of the enterprise and should lead the way forward. This applies not only to existing staff at the workplace, but also to strategically planning the need for new hires.
Libraries are knowledge organisations, and the knowledge of the library's staff ultimately decides what the library can do and what services it can offer its users. Therefore, library management should focus on managing knowledge. This entails building, holding, pooling and using knowledge (Wiig, 1993). A library needs to keep and uphold its present knowledge, but also to map out the knowledge gap in order to develop the further growth of knowledge and raise competencies in required areas. Different employees hold different knowledge and have different personalities. The most creative and innovative people may be the most challenging to manage, as they are constantly thinking critically about the way things are carried out and how they could be done differently. Management may have their hands full just by managing the everyday activity and may get frustrated by this. However, a firm grasp on knowledge management and a strategic plan for the future may help in getting the most out of the internal creativity. Creative employees whose ideas are constantly rejected may feel frustrated and demotivated, making them less interested in contributing to the development of the business. This provides another compelling argument for why libraries should be actively focusing on knowledge management. Knowledge can easily be linked to professional self-esteem. Having the knowledge needed to fulfil one's job will build professional self-esteem, and feeling confident in one's abilities will be helpful in taking on the challenge of learning new things. But how is professional self-esteem built? Is it staff's own responsibility to know what they know and further what they need to know, or does management need to play a stronger role in guiding their staff? The responses from the survey suggest that management must encourage staff to participate in professional development seminars and conferences in order for them to make professional development a priority. This should be a genuine interest for management, as motivated
employees will be more likely to develop their skills and add value to the staff. "Perceived investment in employee development (PIED) is developed through employees' assessment of their organizations' commitment to help employees learn to identify and obtain new skills and competencies that will allow them to move to new positions, either within or outside these organizations" (Lee & Bruvold, 2003, p. 983).
Self-esteem can be linked to the social recognition one is faced with, or not faced with, in a professional setting, or what can be described as "the result of an ongoing intersubjective process, in which one's attitude toward oneself emerges in one's encounter with an other's attitude toward oneself" (Anderson & Honneth, 2005, p. 131). In other words, we are not just individuals, but also part of a community that is the workplace. Collaboration and communities are important for learning and development, and additionally for self-esteem and job satisfaction. Communities of practice and social learning can function as an important catalyst for knowledge creation (Alavi & Leidner, 2001, p. 126), and knowledge management seeks to support communities of practice in creating and using knowledge (Townley, 2001, p. 45), making this a virtuous circle. However, different types of employees will have different agendas and motivations for learning and performing their tasks. For example, mastery goal oriented employees strive to develop their competence, skills, and ability for the sake of learning and mastering tasks in itself, whereas performance goal oriented employees aim to outperform others and to demonstrate superiority, meaning that they can be reluctant to learn new skills as they will see this as a threat when faced with tasks they do not master (Janssen & Van Yperen, 2004, pp. 370-371).
Knowledge workers are reliant on their own knowledge and competencies and the ability to learn and develop these to stay on top of their work tasks. The knowledge of co-workers is also important, as communities of practice will be an important part of everyday work life. When the knowledge is tacit, it can be difficult to identify what one knows and does not know. Professionals are constantly performing tasks they master without reflecting upon why and how they do it, or perhaps not even reflecting upon the fact that they are doing it at all. The need for a community of practice can be rooted in the need of learning new things, but also in getting reassurance that one's work responsibilities are carried out satisfactorily. "[…] self-trust is not a solo accomplishment. Its acquisition and maintenance are dependent on interpersonal relationships in which one acquires and sustains the capacity to relate to this dynamic inner life" (Anderson & Honneth, 2005, p. 135). The need for social recognition is present in most people. This can be described as the importance of mutual recognition: The importance of mutual recognition is often clearest in the breach. Consider, for example, practices and institutions that express attitudes of denigration and humiliation. They threaten individuals' own self-esteem by making it much harder (and, in limit cases, even impossible) to think of oneself as worthwhile. The resulting feelings of shame and worthlessness threaten one's sense that there is a point to one's undertakings. And without that sense of one's aspirations being worth pursuing, one's agency is hampered. This claim is neither exclusively conceptual nor exclusively empirical. It is, of course, psychologically possible to sustain a sense of self-worth in the face of denigrating and humiliating attitudes, but it is harder to do so, and there are significant costs associated with having to shield oneself from these negative attitudes and having to find subcultures for support. (Anderson &
Honneth, 2005, p. 131) Most people enjoy working towards a goal and receiving social recognition for their skills and efforts. The lack of such recognition may contribute to lack of motivation and further loss of professional self-esteem. Frustration may rise from working in an environment where no one cares about the knowledge that is present. Honneth also makes a distinction regarding the ideological function of social recognition, where its mere function is to "[…] encourage an individual relation-to-self that suits the existing dominant order. Instead of truly giving expression to a particular value, such ideological forms of recognition would ensure the motivational willingness to fulfil certain tasks and duties without resistance" (Honneth, 2012, p. 86). In this case, recognition can be viewed as a tool of power to motivate employees to do their job and not rock the boat.
Knowledge Sharing and Professional Development
Key concepts in knowledge management are mapping out, improving and sharing knowledge, referring to Wiig's model of building, holding, pooling and using knowledge. Every place of business should focus on raising competencies, keeping knowledge, sharing it and developing it further.
Knowledge is often divided into the distinction between explicit and tacit knowledge. "Knowledge that is uttered, formulated in sentences, and captured in drawings and writing is explicit. Explicit knowledge is accessible through consciousness. Knowledge tied to the senses, tactile experiences, movement skill, intuition, unarticulated mental models, or implicit rules of thumb is 'tacit'" (Nonaka & von Krogh, 2009, p. 636). Explicit knowledge is often thought of as what we are able to verbalise and explain to others, while tacit knowledge is deeply rooted in experiences and the way we carry things out without making an explicit reflection on it. Therefore, the difference between the two is often equated with the difference between "know-how" and "know-what" (Scarborough, 2008). Tacit knowledge can be difficult to put into words and transfer to others; in the same way, lacking tacit knowledge can be difficult to formulate as an information need. Because tacit knowledge is best transferred through experience, the socio-cultural aspect and the need for communities of practice must be acknowledged in knowledge organisations. Sharing knowledge in an organisation is often a spiral of tacit and explicit knowledge. Tacit knowledge was first described by Polanyi (1962). Tacit knowledge is difficult to identify and pass on. It can be learnt and adapted through conversations and observation, again stressing the need for socio-cultural learning. Being able to do something that one has done for a long time and no longer thinks about when doing it is difficult to make explicit and pass on to others using words. For new workers in an organisation it can, however, be observed and made into an explicit description. The epistemological dimension to organizational knowledge creation embraces a continual dialogue between tacit and explicit knowledge that drives the creation of new ideas (Nonaka, 1994, p.
15). Therefore, a knowledge management system cannot completely supply new employees with all the information they need. There will always be a need for a community of practice and socio-cultural learning.
Knowledge hiding is a new concept. "We contend that knowledge hiding is not simply the absence of sharing; rather, knowledge hiding is the intentional attempt to withhold or conceal knowledge that has been requested by another individual" (Connelly, Zweig, Webster, & Trougakos, 2012, p. 67). Knowledge hiding is likely to happen if knowledge is considered a means of power or leverage, most likely among performance goal oriented employees. Nevertheless, "Given the differences between corporate and academic environments, it may be the case that certain fields, education or librarianship, for example, are more conducive to a culture of sharing" (Burnette, 2017, p. 385). As information and knowledge are becoming increasingly multidisciplinary, knowledge sharing and collaboration are more important than ever for librarians (Daland & Hidle, 2016, p. 66).
Knowledge Management in Academic Libraries
Studies show that academic libraries focus more on information management than knowledge management, and several researchers conclude that this should change. "In fact, KM is frequently used -inaccurately -as almost synonymous with information management (IM). This usage is partly justified because IM and KM share a common purpose: to facilitate the shaping, distribution and sharing of knowledge to achieve business goals, objectives and strategies" (Johannsen, 2000, p. 43). Knowledge management is more than information management, as it contains a focus on how information is translated into knowledge.
The Human Resource Side of Knowledge Management
Knowledge management also means managing knowledge workers. Knowledge workers are hired to do a job, but first and foremost they are human beings and individuals. In order to get employees to do their best at work, they must be motivated. Being motivated can often be rooted in being seen and recognised. "Recognition should be understood as a genus comprising various forms of practical attitudes whose primary intention consists in a particular act of affirming another person or group" (Honneth, 2012, p. 80). This also highlights the difference between knowledge management and information management. "Typical IM issues include how an executive information system may influence decision making quality, how IT and information can be used to achieve competitive advantage, and how alignment between IT strategy and business strategy may be accomplished. KM, on the other hand, is much more people-oriented, focusing on human resource management issues such as learning processes, continuous education, culture, values and attitudes, etc." (Johannsen, 2000, p. 43). Innovative knowledge workers will be of high value to enterprises that need to develop and keep up with changes in their field of expertise. They will offer new perspectives and solutions to current work methods.
As a new employee, training is essential to understand work tasks and workflows. It is expected that new employees are met with a training program that will ensure that they are taught the necessary skills for doing their job. The employee will have some questions for the workplace, but new employees cannot be expected to be cognisant of all knowledge required. Further, an experienced worker cannot be expected to present a new employee with all the information they need. If asked directly, many experienced employees may be able to provide insight into questions involving tacit knowledge, but they may not be able to do so unprompted. Through observation, tacit knowledge can be detected and recorded for later use. Therefore, it is important that management can identify useful conferences and skills development seminars for new employees to attend and encourage them to do so. It is also important that employees themselves actively engage in providing reports and feedback from conferences and that they keep up to date on upcoming conferences and professional development events.
Results
The survey data has been used for other papers and is available at UiA Open Research Data. 1 The survey shows that librarians seem to have fairly high satisfaction with their own skills in regard to fulfilling their duties: over 50% report having the competencies and skills they need to do their job satisfactorily. Nonetheless, it is interesting to see what factors may have an influence on their sense of self-esteem. Men seem to have a slightly higher belief in their own abilities than their female co-workers, with nearly 77% of the men reporting to have the skills they needed, compared to only 63% of the women (Figure 1).
Education seems, surprisingly, to have very little impact, as the librarians who had finished a bachelor's degree reported the same level of confidence as their colleagues who had finished a Ph.D. (Figure 2). The respondents who answered that they believed their leader has a good overview of their competencies and skills seem to have a slightly higher confidence in their own work performance and professional abilities. More interestingly, staff reporting that their leader encouraged them to take part in professional development did participate in more conferences and seminars.
There is a clear correlation between staff attending conferences and their leader encouraging them to do so (Figures 3 and 4).
Overall, most of the respondents said they had not participated in continued education for professional development. But the ones who stated that their leader encouraged them to do so have a much higher level of participation.
Intriguingly, whether or not leaders encourage staff members to participate in interdepartmental skills development seminars does not seem to have an effect on participation (Figure 5). This could be explained by the fact that seminars at the actual workplace are easier to attend, and that there is a higher expectation of participation when the seminar is at the workplace.
Finally, it is interesting to see whether the respondents who think their leaders have a good overview of their skills and competencies feel more confident in their abilities to do their job satisfactorily (Figure 6). The y-axis of the graph represents the answers to the question "I believe I have the necessary skills and competencies needed to perform my tasks at work" and the x-axis shows the answers to the question "My leader has a good overview of my competencies and skills." Those who strongly agree that their leader has this overview also report a higher level of self-esteem.
Discussion
Knowledge management is without a doubt an important task for library management. In order to develop libraries and their services to be relevant in the future, a firm grasp on present knowledge and on what challenges may arise in the future is required.
This study suggests that employees who are encouraged to participate in professional development do so. It also implies that members of staff who believe their leader to have a good overview of their competencies and skills experience a higher sense of professional self-esteem.
Knowledge management is not used as such in academic libraries, though several studies indicate that this could be a fruitful approach (Daland, 2016; Islam et al., 2014; Townley, 2001). KM is useful for mapping out existing knowledge and where there are gaps, in order to inform strategic planning. Mapping out existing knowledge can not only help strategic planning, but furthermore point out staff members who have certain skills and competencies. This will invite inquiries from their co-workers, but also give them the recognition that may help motivate them to learn even more (Anderson & Honneth, 2005). Knowledge management is thus something that should be given greater attention. Also, the pooling of knowledge will not only benefit management in strategically planning the future and employment strategies. It will also help ensure employees are happier, more productive and more motivated to continue to learn and develop.

Fig. 6: "My leader has a good overview of my skills and competencies" crossed with "I believe I have the skills and competencies needed to perform my tasks" (N=409).
The need for socio-cultural learning and communities of practice is also an important issue in knowledge enterprises. Lloyd (2012, p. 773) stresses the people-in-practice perspective: "This perspective has as its starting point the idea that information literacy is a complex collective practice that is negotiated between people who are co-located and participating in the performances of a setting." If management truly knows what competencies and skills lie in the workforce, it will be easier to construct working groups and projects where communities of practice can develop and bear fruit.
Conclusions
Knowledge is the most valuable asset in knowledge enterprises like libraries. Knowledge cannot be transferred in its entirety to knowledge management systems because it is deeply rooted in the staff members. For staff to be motivated to learn more and to share their knowledge, it is important that they experience this being valued. Honneth speaks of social recognition, where we are part of a group and need validation and affirmation from our co-workers.
Communities of practice and socio-cultural learning can provide the framework we need to motivate ourselves to develop further professionally. One can use methods like this to make sure employees keep doing their jobs and don't rock the boat, but in the long run, innovative and motivated employees who are able to think new thoughts and dare to challenge established understanding will be of greater value, as they contribute to the evaluation and development of their field. Knowledge management is not only instrumental management of knowledge, but also a focus on human resource management issues, providing recognition and motivation for all staff members.
Fig. 4: "My leader encourages me to attend continued education and professional development" crossed with "In 2015 I attended x seminars for continued education" (N=370).
Adverse Condition and Critical Event Prediction in Commercial Buildings: Danish Case Study
Over the last two decades, there has been a growing realization that the actual energy performance of many buildings fails to meet the original intent of building design. Faults in systems and equipment, incorrectly configured control systems and inappropriate operating procedures increase energy consumption by about 20% and therefore compromise building energy performance. To improve the energy performance of buildings and to prevent occupant discomfort, adverse condition and critical event prediction plays an important role. The Adverse Condition and Critical Event Prediction Toolbox (ACCEPT) is a generic framework to compare and contrast methods that enable prediction of an adverse event, with low false alarm and missed detection rates. In this paper, ACCEPT is used for fault detection and prediction in a real building at the University of Southern Denmark. To make fault detection and prediction possible, machine learning methods such as Kernel Density Estimation (KDE) and Principal Component Analysis (PCA) are used. A new PCA-based method is developed for artificial fault generation. While the proposed method finds applications in different areas, it has been used primarily for analysis purposes in this work. The results are evaluated, discussed and compared with results from Canonical Variate Analysis (CVA) with KDE. The results show that ACCEPT is more powerful than CVA with KDE, which is known to be one of the best multivariate data-driven techniques, in particular under dynamically changing operational conditions.
Introduction
Over the last decade, the contribution of buildings' energy consumption to total energy consumption has been between 20% and 40% in developed countries (Lombard et al. 2007; Shaker and Lazarova-Molnar 2017). Today the figure points towards a contribution of around 40%. In addition, buildings account for approximately 20% of total CO2 emissions (Lazarova-Molnar et al. 2016). Thus, there is an excellent opportunity for reducing energy consumption and CO2 emissions if the general performance of energy-consuming equipment in buildings could be improved.
A traditional, and more passive, measure for improving the energy performance of buildings is to implement energy conservation measures such as more insulation in exterior walls, ceilings and floors, new insulating windows etc. (Tommerup et al. 2004), which remains important. However, with the emergence of new and smarter buildings and new intelligent building equipment, new measures can be implemented. The Danish government aims at a reduction in energy consumption in new buildings by 75% in 2020 relative to 2006 levels. In addition, by 2050 the energy consumption in existing buildings should be reduced by 50% (Government 2009). Thus, there is room for new and innovative solutions for reducing energy consumption to reach these goals (Jørgensen et al. 2015). Faults in buildings compromise energy performance and also cause occupant discomfort. There are different faults in buildings; examples are duct leakages in the ventilation system, simultaneous heating/cooling, and dampers in the ventilation system not working properly (Lazarova-Molnar et al. 2016). Thus, there is a need to detect those faults early so their impact on energy consumption is minimized.
In the U.S., the total energy consumption in commercial buildings has been divided into different end-uses as shown in Fig. 1. This figure shows how expensive a fault can be in terms of its energy use. Furthermore, studies show that 25%-45% of HVAC energy consumption is wasted due to faults (Akinci et al. 2011), and the most typical faults in commercial buildings are the ones shown in Fig. 2 (Roth et al. 2005), which tabulates the annual impact of each fault in terms of energy consumption.
Studies have shown that in 2009, only 13 of the most common faults in buildings have caused over $3.3 billion in energy waste in the U.S. (Mills 2011).
To improve the energy performance of buildings, Fault Detection and Diagnosis (FDD) methods are used. However, fault detection and diagnosis in buildings are challenging tasks. Early detection of faults and adverse conditions has been the subject of research in many fields. NASA Ames has recently released a tool called ACCEPT, which has shown to be very effective at comparing methods used for fault detection and prediction. ACCEPT has shown good performance compared to other state-of-the-art methods (Egedorf and Shaker 2017) when applied to data from the Cranfield Multiphase Flow Facility (Ruiz-Cárcel et al. 2015), which is a well-known benchmark example.

Fig. 1: Total energy consumption in U.S. commercial buildings divided into end-uses (Energy U.S.D.o. 2011). As seen, Space Heating accounts for 16% of the energy consumption and Ventilation for 9%, which are the two uses in which faults are addressed in this current work.

Fig. 2: The annual impact of faults in terms of energy consumption (Roth et al. 2005).
In this paper, ACCEPT is used for fault detection and prediction in buildings. A combined office and classroom building at SDU is used to evaluate the performance of ACCEPT in detecting and predicting faults. The performance of a method is determined by its False Alarm Rate (FAR), Missed Detection Rate (MDR) and Detection Time (DT), which are explained briefly in the paper. In order to allow the data from the building to be used in ACCEPT, methods such as KDE and PCA-based contribution plots are used. A new PCA-based method is also developed and introduced for artificial fault generation. While the proposed method finds applications in different areas, it has been used for analysis purposes in this work. The results from ACCEPT are evaluated, discussed and compared with results from CVA with KDE. CVA with KDE has proven to be one of the best performing state-of-the-art methods in FDD, both on the Tennessee Eastman Process Plant (Odiowei and Cao 2010) and on the Cranfield Multiphase Flow Facility (forming a real world data set) (Ruiz-Cárcel et al. 2015), and thus forms a good basis for comparison.
Description of case study: building OU44 in University of Southern Denmark (SDU)
The case study selected is building OU44 at SDU. The data extracted includes six different physical measures in each of four different rooms of the building. Thus, the data contains 4 × 6 = 24 different variables, and the data are captured from the beginning of September 2016 to the end of January 2017. The four rooms are the following:

• Ø20-601b-2 (Classroom on 2nd floor).
Refer to the SDU webpage 2 for a drawing of the building with the rooms. The physical measures from each room are:

• CO2: CO2 level in the room, measured in ppm.
• Radiator_valve: Degree of opening of heating unit.
• Temperature: Temperature in the room, measured in °C.
• Valve_control_from_CO2: Desired degree of opening of ventilation unit due to CO2 (increase ventilation if CO2 ppm is too high).
• Valve_control_from_temperature: Desired degree of opening of ventilation unit due to temperature (increase ventilation if temperature is too high).
• Valve: Degree of opening of ventilation unit.
The 24 variables are named and numbered according to the above two lists.
Data preprocessing
The data capturing frequency is threshold based, which means that new measurements are only captured when a measure crosses certain thresholds. For temperatures, this threshold is usually 0.1 °C; thus, if the temperature is constant for several hours, no data is captured, but when it increases or decreases by 0.1 °C a new measurement is captured. The data therefore needs to be re-sampled to a common frequency, to make the measures from the 24 variables correspond to the same time instances. The common frequency used is 5-minute intervals, which means that 25921 observations are present in the data set, spanning three months. Because of this re-sampling, some of the variables are constant for longer periods. The data has therefore been linearly interpolated. Since it became clear that variables 17, 18 and 19 do not carry any information, they have been removed, leaving 21 variables.
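The re-sampling and interpolation step can be sketched as follows. This is a minimal NumPy illustration with made-up timestamps and values, not the actual OU44 sensor data; in practice a library such as pandas could do the same with `resample` and `interpolate`.

```python
import numpy as np

# Event-driven samples as produced by threshold-based capture:
# timestamps in seconds and the corresponding temperature readings.
t_raw = np.array([0.0, 420.0, 720.0, 1200.0])   # 00:00, 00:07, 00:12, 00:20
temp = np.array([21.0, 21.1, 21.2, 21.3])

# Common 5-minute grid so all variables share the same time instances.
t_grid = np.arange(0.0, 1201.0, 300.0)          # 00:00, 00:05, ..., 00:20

# Linear interpolation onto the grid, as described above.
temp_grid = np.interp(t_grid, t_raw, temp)

# Drop variables that carry no information (here a constant dummy column
# stands in for the removed variables 17-19).
data = np.column_stack([temp_grid, np.full_like(temp_grid, 400.0)])
informative = data[:, data.std(axis=0) > 0]
```

The same grid is then reused for every room and measure, so the 24 (later 21) variables line up sample by sample.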
Furthermore, the preprocessed data is used for training since no faults are present in it. However, a testing and also a validation data set are required by ACCEPT; CVA requires only a testing data set. Thus, a set of faulty data needs to be generated. We have done this by introducing artificial faults, a method that will be further elaborated on in the methodology section. Of course, real data could be used as well by deliberately seeding faults in the building. However, this could potentially cause reduced comfort followed by complaints by occupants, in addition to the time-consuming process of collecting the faulty data. Before selecting or developing an Artificial Fault Generation (AFG) method, a review on how to artificially generate data is presented.
On artificial fault generation
One of the prevailing scientific paradigms in data-driven FDD research is to develop better methods (or improve on existing ones) and test their performance against known data sets. Among the most widely used data sets are those from the Tennessee Eastman process. Thus, data for training and testing purposes does exist to validate and compare different methods. Other approaches to generating data include model-based computer simulation (Kothamasu et al. 2004; Rodríguez et al. 2008) and data acquired in small test rigs or particular parts of a physical system (Ruiz-Cárcel et al. 2015). However, what should be done when data from a system is insufficient or missing, that is, when we face a lack of data and, in this case, of faulty data? Of course one could carry out physical tests, as in (Ruiz-Cárcel et al. 2015), to generate faulty data, but that would require a considerable amount of work or might not be possible due to restrictions (as mentioned, complaints by occupants using the building). Another approach could be to develop a mathematical model and perform simulations, but the complexities involved in simulating the real system are prohibitive. Since training data is available, as mentioned, the problem is to generate artificial testing and validation data sets from the training data set.
A method for the generation of artificial data is known as Virtual Sample Generation (VSG). The key problem this method tries to solve is the small data-set learning problem: when the training data sample sizes are small, biased learning results will be obtained. VSG can help to avoid this. Recent research on VSG is presented in (Li et al. 2017) and (Sha et al. 2013), but to be applicable to this current work, modifications are needed. The main issue to address is that VSG generates virtual data based on knowledge from a small data set, and the generated larger virtual data set then shares the same distribution (or Membership Function, as in (Li et al. 2017)) as the small data set. We need the method to generate a faulty data set that exhibits different distributions (in faulty regions) than the healthy training data set, and the method needs to take into account the correlations in the data. As such, we want to address questions like the following: how much would the CO2 level change if the temperature drops by 3 °C, and how would the rest of the variables change? Considering the complexity of this, modified VSG will not be used in this current work; instead, another method is developed to address the mentioned needs.
The last remaining question is: at what time instances, and how often, do the faults occur? For this work, a simple approach is taken, but for future work a more complex approach was identified in the literature, such as the method in (Zhang et al. 2015), known as Fault Sample Generation (FSG). This is the study of how faults occur randomly, following certain statistical distributions and properties (such as the average lifetime of components). However, because of the complexity of these methods and the scope of this work, a simple approach is taken: faults occur on a random weekday every week for 12 weeks. Three typical faults will be considered: an open window during night, an open window during day, and a ventilation fault during day. More details about the artificial generation of these faults will be given later, in the section describing the results.
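The simple scheduling rule above (one fault on a random weekday each week for 12 weeks) can be sketched in a few lines. The seed, the weekday encoding and the mapping to the 5-minute sample grid are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# One fault per week for 12 weeks, each on a random weekday (Mon=0 .. Fri=4),
# expressed as a day index from the start of the data set.
fault_days = [week * 7 + int(rng.integers(0, 5)) for week in range(12)]

# Map day indices to starting sample indices on the 5-minute grid
# (24 h * 60 min / 5 min = 288 samples per day).
fault_starts = [day * 288 for day in fault_days]
```

Each `fault_starts` entry then marks where one of the three artificial fault signatures would be injected into the testing or validation data.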
Methods
The methods used include ACCEPT, as documented in (Martin et al. 2015), as well as KDE, as documented in (Ruiz-Cárcel et al. 2015) and (Odiowei and Cao 2009). PCA-based contribution plots will also be used to determine the variables contributing most to the faults (since ACCEPT requires a variable to predict). Then KDE is used to estimate the probability density functions of the relevant variables (depending on what the contribution plot shows) in the training data sets, to develop an empirical value for the ground truth. Finally, a PCA-based AFG method is presented.
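The KDE step can be illustrated with a small self-contained sketch: a Gaussian kernel with Silverman's rule-of-thumb bandwidth, evaluated on a grid, from which an empirical threshold is read off. The "healthy" samples below are synthetic, and the 99% level is an illustrative choice, not necessarily the ground-truth definition the authors used.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid):
    """Gaussian kernel density estimate of `samples` evaluated on `grid`,
    using Silverman's rule-of-thumb bandwidth."""
    n = len(samples)
    h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)   # Silverman bandwidth
    diffs = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
healthy = rng.normal(22.0, 0.5, size=2000)   # synthetic healthy room temperature

grid = np.linspace(healthy.min() - 2.0, healthy.max() + 2.0, 1000)
pdf = gaussian_kde_pdf(healthy, grid)

# Empirical 99% threshold: the grid point where the (Riemann-sum) CDF
# of the estimated density reaches 0.99.
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
threshold = grid[np.searchsorted(cdf, 0.99)]
```

Values of the monitored variable beyond `threshold` would then be labeled faulty when constructing the ground truth.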
A brief description of ACCEPT
In short, ACCEPT is a generic MATLAB-based framework for adverse effect and critical event prediction. ACCEPT is an architectural framework developed to compare and contrast the performance of a variety of machine learning and early warning algorithms. ACCEPT tests and compares these algorithms according to their ability to predict adverse events in arbitrary time-series data from systems or processes. This ability (or performance) is measured using the previously mentioned metrics MDR, FAR and DT (Martin et al. 2015).
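As an illustration of these metrics, the sketch below computes per-sample false alarm and missed detection rates and a detection time from an alarm sequence and a ground-truth fault flag. These are common textbook forms; ACCEPT's exact internal definitions may differ.

```python
import numpy as np

def alarm_metrics(alarm, truth, t_step=5.0):
    """False alarm rate, missed detection rate and detection time
    (in units of t_step, here minutes) for one alarm/ground-truth pair."""
    alarm = np.asarray(alarm, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    far = (alarm & ~truth).sum() / max((~truth).sum(), 1)  # false alarms / healthy samples
    mdr = (~alarm & truth).sum() / max(truth.sum(), 1)     # misses / faulty samples
    hits = np.flatnonzero(alarm & truth)                   # correctly flagged samples
    onset = np.flatnonzero(truth)                          # fault onset
    dt = (hits[0] - onset[0]) * t_step if hits.size and onset.size else None
    return far, mdr, dt

# Fault from sample 3 to 6; one spurious alarm, detection from sample 4 on.
truth = [0, 0, 0, 1, 1, 1, 1, 0]
alarm = [0, 1, 0, 0, 1, 1, 1, 0]
far, mdr, dt = alarm_metrics(alarm, truth)   # far = 0.25, mdr = 0.25, dt = 5.0
```

A good method drives FAR and MDR towards zero while keeping DT small, which is exactly the trade-off ACCEPT's comparisons are built around.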
ACCEPT is patterned after, and shares the same basic composition as, the Multivariate State Estimation Technique (MSET). MSET is an existing state-of-the-art method used for prediction of adverse events in advance of their occurrence and was originally used in nuclear applications as well as aviation and space applications. However, as distinct from MSET, ACCEPT is an open-source tool that allows users to choose from a variety of machine learning algorithms that can be tuned via hyperparameter optimization using the regression toolbox. Furthermore, additional detection algorithms based upon hypothesis testing go beyond the standard SPRT (Sequential Probability Ratio Test) hypotheses offered by MSET in the detection toolbox.
As shown in Fig. 3, all data can be pre-processed, which means that each variable in the multivariate data is centered to zero mean and scaled to unit variance using z-score normalization. Normalizing the multivariate data can be important since the data consist of different variables (or features), each with a different physical meaning. Feature selection is the process of selecting only the variables relevant to the process being monitored; some variables may carry no relevant information and should be removed before performing operations on the data (Chiang et al. 2001). Doing so reduces the computational burden, makes models easier to interpret through simplification, reduces overfitting, and helps avoid the curse of dimensionality (Bolón-Canedo et al. 2015; Tuv 2009; Okun 2011). Feature selection is usually not necessary at low dimensions, as in this work with only 21 features, and indeed it was found that neither feature selection nor normalization was needed here: satisfying results were achieved with ACCEPT without them (Martin et al. 2015). After any pre-processing, the training data is used in the regression toolbox to generate the prediction residual used in the detection toolbox, where different alarm systems predict adverse events while also taking the validation and testing data into account. The regression toolbox, represented on the left of the figure, contains many regression algorithms from which to generate the output of this box: the prediction residual based on training data. The algorithm chosen by the user processes a number of features (the multivariate time series), predicts a chosen target parameter based on these input features, and compares this prediction with the actual value to generate the prediction residual.
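The z-score step can be illustrated in a few lines (a minimal sketch in Python with our own function name; ACCEPT itself is MATLAB-based):

```python
import numpy as np

def zscore_normalize(X):
    """Center each column (variable) to zero mean and scale to unit
    variance, returning the statistics so the scaling can be reversed."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# Example: 4 observations of 2 variables with different physical units
# (temperature in degC, CO2 in ppm), so scales differ by orders of magnitude.
X = np.array([[20.0, 400.0],
              [22.0, 600.0],
              [21.0, 500.0],
              [23.0, 700.0]])
Z, mu, sigma = zscore_normalize(X)
```

After normalization every column has zero mean and unit variance, so variables with large physical scales no longer dominate the regression.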
This mapping of the target parameter characterizes the basic relationship/correlation between the input features and the target parameter (or response variable) used for regression. It is therefore important that the input features are adequate predictors of the target variable (Martin et al. 2015).
As mentioned, the prediction residual quantifies the difference between the actual value of the target parameter and the predicted value. An optimization problem is established, essentially the result of an f-fold cross-validation. The Normalized Mean Squared Error (NMSE) is the objective function of this optimization problem, subject to a regression-specific hyperparameter. The NMSE of the resulting residuals represents regression performance and is minimized when generating the residual (Martin et al. 2015). The lower the NMSE, the better the regression performance, although one needs to prevent over-fitting by acknowledging the bias-variance trade-off and "detuning" when necessary.
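A common way to compute NMSE is to divide the residual's mean squared error by the variance of the target (a hedged sketch; ACCEPT's exact normalization constant may differ):

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE of the prediction residual
    divided by the variance of the target (one common normalization;
    the scaling used inside ACCEPT may differ)."""
    residual = y_true - y_pred
    return np.mean(residual ** 2) / np.var(y_true)
```

With this normalization, predicting the target's mean gives NMSE = 1, so a useful regressor should score well below 1.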
In the detection toolbox (or step), a validation data set containing occurrences of adverse events is used in the design of an alarm system. This data set should in theory be drawn from the same distribution as the final testing data set, which also contains adverse events (Brutsaert et al. 2016). All detection algorithms use Receiver Operating Characteristic (ROC) curve analysis to enable the design of trade-offs between FAR and MDR; in all cases an equal trade-off is used. All detection methods are threshold based, so an alarm is triggered whenever the threshold resulting from the ROC curve analysis is crossed. The performance metrics that ACCEPT produces are defined as follows:
• FAR - An alarm is triggered at a time point that does not contain an example of a confirmed anomalous event in at least one time point in the next d time steps (Martin et al. 2015).
• MDR - No alarm is triggered at a time point where an example of a confirmed anomalous event exists in at least one time point in the next d time steps (Martin et al. 2015).
• DT - Time steps prior to the occurrence of a future adverse event, which is detected by the prediction system (Brutsaert et al. 2016).
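The two rate definitions above can be sketched numerically (a minimal illustration with our own function name; the exact window convention in ACCEPT, e.g. whether the current time point counts toward the horizon, may differ):

```python
import numpy as np

def far_mdr(alarms, events, d):
    """alarms, events: boolean arrays over time.
    An alarm at t is 'correct' if a confirmed event occurs in (t, t+d];
    this is a hedged reading of the definitions in the text."""
    n = len(alarms)
    # event_ahead[t]: True if any event occurs within the next d steps
    event_ahead = np.array([events[t + 1 : t + 1 + d].any() for t in range(n)])
    false_alarms = alarms & ~event_ahead   # alarm with no upcoming event
    missed = ~alarms & event_ahead         # upcoming event with no alarm
    far = false_alarms.sum() / max((~event_ahead).sum(), 1)
    mdr = missed.sum() / max(event_ahead.sum(), 1)
    return far, mdr
```

For example, an alarm with no event in the next d steps counts toward FAR, while a quiet detector just before an event counts toward MDR.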
Prior to using the ROC curve for design purposes, an optimization problem is established to maximize the Area Under the ROC Curve (AUC). A Linear Dynamical System (LDS, labeled as "Kalman Filter") is obtained from the residual output, and both the LDS parameters learned from training data and the adverse events contained in the validation data set are used in the optimization. The AUC optimization problem is parameterized by the state dimension n of the LDS and the prediction horizon d, which take the values n_opt = 2 and d_opt = 1, respectively.
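As a standalone illustration of choosing an operating point with an equal FAR/MDR trade-off, the following sketch sweeps candidate thresholds directly over detector scores (our own function name and a simplification; ACCEPT applies the threshold to the LDS output and reads the trade-off point off the ROC curve):

```python
import numpy as np

def equal_tradeoff_threshold(scores, labels):
    """Pick the threshold where FAR and MDR are closest to equal
    (the 'equal trade-off' operating point on the ROC curve).
    scores: detector output per time step; labels: True where adverse."""
    best_t, best_gap = None, np.inf
    for t in np.unique(scores):
        alarms = scores >= t
        far = (alarms & ~labels).sum() / max((~labels).sum(), 1)
        mdr = (~alarms & labels).sum() / max(labels.sum(), 1)
        if abs(far - mdr) < best_gap:
            best_gap, best_t = abs(far - mdr), t
    return best_t
```

Sweeping unique score values is equivalent to walking along the empirical ROC curve and stopping where the two error rates balance.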
Note that the AUC optimization problem is only the first step in determining the threshold; it is conducted to find the LDS state dimension n and prediction horizon d that produce the highest AUC value. The next step is to use the produced ROC curve for selecting the threshold and, as mentioned, an equal trade-off between MDR and FAR is used for design purposes in all cases. The threshold is ultimately selected with the goal of producing the most accurate representation of the ground truth (Martin et al. 2015). The following regression techniques are studied in this work:
• Linear Ridge Regression (LIN)
• Extreme Learning Machine (ELM)
and the following detection algorithms:
Description of kernel density estimation
The probability of a random variable x (with probability density function p(x)) being smaller than a certain value s is defined as

P(x < s) = ∫_{-∞}^{s} p(x) dx.

This equation is used to determine the ground truth limit for a target variable by solving P(x < s) = 1 − α/2, where α is the significance level and s is the solution. This means that (1 − α/2)·100% of the data lies below s. In the case where the lower limit should be used in the ground truth function, P(x < s) = α/2 is solved instead. Here, p(x) can be estimated through the kernel function K:

p(x) = (1 / (M h)) Σ_{k=1}^{M} K((x − x_k) / h),

where h is the selected bandwidth (see (Odiowei and Cao 2009)), M is the sample size and x_k is the k-th sample of x. By replacing x_k with the sample variable of interest, it is possible to estimate the probability density function of this variable (Ruiz-Cárcel et al. 2015). There is no single way of selecting a correct h for a given application, but it is important to ensure that the estimated distribution is neither too rough nor too flat, which can be the case with too small or too large an h, respectively (Odiowei and Cao 2009).
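These two steps, estimating p(x) with a kernel and solving P(x < s) for the confidence limit, can be sketched as follows (Python, with our own function names; the Gaussian kernel and the numerical integration on a grid are illustrative assumptions):

```python
import numpy as np

def kde_pdf(x_grid, samples, h):
    """Gaussian-kernel estimate p(x) = 1/(M h) * sum_k K((x - x_k)/h)."""
    M = len(samples)
    u = (x_grid[:, None] - samples[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return K.sum(axis=1) / (M * h)

def ground_truth_limit(samples, alpha, h, upper=True):
    """Solve P(x < s) = 1 - alpha/2 (upper limit) or alpha/2 (lower limit)
    by numerically integrating the estimated density on a fine grid."""
    grid = np.linspace(samples.min() - 4 * h, samples.max() + 4 * h, 1000)
    cdf = np.cumsum(kde_pdf(grid, samples, h)) * (grid[1] - grid[0])
    target = 1 - alpha / 2 if upper else alpha / 2
    idx = min(np.searchsorted(cdf, target), len(grid) - 1)
    return grid[idx]

# Example: room temperatures around 21 degC; 99% interval (alpha = 0.01)
rng = np.random.default_rng(0)
temps = rng.normal(21.0, 0.5, 5000)
lo = ground_truth_limit(temps, 0.01, h=0.1, upper=False)
hi = ground_truth_limit(temps, 0.01, h=0.1, upper=True)
```

For a roughly Gaussian temperature variable the two limits land near the familiar mean ± 2.58 standard deviations, mirroring the 19.95 °C lower limit used later in the paper.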
Determining contribution plots based on PCA
As mentioned, PCA-based contribution plots are used to select the target parameter for ACCEPT. PCA simplifies the monitoring of a process by converting the high-dimensional data, using loading vectors determined by a singular value decomposition, into lower-dimensional so-called score vectors, which capture and preserve the spatial correlations between variables while also capturing most of the variation in the data. An elliptical confidence bound can be superimposed on the plot containing the principal components. Retaining only the first two principal components is often sufficient to capture most of the information in the data, making it possible to use a two-dimensional Cartesian coordinate system. When the elliptical confidence bound is crossed, a fault has been detected, and the next step is to use contribution plots to determine the origin of the fault, i.e. which variable contributes most to the out-of-control status. Contribution plots are a PCA approach to fault identification: they determine the contribution of each variable to the principal components determined by PCA. The contribution plot can be based on a single observation at a specific time instance, on samples of observations, or on all data. The contribution of each variable x_j to the out-of-control score t_i is calculated as

cont_{i,j} = (t_i / σ_i) p_{i,j} (x_j − μ_j) / σ_j,

where p_{i,j} is the (i, j)-th element of the loading matrix P, σ_i is the corresponding singular value, and σ_j and μ_j are the standard deviation and mean of the variable x_j, respectively. The total contribution of the j-th process variable x_j is then calculated as (Chiang et al. 2001)

CONT_j = Σ_{i=1}^{r} cont_{i,j},

where r is the number of score vectors or principal components retained. CONT_j can then be plotted to illustrate the contribution of each variable to the fault. Like PCA-based contribution plots, CVA-based plots can also be used, or the two can be combined.
However, it has been observed that the two plots usually show the same variable contributing the most (Egedorf and Shaker 2017; Egedorf 2017). Therefore, only the PCA-based plots are used in this work.
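A minimal sketch of the contribution computation for a single observation (our own function name; the normalization follows our hedged reading of the symbols listed in the text, and the exact constant in Chiang et al. (2001) may differ):

```python
import numpy as np

# Toy data: two perfectly correlated variables (4 observations).
a = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([a, a])
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt.T                                  # loading matrix (columns = loading vectors)

def contributions(x, mu, sd, P, sig, r):
    """CONT_j for one observation x, summing
    cont_{i,j} = (t_i / sigma_i) * p_{j,i} * (x_j - mu_j) / sd_j
    over the r retained components (hedged reading of the text)."""
    z = (x - mu) / sd
    t = z @ P[:, :r]                      # scores of this observation
    cont = (t[:r] / sig[:r])[:, None] * P[:, :r].T * z[None, :]
    return cont.sum(axis=0)

# Observation where variable 0 deviates strongly but variable 1 does not:
c = contributions(np.array([5.0, 2.5]), mu, sd, P, S, r=1)
```

As expected, the deviating variable receives the dominant contribution, which is exactly how the target parameter is selected for ACCEPT in the fault cases below.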
PCA-based Artificial Fault Generation (AFG)
In this section, a new PCA-based method for artificial fault generation is developed. While the proposed method finds applications in different areas, it is used primarily for analysis purposes in this work. This method of introducing faults to the training data set is based on a PCA method documented in (Chiang et al. 2001). Principal components are used to represent the healthy or faulty state of the system. The idea is to add faults to the components and then project these vectors back to the high-dimensional space to get a faulty data set; in this way the spatial correlations are preserved when adding faults to the training data set. As a first step, the data is loaded into a matrix with m = 21 process variables and n = 25921 observations, as shown in Eq. 6:

X ∈ R^{n×m}.   (6)

Then each of the 21 variables in the training data set is z-score normalized to zero mean and unit standard deviation. A Singular Value Decomposition (SVD) is performed on the data as shown in Eq. 7,

X = U S V^T,   (7)

where U ∈ R^{n×n} and V ∈ R^{m×m} are unitary (orthogonal in this case) matrices and S ∈ R^{n×m} contains the non-negative real singular values of decreasing magnitude along its main diagonal (σ_1 ≥ σ_2 ≥ ... ≥ σ_m ≥ 0). The loading vectors are the orthonormal column vectors of the matrix V, and the variance of the training set projected along the i-th column of V equals σ_i². Typically, the loading vectors corresponding to the a largest singular values are retained, where a can be determined by e.g. the percent variance test (Chiang et al. 2001). However, that is for process monitoring purposes; here the purpose is to introduce artificial faults to the data set. Thus, a is set to 1, to add faults in only one vector. Selecting only the first a column vectors of V, which capture most of the variation in the data set, the loading matrix P ∈ R^{m×a} can be formed.
The projections of the observations in X into the lower-dimensional space are contained in the score matrix T, which is formed as in Eq. 8:

T = X P,   (8)

where T ∈ R^{n×a}. Projecting back to the m-dimensional space yields

X̂ = T P^T.   (9)

The z-score normalization of X̂ ∈ R^{n×m} can be reversed by multiplying each variable by its standard deviation and adding its mean; denote this operation rev(·). The standard deviation and mean to be used here are determined from the data matrix of Eq. 6. The residual matrix can then be formed as

E = X − rev(X̂).   (10)

The residual matrix E captures the variations in the observation space spanned by the loading vectors associated with the m − a smallest singular values (Chiang et al. 2001). This residual matrix is used later, in Eq. 12, to add back the remaining variation of X not captured by the one retained score vector. If faults are then added to the one score vector capturing the correlation structure, T_faulty ∈ R^{n×a} is formed, and the faulty data can be acquired by

X̂_faulty = T_faulty P^T.   (11)

This X̂_faulty is then reverse normalized, and finally the residual of Eq. 10 is added:

X_faulty = rev(X̂_faulty) + E.   (12)

Thus, the faulty data has been generated. The reason for setting a = 1 is that the score vectors are orthogonal and ordered by decreasing variance, Var(t_1) ≥ Var(t_2) ≥ ... ≥ Var(t_a). The workaround is therefore to set a = 1, add faults to the one score vector, compute X̂_faulty, and finally add the remaining variations captured in the residual matrix E. The faults added to the one score vector can be a fixed number subtracted or added over a few intervals, a gradually evolving fault, random noise, or other measures. The types of faults added are discussed in the results section.
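The whole AFG procedure can be condensed into a short sketch (our own function name; the round-trip property with zero fault magnitude follows directly from adding the residual back at the end):

```python
import numpy as np

def pca_afg(X, fault_indices, magnitude):
    """PCA-based artificial fault generation, sketching Eqs. 6-12:
    z-score normalize, retain a = 1 score vector, shift it by `magnitude`
    at `fault_indices`, project back, de-normalize, and add the residual E."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd                        # z-score normalization
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt.T[:, :1]                          # a = 1 loading vector
    T = Z @ P                                # scores (Eq. 8)
    E = X - ((T @ P.T) * sd + mu)            # residual in original units (Eq. 10)
    T_faulty = T.copy()
    T_faulty[fault_indices, 0] += magnitude  # inject the fault in score space
    return (T_faulty @ P.T) * sd + mu + E    # Eqs. 11-12
```

With magnitude zero the original data is recovered exactly, and with a nonzero magnitude only the selected observations change, with the shift distributed across all variables according to the loading vector, which is what preserves the correlation structure.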
Results
The data from the building follow certain patterns. Around 7:00 in the morning the temperature measurements in the four rooms start to rise, as does the CO2 concentration (in ppm). Accordingly, the radiator valves close (due to the higher temperature) and the desired valve opening of the ventilation unit increases (due to the higher CO2 concentration); the actual valve opening of the ventilation unit then increases as well. Around 16:00 in the afternoon these variables fall back to evening/night-time operating conditions with little or no occupancy. Likewise, on weekends (Saturdays, Sundays and other holidays) the variables do not follow the same patterns as on working days (Monday to Friday), but instead follow the patterns of little or no occupancy.
Three different artificial fault cases, produced by the PCA-based AFG method, are introduced and run in ACCEPT: a fault corresponding to an open window during the night, one during the day, and finally a ventilation fault during the day. The reason for introducing both a daytime and a night-time open-window fault is that naturally more operations, such as ventilation valve opening, occur in the daytime. This ultimately translates to a more direct effect on a broader range of variables. It is thus anticipated that ACCEPT will be more capable of detecting an open-window fault during the daytime than at night, since more variables will provide an indication of it. The ventilation fault is introduced since Fig. 2 lists "dampers not working properly" as one of the typical faults. The open-window fault is not directly related to this table, but can be considered more energy consuming than duct leakage due to its inherent nature. The open-window fault generation is instead justified by Fig. 1, which reveals that 16% of total energy consumption comes from space heating; faults in the heating system can thus be costly.
Open windows during night
The faulty data set is generated from the 21 × 25921-dimensional training data set by running it through the PCA-based AFG method. T_faulty ∈ R^{n×a} is generated by subtracting a fault parameter of 30 from the values of the one score vector (of length 25921) 12 times, each with a duration of 108 time steps (9 h), on random weekdays. Since the fault is introduced during the night, each fault begins at 22:00 and ends at 07:00. The fault parameter of 30 represents an arithmetic adjustment of the retained score vector, which reflects the main correlation trend in the data set. The vector thus increases in variance along a straight line (since PCA produces linear principal components). The unit of the score vector values in the score space is not easily interpretable, but is revealed when projecting back to the observation space. Here we have a "line" spanning 21 dimensions, and the unit of its slope would involve 21 factors; each point on the line consists of 21 numbers readily interpretable in physically understandable units. A hypothetical unit of the slope could be °C per (ppm * %) if we only had three dimensions. Setting the parameter to 30 is intended to serve as a proxy for an abrupt fault evolution, and can be characterized by an equivalent temperature and CO2-level drop with accompanying reactions by the different valve openings, which preserve the correlations. This means that when the CO2 level and temperature drop, the ventilation valves close and the radiator valves open (due to the low temperature). In the ventilation fault case documented later in this paper, a gradual introduction is implemented instead by letting the fault parameter be a vector of length 108 (9 h duration).
Since the fault is induced on the temperature variables (and also on the CO2 level) in each of the four rooms, the target variable could in principle be chosen from any of the four rooms' temperature variables. However, based on the contribution plot, it was found that variable number 10 contributes the most, so this variable is selected as the target parameter in ACCEPT. To generate a validation data set, another 12 random weekdays are chosen, producing a data set that is not identical to, but drawn from nearly the same distribution as, the test data set. The temperature drops below 20 °C and sometimes even below 17 °C.
To establish the ground truth for this variable, KDE is applied to the training data; at a 99% confidence interval the lower limit is approximately 19.95 °C. Thus, a value below 19.95 °C is taken to correspond to an adverse event. The results generated by ACCEPT are shown in Fig. 4. As seen, the MDR is slightly lower than the FAR, and in all cases the NMSE is higher than the NMSE of the benchmark case in another recent ACCEPT study (Egedorf and Shaker 2017). Nevertheless, the results are satisfactory: using LIN with PT or OT gives MDR=0.9%, FAR=1.58% and DT=657, so the regression fidelity is within an acceptable range. Detection performance is also acceptable, since the AUC is close to 1 in all cases. A figure showing the AUC of the ROC curve is included later, for the ventilation fault case, as an example of such a plot.
Open window during day
Variable 10 is again selected as the target parameter, since the contribution plot shows that this variable contributes the most. Since the chosen variable is the same as in the previous case, the same ground truth can be used. The ACCEPT results are shown in Fig. 5; as seen, ACCEPT is slightly better at detecting the introduced fault during the day than during the night. Another factor that could explain the slightly different results is that the testing and validation data sets are not the same in the two cases.
The daytime fault is, like the night-time case, introduced on a random weekday every week for 12 weeks. Here, however, the fault is introduced during the day, when people occupy the rooms from 7:00 to 16:00. The validation data set is seeded as in the night-time case (12 random weekdays other than those used in the test data set). The best algorithm combination appears to be LIN and PT, with MDR=0.9%, FAR=1.56% and DT=477. The next fault case involves a gradually evolving fault, which should be harder for ACCEPT to detect (higher MDR and FAR).
Ventilation fails during daytime
According to Fig. 2, a typical fault could be "dampers not working properly". Thus, a fault case where the ventilation valves fail is used as a proxy. The simulation is run at daytime from 7:00 to 16:00 (9 h, as in the previous cases) every week for 12 weeks (with random weekday selection). As mentioned previously, the fault parameter of the PCA-based AFG method is here a vector of 108 values (9 h) peaking at time step 108 with a value of -30 (the slope is negative to make the CO2-level spikes positive). This creates gradually evolving spikes in the CO2 level in the four rooms, see Figs. 6 and 7.

Fig. 6 ACCEPT graphical detection result plot with the LIN and PV algorithm combination on the fault case "ventilation fails". Twelve larger spikes are present, with black circles representing correct alarms. The gradual evolution of each spike is hard to observe here, but is clarified in Fig. 7 for the first spike.
Note here that the fault parameter vector peaks at -30, the same value as used in the open-window cases. This makes the CO2 level peak at approximately 1600 ppm 12 times, peaking above or below 1600 ppm depending on which of the 12 faults is considered. Since the nature of a fault means that the correlation structure is not necessarily preserved, the ventilation valves are set to be completely closed (since they are simulated to fail), regardless of what the PCA-based AFG actually dictates.

Fig. 7 ACCEPT graphical detection result plot with the LIN and PV algorithm combination on the fault case "ventilation fails". As seen here, the gradual evolution is non-linear (though tending to increase between the fault start at 661 and the fault end at 769) although the fault parameter vector is linear. This is due to the addition of the residual matrix (with a small signal-to-noise ratio, SNR) in Eq. 12, as well as the possibility of the variable being to some extent non-linear in time, even though it has been projected using only one score vector containing a high SNR.
The variable contributing most, as determined by the contribution plot, is variable number 3 (CO2 level in ppm for room Ø22-508-1); the ground truth is therefore established for this variable. According to (Prill 2013), the CO2 concentration should not exceed 1030 ppm inside buildings, so a ground truth defining an adverse event as a CO2 concentration above 1030 ppm could be used. The ground truth value can, however, also be determined using confidence intervals, as done previously: a value of approximately 742 ppm corresponds to the upper limit of a 99% confidence interval. Since this is known to be too low, the ground truth value of 1030 ppm derived from (Prill 2013) is chosen instead. The ACCEPT results are shown in Fig. 8. As seen, the FAR is higher than the MDR, but the results are acceptable: LIN and PV seem to be the best combination, with MDR=1.15%, FAR=3.2% and DT=247. As anticipated earlier, this fault case is clearly harder for ACCEPT to detect (higher MDR and FAR). The regression fidelity is better than in the two previous cases, with NMSE values of 0.4669 and 0.8446 for LIN and ELM, respectively (resulting from hyperparameters of 0.0614 and 21). Detection performance is also acceptable, since the AUC is close to 1 (see Fig. 9).
Comparison of ACCEPT and CVA with KDE
Since CVA is a dimensionality reduction technique (like PCA), it requires a parameter selecting how many canonical variates are optimally retained for the data set under consideration. This parameter is r ∈ N+, and different methods can be used to select it. Two other parameters that need to be determined are the past and future lags, p and f. These lags are used to expand the observation matrix into a past and a future matrix (see (Ruiz-Cárcel et al. 2015) for details); their purpose is to take into account serial correlations between measurements of the same variable taken at different time instances. Lower/higher values of p and f correspond to the data being correlated with itself over shorter/longer time periods.
In this work the approach of (Ruiz-Cárcel et al. 2015) is used to determine these parameters. The lags are determined by computing the autocorrelation function (ACF) of a stationary segment of the training data. Since the data are multivariate, the sum of squares of each observation is used to acquire a single signal for the ACF. To verify stationarity when computing the ACF, the KPSS test (Kwiatkowski et al. 1992) is used. Several stationary segments were found in the data and used in the analysis of the ACF, and finally the lags are determined to be p = f = 2.

Fig. 9 ACCEPT graphical ROC curve result plot with the LIN algorithm in combination with the six detection algorithms on the fault case "ventilation fails". As shown, all AUCs are close to 1. The red, green and blue curves are the ROC curves (solid: validation, dashed: test), and the dots show the selected trade-off points corresponding to the FAR and MDR obtained from the level-crossing threshold. Consider, e.g., the green PV dot located at (0.032, 0.9885) on the green solid curve, meaning FAR=3.2% and MDR=100-98.85=1.15%. The legend shows that this corresponds to an alarm threshold of L_a = 1.1616 and an AUC of 0.99748.
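The lag-determination step, collapsing the multivariate data to a per-observation sum-of-squares signal and computing its ACF, can be sketched as follows (our own function names; the KPSS stationarity check is omitted):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of a 1-D signal up to max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

def multivariate_acf(X, max_lag):
    """Collapse multivariate data to one signal via the per-observation
    sum of squares, then compute that signal's ACF (the approach above)."""
    s = (X ** 2).sum(axis=1)
    return acf(s, max_lag)
```

The lags p and f would then be read off as the point where this ACF has decayed close to zero; for white noise it drops immediately after lag 0.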
As mentioned, different methods have been suggested for selecting the value of r. The dominant singular values of a matrix D (see (Ruiz-Cárcel et al. 2015)) can be considered, but as found in (Ruiz-Cárcel et al. 2015), this can lead to an unrealistic model if the singular values decrease slowly. The method used here is therefore to split the training data set, using one part for training and the other for testing. CVA is then computed on different combinations of these split data sets, using a range of values of r, and the value of r minimizing the false alarm rate is selected. After several analyses testing different combinations of the data sets over a range of r, r = 16 was found to be an optimal choice.
Using these parameters and the T² metric as an indicator, the performance metrics are shown in Table 1. The Q metric represents the variation in the residual space, while T² represents the variation in the retained space. They are complementary, but in this case the Q health indicator performed poorly compared to T², so T² was selected for the comparison. The reason for using KDE instead of fitting the T² statistic to e.g. a Gaussian distribution is that the data is non-linear.
In Table 1, OWDN, OWDD and Vent correspond respectively to the Open Window During Night, Open Window During Day and Ventilation fault cases. As can be seen, the MDR and FAR performance metrics are similar in the first two cases, while ACCEPT has a much lower MDR in the Vent fault case. It should be noted, as in another recent study on ACCEPT (Egedorf and Shaker 2017), that the MDR of ACCEPT should in reality be higher in some cases when this comparison is made. ACCEPT does not use fault start and stop times (which CVA does) to compute the performance metrics, but relies instead on, among other things, the ground truth function. Thus, in the gradually evolving Vent fault case, ACCEPT does not detect all data points in the faulty region yet does not count them as missed detections, which CVA does, making the ACCEPT MDR much lower than that of CVA; see Fig. 7, where the data points from the fault start at 661 up to 683 are not counted as missed detections. A quick estimate suggests that the ACCEPT MDR should be around (683-661)/108 = 20.37% when compared to CVA, implying that the real MDR and FAR of ACCEPT and CVA are quite similar. The detection time of CVA in Table 1 is the number of time steps after fault start at which the fault is detected; in the Vent fault case it is 27 time steps, or 135 min. CVA thus does not predict as ACCEPT does, which is why the DT of ACCEPT is negative in that table. Considering Fig. 7 again, the prediction happens at t = 456 and the fault starts at t = 661, so a more correct prediction horizon for ACCEPT in that case would be 661-456 = 205 time steps (17.08 h), and not the 703-456 = 247 time steps (20.58 h) that ACCEPT computes. Even though the comparison is difficult due to the inherent differences in how the performance metrics are defined and computed, this discussion suggests that ACCEPT is powerful in detecting and predicting faults when compared to the state-of-the-art method of CVA with KDE.
The MDR and FAR performance metrics are similar, but ACCEPT additionally makes a prediction, which is of course powerful compared to CVA with KDE.
Conclusion
Adverse condition and critical event prediction is an important subject in a variety of applications and is closely related to the area of fault detection. ACCEPT is a MATLAB-based framework developed to compare the performance of different machine learning and early warning algorithms according to their ability to predict adverse events in arbitrary time-series data from systems or processes. In this paper, ACCEPT has been used for fault detection and prediction in an actual commercial building. Using KDE and PCA-based contribution plots, the data from the building has been processed and used in ACCEPT for fault detection and prediction. A novel method for artificial fault generation has been introduced; the proposed method uses PCA, finds applications in different areas, and has been used here to generate fault data for analysis purposes. The results obtained from ACCEPT have been evaluated, discussed and compared with CVA with KDE, and it was concluded that ACCEPT is more powerful, especially because of its prediction capability.
Using Mobile Technology to Enhance Psychotherapy for Treatment of Schizophrenia : A Feasibility Study
Forty-two experienced clinicians who work with consumers diagnosed with schizophrenia were surveyed in order to assess the feasibility of using mobile applications as an adjunct to conventional treatment. Clinicians reported that over two-thirds of their consumers could safely make use of this technology. In addition, a majority of consumers were seen as likely to benefit from all but one of the fifteen functions performed by the investigational app (TherAPPist).
Introduction
Technology provides increasingly powerful tools for communication and accessing information.Psychotherapy applications tailored to meet the needs of those diagnosed with schizophrenia could provide individualized, immediate, and cost-effective interventions, using convenient devices that are steadily becoming more affordable.These mobile applications, such as TherAPPist, represent a selfregulation tool that may be a valuable adjunct to traditional treatment and asset to recovery.
The investigational TherAPPist app was developed by the authors to enhance the effectiveness of traditional psychopharmacologic and psychotherapeutic interventions.Released in April 2013, it currently provides the user with an easily accessible relaxation routine, positive refocusing stimuli, images of supporters with affirming messages, and emergency contacts.Its development continues, with plans to allow customization of images and audio, as well as tracking of target behaviors and affective outcomes.
The immediate access to therapeutic refresher messages afforded by the TherAPPist application was intended to improve generalization of therapeutic learning.This app offers immediate redirection and distraction, and provides individualized recovery cues.Therapists can't be there all the time, but therAPPist can be!This customizable app can put images and voices of supporters at the user's fingertips, 24/7.
Use of mobile applications presents a valuable opportunity to improve outcomes for individuals diagnosed with schizophrenia spectrum disorders by supplying timely therapeutic redirection, distraction, and coping tactics in highly challenging, provocative situations.A customizable mobile phone app can contribute to the development of confidence by providing images, video, sound and text that are meaningful for the individual user.The user's involvement in the customization process can foster a sense of ownership that may reduce resistance to following therapeutic suggestions.
An effective mobile device application can not only deter disruptive outbursts, but also improve treatment overall.Progress can be conveniently monitored; troubleshooting can be improved with more reliable data about triggering situations.In addition, immediate, personalized reinforcement can be delivered to encourage progress.This may help address some of the negative symptoms characterizing schizophrenia, including apathy, amotivation, and anhedonia [1].In addition, the app can expand to reflect new learning, and serve as a visible product of recovery success.The objective is not to eliminate the need for professional support, but instead to provide real-time support so that consumers can function more effectively, especially in stressful or emotionally charged situations.
An app can also assist individuals in the day-to-day management of their illness by providing tools to track the correlation between mood and habits (such as taking medication), and by offering an opportunity to journal in stressful situations.
Furthermore, the information tracked by the mobile device app can be reviewed as part of the conventional therapy process, providing valuable data to both the consumer and the mental health provider. Learning to use the app also builds users' familiarity with current technology, helps to update their skills, and provides a "normalizing" experience.
Development of TherAPPist
This innovative app was designed to provide support on a variety of levels, from intervention in a high-stress or emergency situation to capabilities for tracking treatment progress through the use of daily logs and goal tracking. The most significant features of the app are its ability to be personalized and its intended use as an integrated part of traditional therapy. When necessary, a therapist could help a client tailor the app by discussing common stressors, as well as helpful images, voices, or text that may produce calming in a stressful situation. For example, one client may prefer to hear a voice clip of his therapist reminding him of appropriate reactions. Another may prefer to see an image of her pet or a list of her goals. The app is flexible, so multiple interventions can be tried until a successful one is found. Additionally, users can be encouraged to use the app on a daily basis to track situations and their associated emotions. This information will then be readily available for discussion at the next session.
Generally a chronic, lifelong disorder that frequently impairs functioning from early adulthood, schizophrenia continues to be quite challenging to treat.
Because medication compliance figures prominently in relapse prevention, one significant program feature will be the automation of medication reminders, along with tracking features that will show the correlation between positive habits (taking medicine, getting enough sleep) and outcomes (measured by mood, stress, etc.). Improvement in the conscientious management of sometimes complex medication regimens should contribute to recovery success.
Overview of the Primary Features of TherAPPist
The TherAPPist application is designed to provide features that will be particularly helpful to individuals with schizophrenia spectrum disorders. The application is intended to:
- Help consumers maintain self-control and manage anxiety, anger, and depression, as well as enhance social skills, decision-making skills, and confidence.
- Reinforce the correlation between positive behaviors and outcomes, using simple, nonintrusive tracking mechanisms.
- Be completely customizable, so that the user can select the images and media files that are most useful to him/her.
- Provide a valuable source of data which the consumer and his/her therapist can use to evaluate treatment effectiveness.
When a client first starts using TherAPPist as part of his treatment program, the therapist works with him to customize the app. Whenever practical, default options will automatically be selected to make the customization faster and smoother. After the initial setup, the user will have the ability to change these selections through a series of options menus.
The Relaxation Routine is an audio relaxation induction. Soothing Scenes provides a selection of positive images to invite refocusing and a return to a happier baseline. Supportive Smiles presents a series of friendly images and phrases. Emergency Numbers is a list of the most important contacts for easy access. The Relaxation Routine option is in the foremost spot, making it readily available in high-stress situations. Figure 2 shows an early mockup of the habit/mood tracking feature of the app. The user will be prompted with reminder messages (outside the app) at a preconfigured time each day, and will be asked to use a simple 3-color scale to rate both adherence to the habit being tracked (sleep, eating right, taking medicine) and mood. The user can then review the month (or partial month) with the therapist, so that visual patterns can be identified. For example, in Figure 2, a "red/black" day on the 2nd shows clearly that nonadherence to habit can result in negative effects on mood, while days 3 and 4 show that better (still not perfect) adherence to habit can greatly improve mood. The customization features will allow multiple habits and outcomes to be easily tracked and shown on multiple calendars.
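The tracking mechanism just described can be sketched as a simple data model. This is a hypothetical illustration only: the class names and the numeric encoding of the three colors are our own assumptions, not the app's actual implementation.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Rating(Enum):
    """Hypothetical 3-color scale: green = good, yellow = partial, red = poor."""
    GREEN = 2
    YELLOW = 1
    RED = 0

@dataclass
class DayEntry:
    """One day's self-report: adherence to the tracked habit plus mood."""
    day: date
    habit: Rating  # e.g., took medication, slept well
    mood: Rating

def monthly_pairs(entries):
    """Return (habit, mood) score pairs for a month's review with a therapist."""
    return [(e.habit.value, e.mood.value) for e in entries]

# A fragment mirroring the pattern described above: nonadherence on the 2nd
# coincides with low mood; better adherence on the 3rd and 4th, better mood.
log = [
    DayEntry(date(2013, 5, 2), Rating.RED, Rating.RED),
    DayEntry(date(2013, 5, 3), Rating.YELLOW, Rating.GREEN),
    DayEntry(date(2013, 5, 4), Rating.GREEN, Rating.GREEN),
]
print(monthly_pairs(log))  # [(0, 0), (1, 2), (2, 2)]
```

Pairing each day's habit rating with its mood rating is what lets the therapist and consumer spot the visual patterns the calendar view is meant to surface.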
A medication tracker (to prompt the user when medicine should be taken) and a journaling feature are also planned.
Additional life-management and redirection features will be added as time permits. All of these features will provide valuable data to both the individual and the therapist. Participants in the clinical trials will provide valuable feedback to determine the priority order for adding supplementary features. There are four operating systems commonly in use for mobile devices: iOS (for iPhone and iPad), Android, Windows Phone 7, and BlackBerry OS. Of these, iOS and Android are the most commonly used, with 53% and 28% of the market, respectively [26]; therefore, we initially developed apps for these two operating systems. Version 1.0 (shown in Figure 1) is already available for download for users of both Android and iOS devices.
Preliminary informal surveys of consumers affected by schizophrenia spectrum disorders leave us encouraged that this app could be very useful. When groups of consumers have been asked whether such an app would be helpful, they have expressed near-universal enthusiasm. Many have stated that it would be "cool" to pull out a phone in times of stress. Most maintain that rapid access to helpful statements, images, and sounds would assist them in regulating emotion. They liked the notion of being able to choose from options on a menu, in order to tailor the helping process to their own tastes and preferences. Many liked the idea of being able to install their favorite affirming self-statements. Several cheered the chance to track their progress. They all seemed optimistic about the potential of such a device to increase their happiness; most felt it might help to defuse some tough situations.
An informal query of clinicians revealed that most agreed that this app has the potential to benefit many of their consumers. Reassuringly, most of the therapists queried indicated that they expected only a minority of their consumers to have difficulty mastering use of a simplified app, and only a minority of consumers (12.5%) was judged to be potentially unsafe with such a device. It was also heartening to find that a majority of the therapists expressed a willingness to try to incorporate use of the app into their ongoing work with some consumers. All viewed it as a useful adjunct to treatment, rather than as a competitor. Finally, some mentioned that the app's provision of guidance in a more impersonal way might reduce reactance and oppositional responding. Instead of leaving consumers feeling "nagged" by family, friends, and therapists, the app could provide an empowering means for them to receive timely recovery and wellness information.
The current study assessed the feasibility of using this mobile device with consumers diagnosed with schizophrenia spectrum disorders by surveying experienced clinicians. Respondents indicated the prevalence among their consumers of various problems addressed by TherAPPist. They also estimated the percentage of their consumers, both in inpatient residential care and in community residences, who could safely make use of a mobile phone.
Related Work
Although many forms of treatment are helpful, and medication generally reduces symptoms, relapse is an ongoing problem for many consumers with schizophrenia. Medication non-adherence figures prominently in this, as do other difficulties related to structuring time and activities on one's own. Problems with maintaining social contact and support also contribute to relapse. Research has shown that developing a relapse prevention plan and teaching strategies for coping with persistent symptoms measurably improve outcomes [2]. Medication adherence and persistence in schizophrenia is alarmingly low. According to Lieberman, Stroup, and McEvoy [3], between 64% and 82% of patients discontinued their initial medication within 18 months. Unfortunately, even brief medication lapses can have serious and costly consequences. For example, Weiden [4] found a twofold increased risk of hospitalization following nonadherence as brief as 10 days.
Since remembering to deploy various lessons learned in therapy remains a challenge for many consumers with schizophrenia and schizoaffective disorder, the application's main value may lie in its capacity to prompt, remind, and enhance generalization. However, in addition to automating medication reminders, enabling therapists to detect non-adherence early on, and providing immediate relaxation and refocusing aids, TherAPPist also bridges the discharge support gap by augmenting the social support system in a cost-effective way. Often, individuals who do beautifully in settings where others are always available to prompt responses, offer reminders, or provide redirection and distraction deteriorate after moving to less structured settings with less ongoing contact with support staff.
Treatment of schizophrenia remains a challenge, despite dramatic improvements in psychopharmacotherapy and verbal psychotherapy. Increasingly, psychotherapy researchers have sought to identify best clinical practices through empirical investigations, in order to delineate evidence-based treatments for persons with these illnesses [5]. The randomized, controlled trial paradigm has yielded confidence in approaches such as the Illness Management and Recovery Program [6], Social Skills Training [7], and Relapse Prevention Training [8]. A meta-analytic evaluation of 27 social skills training (SST) studies involving schizophrenic persons revealed that SST dramatically improved patients' behavior in social situations, assertiveness and self-confidence, and hospital-discharge rate. Social skills training also helped to lower relapse rates [7]. This study was less definitive about the successful generalization of skills developed via social skills training.
In the treatment of schizophrenia, generalization of therapeutic learning is often limited. For example, many clients addressing self-control problems respond well during treatment sessions, but fail to deploy their new coping strategies in their daily lives. Shohamy, Mihalakos, Chin, Thomas, Wagner, and Tamminga [9] found individuals with schizophrenia to be selectively impaired in their ability to generalize knowledge, despite having intact learning and memory accuracy. Efforts to enhance generalization, including role play, are somewhat successful, but often work only in less emotionally charged situations [10]. When situational provocation is strong, and anger or fear levels are high, all too often the client reverts to an earlier (often self-destructive or self-defeating) mode of responding. Learning how to slow down and reflect before acting is extremely difficult for some clients. Use of simple, memorable strategies (e.g., Stop-Think-Relax [11]) can help clients give themselves time to process information carefully before running the risk of overreacting. However, again and again, many clients unfortunately forget to use even these easy tactics.
Augmenting conventional social skills training with specially designed mobile applications that offer immediate "refresher" experiences may enhance generalization and thereby improve treatment. Consistent with this, Kopelowicz, Liberman, and Zarate [12] argued that while beneficial, social skills training was not meant to function alone. They endorsed the incorporation of outside tactics into the actual methods used for social skills training, because they saw this as necessary for successful treatment.
A variety of other psychoeducational interventions have been shown to increase consumers' basic knowledge about their psychiatric illness, and relapse prevention programs have also been found to be helpful [8,13]. Although several psychosocial interventions are associated with improved functioning among those with schizophrenia, unfortunately these evidence-based practices have not been routinely available to many consumers who might profit from them [14]. Using mobile technology to make these useful practices more widely available could benefit many consumers.
Many patients with schizophrenia experience a relapse after hospital discharge [15], often due to problems handling the lack of structure and support [16]. The TherAPPist app was designed to provide a mechanism for bridging this support gap for those in recovery, and can provide valuable data about the individual's use of various tactics learned in therapy. These data can then be used by the individual and his/her therapist to alter treatment strategies appropriately.
Although the Illness Management and Recovery Program has been found to yield significant improvement in overall outcome, knowledge about illness, and progress in achieving personal goals, it has proven less successful at producing reliable gains on measures of social support, despite its aim to help clients improve their symptom management and social support [6,17]. Hasson-Ohayon, Roe, and Kravetz [6] suggest that because applying coping techniques in challenging situations and establishing and maintaining relationships are more complex and demanding than internalizing information about illness and identifying personal goals, additional methods of assistance may be needed to achieve these important recovery objectives. The TherAPPist app may enable greater progress in these areas.
This app may be especially helpful for those who withdraw and isolate during times of stress. Unfortunately, this is quite common, because mistrust among these consumers is pervasive and often complicates the treatment process. Frequently, a history of interpersonal struggles marked by criticism, rejection, confusion, miscommunication, and mutual fear has left consumers suspicious of others. Some have particular difficulty reaching out to others when they most need support and redirection, at times of high negative emotional intensity. Many prefer at those challenging times to withdraw socially, but unfortunately lack reliably effective self-soothing strategies. Individuals with schizophrenia often cope with stress in a relatively ineffectual manner, often leading to poor outcomes [18]. Some maintain that the dangerously high smoking rates in this population (90%) reflect attempts at self-medication; nicotine's up-regulation of GABA reduces overstimulation, and many individuals with schizophrenia are unable to manage with other self-calming strategies. TherAPPist is intended to simplify the coping process by permitting consumers to provide themselves with calming cues and refocusing prompts. Hopefully, these rapid reminders will help them to manage negative feelings more constructively and reorient their attention positively.
At times of heightened stress, many consumers cannot generate constructive behavioral or cognitive options on their own. Because a support person is not always available at these moments to redirect attention and cue optimal responses, problem behaviors arise and crises escalate unnecessarily. Once skilled in activating this app, the consumer will be able to manage challenges more self-sufficiently. This should enhance perceived self-efficacy, further contributing to resilience and recovery. Believing that you can handle tough moments independently can be very empowering.
Recently, much attention has focused on the development of early interventions that may reduce damage resulting from repeated episodes of psychosis. An example of this approach, Recovery After an Initial Schizophrenia Episode (RAISE), is aimed at applying a multimodal, aggressive response to the first episode of psychosis. This project seeks to improve the prognosis of schizophrenia through systematic interventions during the earliest stages of schizophrenic illness. RAISE is designed to reduce the long-term disability commonly associated with this illness by fostering achievement of the recovery goals of independence and productivity. This should reduce the costs associated with long-term care. Mobile applications could assist programs such as RAISE in reducing the risk of long-term disability by fostering early medication adherence and other wellness objectives, including those related to sleep hygiene, nutrition, exercise, and maintenance of social support.
Other Health Mobile Applications
Use of technology, particularly mobile technology, to assist with managing illness and maintaining healthy habits is increasingly popular. There are a variety of apps designed to help motivate and track healthy habits for those who are trying to lose weight, for example. Two such apps, Livestrong's MyPlate and Arawella Corporation's Calorie Counter, provide functions for tracking food, nutrition, and calories, as well as activity, on a daily basis. They also provide long-term progress charts and graphs. Millions of mobile device users have downloaded and use these apps regularly.
Moreover, there is growing evidence that these apps are effective. An evaluation of Aurora, a mobile phone based app for emotion recording and sharing, showed that the app encourages people to be more aware of their emotions and leads them to engage in socially supportive behavior [19]. A similar study by Deleeuw et al. [20] came to the same conclusion. Additional evidence suggests that apps which run on mobile phones can be effective tools for the management of chronic diseases such as diabetes, hypertension, and asthma [21,22,23]. The National Library of Medicine (NLM) maintains a "Gallery of Mobile Apps and Sites" which features apps for tracking medications (MyMedList), accessing consumer information about drugs (DailyMed), general research (access to PubMed for Handhelds), and specific health-related information, such as LactMed for nursing mothers and AIDSinfo mobile for AIDS patients and their doctors (http://www.nlm.nih.gov/mobile/). PmEB, a mobile device app for monitoring caloric balance in obese children and adults, was used to develop a heightened sense of self-awareness and to promote self-monitoring [24].
According to a recent Pew survey, 83% of American adults own some kind of cell phone, and 35% own a smartphone of some kind. In the previous 30 days, 51% of adult cell phone owners had used their phones at least once to get information that they needed right away, 40% had used the phone in an emergency situation, and 42% had used the phone to stave off boredom [25].
Method
Forty-two psychologists, social workers, and therapeutic support aides (18 [43%] male, 24 [57%] female) who regularly provide services to consumers diagnosed with schizophrenia, as well as other severe mental illnesses, served as participants in the current investigation (mean age: 45.67 years; mean clinical experience: 18.30 years). Surveys were administered to professional continuing education program attendees, prior to a discussion about the potential advantages and disadvantages of using mobile application technology as an adjunct to treatment. The survey assessed the clinician's years of experience working with MH/ID consumers, as well as the therapist's age and sex. Respondents were asked how many of their current consumers would benefit from 15 functions scheduled to be offered by the investigational mobile application, including the following: help using skills learned in therapy outside of treatment; redirection when upset; relaxation induction when upset; reminders about their strengths; reminders about supportive others; positive activities when bored; listening to music when upset; refocusing when upset; reminders to take medication; journaling; monitoring their goal behaviors; monitoring how they feel; auditory feedback about progress; visual feedback about progress; and reminders to eat well. They were also asked how many of their current consumers could safely use a cell phone, as well as how many of their former consumers now in the community could safely use a cell phone.
Results
The clinicians surveyed reported that the majority (56.24%) of their current consumers could safely use a cell phone. An even greater proportion (70.24%) of community-based consumers were viewed as capable of safely using a cell phone. Table 1 indicates the percentage of consumers that clinicians believed could benefit from the functions offered by TherAPPist. When the most highly rated types of assistance (strengths reminders, redirection, positive activities, music) were compared, paired-samples t-tests showed no significant differences in their perceived value to consumers. When the next tier of types of assistance (refocusing, medication reminders, visual progress feedback, generalization help, auditory progress feedback, supportive others, monitoring feelings, and relaxation induction) were compared, paired-samples t-tests indicated that relaxation induction was seen as less beneficial to consumers than many other interventions in the second tier: refocusing (t=3.14, df=41, p<.01), medication reminders (t=3.46, df=41, p<.001), and generalization help (t=2.38, df=41, p<.05). The other helping components in this tier were rated as comparably important. Relaxation induction was also seen as likely to benefit fewer consumers than all four first-tier interventions. Among the three lowest-rated interventions, journaling was perceived to benefit the fewest consumers. Paired-samples tests showed that journaling was rated significantly lower than monitoring feelings (t=5.75, df=41, p<.001), monitoring behavior (t=4.30, df=41, p<.001), and eating reminders (t=2.87, df=41, p<.01).
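The comparisons above rely on the paired-samples t-test. A minimal sketch of the statistic follows, computed on fabricated ratings invented for illustration; these are not the study's data, and the function name is our own.

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d holds the per-respondent differences and df = n - 1."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1

# Fabricated ratings from five hypothetical clinicians comparing two features
# (illustration only; NOT the 42-respondent survey data reported above).
refocusing = [80, 75, 90, 85, 70]
relaxation = [60, 65, 70, 75, 55]
t, df = paired_t(refocusing, relaxation)
print(round(t, 2), df)  # 6.71 4
```

Because each clinician rates both features, the test operates on within-respondent differences, which removes between-rater variability and gives more power than an unpaired comparison.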
Discussion
Clinicians reported that over two-thirds of their consumers in the community could safely make use of cell phone technology; a lower percentage, but still a majority, of those currently in inpatient settings were seen as candidates for this technology. A majority of consumers were seen as likely to benefit from the reminders about their strengths, redirection, positive activities, and access to music that the app will provide. The assistance with refocusing, medication adherence, auditory and visual progress feedback, and generalization offered by TherAPPist was also expected to benefit over two-thirds of these clients. Somewhat fewer consumers were seen as likely to benefit from the journaling, relaxation, and behavioral tracking features of the app. Since using these latter features may require greater cognitive skills, it may be especially important to simplify and abbreviate these features of the app to make them suitable for this group of consumers.
These findings suggest that additional research using this technology with such consumers would be valuable. This inexpensive app (99 cents) offers individualized (via customizable components), timely (via immediacy of help), and responsive (via provision of feedback on progress) assistance. If effective in promoting greater self-control, this app has the potential to enhance consumers' lives and reduce healthcare costs. When fleeting emotional reactions are handled inappropriately, unsafe behaviors can necessitate expensive hospitalization or arrest. A typical inpatient facility stay costs $1,227 per day and lasts 6.6 days.
The psychological price of an involuntary commitment on a locked unit can be even more challenging to endure. Similarly, being arrested and secluded in jail imposes both financial and psychic costs. According to a June 2010 article published in The Economist, yearly spending on a single inmate ranges from $18,000 in Mississippi to approximately $50,000 in California, where the average bill per day to house an inmate in state prisons is about $129. If this app can even modestly reduce episodes of incarceration among those diagnosed with schizophrenia, it will represent an important step forward.
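Putting the cost figures from the two preceding paragraphs together gives a sense of the savings at stake; this is simple arithmetic on the numbers cited in the text, not new data.

```python
# All input values are taken from the figures cited above.
hospital_per_day = 1227      # typical inpatient cost per day ($)
hospital_stay_days = 6.6     # typical length of stay (days)
prison_ca_per_day = 129      # California state prison cost per inmate per day ($)

avg_hospital_stay = hospital_per_day * hospital_stay_days
ca_prison_per_year = prison_ca_per_day * 365

print(round(avg_hospital_stay))  # 8098  -> about $8,100 per admission
print(ca_prison_per_year)        # 47085 -> consistent with the ~$50,000/year figure
```

Even one avoided hospitalization per consumer per year would cover the app's cost many thousands of times over.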
Future Work
Often, a small intervention can make a big difference in preventing a maladaptive escalation, if it can be offered at just the right time. It is hoped that TherAPPist will enable consumers to have more control over the delivery of services, making treatment more responsive to their immediate needs. TherAPPist reduces the need to wait for assistance, which may make all the difference for many easily frustrated, impulsive consumers.
Since executive function impairment underlies so many forms of personally and socially maladaptive behavior, mobile applications designed to supplement self-control capabilities have the promise to enhance lives. As technology advances, and devices incorporate more biometric data, future therapeutic mobile applications will serve as tools to improve behavioral functioning, health, and subjective well-being.
The need to curb the growth of healthcare spending mandates the development of more creative means of assisting those with various chronic illnesses. The timely, inexpensive, continuous support made possible by mobile applications such as TherAPPist could dramatically improve the lives of those with schizophrenia and other brain disorders, without unduly straining budgets.
Disclosure
The authors jointly developed and are disseminating one type of mobile application for enhancing the treatment of schizophrenia (TherAPPist). Although the survey used in this study assessed the general feasibility of mobile applications in helping these consumers and the potential utility of functions potentially offered via various generic mobile applications, a competing interest exists due to the potential financial gain associated with the marketing of TherAPPist.
Table 1 .
Clinicians' estimates of how many of their current consumers would benefit from the following functions provided by the TherAPPist mobile application: Means and Standard Deviations (SD).
Low-income Families Guide Innovation: Application of Human- Centered Design
The Prosperity Agenda (TPA) is a nonprofit organization whose human-centered design process centers on the belief that all people are resilient and resourceful. From 2016 to 2019, with support from the W.K. Kellogg Foundation, they collaborated with the Washington State Department of Commerce to design and implement a new program to encourage two-generational savings among families receiving social welfare assistance. Innovative classroom events focused on savings were a direct outgrowth of TPA’s work with families experiencing poverty. The positive results of the yearlong pilot confirmed the idea that an intervention rooted in human-centered design and guided by both the experiential wisdom of low-income families and the deep expertise of event facilitators would help families build financial resilience.
Introduction
Poverty assistance programs like Temporary Assistance for Needy Families (TANF) often focus on correcting a perceived lack of discipline among "the poor." Such programs temporarily relieve acute financial stress but do little to end the root causes of persistent poverty (Soss et al., 2011). Traditional financial literacy interventions that focus on individual behavior change have little impact on actual financial circumstances of lower-income families (Fernandes et al., 2014). These interventions reinforce a stereotypical "culture of poverty" that blames people for their economic status because they have "bad values" that drive unwise personal choices (Frameworks Institute, 2019).
The Prosperity Agenda (TPA) is a nonprofit organization whose human-centered design (HCD) process runs counter to prevailing approaches that are anchored in the assumptions that families in poverty make bad financial decisions, don't know how to save money, and are responsible for their own fate (Fraser & Gordon, 1994; Jacobson et al., 2009). From 2016 to 2019, with support from the W.K. Kellogg Foundation, TPA collaborated with the Washington State Department of Commerce (Commerce), which manages TANF programs, to develop and implement a new program to encourage two-generational savings among families receiving social welfare assistance. This paper summarizes TPA's utilization of HCD principles and participatory research (PR) methods in the creation of this intervention. Bergold and Thomas (2012) described PR as the involvement of any groups of people who are not professional researchers. The Savings Initiative combined PR with HCD and systems thinking to cultivate a shift from the status quo (Figure 1). Both HCD and PR focus on how to generate innovative solutions to social problems with a commitment to uphold dignity and respect for marginalized populations (Björling & Rose, 2019; Kia-Keating et al., 2017). Poverty in particular contains many challenging and interdependent factors that require a systems approach to improve impact (Frameworks Institute, 2019). To transform a system, one must transform the relationships between the people who make up the system (Kania et al., 2018). TPA acted as third-party facilitator, designer, and evaluator to cultivate the conditions for collaboration and participation across stakeholders, while assessing and documenting progress towards a community-based solution (González, 2019).
Results
Operationalize Concept: TPA partnered with Commerce to develop an intervention that would outperform traditional financial literacy programs and meet families' needs. Commerce provided in-kind support, such as access to clients and assistance with the Washington State Institutional Review Board (IRB), plus $20,000 for pilot site stipends. The Kellogg Foundation provided financial resources for TPA to perform research, design, testing, and program refinement. Success for this initiative was guided by one key question: How might we use a two-generation approach to improve financial resilience and to strengthen savings behaviors of parents in social welfare programs?

Examine Social Context: To inform the design, TPA conducted a robust qualitative inquiry. TPA and Commerce identified four rural and urban contractors in Washington State. TPA and the contractors developed an interview protocol and conducted in-person interviews, focus groups, and observations to gather first-hand parent information about savings barriers, practices, behaviors, and goals. TPA interviewed program staff to understand the constraints of introducing new programs. TPA interviewed 40 parents receiving TANF, 26 contractor staff (case managers, program managers, and program directors), and 7 staff from Commerce. Immersion in the lived experience of an intervention's intended beneficiaries is essential to HCD (Mulgan, 2006). The research phase yielded significant insights about how families save, spend, and discuss finances with their children. TPA analyzed the qualitative content by categorizing commonalities and identifying overarching themes. Insights were used to develop "Personas" and "Causality Maps." TPA built three personas: one case manager and two TANF participants (Figure 2). Personas are not summaries of the research; they communicate key opportunities and challenges that emerged from it.
TPA developed two causality maps to communicate additional research insights from participants. The first causality map (Figure 3) shows how TANF program parents described the different levers that impact their savings success. The second causality map captured how parents and children influenced each other's savings behaviors. Understanding these causal mechanisms allowed TPA to set priorities, make informed decisions, and hypothesize short- and long-term outcomes for the evaluation.
Design: The design team consisted of eight individuals: one career coach who was previously enrolled in TANF, three design consultants, and four TPA staff. They engaged in six four-hour design sessions and reviewed the "how might we" question, personas, and causality maps to gain a mutual understanding of the challenge. As the team engaged with the personas and causality maps, which directly reflect the problems and successes described by impacted families, they surfaced three main "design criteria" that drove the development of potential solutions: 1) impacted families utilize a wide range of non-traditional savings tactics, such as paying more toward a bill than is due; 2) social and cultural pressures to spend money add to decision fatigue; and 3) connecting with others and identifying as a saver increases the likelihood of achieving (financial) goals.
Building on these principles, the design team brainstormed and clustered ideas on sticky notes. These clusters yielded multiple possible prototypes. One idea stood out as having the highest possible feasibility and impact: easy-to-implement event kits that help staff facilitate conversations around money. TPA named these event kits Money Powerup Packs (MPUPs). The career coach and TPA collaborated with TANF parents to improve the initial concept, refine measurement tools, and define what success meant. One participant reported that the post-event survey provided a way to be truthful about non-traditional savings tactics. Continuous Development: Four contractors from the research phase tested four MPUPs for four months. TPA solicited feedback through surveys and phone calls with the facilitators. To create a valuable experience for participants, MPUPs had to work for facilitators. Facilitators who elevated the participants' voice provided important information about the strengths and challenges of the materials, the structure of the events, and the value of event activities. Informed by the first round of testing, TPA designed four additional MPUPs and improved the existing MPUPs by adding content, creating electronic formats, enhancing instructions, and providing more guidance on facilitating event activities. Because TANF recipients are a protected class, including their perspective directly was not possible until the IRB approved the evaluation study. TPA adopted a mixed-methods evaluation design and connected evaluation questions with projected outcomes, measurement tools, frequency of data collection, and data analysis methods.
Facilitators had the power to decide how to implement MPUPs. They decided whether to change or expand the activities, or even alter the meaning and intent of each event. TPA conducted more than 56 check-in calls to emphasize that facilitators were co-researchers and designers throughout the process. For many facilitators, this was a complete shift from the norm, where they are often directed to complete tasks but not invited to contribute to the overall vision or efficacy. Facilitators employed at organizations that are less hierarchical generally felt more comfortable with ambiguity and made autonomous decisions to permanently change the course of MPUPs. Participants had the power to directly influence the decisions that drove MPUP refinements. TPA responded to every suggestion made by participants in accordance with the design criteria from the research phase. So that participants could voice their opinions about the events, facilitators were coached to create a safe space and remind participants that their feedback would be used to improve MPUPs for future participants. An external evaluator from the University of Washington confirmed that "the safe space provided participants an opportunity to share their feelings in a non-judgmental environment." Altogether, 330 TANF-receiving parents participated in the MPUP evaluation. TPA gathered their feedback through baseline, outcome, and post-event surveys. TPA also observed multiple MPUP events, interviewed 6 facilitators, and conducted 11 focus groups. The external evaluator confirmed that participants described MPUPs as supportive, non-judgmental environments in which they could directly participate in learning about money and savings. Figure 7 shows how participants learned from each other, from events, and with their children.
Figure 8 demonstrates that participants envisioned their financial future, reflected on family financial behavior and financial decision-making, and felt socially connected (on a scale where 1 equates to "disagree" and 4 to "agree").
MPUPs provide sufficient space for participants to connect with one another and build relationships. This unique feature lets participants build mutual trust, share vulnerably, and offer each other guidance and insight. Despite various limitations, TPA learned three major lessons along the way that significantly matured their HCD process and PR methods: 1) careful partner selection is the most critical step in fostering the conditions for collaboration across stakeholders. Organizations that promote autonomy among frontline staff, believe in the resourcefulness of low-income families, and consider themselves innovation engines with prime testing grounds should participate in working groups alongside impacted families and hold equal decision-making power to shape and execute a successful research phase; 2) in addition to adapting to the availability of impacted families, design sessions and unstructured ideation sessions should occur more frequently to include more underrepresented perspectives, which are critical in the development of social innovations using participatory methods; and 3) the expertise of former research and design participants who represent the larger system should be tapped to further refine solutions, detect anticipated and emergent outcomes, and inform strategies to scale impact.
conclusion
TPA's HCD approach, combined with PR methods and systems thinking, yielded innovative event kits that facilitators used to initiate meaningful conversations around money. Evaluation results confirmed that by partnering with low-income families and event facilitators, TPA was able to design pragmatic solutions that helped families build financial resilience. Organizations like TPA are uniquely positioned to share power with marginalized families while, at the same time, earning trust from decision-makers to continue pursuing processes that overcome disciplinary mindsets and instead promote dignity, respect, and prosperity for all.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CCBY-4.0). View this license's legal deed at http://creativecommons.org/licenses/ by/4.0 and legal code at http://creativecommons.org/licenses/by/4.0/legalcode for more information.
Low-income Families Guide Innovation: Application of Human-Centered Design
|
v3-fos-license
|
2018-04-03T01:10:36.741Z
|
2011-01-01T00:00:00.000
|
209138995
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CC0",
"oa_status": null,
"oa_url": null,
"pdf_hash": "49bce3f7874740a0cb05ac9cd8ac64b0e85591be",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2616",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1c27ab286036dccb7c6106dd497b71077dc39f76",
"year": 2011
}
|
pes2o/s2orc
|
Medications for Unhealthy Alcohol Use
The prevalence of unidentified or untreated unhealthy alcohol use remains high. With the advent of pharmacotherapy and models of counseling appropriate for use in primary care settings as well as in specialty care, clinicians have new tools to manage the range of alcohol problems across the spectrum of health care settings. By extending treatment to primary care, many people who do not currently receive specialty care may have increased access to treatment. In addition, primary care providers, by virtue of their ongoing relationship with patients, may be able to provide continuing treatment over time. Extending the spectrum of care to hazardous drinkers who may not be alcohol dependent could result in earlier intervention and reduce the consequences of excessive drinking.
Unhealthy alcohol use, which includes the spectrum of drinking behaviors and consequences ranging from risky use to problem drinking, along with alcohol abuse and alcohol dependence (Saitz 2005), has been linked to a multitude of health and social problems. Unhealthy alcohol use accounts for an estimated 85,000 deaths at an economic cost of $185 billion annually in the United States (Harwood 2000). Beyond this, numerous medical problems, such as liver disease, neurologic problems, and malignancies, as well as behavioral dysfunction resulting in employment and legal problems, are directly attributable to alcohol.
Research has demonstrated that a variety of treatment approaches can help individuals with unhealthy alcohol use decrease their alcohol intake and thus avoid the many consequences described above. Counseling interventions have been designed to address the full spectrum of unhealthy alcohol use, from brief interventions for risky use to more complex and rigorous counseling strategies for individuals with alcohol dependence. In addition, beginning with disulfiram in the late 1940s and more recently with naltrexone and acamprosate along with newer medications "in the pipeline," pharmacotherapy has been demonstrated to be a useful adjunct to behavioral therapies for many people with unhealthy alcohol use, particularly those with alcohol dependence.
The spectrum of unhealthy alcohol use can be addressed in a variety of health care settings, including primary care, specialty practice, and alcohol treatment programs. Although complex behavioral strategies have been developed primarily for specialty settings and treatment programs, where they can be effectively delivered, screening and brief intervention counseling has been developed for use in primary care settings, with a focus on treatment referral when necessary. Medication use in these nonspecialized settings and in a spectrum of patients including nondependent individuals is a recent phenomenon.
Research is needed to address the optimal use of medication therapy for the treatment of alcohol use disorders and for treating the broader spectrum of unhealthy alcohol use, from nondependent risky drinking to alcohol dependence. This is especially true given the major scientific advances in pharmacotherapy that have been made over the past 60 years. To improve access to effective medication therapy, research also should explore the use of these medications in a range of health care settings. To optimize medication treatment outcomes, practitioners need to assess both the appropriate level of counseling (from minimal to more intensive) and the appropriate methods to enhance medication adherence for individual patients. The development of medications to address the spectrum of unhealthy alcohol use across the broad range of health care settings has the potential to maximize benefits for future patients.
STEPHANIE S. O'MALLEY, PH.D., is professor of psychiatry; and PATRICK G. O'CONNOR, M.D., M.P.H., is professor of internal medicine, both at the Yale University School of Medicine, New Haven, Connecticut.
After reviewing the medications currently approved for alcohol dependence and new medications being investigated, this article will outline ways to optimize treatment outcomes through patient-treatment matching and increased treatment adherence, and review potential uses of medications for nondependent hazardous drinkers, including the use of medications in primary care settings.
Medications for Alcohol Dependence
The Food and Drug Administration (FDA) has approved four medications for the treatment of alcohol dependence: disulfiram (Antabuse®), oral naltrexone, extended-release naltrexone (Vivitrol®), and acamprosate (Campral®). Topiramate, a medication used to treat epilepsy and migraine, has demonstrated evidence in two clinical trials of alcohol dependence, and a number of other promising medications are being studied. For detailed information about the mechanisms, risks, and benefits of approved medications and those on the horizon, please see Krishnan-Sarin (2008). (For specific reviews: disulfiram [Malcom et al. 2008], oral naltrexone [Pettinati et al. 2006], injectable naltrexone [Swainston Harrison et al. 2006], acamprosate [Scott et al. 2005; Mason and Crean 2007], topiramate [Johnson and Ait-Daoud 2010], and the product information for each medication.) The National Institute on Alcohol Abuse and Alcoholism's (NIAAA's) Clinician's Guide provides practical information about prescribing medications for alcohol dependence (NIAAA 2007) and covers a range of considerations (e.g., concurrent counseling, length of treatment, mechanisms, contraindications, precautions, adverse events, drug interactions, and usual adult dosage).
Disulfiram, the first drug approved for the treatment of alcohol dependence, and still one of the most commonly used agents, produces an aversive interaction with alcohol by interfering with the metabolism of alcohol. During alcohol metabolism, alcohol is converted to acetaldehyde, which then is broken down by the enzyme aldehyde dehydrogenase. Disulfiram inhibits this later step, leading to a buildup of acetaldehyde, and results in aversive effects such as nausea, vomiting, palpitations, and headache. Ordinarily, the negative consequences of alcohol consumption (e.g., health problems) are delayed and uncertain (e.g., your significant other may or may not become angry with you; the police may not apprehend you for drunk driving). The knowledge of the potential disulfiram-alcohol interaction, however, can make the consequences of drinking certain and immediate and thereby support a person's motivation to avoid drinking, and the actual reaction may limit the amount consumed if abstinence is violated. Medication compliance can be a problem, however, and disulfiram is most effective when provided with supervised administration by a significant other or health care provider (Krampe and Ehrenreich 2010).
Naltrexone is an opiate antagonist that primarily blocks µ-receptors, with more variable occupancy of δ-receptors, at the standard dose of 50 mg daily (Weerts et al. 2008). In laboratory studies, naltrexone has been shown to reduce the number of drinks consumed (Anton et al. 2004; Krishnan-Sarin et al. 2007; O'Malley et al. 2002). In clinical trials, naltrexone reduced the percentage of heavy drinking days (Pettinati et al. 2006). Recent meta-analyses have indicated that oral naltrexone has modest efficacy over 3 months on preventing relapse to heavy drinking, return to any drinking, and medication discontinuation (Srisurapanont et al. 2005). The standard dose is 50 mg daily, but a multisite study demonstrated that 100 mg daily also was effective when combined with medical management (Anton et al. 2006).
Extended-release naltrexone, a formulation that only requires a monthly injection, holds the potential to minimize problems with medication adherence. In a 6-month trial, 64 percent of participants received all 6 months of double-blind medication, translating into daily coverage for the entire treatment period (Garbutt et al. 2005). Naltrexone was significantly more effective in reducing the rate of heavy drinking than placebo, an effect most pronounced in those who had achieved abstinence prior to receiving the first injection. In the subset of those who were abstinent for at least 4 days prior to random assignment, extended-release naltrexone also significantly improved continuous abstinence rates (O'Malley et al. 2007). Specifically, 32 percent of those receiving extended-release naltrexone (380 mg) remained abstinent over 6 months compared with 11 percent of those receiving placebo.
The primary adverse effects of naltrexone, whether oral or injectable, are nausea followed by headache and dizziness. Patients with significant liver disease are not candidates for naltrexone, nor are patients who require opiate medications for pain control. Acute pain control requires alternatives to opioids. To avoid precipitating an opioid-withdrawal syndrome, patients should be free of opioids for 7 to 10 days before beginning naltrexone. If extended-release naltrexone is administered subcutaneously rather than as an intramuscular gluteal injection, the likelihood of severe injection-site reactions may increase (http://www.vivitrol.com/pdf_docs/prescribing_info.pdf).
Acamprosate, available in oral delayed-release tablets (Campral®), was approved for use in the treatment of alcoholism in the United States in 2004, following extensive use in many other countries. Acamprosate is believed to normalize the balance between excitatory and inhibitory pathways altered by chronic alcohol consumption (Littleton and Zieglgansberger 2003), although the actual mechanism of action is uncertain. Using combined data from three European studies that were the basis of the approval of acamprosate in the United States, Kranzler and Gage (2008) found that acamprosate improved rates of continuous abstinence, percent days abstinent, and time to first drink. Two studies conducted in the United States did not find overall efficacy for acamprosate (Anton et al. 2006; Mason et al. 2006); however, the methods of these studies differed in substantial ways from the European studies. Notably, 90 percent of patients in the European acamprosate clinical trials received inpatient detoxification, compared with only 2.3 percent and 7.7 percent of those in the U.S. trials (Mason and Crean 2007).
One of the strengths of acamprosate is its side-effect profile; the most common side effects are gastrointestinal in nature. Acamprosate can be used in patients with moderate liver disease but is contraindicated in patients with severe renal impairment, and dose reductions are recommended for those with mild-to-moderate levels of renal impairment.
Topiramate, an anticonvulsant, is hypothesized to have beneficial effects on drinking by facilitating the functioning of the neurotransmitter γ-aminobutyric acid (GABA) and antagonizing glutamate activity. Two placebo-controlled trials (Johnson et al. 2003, 2008), including a multisite study, have demonstrated the efficacy of topiramate in very-heavy-drinking alcohol-dependent patients who were not required to be abstinent prior to starting treatment. In these trials, therapists used brief behavioral compliance enhancement therapy to enhance medication adherence and provide support for patients who worked on their personal goals for their drinking. Patients also reduced cigarette smoking, which suggests a potential side benefit of using topiramate to treat alcohol-dependent smokers (Johnson et al. 2005).
Topiramate requires very gradual dose escalation. The most common adverse events include cognitive dysfunction, abnormal sensations (e.g., numbness, tingling), anorexia, and taste abnormalities. Additional rarer serious adverse events have been identified, such as metabolic acidosis, acute myopia, and secondary narrow-angle glaucoma. The optimal dose for alcohol dependence has yet to be established and may be lower than the target dose of 300 mg per day tested in prior research.
New Medications on the Horizon
Currently available pharmacotherapies have only modest effects, which has spurred efforts to identify treatment responders, new medications, treatment combinations, and methods to enhance adherence. As reviewed by Krishnan-Sarin and colleagues (2008), several other medications show some clinical evidence of efficacy.
Numerous studies have tested selective serotonin reuptake inhibitors (approved for depression), often with disappointing results, including countertherapeutic effects among patients with early-onset alcoholism. However, studies show that these medications (e.g., sertraline) may be efficacious among individuals with later-onset alcoholism (Kranzler et al. 1996; Pettinati et al. 2000) or in combination with naltrexone for patients with major depression (Pettinati et al. 2010). In contrast, ondansetron (a selective serotonin-3 [5-HT3] antagonist approved for nausea) shows some efficacy for reducing heavy drinking among patients with early-onset or Type B alcoholism (Kranzler et al. 2003; Johnson et al. 2000).
Medications targeting the GABA and glutamate systems show promise as treatments for acute and protracted alcohol withdrawal and for relapse prevention. Treatment with baclofen (a GABA-B receptor agonist [i.e., it binds with the GABA-B receptor] used for muscle spasticity) has been found to reduce symptoms of alcohol withdrawal, and a placebo-controlled study of 84 alcohol-dependent patients with cirrhosis yielded promising results (Addolorato et al. 2007). However, a recent placebo-controlled study in 121 patients did not find an advantage of baclofen over placebo on measures of drinking, although baclofen was associated with reduced anxiety (Garbutt et al. 2010). Additional efficacy studies will need to address whether individuals with more severe dependence or greater anxiety may benefit from this medication. Gabapentin, an anticonvulsant, also shows promise for alcohol withdrawal and for improving drinking outcomes in early treatment among individuals with high alcohol-withdrawal symptoms (Anton et al. 2009) or individuals with comorbid insomnia (Brower et al. 2008).
Given the role of dopamine in the maintenance of alcohol dependence, drugs that have direct effects on dopamine through either partial agonism (e.g., aripiprazole) or antagonist effects (e.g., olanzapine, quetiapine) have been investigated as candidates for alcoholism treatment. A multisite study did not find an overall advantage of the atypical antipsychotic aripiprazole over placebo on the primary outcomes, although some secondary outcomes suggested that studies at lower doses would be worthwhile (Anton et al. 2008a). A smaller, single-site, placebo-controlled study did not show a benefit of olanzapine, and, although not statistically significant, discontinuation of treatment was higher in the group receiving active medication than in the group receiving placebo. The antipsychotics all have important adverse events that may limit the potential of these agents for treating alcohol dependence.
There has been considerable enthusiasm about the potential of rimonabant, a cannabinoid receptor 1 antagonist, based on preclinical research showing that it reduced alcohol drinking. However, psychiatric adverse events noted in obese patients, a negative human alcohol self-administration study (George et al. 2009), and a negative clinical trial in individuals with alcohol dependence (Soyka et al. 2008) have ruled out this particular agent for the treatment of alcohol dependence.
Many alcohol-dependent individuals also smoke cigarettes, and researchers have investigated the potential role of the nicotinic acetylcholine receptor (nAChR) system as a factor in both addictive behaviors (for a review, see Chatterjee and Bartlett 2010). Nicotinic compounds, including agonists, partial agonists, and antagonists, currently are under investigation for the treatment of alcoholism. Human laboratory studies have shown that mecamylamine, a nonselective nAChR antagonist approved for hypertension, can reduce alcohol preference and the stimulating effects of alcohol in healthy study participants (Blomqvist et al. 2002; Chi et al. 2003; Young et al. 2005). Laboratory studies also have shown that varenicline, a partial agonist approved for smoking cessation, can reduce craving and drinking in smokers who drink heavily (McKee et al. 2009). A preliminary study among smokers receiving varenicline for smoking cessation found that it significantly reduced heavy drinking (compared with a placebo) during an extended pretreatment period (Fucito et al. in press). Studies are ongoing to evaluate the efficacy of these two compounds in clinical trials of alcohol-dependent patients.
Researchers also are studying agents that may address the relationship between stress and alcohol consumption. Prazosin, an α1-adrenergic antagonist that is effective in treating posttraumatic stress disorder (PTSD), has shown preliminary efficacy in a small pilot study with 24 alcohol-dependent patients without PTSD (Simpson et al. 2009). Other targets for new treatments are receptors for stress-related neuropeptides, including corticotropin-releasing factor (CRF), neuropeptide Y (NPY), substance P, and nociceptin (George et al. 2008; Heilig and Egli 2006), as well as inhibitors of ALDH2 (Overstreet et al. 2009).
Optimizing Outcomes by Patient-Treatment Matching
Research is being done in an attempt to identify predictors of patient response to FDA-approved treatments. In a secondary analysis of a U.S. acamprosate trial, patients with a strong commitment to abstinence benefited from acamprosate (Mason et al. 2006). However, several hypothesized predictors of acamprosate response, including high physiological dependence, late age of onset, and serious anxiety symptoms, did not predict differential response in a pooled analysis of data from seven placebo-controlled trials. A secondary analysis of baseline trajectories of drinking in the Combining Medications and Behavioral Interventions for Alcoholism Study (COMBINE), the largest study of pharmacotherapy to date, found that individuals who achieved 14 or more days of abstinence may not be good candidates for acamprosate, whereas those who were frequent drinkers but did not attain extended abstinence may benefit (Ralitza et al. in press). With regard to naltrexone, several studies, but not all, have suggested that family history of alcoholism (Krishnan-Sarin et al. 2007; Monterosso et al. 2001; Rohsenow et al. 2007) and a variant of the opioid receptor µ1 (OPRM1) may predict differential benefit (Anton et al. 2008b; Oslin et al. 2003). In the COMBINE study, people with "Type A" alcohol dependence (i.e., fewer comorbid psychiatric and substance abuse disorders) responded well to naltrexone (Bogenschutz et al. 2009). Because primary care providers may feel more comfortable managing less complicated patients, this is an encouraging finding. In the end, the promise of personalized medicine will depend on the identification of reliable predictors of differential treatment response.
Optimizing Outcomes by Increasing Adherence
Poor adherence to prescribed medications can limit a treatment's effectiveness. As a result, research has investigated predictors of adherence and methods for enhancing adherence. One of the best predictors of future behavior is past behavior. In the case of medication compliance, self-reported problems with adherence characterized as purposeful nonadherence (e.g., stopping medication early due to either feeling better or worse) predict medication compliance and treatment outcome (Toll et al. 2007).
Medication Use for Nondependent Hazardous Drinkers
Currently, research has evaluated alcoholism medications primarily in alcohol-dependent populations. Many individuals, however, drink at harmful levels but do not meet the criteria for dependence and may benefit from medications to augment the counseling approaches used with this subgroup of drinkers.
Young Adults
Because young adults are less interested in quitting drinking than in reducing their drinking, interventions to help them moderate their alcohol consumption may be particularly useful (Epler et al. 2009; Kranzler et al. 2009). A preliminary open-label study of naltrexone and BASICS in young adults suggests that this approach is associated with reductions in heavy drinking and alcohol-related consequences (Leeman et al. 2008).
Reducing Drinking
Regardless of age, many individuals with alcohol dependence, particularly those with less severe problems, would prefer to reduce their drinking rather than seek total abstinence, and low severity of alcohol dependence is one of the characteristics that predicts recovery from alcohol problems, as evidenced by moderate drinking (Humphreys et al. 1995). As a result, medications that reliably reduce the risk of heavy drinking would likely enhance treatment seeking, especially among individuals with less severe problems. In this regard, topiramate and naltrexone show potential for a subset of patients. However, the field needs to identify which patients achieve and maintain nonhazardous drinking with these medications and to develop better medications that have this effect for a broad spectrum of patients.
Medication Use in the Treatment of Unhealthy Alcohol Use in Primary Care Settings
The rapid progress in the development of medications to treat alcohol dependence, although impressive, has resulted in a relatively slow adoption of these new treatments. In 2007, the percentage of Veterans Administration patients with alcohol use disorders who received pharmacotherapy was 3 percent (Harris et al. 2010). Among patients seen in the past year in 128 Veterans Health Administration facilities, the rates ranged from 0 to 20.5 percent among those who received specialty care and from 0 to 4.3 percent among those who did not receive specialty care. A number of obstacles have hindered medication use in alcohol dependence treatment programs, including lack of knowledge and availability of medical staff who can prescribe. However, researchers have identified the following factors associated with the adoption of medication use: organizational characteristics, such as accreditation; the presence of staff physicians; and the availability of detoxification (see the sidebar by LaPaglia on p. 305).
Primary care providers are well suited to address a wide variety of behavioral problems in their patients and routinely manage chronic diseases with a combination of counseling and medication management. Using primary care-based prevention strategies to address behaviors such as overeating and smoking, practitioners already routinely screen for conditions such as high cholesterol, hypertension, and cancer and treat a full range of chronic conditions such as diabetes and asthma. Similar clinical management strategies for unhealthy alcohol use and alcohol use disorders have been developed. However, despite convincing data supporting the value of evidence-based screening techniques, brief interventions, and medication approaches, primary care settings rarely use these tools (D'Amico et al. 2005). In an effort to address this situation, the Institute of Medicine (2005) strongly endorsed the notion that primary care providers should have a greatly enhanced role in identifying and managing substance use problems in their patients as part of a strategy to improve access to care for individuals with substance use disorders.
With regard to alcohol, a single question about how often the patient exceeds the daily maximum drinking limits (i.e., more than four drinks for men and more than three drinks for women) in the prior year can be effectively used to screen for unhealthy alcohol use (Willenbring et al. 2009). The NIAAA clinician's guide, Helping Patients Who Drink Too Much, provides practical advice about how to
Challenges and Solutions of Adding Medications Treatment to Specialty Addiction Treatment Programs: A Review With Suggestions
Donna LaPaglia, Psy.D.
Slow diffusion of evidence-based innovations is a common occurrence in health care. Rogers (2003) documented the lag that exists between proven scientific benefits and their adoption into formal practice. This gap is very pronounced in addictions treatment, despite documented evidence of therapies that show promise in treating substance use disorders (Lamb et al. 1998; McGovern et al. 2004; Sorenson and Midkiff 2002). This widely acknowledged gap occurs for psychotherapeutic interventions as well as established pharmacotherapies.
A multitude of factors are thought to influence the substance abuse treatment community's ability and/or willingness to incorporate these practices into routine care. This sidebar focuses specifically on the adoption of medication-based approaches for the treatment of alcohol dependence, describes the historical context and the environmental milieu of current addictions treatment, and makes recommendations for the successful implementation of medications use in addiction treatment programs.
Data

Medication-assisted treatment accounts for a small percentage of ongoing substance abuse treatment in this country. With a vast majority of the substance-using population not reaping the benefits of addiction medications, it is necessary to examine the historical beginnings of addictions treatment to inform adoption recommendations.
Because of social stigma, addictions treatment grew in isolation from mainstream medical care (Guydish 2003; White 1998), with recovering peers ministering to each other out of necessity. The system of care that evolved carried with it a "personal" focus, with peer teachings spread by word of mouth. These teachings and the surrounding attitudes and belief systems emphasize self-reliance and the belief that healing can take place solely within the community of addicted people and that no medical intervention is necessary. Addiction treatment programs sprang forth from Alcoholics Anonymous (Alcoholics Anonymous 1976) and other step-based movements. The resulting system of care possesses, at its core, a philosophical belief that total abstinence is gained not through the use of medication to treat alcohol dependence but instead through blood, sweat, and personal tears working through the 12 steps.
The most recent Federal data indicate that nonmedical personnel, many of whom possess personal 12-step recovery histories, deliver the majority of alcoholism treatment in this country in specialty care settings. These treatment programs differ widely in organizational structure, source of payment, services offered, leadership characteristics, staff credentials, presence of medical personnel, program size, and patient characteristics.
Data from these specialty care settings indicate that adoption of medication for the treatment of alcohol disorders is uncommon in both the public and private sector (Ducharme et al. 2006). An examination of public reimbursement as reported by the National Conference of State Legislatures (2008) indicates that Medicaid coverage of substance abuse medications is not common among States and that it is an option, not a requirement (Gelber 2008).
Accordingly, factors that may positively influence the adoption of medication use should target State regulatory structures, availability of medical staff, community linkages, and curricula of alcohol and drug training programs as well as graduate psychology programs. Focus on the following areas may increase a program's readiness for the adoption of medication use:

DONNA LAPAGLIA, PSY.D., is an assistant professor of psychiatry at the Yale University School of Medicine, New Haven, Connecticut.
• Increase State agencies' understanding of the benefits related to medication-assisted therapies for addiction in an effort to increase acceptance for public funding.
• State licensing requirements for programs could be amended to require greater availability of medical staff and credentialed counselors, given that the presence of medical personnel is key to the adoption of medication use, as is the presence of counselors with higher educational attainment (i.e., master's level or higher) (Knudsen et al. 2005).
• Formal linkages between specialty treatment providers and primary care physicians could allow for the flow of information, expertise (clinical and medical), and support, thus enhancing the experiential base of both providers.
• Graduate psychology programs, addiction psychiatry fellowships, and drug and alcohol programs should include evidence-based treatments in curriculum and internship training opportunities. Currently, degree programs often neglect evidence-based treatments, may not offer addictions course work, and may not offer opportunities for students to develop competence in any empirically validated treatments, psychosocial or pharmacotherapeutic (Crits-Christoph et al. 1995; Miller and Brown 1997).
• Most substance abuse counselors indicate that they do not use scientific journals to inform practice (Miller 1987b; Sobell 1996). As a result, dissemination of information about medications to program members, from directors to line staff, should occur through publications in trade journals and newsletters, continuing education course work, professional meetings, and face-to-face interaction (workshops). For the education to be effective, it must be written or translated into everyday language using actual case examples of programs successfully adopting medications. This approach is personal and positive and can be quite powerful for substance abuse counselors with recovery histories.
• With the accumulating evidence base for motivational enhancement strategies (Carroll et al. 2006) and more widespread experience with these interventions, medication adoption no longer challenges program treatment philosophy. Instead, it may be viewed as supporting clients who are motivated to achieve and maintain continued abstinence.
• Ongoing consultation, supervision, and feedback are useful for programs adopting and maintaining the practice of treating alcohol disorders with medication.
Agencies that have decided to utilize medications for the treatment of alcohol disorders should consider the following suggestions for successful implementation and maintenance:

• Programs can offer incentives for attendance at training sessions (e.g., time off with pay), food and prizes at the training event, bonuses, or promotions contingent on achieving a level of competence with the use of the pharmacotherapy (Carise et al. 2002).
• Programs should develop plans for incorporating medications into their existing practice. Process improvement methods, such as the NIATx Way (Langley et al. 1996), provide a potential tool for doing this. Process improvements allow agencies to make major changes by tackling one small project in a short amount of time (a 2- to 4-week turnaround). The steps for setting up a change project (with examples for addressing use of pharmacotherapy) are as follows: (1) Gather data for the indicator you wish to change (e.g., the number of alcohol-dependent clients within your program currently treated with medication); (2) determine the target population (e.g., alcohol-dependent clients with no prior medication attempts and prior psychosocial treatment failures); (3) establish a clear aim (e.g., greater engagement in treatment in the targeted population); (4) select a change leader (i.e., a positive, energized person who has the ability to leverage and/or interact with all levels of the organization); and (5) create a team (including employees from all levels of the organization) responsible for developing and implementing the change. As one example, the team might decide to implement a tickler reminding the physician to discuss medications as part of the treatment plan review. At the end of the change period, the team analyzes the data and makes decisions (plan, do, study, act) based on the findings. This could include continuing the use of medication for clients with prior treatment failure or expanding the client base by including alcohol-dependent clients new to substance abuse treatment. The positive client results motivate the team to continue in this direction. Using this model creates natural "buy-in"; employees are less likely to feel that incorporating medication adoption is a management-only decision because they had a hand in designing the program change.
Medications for Unhealthy Alcohol Use

follow up with individuals who screen positively for excessive drinking (NIAAA 2009). The guide recommends that clinicians evaluate the potential use of alcoholism medications as a treatment component for patients who screen positively for excessive drinking.
Studies of Medication Use in Primary Care
Evidence supporting the potential use of alcoholism medications in primary care settings derives from studies conducted in such settings and studies that compared specialty care with primary care models of counseling. These studies provide clues to the nature and amount of behavioral counseling needed to accompany pharmacotherapy. Some studies address both of these questions (or do not separate the questions), whereas others address one or the other. Most studies have not enrolled primary care patients but have evaluated primary care models of treatment provided by medical providers who are not alcoholism specialists in research settings.
In an initial study examining the effectiveness of naltrexone in combination with a primary care model of care, 197 alcohol-dependent participants were treated with naltrexone for 10 weeks in combination with cognitive-behavioral therapy (CBT) provided by an alcoholism specialist or in combination with primary care management (PCM) provided by a primary care practitioner (O'Malley et al. 2003). Treatment response was similar at the end of 10 weeks, with 84.1 percent (74 of 88) of the PCM patients and 86.5 percent (77 of 89) of the CBT patients avoiding persistent heavy drinking. Among those who responded to a primary care model, continued treatment with naltrexone for 6 months significantly helped sustain gains. Among those receiving CBT, maintenance of response remained relatively high, and continued naltrexone did not improve this outcome significantly over placebo.
The COMBINE Study (Anton et al. 2006) tested the efficacy of medications for alcoholism in the context of a medical management model of counseling in contrast to an approach in which patients received medical management and specialist counseling. In this study, eight groups of recently alcohol-abstinent individuals with diagnoses of primary alcohol dependence based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, received medical management with 16 weeks of naltrexone (100 mg per day) or acamprosate (3 g per day), both naltrexone and acamprosate, and/or both placebos, with or without a combined behavioral intervention (CBI). A ninth group received CBI only (no pills).
The medical management intervention consisted of up to nine sessions over 16 weeks with a health care professional (e.g., a nurse, physician's associate, nurse practitioner, or physician). Following an initial 45-minute interview, subsequent appointments were approximately 15 minutes long. The approach included monitoring of drinking, support and encouragement, establishing a plan for medication adherence, monitoring and problem solving of adherence issues, and advice to attend support groups (e.g., Alcoholics Anonymous).
The results indicated that patients who received naltrexone plus medical management, CBI plus medical management and placebos, or naltrexone and CBI plus medical management had higher percentages of days abstinent (80.6, 79.2, and 77.1 percent, respectively) than those receiving placebos and medical management only (75.1 percent). Naltrexone also reduced the risk of having a heavy drinking day, but this effect was most evident in those receiving medical management but not CBI. Acamprosate showed no significant effect on drinking versus placebo, either by itself or in any combination with naltrexone, CBI, or both. These results suggest that health care providers could use a primary care model of counseling with pharmacotherapy to improve treatment outcomes.
Consistent with the findings in COMBINE, O'Malley and colleagues (2007) demonstrated that a 50-mg daily naltrexone regimen in combination with medical management also was effective in a rural Alaskan environment among alcohol-dependent individuals (primarily Alaska Natives). Naltrexone significantly improved abstinence rates and decreased rates of alcohol-related consequences over the course of the 16-week treatment. Given that access to specialty care often is limited in rural communities, the potential of incorporating pharmacotherapy into primary care practice could help reduce important health disparities resulting from limited access to treatment.
Oslin and colleagues (2008) completed the only study that has evaluated the intensity of interventions that primary care providers might use. In this 24-week study, participants received naltrexone or placebo and one of three psychosocial interventions. All participants attended nine brief medication visits with a physician for safety monitoring, brief review of drinking, and dispensing of medications. One group received only these doctor visits. A second group received up to 18 additional counseling sessions with a nurse practitioner based on the BRENDA model (Volpicelli et al. 1997), which includes aspects of motivational counseling and specifically focuses on adherence to treatment, progress made toward reducing alcohol consumption, problem solving, and self-change strategies. The third group received up to 18 individual CBT sessions with a clinical psychologist or social worker.
The results favored CBT compared with the two other, less intensive treatments. Outcomes in the BRENDA and the doctors-only groups did not differ significantly. The effect of naltrexone was not significant and did not vary by the type of psychosocial intervention, although the sample size was too small to detect anything other than very large interaction effects. Medication adherence was relatively low (50 percent took medication on 80 percent of days over the 24 weeks of therapy) and may have been related to the relatively longer duration of the study and the use of the 100-mg dose. Medication adherence was associated with better outcomes, irrespective of medication condition.
Extended-release naltrexone appears to be well suited for use in primary care settings. Skilled medical personnel are required to administer extended-release naltrexone with an intramuscular gluteal injection; many specialty programs do not have access to the needed medical care providers. Moreover, the efficacy studies of extended-release naltrexone used BRENDA counseling, albeit the frequency of appointments may have exceeded that likely to occur in primary care. Future studies should evaluate the efficacy of once-a-month extended-release naltrexone with less frequent counseling and in patients recruited through primary care sites.
As reviewed by Mason and Crean (2007), the European studies of acamprosate typically enrolled participants who had completed inpatient detoxification and then received standard care as outpatients. The treatment outcomes, including time to first day of drinking or cumulative abstinence duration, were very similar whether patients received brief interventions or intensive treatments including relapse prevention therapy, individual therapy, group therapy, or family therapy in addition to acamprosate (Pelc et al. 2002; Soyka et al. 2002).
One of the few medication trials actually conducted in primary care sites (Kiritze-Topor et al. 2004) compared standard care to standard care with acamprosate among 422 alcohol-dependent patients recruited and treated for 1 year in general practices. Patients treated with acamprosate and standard care showed significantly greater improvement, with 64 percent reporting no alcohol-related problems for 1 year compared with 50.2 percent of those receiving standard care alone. Although the study physicians had prior experience treating alcoholism and had participated in at least one clinical trial, the general conclusion from this study was that general practitioners could effectively use acamprosate to manage alcohol dependence. The low loss to follow-up of 17 percent over 1 year highlights a potential advantage of treating patients in primary care, where patients have ongoing relationships with their providers, compared with specialty care programs, where dropout rates are substantially higher.
Recent studies of continuing-care interventions suggest that interventions of a year or longer and treatments that are less burdensome can promote sustained engagement and positive effects (McKay 2006). As discussed above, the use of medications by primary care providers may be a viable approach to providing low-intensity, longer-term treatment. Patients also may be open to this approach. In a survey of medically hospitalized patients with alcohol dependence (Stewart and Connors 2007), 66 percent agreed that they would like to receive a medication that would help prevent drinking, and 32 percent were interested in primary care treatment.
In summary, the research literature supports the effectiveness of medications, such as naltrexone, in combination with models of care that primary care providers or medical professionals associated with specialty alcohol programs could use. Several published manuals (NIAAA 2009; Pettinati et al. 2004, 2005; Volpicelli et al. 2001) are available that detail the specifics of these approaches. In prior research, these primary care interventions involved brief but frequent appointments. As discussed earlier, frequent contact is likely to enhance medication adherence and contribute to the effectiveness of medications. However, researchers have not yet established the trade-off between decreasing the frequency of follow-up in conjunction with primary care counseling and the effectiveness of medication treatments for alcoholism. Even injectable naltrexone, a once-a-month preparation, was evaluated in conjunction with 12 sessions over 6 months. As a result, additional research is required to guide clinical practice about the minimal frequency of counseling, but this should not prohibit the use of FDA-approved medications in these settings.
Implementing Medication Use in Primary Care Settings
In the management of both acute and chronic conditions, physicians and other medical professionals often need to consider carefully when to suggest medication treatment to individual patients. Typically, the decision to recommend medication treatment relies on a combination of an assessment of the evidence to support a particular therapy for a specific condition and clinical judgment concerning whether an individual patient is appropriate for that treatment based on a variety of patient- and disease-specific features. Clearly, such decisions are best arrived at using a patient-centered approach involving patient education, preferences, and mutual decision making. Even when medication therapy has a clear evidence base in a given clinical situation, patients and their providers may identify a variety of reasons why a specific therapy may or may not be used. Beyond this, research often demonstrates that there are certain patient subgroups for whom a specific therapy may or may not be particularly effective. These subgroups may be identifiable based on clinical, demographic, genetic, or social features that all may play a major role in the decision process regarding medication use. With the availability of several FDA-approved medications, a provider may recommend a trial with a new medication should an individual patient not respond to the first medication tried.
The implementation and widespread use of medications to treat alcohol problems faces a unique set of barriers in primary care. Although primary care providers are proficient at prescribing a wide variety of medications, they generally are unfamiliar with medications for treating alcohol problems other than those used to treat alcohol withdrawal. Indeed, a growing body of research to support basic screening methods, brief interventions, and especially medication therapy has yet to have a major impact on how primary care providers care for individuals at risk for or with alcohol problems (D'Amico et al. 2005). The results of studies on how to enhance the use of screening and brief intervention, however, may inform how to promote medication treatments for alcohol problems in primary care. For example, in one study, practice-based provider education and quality improvement activities resulted in a 65 percent screening rate (compared with 24 percent in control practices) and a 51 percent counseling rate (versus 30 percent in control practices) (Rose et al. 2008). In addition, the success of strategies to implement screening and brief-intervention practices in primary care appears to rely on a variety of complex provider and organizational characteristics (Babor et al. 2005). Understanding and addressing these characteristics may be particularly important if these medications are to gain acceptance in primary care. Finally, "marketing" strategies shown to be helpful with the implementation of brief intervention counseling, such as telemarketing and academic detailing (Funk et al. 2005), may be particularly useful in enhancing primary care physicians' use of medications for treating alcohol problems. Future research should carefully examine the effectiveness of these and other approaches to improving the extent to which primary care physicians can be prompted to use effective medications when appropriate to treat their patients with alcohol problems.
Summary
Identifying and treating people with alcohol use disorders remains a challenge. With the advent of pharmacotherapy and models of counseling appropriate for use in primary care settings as well as in specialty care, clinicians have new tools to manage the spectrum of alcohol problems across the spectrum of patients and settings. By extending the continuum of care to primary care settings, many people who do not currently receive specialty care may have increased access to treatment. In addition, primary care providers, by virtue of their ongoing relationships with patients, may be able to provide continuing care interventions. Medication use with hazardous drinkers who may not be alcohol dependent may promote reduced drinking and likely will …
National Institutes of Health
Alcohol Alert, a quarterly bulletin published by the National Institute on Alcohol Abuse and Alcoholism, describes the latest research on alcohol use disorders and treatment in a brief, easy-to-use format.
Forty years ago, Federal legislation placed new emphasis on solving America's alcohol problems and created the National Institute on Alcohol Abuse and Alcoholism (NIAAA). Since then, NIAAA has led an increasingly effective effort both to define alcohol issues as medical in nature and to address them using evidence-based findings. This Alcohol Alert reflects on 40 years of NIAAA's research and outreach accomplishments and provides insight into the future direction of this important work.
NIAAA: 40
The Institute's Formation and Impact
Passage of the Comprehensive Alcohol Abuse and Alcoholism Prevention, Treatment, and Rehabilitation Act of 1970, also known as the Hughes Act, created NIAAA as a high-profile agency in charge of addressing problems related to alcohol consumption. Researchers and policymakers who wrote the law were bringing to light a shift in scientific thinking about alcohol problems that had begun in the 1930s with the formation of Alcoholics Anonymous. Instead of viewing alcoholism as resulting from personal weakness, researchers and health care providers were beginning to view it as a curable public health problem.
Stability Simulation of a Vehicle with Wheel Active Steering
This paper deals with the possibility of increasing vehicle driving stability at higher speed. One way to achieve higher stability is using a 4WS system. Mathematical description of general vehicle motion is a very complex task, so suitably simplified models are used for simulation. For a first approach, the so-called single-track vehicle model (often linear) is usually used. For the simulation, we have chosen to extend the model into a two-track one, which makes it possible to input more vehicle parameters. With a 4WS system, a number of potential regulation schemes can be used. In our simulation model, a regulation system with compound coupling was used. This type of regulation turns the rear wheels depending on the input parameters of the system (steering angle of the front wheels) and on the output motion quantities of the vehicle, most frequently the yaw rate. The criterion for compensating the lateral deflection angle of the centre of gravity (the vehicle sideslip angle) is its zero value, or more precisely the zero value of its first-order derivative. Parameters and set-up of the simulation model were done in conjunction with the dSPACE software. Reference performances of the vehicle simulation model were obtained through defined manoeuvres. The simulation results indicate that rear-wheel steering can have a positive effect on vehicle movement stability, especially when changing the driving direction at high speed.
Introduction
At the end of the last century, designers seriously dealt with the idea of how to increase the stability and manoeuvrability of passenger and utility vehicles. Some manufacturers offered vehicles with a steered rear axle. The wheel steering was both passive (elastokinematic), where rear-wheel deflection was initiated by force impacts when driving round a bend, and active. Active steering of the rear axle made it possible to control the vehicle better both at low speed when manoeuvring in a limited area (parking etc.) and when changing the driving direction at high speed. Active systems (named 4WS, Four-Wheel Steering) are technically quite expensive. Rear wheels must be pivoted so that they can swivel, and the necessary conditions must be provided for that swivelling. Two aims are pursued by installing active rear-axle wheel steering: improving manoeuvrability when driving slowly and improving stability when driving at high speed [1], [2].
Rear-wheel deflection is usually controlled according to the steering-wheel rotation, but in two phases, which are selected with respect to the vehicle speed. The first phase is related to travelling at low speed. In this phase, the rear wheels are rotated against the direction of rotation of the front-axle wheels; the turning radius is reduced, the movement pole approaches the vehicle, and the vehicle trajectory can be curved more.
The second phase is related to travelling at high speed, and the rear wheels are rotated in the same direction as the front-axle wheels; the turning radius grows, but at the same time the whole vehicle deviates from the original track [4]. Mathematical description of general vehicle motion is a very complex task, so suitably simplified models are used for simulation. For a first approach, the so-called single-track vehicle model (often linear) [2] is usually used. For the simulation, we have chosen to extend the model into a two-track one, which makes it possible to input more vehicle parameters. Fig. 3 shows a substitute linearised 3D vehicle model, the position of the vehicle coordinate system x, y, z, and the position and sense of the vehicle yaw angle ψ and of the vehicle sideslip angle α. The whole-vehicle centre of gravity T lies at the height h above the roadway, and in side view it is at the distances l_p and l_z from the front and rear axles. The bodywork can be tilted around an instantaneous roll axis fixed to the vehicle; this axis is horizontal and passes at the vertical distance h_φ from the centre of gravity. Both wheels of the same axle can be deflected, owing to external force or tilting, by the same steering angle β or by the same wheel camber angle γ. The external forces illustrated are the lateral force S_α from axle slip and the lateral force S_γ from wheel camber, whose points of application are offset from the wheel contact point by the pneumatic trail n_α and by the camber trail n_γ, respectively. Fig. 4 also shows an alternative model of the steering system. The front wheels are turned around the steering swivel pins 0-0. These are joined together by control levers of length l_ř and a one-piece steering connecting rod in such a way that both front wheels are turned by the same steering angle β_p.
The steering knuckles have a design caster n_K and a kingpin offset r_0. The steering mechanism is set in motion by the main steering arm, which has the same length l_ř as the control levers. Therefore the steering-connecting-rod gearing equals one, and the overall steering ratio i_ř is given by the steering-gear ratio.
The main steering arm performs the slewing angle motion β_V / i_ř with a damping factor proportional to the speed. The connection between the roll angle φ and the front-wheel steering β_p, which in practice can be achieved by the choice of the axle kinematics and steering, is replaced in the following figure by the lever C.
This lever extends the steering angle β_p (in the positive sense) if the bodywork is tilted in the positive direction. Just as in the whole-vehicle model, the lateral forces from the slip angles and from the wheel camber act here as well. The equations of motion for the spatial vehicle model comprise:

(1) the balance of forces in the y-direction,
(2) the balance of moments about the z-axis,
(3) the balance of moments about the x_T-axis,
(4) the balance of moments about the steering swivel pin axes 0-0 (front axle).

The lateral forces arising from the slip angles of the front and rear axles are described by the corresponding relationships. The arbitrary variable and excitation function of the vehicle system is the steering-wheel angle β_V.
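Although the paper's spatial-model equations appear only in its figures, the simpler planar linear single-track model that such models extend can be sketched numerically. All parameter values below (mass, yaw inertia, cornering stiffnesses, axle distances) are illustrative assumptions, not data from the paper:

```python
# Linear single-track (bicycle) model with front and rear steering,
# integrated by forward Euler. Parameter values are illustrative only.
m, Iz = 1500.0, 2500.0      # mass [kg], yaw moment of inertia [kg*m^2]
lf, lr = 1.2, 1.6           # centre of gravity to front/rear axle [m]
Cf, Cr = 80e3, 80e3         # axle cornering stiffnesses [N/rad]

def step_steer(v, d_f, d_r=0.0, T=5.0, dt=1e-3):
    """Return (sideslip angle, yaw rate) after a step steer at speed v [m/s]."""
    alpha, r = 0.0, 0.0                      # vehicle sideslip [rad], yaw rate [rad/s]
    for _ in range(int(T / dt)):
        a_f = d_f - alpha - lf * r / v       # front axle slip angle
        a_r = d_r - alpha + lr * r / v       # rear axle slip angle
        Fyf, Fyr = Cf * a_f, Cr * a_r        # lateral axle forces
        alpha += dt * ((Fyf + Fyr) / (m * v) - r)   # force balance in y
        r += dt * (lf * Fyf - lr * Fyr) / Iz        # moment balance about z
    return alpha, r

alpha, r = step_steer(v=30.0, d_f=0.02)      # ~108 km/h, ~1.1 deg front steer
print(alpha, r)
```

With these values the vehicle understeers (Cr·lr > Cf·lf), so a front-only step steer settles to a positive yaw rate while the sideslip angle becomes negative at high speed, which is exactly the quantity the 4WS compensation discussed below is meant to drive to zero.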
3 Rear-wheel steering with compensation of the lateral deflection angle of the centre of gravity

Considering the 4WS system, a number of potential regulation schemes can be used. In our simulation model, a regulation system with compound coupling was used. This type of regulation turns the rear wheels depending on the input parameters of the system (steering angle of the front wheels) and on the output motion quantities of the vehicle, most frequently the yaw rate. The criterion for compensating the lateral deflection angle of the centre of gravity is its zero value, or more precisely the zero value of its first-order derivative (α = 0, α̇ = 0; for steady motion ψ̇ = const., i.e., ψ̈ = 0) [2]. After substituting these conditions into the two linear equations of motion, it is possible to deduce the theoretical dependence for the required steering angle of the rear wheels. This dependence, with the steering angle of the front wheels and the driving speed as input quantities and the steering angle of the rear wheels as output quantity, applies in the medium- and high-speed range (concordant steering). A flow diagram of the regulation is illustrated in the following figure [6]. From the condition described above follows the dependence of rear-wheel steering on the vehicle speed and the front-wheel steering. Fig. 6 shows an illustration for a passenger vehicle of standard size. The maximum rear-wheel steer is further limited by adhesion and by a maximum value on account of collision (the size of the free space in the wheel arch). Fig. 7 shows the results from the simulation and a comparison of a car with conventional steering and with the 4WS system with sideslip-angle compensation when performing an avoidance manoeuvre. Parameters and set-up of the simulation model were done in conjunction with the dSPACE software. Reference performances of the vehicle simulation model were obtained through defined manoeuvres.
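For the linear single-track case, the zero steady-state sideslip condition yields a closed-form, speed-dependent ratio of rear to front steering angle. The sketch below uses illustrative passenger-car parameters, not the paper's data:

```python
# Speed-dependent rear/front steer ratio that zeroes the steady-state
# vehicle sideslip angle in the linear single-track model.
# All parameter values are illustrative, not taken from the paper.
m = 1500.0              # vehicle mass [kg]
lf, lr = 1.2, 1.6       # centre of gravity to front/rear axle [m]
Cf, Cr = 80e3, 80e3     # axle cornering stiffnesses [N/rad]
l = lf + lr             # wheelbase [m]

def rear_steer_ratio(v):
    """delta_r / delta_f giving zero steady-state sideslip at speed v [m/s]."""
    return (-lr + m * v**2 * lf / (Cr * l)) / (lf + m * v**2 * lr / (Cf * l))

for v in (5.0, 15.0, 40.0):
    print(f"v = {v:5.1f} m/s  ratio = {rear_steer_ratio(v):+.3f}")
```

The ratio is negative at low speed (rear wheels counter-steer, the first phase of the introduction) and turns positive above the crossover speed sqrt(Cr·l·lr / (m·lf)), about 14 m/s for these values (concordant steering, the second phase), which matches the two-phase behaviour described earlier.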
One of the typical manoeuvres is driving the vehicle through a defined corridor as specified in ISO 3888-2; the results of these manoeuvres can be seen below. Other arrangements of simulation models are dealt with within the frame of the Centres of Competence project, Project # TE01020020 Josef Božek Competence Centre for Automotive Industry (this research has been realized using the support of the Technological Agency, Czech Republic).
Summary
It is more sophisticated and expensive (compared to a traditional design) to create a chassis with a steered rear axle. But the simulation results indicate that rear-wheel steering can have a positive effect on vehicle movement stability, especially when changing the driving direction at high speed. The calculated ratio of rear-wheel to front-wheel steering still needs to be optimised during driving tests. Obtaining the necessary input data for the simulation model is very demanding and requires a large number of measured values and considerable experience. Simulation models, however, provide easy access to the verification of possible control dependencies that, after tuning on a real vehicle, can contribute to increased safety. The range at high speeds and large front-wheel steering angles is limited by lateral stability.
The thematic modelling of subtext
Narratives form a key component of multimedia knowledge representation on the Web. However, many existing multimedia narrative systems either ignore the narrative qualities of any media, or focus on the literal depicted content ignoring any subtext. Ignoring narrative subtext can lead to erroneous search results, or automatically remixed content that lacks cohesion. We suggest that subtext can be computationally modeled in terms of Tomashevsky’s hierarchy of themes and motifs. These elements can then be used in a semiotic term expansion algorithm, incorporating knowledge of subtext into search and subsequent narrative generation. We present two experimental applications of this technique. In the first, we use our thematic model in the automatic construction of photo montages from Flickr, comparing it to more traditional term expansion based on co-occurrence, and showing that this improves the perceived relevance of images within the montage. In the second, we use the thematic model in order to automatically identify Flickr images to illustrate short stories, where it dampened the perception of unwanted themes (an effect we describe as reducing thematic noise). Our work is among the first in this space, and shows that thematic subtext can be tackled computationally.
Introduction
This work aims to solve the problem of a lack of thematic modelling in multimedia narrative systems by presenting an approach to term expansion powered by semiotics and informed by the literary theory of themes.
Narratives are central to the way that people communicate, from the brief conversational accounts that are exchanged everyday, to the historical and mythological stories that underpin our cultures [33]. However, while we have embraced digital technology in order to record and exchange our narratives -for example, via digital and social media -the narrative structures themselves are opaque to our machines, and our strategies for searching and managing content are therefore unable to take advantage of them.
Several projects have built machine-processable models of narrative. Drammar [32] has been used to create digital annotations of narrative that aid tools in the analysis and research of narrative media. Similarly the ArtEquAKT system [50] automatically generated artists' biographies on request, populating an adaptive story template from an ontology, and using a combination of crawled content and generated sentences to create the final text. However, the existing work in this area focuses on the primary structure, typically the plot (the order in which information is presented) and explicit content (the people, objects, and places that appear in the media), whereas secondary structures such as subtext are ignored. Subtext is the underlying meaning or ideas that the author of a piece of text or media wished to communicate to the reader/viewer. While research has been conducted into the emotional response to online multimedia [41], to our knowledge there are very few examples of Themes within media being explicitly modeled. Those systems that do address Themes [11] use the term to mean a form of classification that differs from the conventional narratology meaning [48], and other subtext modelling systems approach what could be considered smaller-scale linguistic subtext such as sentiment or sarcasm rather than the broad narrative context of theme [22]. Addressing this gap in machine-understandable models of narrative and multimedia analysis is the primary motivation of this work.
In this article we argue that Themes are one way in which this can be done. We have been inspired by Tomashevsky's work on themes and subtext [48] and our approach is based on a thematic model of themes and motifs that can be used to drive a semiotic term expansion during the search for narrative content [17].
We focus on two applications to demonstrate the potential of this model -the generation of themed photo narratives, and the automatic illustration of short stories. Both are search tasks, and in both cases the thematic model is used to retrieve images that the system considers to be thematically appropriate.
In the first task the images are assembled together into a visual collection around a particular theme, this could be considered a machine generated montage, or a resource for later manual editing (similar to the video aggregation and assembly work done by Kaiser [24]). In the second task the thematic model is used to find illustrations for a short story, with the aim of producing a more coherent overall narrative -something identified as a challenge for narrative generation [43].
In both of these cases the initial task of finding and describing resources is key. Assembling a set of resources that is appropriate to the original search (or seed terms) is critical if the final presentations are to be sensible and are to feel coherent. This means that the quality of the set as a whole is important, as the resources are used and experienced together. The problem of diversity in search results is similar in that it considers the quality of the entire result set [2], but we actually have the opposite goal, in that rather than trying to return a varied set of results we need to return a set that is cohesive. This has been tackled in terms of coherence in time [26] and coherence in space [1], but we are interested with thematic relevance as a way of achieving coherence at the level of subtext. For example, Storyscope is an ontology-driven web-based environment for exploring narratives around museum collections, which uses setting and theme to select items that are relevant to the current storyline [51]. This is directly analogous to our illustration example but with a closed and annotated set of media items.
Our work therefore addresses the challenge of retrieving thematically coherent search results from open content for re-use in a narrative. In previous work we have described the model, and shown in an initial experiment that semiotic search (based on a thematic model) is more effective than straightforward keyword search at producing a set of results that is thematically consistent [17]. In this article we build on this work, and evaluate its effectiveness in these two thematic coherence tasks.
In doing so we address three research questions:

1. Will using semiotic term expansion based on a thematic model produce image montages that are more thematically consistent than term expansion based on co-occurrence?
2. If we use these result sets for automatic illustration will it improve the perceived thematic coherence of a short story?
3. Does improving the thematic coherence of a short story also improve the perception of other coherence factors (such as logical coherence)?
This article is structured as follows: Section 2 describes the theoretical background to our work, and how principles from structuralism, semiotics and narratology have been applied in information systems.
Section 3 sets out the underpinning thematic model, and how the theory of thematics and semiotics have been applied in the creation of our computational model. Section 4 describes our first experiment to show whether using the model for term expansion can generate more relevant results for image montages than term expansion based on co-occurrence. This extends the work presented in [17] where we compared our approach to normal search in Flickr.
Section 5 then describes a second experiment to explore whether using semiotic term expansion in the automatic illustration of a short story improves the overall thematic cohesion of that story, and whether that subsequently has an impact on how cohesive that story is perceived to be in other ways. Finally in Section 6 we summarise our findings, and discuss potential avenues for future work.
Background
Our approach draws on structuralist work within narratology. Narratology refers to the theory of narrative that arises from literary theory, criticism, and philosophy. Structuralism is a philosophy concerned with identifying structures emergent through language. It has been applied to many areas including literary theory and semiotics, for example in work by Barthes [6]. Our work is based on a structuralist analysis of narratives which assumes the existence of patterns and re-occurring forms. This is useful as it provides a framework in which we can work with defined entities and relationships.
Narratology and Structuralism
Structuralism has been criticised for its rigidity [46], and its critics observe that narratives do not always conform to a given explicit structure. Consequently it was philosophically followed by post-structuralism which favoured a less determinate theory of language. However, from the perspective of this research (which requires machine readable structures that are necessary simplifications of some richer reality) the discrete rules, elements, and relationships that structuralism offers are useful when beginning to build machine readable models of narrative. We acknowledge that not all narratives may adhere to structuralism's models, but also recognise the value in these models for identifying and creating structures within multimedia.
Unsurprisingly, ideas of what comprise the elements within a narrative differ considerably. A classic distinction is between what is told in the narrative and how it is told; these were identified respectively by the Russian Formalists as the 'Fabula' and the 'Sjuzhet'. This was adapted by French structuralists, particularly Roland Barthes [6], as 'Histoire' and 'Discours', which in turn is widely interpreted in English structuralism as 'Story' and 'Discourse', and later by others as 'Fabula' and 'Discourse'. The overloaded terminology here is confusing, but the essential lesson is that a narrative may be modeled as a selection process where a wider corpus of candidate narrative elements (a 'Fabula') has limited selections made from it which are structured together into a narrative (the 'Discourse').
How story becomes discourse through the process of both authorship and consumption has been explored in literary theory through the notion of plot selection. As demonstrated in the Barthesian model of narrative, the conventional view is that the author selects story elements from the Fabula to be a part of the Sjuzhet. This concept was further explored by Musarra-Schroeder, based on Calvino's writings [40], as 'The Garbage Axiom', representing the process of the author deliberately omitting potential story elements. Our own narrative system follows a similar process of 'Fabula' and 'Discourse': as explained later in this article, we build a corpus of images on a given topic (our 'Fabula') and then, based on the rules of our model, select from it a montage of images (our 'Discourse').
This selection of items to become part of a narrative is an important component of computational narrative systems. These systems include a diverse and sophisticated computational exploration of plot, but their structures are largely limited to the literal content of characters, actions, and settings. Little attention is paid to the subtler notion of subtext. This can leave resulting stories lacking in cohesion or thematic depth. In our work we seek to go beyond the literal selection of content for a discourse, and to do that we need to model what those selections might mean to a reader. For this we turn to the field of Semiotics.
Semiotics, Thematics, and Subtext
Semiotics is the study of signs and how we extract meaning from them. Saussure wrote that all signs are made up of a signifier and a signified; something we are observing and our understanding of it [44]. This literal interpretation is that of denotation; we see a specific football and to us this denotes the concept of a ball. Barthes expanded on this by describing the idea of connotation, that signs have a meaning beyond their literal expression; he wrote that the entire denotative sign becomes a signifier for a further signified; for example, we may connote from the ball the concept of competition [6].
Conceptually this divides what might originally have been thought of as a single part of the narrative into two things: what the audience sees (the literal denotation), and what the audience understands (the connotation inferred from what they are presented). Contemporary structuralists have used this notion of connotation to begin to model the underlying meaning of a text, and we have used the same principle in our own work to begin to model subtext in terms of themes.
Thematics can be described as a structuralist approach to the concept of themes within narratives [48]. Tomashevsky deconstructs thematic elements into themes (broad ideas such as 'politics' or 'drama') and motifs (more atomic elements directly related to the narrative such as 'the helpful beast' or 'the thespian'). A motif is the smallest atomic thematic element and refers to an individual element within the narrative which connotes in some way the theme. Themes may be deconstructed into other themes or motifs whereas a motif may not be deconstructed. This builds a hierarchy with specific denoted motifs at the bottom and a tree structure of connoted themes above. Tomashevsky believed that themes were at the root of giving a narrative meaning and cohesion. Through themes an author can give a story purpose by presenting a coherent perspective rather than merely a report of events.
Computational work on themes seldom follows this narratological definition, and is often more simplistic. For example, Bischoff [7] looks at extracting themes from multimedia (music in this case) and tries to support thematic tagging of work. However, Bischoff's use of the word 'theme' refers more to its usage in media (such as 'traveling music') than its semiotic subtext. Similarly Joke-o-mat [11] presents successful work in the thematic tagging of sitcoms, however they have used the word theme to describe a type of scene section (such as 'dialogue' or 'punchline') rather than what narrative theorists such as Tomashevsky would have considered the thematic subtext. Harrell's use of 'thematic domains' [19] is closer to what we propose, but lacks any semiotic structure; while each domain represents a conceptual definition of the theme, they are simply collections of associated terms. Our model attempts to go beyond this, by including the denotation and connotation relationships between themes and motifs.
Computational work on subtext with a definition broader than just themes can also be found; typically this work is concerned with the sentiment of text. In the same way that our work seeks to cover narrative subtext rather than explicit narrative content such as plot, this work seeks to uncover the subtextual meaning of text (such as sentiment) rather than just its direct message. Recent work in this space has explored the detection of sarcasm in text through a variety of approaches including rule-based methods, NLP feature detection, learning and deep learning algorithms, and shared-task approaches, as detailed in Joshi et al.'s recent survey of advances in the area [22]. Sarcasm is undoubtedly a form of subtext (and of significant importance to sentiment analysis, where deceptive language can entirely reverse a sentiment) and there are examples of tag- and metadata-based approaches, such as that by Maynard et al. [36], which is similar to our own approach (in its reliance on metadata). However, where our work differs is in the diversity and variety of intended message with thematic subtext (as opposed to the more discrete "is/isn't sarcastic" subtext of sarcasm), along with the specific type of subtext being addressed.
Term Expansion
The machine-based expansion of connections between terms and concepts is something that is more typically known in the information retrieval field as term expansion. The idea is that by expanding the terms in users' queries, or the candidate terms against which they are being matched, a greater number of successful matches may be found. There are a variety of methods that can be used to achieve such expansion by assessing different relationships between a term and other terms.
Lexical Systems
Perhaps the most straightforward method of term expansion is to use a thesaurus, expanding a term using synonyms and other similar words. WordNet [39], a large general-purpose thesaurus developed by Princeton University, provides a good basis for a system undertaking such an expansion, with a large variety of terms and many different kinds of lexical relationships drawn between them. Voorhees conducted an initial investigation [49] on the generic effectiveness of lexical query expansion using WordNet as a basis for different lexical relationships and using the TREC collections as test search data. However, Voorhees' work shows that there is little advantage to such expansion, finding only minimal improvement on very small queries and no improvement on larger ones.
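The kind of expansion Voorhees evaluated can be sketched minimally as follows; the tiny synonym table here is a stand-in assumption for a full WordNet lookup, not part of the paper:

```python
# Toy thesaurus-based term expansion (a stand-in for a WordNet lookup).
# The synonym table is an illustrative assumption.
SYNONYMS = {
    "winter": ["wintertime", "midwinter"],
    "cold": ["chilly", "freezing"],
}

def expand_lexical(terms):
    """Expand each query term with its synonyms, keeping the originals."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_lexical(["winter", "cold"]))
```

Even this simple form illustrates Voorhees' finding: the expansion adds recall for short queries but contributes little once a query already contains several terms.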
Co-Occurrence
Co-occurrence is a statistical method involving the analysis of the semantic similarity of two terms based on the frequency with which they occur together in a document. Co-occurrence can be used in automatic keyword extraction, such as in Matsuo's work [35], but can also drive query expansion as described by Kubek [25]. In such systems a corpus of potential results is analysed and terms attached to documents in the corpus that co-occur frequently with the terms used in the query are used to expand it. This method of expansion is automatic and has returned impressive results, and as Li's recent review of tag-based image retrieval shows [27], co-occurrence continues to be regularly used in a range of systems as a measure of term similarity.
Co-occurrence appears to be an effective method for term expansion in improving the relevance of queries. However it is a solely statistical basis for inferring what a user's intentions were when using a term, rather than being based on any semantic understanding. As such it is vulnerable to query drift (expanding the terms in inappropriate ways) and its effectiveness is highly dependent on the quality of the corpus used to train it.
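The co-occurrence baseline used later in the montage experiment can be sketched as follows; the tagged toy corpus is an illustrative assumption, not the paper's Flickr data:

```python
from collections import Counter
from itertools import combinations

# Toy corpus of tag sets attached to documents (illustrative assumption).
CORPUS = [
    {"snow", "winter", "mountain"},
    {"snow", "winter", "ski"},
    {"beach", "summer", "sea"},
    {"winter", "ski", "alps"},
]

def cooccurrence_counts(corpus):
    """Count how often each ordered pair of tags appears in the same document."""
    counts = Counter()
    for doc in corpus:
        for a, b in combinations(sorted(doc), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def expand_cooccurrence(term, corpus, top_n=2):
    """Expand a query term with the terms it most frequently co-occurs with."""
    counts = cooccurrence_counts(corpus)
    related = Counter({b: n for (a, b), n in counts.items() if a == term})
    return [term] + [t for t, _ in related.most_common(top_n)]

print(expand_cooccurrence("winter", CORPUS))
```

The query-drift weakness noted above is visible even here: the expansion is driven purely by frequency in the training corpus, with no notion of why two tags appear together.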
Ontological Approaches
One approach to solving the problem of query drift is to use models of expert knowledge as a basis for expansion for queries such as the work done by Fu [12], which uses an ontology to expand and improve geographical queries similar in objective to the co-occurrence work done by Buscaldi [10]. This tends to be most effective within a specific domain as ontologies are normally created for specific fields by a small group of experts, fully exploring a small group of concepts. For example, ontologies such as the Gene Ontology [4] are used to expand terms used in queries relevant to their subject or glean further meaning from terms used in media related to their subject. Ontological solutions for narrative media retrieval and composition have been used by the multimedia research community before, such as in more recent work by Kaiser in video assembly [24] and demonstrated persuasive results, if at the cost of the construction of detailed domain specific ontologies.
Our work is similar to this ontological approach in that it is term expansion based on an expert model; however, in our case the model is a thematic one, and the relationships between concepts are all semiotic. However, as a thematic model, its rules are generalised and not tied to one specific domain or narrative type; rather, they can be used in a range of instances. It does require the construction of instances of the model, but these are not limited to a single domain, as is the case with some other ontological approaches.
Multimedia Feature Extraction and Processing
In this particular work we are utilizing term expansion as a form of feature extraction, a common focus of multimedia analysis where features of a piece of multimedia are computationally inferred often to form assertions over the content. Typically this involves analysis of data on the content either extracted from the content itself or metadata included alongside the item. The former is often achieved through a hardware sensor such as in work on activity recognition by Liu et al. [30], or direct multimedia processing such as natural language processing as seen in the work by Preoiuc-Pietro et al. [42] on ideology analysis on social media. These techniques are not limited to text and feature extraction through image processing is common, including a variety of learning algorithms as demonstrated by Li et al.'s work [28], as is use of neural networks to classify varied media as seen in work by Shu et al. [45]. The alternative form of feature extraction does not use the direct multimedia itself but rather processing of metadata on the content already included such as tags. This includes applications seeking to refine metadata by adding or removing erroneous tags as seen in work by Tang et al. [47] and Li et al. [29], or tag and keyword processing as seen in Liu et al.'s [31] work on career trajectory analysis using occupation keyword analysis, or Kaiser's work [24] on multimedia aggregation through metadata.
Our own work is more in the latter field, as we use metadata as the basis of term expansion in order to infer the thematic features of images. However, while existing approaches are often stochastic in nature, trained from co-occurrence or other observed associations in a large data set, ours is powered via semiotic relationships based on a thematic model, which is itself based on fundamental literary theory and human-captured denotations and connotations.
Thematic Model
In our work we assume a situation where a multimedia story is compiled from many small segments of content that are structured together. In this case the selection of these small atomic segments and their content are key to communicating a theme. We use the term Narrative-Atoms or Natoms to describe these segments which, depending on the granularity of the system, might be a single photo or paragraph, a sentence, or a fragment of an image. These are similar in definition and use to the 'Narrative Units' identified by the Drammar ontology [32] in that they are flexible, but effectively a single irreducible piece of media.
The content of these natoms is rich with information, however only some of it may be visible to a machine (such as generated metadata or authored tags on images). We call these visible computable elements Features. Features might take any number of forms; in our work we commonly use tags, but they might also be automatically detected through some computational analysis as mentioned previously in our discussion on feature extraction and processing. Features can each denote a Motif, a basic thematic object that has connotations within the story, for example the tag cake is a feature that denotes the motif of food. These motifs in turn connote broader Themes in the context in which they are presented, for example food in the context of a gathering may connote celebration. These themes, when combined with other themes or motifs, could in turn connote broader themes, for example wedding might also connote celebration.
The model, shown in Figure 1, shows how the parts of the model map to Barthes' ideas of denotative signs as the signifiers for connotative signs. Features denote Motifs with themes being broader concepts communicated over the entirety of the narrative, typically by numerous motifs.
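The core entities of the model can be sketched as simple data structures; the class and field names below merely mirror the paper's terminology and are not the authors' implementation:

```python
from dataclasses import dataclass, field

# Minimal sketch of the thematic model's entities (names assumed to
# mirror the paper's terminology, not the authors' actual code).

@dataclass
class Motif:
    name: str
    features: set  # tags (features) that denote this motif

@dataclass
class Theme:
    name: str
    # (element, justification) pairs; elements are motifs or sub-themes
    connoted_by: list = field(default_factory=list)

    def connote(self, element, justification):
        self.connoted_by.append((element, justification))

snow = Motif("snow", {"snow", "snowfall", "blizzard"})
cold = Motif("cold", {"frost", "ice", "freezing"})
winter = Theme("winter")
winter.connote(snow, "characteristic weather")
winter.connote(cold, "characteristic temperature")

print([m.name for m, _ in winter.connoted_by])
```

This captures the two semiotic layers of Figure 1: features denote motifs, and motifs (with justifications) connote themes.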
A set of rules augments the core components of the model (Natoms, Features, Motifs and Themes) with Justifications. When a connotation relationship is formed between a motif and a theme (or between a sub-theme and a theme), a justification for the connotation is also added explaining why one connotes the other; we added these rules to aid authorship, as no two themes should be connoted by motifs or themes with the same justification (we discuss authorship in more detail in Section 3.2 below). Justifications help the author consider the role of potential elements in connoting a theme and help them consolidate the wide variety of relevant features into motifs formed around the key roles.
In plain text these rules can be articulated as:

1. An element may be either a theme or a motif, not both, and all themes and motifs are considered elements.
2. A feature is not an element, nor can an element be considered a feature.
3. A denote relationship is always between a feature and a motif, and all motifs must be denoted by at least one feature.
4. A connote relationship is always between an element and a theme, and all themes must be connoted by at least one element.
5. All connote relationships must include a justification.
6. No two connote relationships may exist with the same theme and justification.
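Rule 6 in particular lends itself to a mechanical check when validating an authored model instance. A sketch, with the data shape (element, theme, justification) triples assumed for illustration:

```python
# Sketch of validating rule 6: no two connote relationships may share
# the same theme and justification. The triple data shape is an assumption.

def check_unique_justifications(connotations):
    """connotations: list of (element, theme, justification) triples.
    Returns each (theme, justification) pair that is used more than once."""
    seen, duplicates = set(), []
    for _element, theme, justification in connotations:
        key = (theme, justification)
        if key in seen:
            duplicates.append(key)
        seen.add(key)
    return duplicates

conns = [
    ("snow", "winter", "characteristic weather"),
    ("ice", "winter", "characteristic weather"),   # violates rule 6
    ("cold", "winter", "characteristic temperature"),
]
print(check_unique_justifications(conns))
```

In an authoring tool such a check would prompt the author to merge the two elements into one motif, which is exactly the consolidation role the justifications are intended to play.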
This forms the basis of our computational model of themes for narrative. Prior existing models in multimedia research such as Drammar [32], the recent work on transmedia by Jung [23], the video mash-up domain models by Kaiser [24], or the broad narrative ontology presented in OntoMedia [21] have all shown the advantages of machine-readable models of narrative in search, media aggregation, annotation, navigation, and generation. However, prior models have nearly entirely focused on the literal content and plot of narrative and not its subtext, as ours does. As with Drammar [32] our model represents another instance of narrative theory realised as a computational model, in this case the theory of thematics. While there are other multimedia approaches to both feature extraction and subtext analysis, such as the work on sarcasm detection [22], these approaches do not address theme. Or, in the limited cases where they do, they address theme as genre or usage [7,11], or lack a semiotic structure [19].

Figure 2 shows a simple example of how a collection of natoms connotes a theme in the terms of the model, in this case a passage of text and two photographs that could be interpreted as connoting the theme of winter. The features presented are present within the given natoms; it is feasible that the natoms would be tagged with them or that they might be automatically extracted from them. These features literally denote the motifs of snow, cold, and warm clothing. As the snow example demonstrates, many different features might denote the motif of snow, but thematically they serve the same effect. Finally, in the context of each other these motifs connote the concept and theme of winter.
Authoring Method
In order for our approach to be practical it was necessary to have a systemic way for people to create valid instances of our thematic model. We deconstructed our own process for creating definitions and identified five stages for defining a given theme in the terms of the model:

1. List associated words: The contributor spends some time expanding the seed theme into a list of associated words to get a list of related concepts.
2. Classify as themes or motifs: The contributor then makes two lists using the results of stage 1, classifying each entry as either a theme or a motif based on the rules of the model.
3. Group elements: The contributor groups together similar elements, or those that share a similar purpose, into a single element based around the shared purpose or a generalisation of the features they share.
4. Expand sub-themes: The contributor takes the remaining theme elements and expands them as they did the initial theme. Care is taken to consider stage 5 when doing this in order to save time.
5. Remove associated elements: The contributor removes each theme or motif that is not entirely relevant to the root theme.
This authoring process was refined into a guide, and has been described in depth and evaluated with users in our previous work [16]. The process is expensive in that it requires human authoring of definitions, however a majority of untrained users did create valid definitions, demonstrating that the method can be successfully used. A key area of future work will be how to better support the creation of thematic models, for example via a richer authoring tool, crowd-sourcing, collective intelligence, or part-automation of the process. The experiments described later in this article use valid thematic definitions created using this process by independent English undergraduate students at the University of Southampton, and later transcribed into XML for use in our systems by the developer.
First Task: Thematic Montages
To demonstrate the effectiveness of the narrative model in helping structure similar information our first experiment was devised to use the model in support of a retrieval and composition task for multimedia on the Web.
The Thematic Engine
The photo sharing system Flickr was used as a source of content (potential Natoms) due to the large amount of readily available tags (Features) that accompany the images. Tag folksonomies such as that made available by Flickr have been demonstrated to offer metadata on items of higher semantic value than collections with automatically generated data [3].
The theme definitions were written in XML, with each file representing a thematic element (either a theme or a motif). Definitions for themes listed the motifs with which they shared a connotation relationship and definitions for motifs listed the features that denoted them. For this first experiment, four root themes were authored by hand following the defined authoring method described in Section 3.2. The themes selected for the initial experiment were Winter, Spring, Family, and Celebration.
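The paper does not publish the XML schema for these definition files, so the element and attribute names in the sketch below are invented for illustration only:

```python
import xml.etree.ElementTree as ET

# Hypothetical theme-definition file. The element and attribute names are
# invented for illustration; the paper does not publish its actual schema.
THEME_XML = """
<theme name="winter">
  <connotes element="snow" kind="motif" justification="characteristic weather"/>
  <connotes element="cold" kind="motif" justification="characteristic temperature"/>
</theme>
"""

root = ET.fromstring(THEME_XML)
motifs = [c.get("element") for c in root.findall("connotes")
          if c.get("kind") == "motif"]
print(root.get("name"), motifs)
```

One file per thematic element, as described in the text, keeps each theme's connotation list self-contained and lets the engine load only the definitions a query needs.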
The Thematic Engine generates montages by taking a desired montage size (number of images), a desired content (keyword subject), and a desired list of themes (comma separated list of keywords). The Thematic Engine searches Flickr for the desired content and forms a base corpus (in narrative terms a Fabula) using the top 30,000 images returned by the keyword search. The thematic quality of each image (its relevance to the requested themes) is then calculated and the top N images are returned where N is equal to the desired montage length.
The thematic quality of each image is calculated based on the features present. Each tag is considered to be a feature and using this, each image's component coverage and thematic coverage is calculated. How these are calculated, and how thematic quality is calculated from them, is presented in equations 1, 2, and 3 below. TQ is thematic quality, TC is thematic coverage, CC is component coverage, T is the number of desired themes, C is the sum number of components (elements, themes or motifs, that directly connote a theme) of all desired themes, and t and c are the number of themes or components respectively for which the image has a relevant feature. A feature is considered relevant if it directly denotes a motif that is either a component or, through a chain of connotation, later indirectly connotes the component or theme requested.

Fig. 3 The process by which the TMB generates a montage
The final thematic quality is therefore expressed as a percentage and is based on how many of the desired themes the image connotes as well as how relevant it is to each theme's top-level thematic components. The entire process is depicted diagrammatically in figure 3.
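The coverage calculations described above can be sketched in Python. Since equations 1-3 are not reproduced here, the exact combination of TC and CC into TQ (an equal-weighted average, scaled to a percentage) is an assumption consistent with the surrounding description, and all function and variable names are illustrative rather than the system's actual implementation:

```python
# Hypothetical sketch of the thematic quality (TQ) calculation, assuming:
#   TC = t / T   (fraction of desired themes the image connotes)
#   CC = c / C   (fraction of thematic components covered)
#   TQ = 100 * (TC + CC) / 2   (expressed as a percentage)

def thematic_quality(image_tags, desired_themes, components_of, denotes):
    """Score one image against the desired themes.

    image_tags     -- set of Flickr tags (features) on the image
    desired_themes -- list of requested theme names
    components_of  -- theme -> list of its component motifs/themes
    denotes        -- component -> set of features that (directly or via a
                      chain of connotation) make it relevant
    """
    T = len(desired_themes)
    all_components = [c for th in desired_themes for c in components_of[th]]
    C = len(all_components)

    # t: themes for which the image has at least one relevant feature
    t = sum(
        1 for th in desired_themes
        if any(image_tags & denotes[comp] for comp in components_of[th])
    )
    # c: components for which the image has a relevant feature
    c = sum(1 for comp in all_components if image_tags & denotes[comp])

    TC = t / T if T else 0.0
    CC = c / C if C else 0.0
    return 100.0 * (TC + CC) / 2.0
```

An image tagged with a feature covering one of two components of a single desired theme would then score TQ = 100 * (1/1 + 1/2) / 2 = 75.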
Initially we tested the effectiveness of the Thematic Engine as compared to a simple keyword search [15] [17]. As the Thematic Engine is based in part on Flickr we elected to compare it to Flickr's keyword search. As well as comparing the thematic relevance of both approaches for individual images we were keen to see how well the thematic system performed in a more narrative context of many 'natoms'; in this case a photo montage. To summarise, our experiment showed with statistical significance that the inclusion of themes produced images perceived to be more relevant than the Flickr keyword search, especially when images were presented in groups.
Having demonstrated that semiotic term expansion was effective, we needed to evaluate how it compares to existing term expansion methods, and how well it functions when used within a narrative context.
Comparison with Co-Occurrence
Our initial work demonstrated that semiotic term expansion is effective, but it is necessary to investigate the quality of that expansion as compared to existing techniques of term expansion.
Mandala's original review of a range of term expansion methods for query expansion [34] showed the strongest individual approach was co-occurrence, a method of term expansion that continues to be used as an effective means of measuring term similarity today in multimedia retrieval [27]. As such we identified co-occurrence term expansion as a suitable candidate for comparison.
In order to keep the comparison fair, the co-occurrence system would operate with the same rules as the Thematic Montage Builder (TMB) which used the Thematic Engine described above. A corpus on the subject of the montage would be compiled and the system would then expand the term representing the desired theme to identify the objects in the corpus with the highest thematic quality. The top N of these images, where N is the desired size of the montage, would then be returned as the montage.
The system rates the semantic similarity of two terms within the corpus based on how frequently they occur and co-occur. For this system, if the terms co-occurred as tags for a particular image in Flickr this was recorded as a co-occurrence. Based on these two frequencies the semantic similarity of the two terms may be calculated in a number of different ways; we use the 'Mutual Information' measure as our similarity calculation, which (while very similar to other similarity measures) has been shown to be slightly more effective [34].
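The Mutual Information measure referred to above can be sketched as follows. This is the standard pointwise mutual information computed over image-level co-occurrence counts, with illustrative names; it is a plausible reading of the description, not the authors' exact implementation:

```python
import math

def mutual_information(term_a, term_b, tag_sets):
    """Pointwise mutual information of two tags over a corpus of images.

    tag_sets -- list of tag sets, one per image; a co-occurrence is counted
    when both terms appear as tags on the same image.
    """
    n = len(tag_sets)
    f_a = sum(1 for tags in tag_sets if term_a in tags)
    f_b = sum(1 for tags in tag_sets if term_b in tags)
    f_ab = sum(1 for tags in tag_sets if term_a in tags and term_b in tags)
    if f_a == 0 or f_b == 0 or f_ab == 0:
        return 0.0
    # Log of the ratio of observed co-occurrence probability to that
    # expected if the two terms were independent.
    return math.log2((f_ab / n) / ((f_a / n) * (f_b / n)))
```

Terms that never appear on the same image score zero; terms that co-occur more often than chance score positively.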
Using these calculations the system can create a vector for a pseudo document (a model representing a theoretical ideal document with tags proportional to their similarity to the desired term). This is based on the semantic similarity of every term used as a tag in the corpus to the term for the desired theme, where each term is a dimension. The thematic quality of each image is then calculated as the Euclidean distance of a vector describing the image (where the frequency of each term comprises its distance along that dimension) from the vector describing the pseudo document. In the case where multiple themes are used the half-way point between the pseudo document for each theme is used. Also, when detecting the presence of a term, basic stemming is used so that plurals and other minor variations of the same term are all still detected.
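The pseudo-document ranking described above might be sketched as follows; the data shapes and names are assumptions, and the stemming step is omitted for brevity:

```python
import math

def rank_by_pseudo_document(images, similarity_to_theme, vocabulary):
    """Rank images by Euclidean distance from a pseudo-document vector.

    images              -- list of (image_id, tag_frequency_dict) pairs
    similarity_to_theme -- term -> similarity score to the desired theme
    vocabulary          -- list of corpus terms; each term is one dimension
    """
    # The pseudo document: a theoretical ideal image whose tag weights are
    # proportional to each term's similarity to the desired theme.
    pseudo = [similarity_to_theme.get(term, 0.0) for term in vocabulary]

    def distance(tag_freqs):
        vec = [tag_freqs.get(term, 0) for term in vocabulary]
        return math.sqrt(sum((p - v) ** 2 for p, v in zip(pseudo, vec)))

    # Smallest distance from the pseudo document = highest thematic quality.
    return sorted(images, key=lambda item: distance(item[1]))
```

For multiple themes, the midpoint of the per-theme pseudo-document vectors would be used in place of `pseudo`.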
This created a Co-Occurrence montage generator similar to the TMB, in that a desired theme and content could be specified along with montage and corpus size, and a montage would be returned that contained images relevant to the desired content that were also thematically relevant to the desired theme. The difference was that one used the semiotic expansions in the form of the thematic definitions and the other performed an automatic expansion based on co-occurrence.
Both of these applications were of O(n log n) complexity (the original scoring and co-occurrence detection being O(n), and the merge sort used for ordering being O(n log n)) and could not be used in real time. It is possible that the technical implementation of these algorithms might be improved; however, as our contribution focuses on the relevance of the images selected and not on efficiency, this implementation is suitable for our needs.
Methodology
We ran an experiment to compare the performance of the TMB and the Co-Occurrence generator. The experiment displays images to participants under a title composed of both a content keyword and theme(s) such as London in Winter (images about London with the theme of winter). Both systems generate ten image montages for each title and participants view the images both individually and grouped together as a montage and rate their relevance to the titles. The experiment itself is divided into four tests; two tests for titles with a single theme, and two for titles including multiple themes, to test the performance of the systems in both situations. For both sets of titles the first test displays the images individually at random under the title they were generated for and the users are asked to rate their relevance to the title from 1 to 5. The second test for each set of titles groups the images together in their montages, once again under the titles, and asks the participant to rate the relevance of the images as a group. Two base cases are used to give the results context: a low base case (BaseL) of ten randomly selected images which are taken from the most recent images uploaded to Flickr, and a high base case (BaseH) of ten images selected by a person compiling the best montage they can for the given titles from images in Flickr.
The titles were chosen to explore how the systems performed with titles including both single and multiple themes, as well as titles with themes that complemented the content of the corpus or fabula and ones that clashed with it. As such four single theme titles were used (two regular theme-fabula pairings and two clashing theme-fabula pairings), as well as two titles with multiple themes. In the tests requiring single theme titles, users were given one regular paired title and one clashing one, alternating to the other two titles for the next participant. The titles were: We enforced a rule that no montage would contain more than one image by the same Flickr user, as images uploaded as part of a set by a single user would often have strong inherent commonality. All montages were generated in the same afternoon to ensure they were using as similar a state of Flickr as possible. When the images were presented individually they were randomised so as to prevent the identification of which images belonged together in montages.
Results
Recruitment to the experiment was through social media sites and attracted a total of 57 participants. Our findings were that the thematic system outperformed the co-occurrence based system both for individual images and for montages. Table 1 shows the frequency data and statistics for single images, and table 2 shows the same data but for the images grouped as montages (5 = highly relevant, 1 = not relevant). It is to be noted that in some cases a participant skipped or missed rating an image or montage and consequently the total frequencies are not identical (though they are similar). The hypothesis that the TMB selects images rated more relevant for the given titles than the co-occurrence based system is true with a 0.0005 probability of error, both for individual images and montages. While this improvement might seem slight it is important to view it in the context of both base cases. Figure 4 shows the mean relevance ratings of the four different methods of selecting the images. Standard error was calculated but is too small to display on these graphs. Both graphs show the thematic system outperforming the co-occurrence system. The margin of improvement, which at first might seem small, is more impressive considering the margin between entirely random images and images purposefully selected to make the best montage possible.
We note that images are rated higher when presented as a montage (with the exception of BaseL). As shown in table 3, the average improvement in relevance rating from the rating given as a single image to the rating given as a montage is, however, higher for images selected by the TMB than for those selected by the co-occurrence based system. The hypothesis that the TMB experiences a stronger improvement from individual images to grouped images is true with a less than 0.0005 probability of error.
We also recorded how both systems performed for titles that contained a single theme as well as those with multiple themes. This is shown in table 4. We recorded how each system performed for titles with a clashing theme keyword pairing as well as those with a regular pairing. This is displayed in table 5.
Analysis
Our data shows that semiotic term expansion driven by our thematic models is a more effective means of expanding thematic keywords than co-occurrence. The relevance of TMB images was rated higher for both single and grouped images than the co-occurrence images, and the improvement from single presentation experienced by images presented as a montage was also greater for the TMB, all to a degree that can be considered statistically significant. While the improvement experienced may at first seem slight, the standard error on the means shown is very small (0.027 to 0.074) and in the context of the two base cases the improvement is more impressive. The improvement from entirely random images to images purposefully selected by hand is 1.872 for single images and 3.046 for grouped images; the improvement from co-occurrence to TMB is 0.419 and 0.703 for single and grouped respectively. Semiotic term expansion also showed it was more capable of selecting images for titles containing multiple themes; this can be attributed to the way thematic score is calculated, emphasising images relevant to both themes and looking for common shared motifs. As before, the TMB's weakest performance was when it was required to produce montages for titles with a clashing theme-fabula pairing; this is to be expected, as the features representing the specific desired motifs will rarely be found within the corpus. However, in this case the co-occurrence system also struggled and performed comparably badly.
The lower performance of the co-occurrence system may be explained by query drift as discussed in [53]. This is to some extent borne out by examining the image sets generated by co-occurrence; for example, we can see it has drifted from winter to snow to snowdrop (the flower). It has also been noted in work such as that by Xu [52] that the best results from co-occurrence come when it is trained using a local corpus that is known to be relevant to the query being expanded. While we were training using a local corpus, it was not specifically relevant to the element we were expanding; for that to be the case the corpus would (as an example) have to be populated with a Flickr search for 'London in Winter' rather than just 'London'. If this is the case, it is possible co-occurrence is less effective for the expansion of terms, such as a theme, for which it is more difficult to acquire a training corpus of ascertained relevance.
There is the possibility that the TMB may be particularly well suited to a particular title and that its average was therefore inflated by an individual case. In order to analyse this a little further, table 6 displays the mean rating for each title from both the TMB and the co-occurrence systems for single images whereas table 6 does the same for montaged images. Both tables also show the improvement in relevance made by the TMB (negative numbers representing instances where co-occurrence performed better).
The TMB has scored significantly higher for titles 1 and 5, which were 'London in Winter' and 'Family in New York at Winter'. However, if we remove the mean ratings for both titles including winter entirely we find the TMB still has a higher mean than co-occurrence for both single and montaged images, showing 2.380 for the TMB and 2.267 for co-occurrence for single images, and 3.243 for the TMB and 2.992 for co-occurrence for grouped images. It is also still statistically significant, even excluding the winter titles; the TMB performed better than the co-occurrence system. To summarise our findings:
- It is possible to use definitions created in terms of a thematic model to generate simple photo montages relevant to a desired theme.
- A system using thematic definitions creates montages rated more relevant than those offered by either basic keyword search or co-occurrence term expansion.
- The thematic system is still effective in situations demanding multiple themes but less effective if the desired content and theme clash.
- While all systems are more effective at finding themed montages rather than single images, the improvement experienced by the thematic system is greater.
Second Task: Illustration and Thematic Cohesion
Our second objective was to assess the impact of the thematic model on the automatic illustration of a short story. In particular, is it better than regular search in terms of thematic cohesion? In order to do this some tangible ways of measuring the cohesion of a narrative must first be established.
Cohesion Variables
By narrative cohesion we mean the extent to which the various parts of a narrative successfully work together to produce some overall effect in the reader. There are a number of different ways in which a narrative can be considered to be cohesive. Genre is a common classification of narrative based upon a set of reoccurring features that position a narrative culturally within the context of other narratives. Tomashevsky suggested that the genre of the narrative was what limited the motifs available [48]. The Coh-Metrix project [14] worked towards creating a system for analysing the coherence of texts through several metrics (including latent semantic analysis, term frequency and density, and concept clarity). The measuring of these metrics however was intrinsically based upon the genre of the narrative, which they identified as important to coherence [38]. In his work identifying key features of narrative Bruner [9] also highlights the importance of genre to cohesion. Under his discussion on 'Genericness' he explains how genre is a way of 'comprehending narrative.' By conforming to convention the narrative guides the audience to subconsciously fill in gaps in the presentation and make sense of the content.
In work by Booth [8] there is a description of the importance of the concept of narrator in narrative. As the narrator is core to the telling of the story, coherence in how the narrator is presented is also important to the cohesion of the story itself. McAdams explains from the perspective of modern psychology that people become narrators in order to make sense of a series of events or stories, thus it is the presence of a narrator that leads to coherence in a story [37].
We have already discussed how the logical use of language may affect the coherence of a narrative; however, there are other linguistic choices made in the telling of a story that might also affect its coherence. Earlier we discussed how structuralists such as Barthes [6] and Bal [5] consider narrative to be comprised of layers, often of story and discourse, where story stands for content and discourse for how the story is told. Features of discourse have already been identified here (themes, genre, narrator) but these cannot be said to completely account for the language choices made in presenting a narrative. The use and style of language can have an effect on its coherence. Style can be said to be a composite of attitude, tone, and mood of a narrative, representing decisions made on the presentation of elements at the discourse level. The stylistic cohesion of a narrative could be said to be in part the extent to which an author sets out and then abides by their own linguistic conventions.
From the literature we have thus identified five key variables for narrative cohesion [18]:
- Logical Sense: the connective language used to explain the content of the narrative.
- Themes: the concepts communicated implicitly throughout the narrative.
- Genre: the conformance to conventions that culturally contextualise the narrative.
- Narrator: the presence of a consistent perspective communicating the narrative.
- Style: the way narrative elements are presented within the discourse.
Measured appropriately, and considered together, we propose to use these cohesion variables as a basis to understand the level of cohesion within a narrative that has been automatically illustrated.
The Illustrator Experiment
Having decided upon these metrics for measuring narrative cohesion we can now address our second and third research questions, and look at how illustrations selected by our semiotic term expansion method alter the perceived thematic cohesion of a narrative, and whether this subsequently impacts the perceived cohesion of the narrative as a whole.
Methodology
For this experiment participants filled in a web questionnaire on the perceived narrative cohesion of three short stories with illustrations. The three short stories selected had three different methods of generating illustrations for the stories, thus nine possible combinations, with each user seeing the three stories with illustrations generated from different methods. The illustration method to story pairings were rotated using the principle of Latin squares to get a spread of data for each method on each story.
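The Latin-square rotation of story-to-method pairings can be sketched as follows; this is a minimal illustration of the principle, not the experiment's actual code:

```python
def latin_square_assignments(stories, methods):
    """Rotate story-to-method pairings so that, across participant groups,
    each story is seen with every method exactly once (a 3x3 Latin square
    for three stories and three illustration methods)."""
    n = len(methods)
    return [
        {story: methods[(i + j) % n] for j, story in enumerate(stories)}
        for i in range(n)
    ]
```

Each row of the result is one participant group's assignment; within a row every story gets a distinct method, and down any column every method appears once.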
The stories used in the experiment were divided into logical sections with each section given an illustration. To facilitate this the stories were stored as XML, allowing them to be marked up to show where the different sections began and ended. The XML model for each story stored a content keyword for each section as well as a theme for the whole story. These keywords and themes were used in the selection of the images.
The stories used were selected from Steve Ersinghaus' contributions to the 2009 100 Days project, where he wrote 100 short stories. This was an ideal resource for the experiment: a large collection of stories with suitably complex themes, strong imagery that lent itself to illustration, and an author who was happy to engage with the experiment. Fifteen of the stories were reviewed for their suitability for the experiment. The stories that were picked were the ones which logically fell into 3-5 sections (each of which could receive an illustration) and were of an appropriate length for the planned experiment (taking less than 10 minutes to read). Also, to ensure the spectrum of naturally occurring coherence in the plot was covered, a story that was distinctly abstract (and arguably authored with deliberately low cohesion) was selected, as well as a story that was more deliberately strongly coherent, and a third that fell somewhere between. The three stories selected were: The illustrations for the stories were generated by one of three methods:
- Method 1 (Content and Theme): Illustrations were generated based on a content keyword for each section and a theme selected for the story. This was done using the TMB with a corpus based on the content keyword from Flickr and the theme designated for the story. A comparison between methods 1 and 2 would show whether thematic cohesion had increased due to the themed images and also whether this had resulted in a change in other cohesion variables. Method 3, on the other hand, gave our results context with an intended best-case scenario. The expert for method 3 was an English Masters graduate from Cambridge University with a history of involvement in both literary criticism and computer science research communities, and was independent of the research team.
In generating the meta data necessary for the experiment attempts were made to be as fair and impartial as possible. Before selecting images for method 3 our expert was asked to identify a keyword to describe the literal content of each defined section of the stories and also to list the themes that they felt were present within each story. They were also asked to identify from their lists of themes for each story which they felt was the strongest theme. The strongest themes went into the story models as the listed theme for each story and the keywords for content identified were entered for the content keyword for each relevant section.
Having completed this, the newly identified strongest themes were modelled into definitions for use with the TMB. To keep the definitions of the identified themes impartial, three volunteers were asked to follow the thematic definition guide explained in an earlier section to define the themes. While an expert in the model was present during this process to help collaboratively in forming these definitions and to ensure the models created were valid, creative control of the definitions was left solely to the volunteers, and all the themes and motifs comprising the model were identified by them. The stories and their identified themes are displayed in table 7. Having completed our models of the stories, illustrations were generated for them using the various methods and added to the models. In the case of our own approach this followed the same procedure as dictated in section 4.1 and figure 3. As Flickr is a user-generated collection it is possible that individual images might be incorrectly tagged. While the effect of individual images was reduced in the previous experiments by the large volume of images involved, the number of illustrations viewed in this experiment is much smaller and as such the effect of a single anomalous image is potentially increased. To reduce the effect of individual images each system selected its top five images instead of one for each illustration, and when participants viewed the illustrations a random image from this montage of five would be selected as the actual displayed illustration.
The images selected obeyed similar rules to our previous experiment in that illustrations for a single story may not contain more than one image per Flickr user (as images from the same set may inherently be cohesive). Selected images were reviewed with the intention of removing any potentially offensive images, or images with impractical height to width ratios, however, ultimately no images needed to be removed.
The experiment was advertised through social media and 66 participants took part. Participants were emailed a link to a brief introduction and a glossary of terms to ensure they knew what was meant by terminology such as themes, genre, narrator, etc. Participants were asked when reading the story to also consider the illustrations. Once they had begun the participants were shown the first story with its illustrations and then asked to answer a short questionnaire (explained below). This process was repeated for all three stories.
The questionnaire was designed to measure the perceived cohesion based on the five variables we had identified as related to narrative cohesion. Each question was answered using a single Likert scale of 1-5 (5 being the very positive response) with the exception of question 2, which asked the users to rate each theme on a list of 23 themes (the entire list of themes identified by the independent expert for all stories). The questions were:
1. How logical was the story? E.g. did the story make causal sense to you?
2. Please rate the strength of the presence of the following themes in the story. E.g. how apparent was it that these themes were present? Were they subtle or overt? (Followed by a list of themes)
3. How strongly do you feel this story fits into an established genre?
4. How strong and consistent was the presence of an identifiable storyteller? E.g. was the story told from a perspective you could easily identify?
5. Is the style, presentation, and language used to express the story consistent? E.g. is the story throughout presented in the same way or does it frequently change tone?
Stories were displayed in a deliberately plain format on a single page. While this could lead to a long page, navigating can break immersion when evaluating a narrative [13] and as we were measuring cohesion we were keen to avoid this. A screen shot of a narrative displayed through the system can be seen in figure 5.
Results
The results for different story and method pairings can be found in table 8 and the graph in figure 6. For Logic, Genre, Narrator, and Style the mean of the rating for the relevant question was used; for theme, however, our question was more complicated and this warranted a more sophisticated scoring system. Thematic cohesion has been divided into three scores: Theme(S) representing the mean score for the strongest theme (as identified by our independent expert) for that story, Theme(I) representing the mean score for all the other included or present themes identified in that story, and Theme(E) representing the mean score for all the themes not identified by our expert for that story.
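The three thematic scores described above can be sketched as follows; the grouping logic follows the description in the text, but the function and argument names are hypothetical:

```python
def theme_scores(ratings, strongest, identified):
    """Split per-theme Likert ratings into the three thematic scores.

    ratings    -- theme -> list of participant ratings (1-5)
    strongest  -- the expert-identified strongest theme for the story
    identified -- set of all themes the expert identified for the story
    """
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    # Theme(S): the expert's strongest theme for the story.
    theme_s = mean(ratings.get(strongest, []))
    # Theme(I): the other themes the expert identified as present.
    included = [r for th in identified - {strongest}
                for r in ratings.get(th, [])]
    # Theme(E): themes rated by participants but not identified by the
    # expert ('thematic noise').
    external = [r for th in ratings if th not in identified
                for r in ratings[th]]
    return {"Theme(S)": theme_s,
            "Theme(I)": mean(included),
            "Theme(E)": mean(external)}
```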
Analysis
The results lead to some interesting observations. First of all, as might have been expected, the overall cohesion scores of the deliberately selected abstract story 'The Point' were lower than the other two stories (a total average of 2.351, as opposed to 3.702 for 'The Night', and 3.864 for 'Computer Leon'). The story selected for deliberately high cohesion scored generally higher. This helps support the general notion that our questionnaire was able to record cohesion scores. However, conclusions based on the different methods of presentation are not straightforward, with no method significantly and consistently raising cohesion above the other methods. Our research question was whether thematic illustrations selected by a thematic system improved the perceived thematic cohesion of the narrative. To answer this we need to consider how an improved thematic cohesion would manifest within the scores. As a story becomes more thematically coherent, its stronger deliberate themes would be identifiable throughout and false or unintended themes (what we might refer to as 'thematic noise') would become less detectable. As such, in our thematic scores we would expect to see Theme(S) rise and Theme(E) decrease for a successful increase in thematic cohesion.
Analysing the overall data for the range of stories we find that the thematic approach (TMB) has increased Theme(S) and decreased Theme(E) over the generative approach not using themes (Keyword Search). However, when putting this through a t test the hypothesis 'TMB scores Theme(S) higher than Keyword Search' scores a t of 1.181 (df=130, p=0.2) whereas 'TMB scores Theme(E) lower than Keyword Search' scores a t of 2.607 (df=2010, p=0.005), showing that while the decrease in Theme(E) is statistically significant with only a 0.005 probability of error, the increase in Theme(S) is not statistically significant with a 0.2 probability of error. Thus we can conclude that while the images selected by semiotic term expansion have improved thematic cohesion, they have done this only by reducing thematic noise, rather than increasing the presence of a specific theme.
The style of the story may well be a factor in the ability of the Thematic Illustrator to improve thematic cohesion. Our results (as shown in table 8) show that for the thematic approaches, improvement of Theme(S) over the keyword approach is much more substantial for Story 2 ('The Night') than for other stories. Also to be noted is the relatively minor or negative effect on cohesion of thematic emphasis in Story 1 ('The Point'). This could be attributed to the relatively abstract style of story making it difficult to automatically generate relevant or effective illustrations and as such reducing the effect of illustrative emphasis.
To answer our other research question, whether an increase in thematic cohesion leads to an improvement in overall cohesion, we performed a Pearson's correlation between Theme(S) and each of the other non-thematic metrics. The results are presented in table 9. What we find is a moderate correlation with Logic (p = 0.005), and a weak correlation with Genre (p = 0.05). There is also a weak but non-significant correlation with Narrator (p = 0.1), and almost no correlation at all with Style. These results suggest that a system capable of improving thematic cohesion could see an improvement in other cohesion variables, in particular Logic and Genre. This would provide a strong argument for pursuing methods of thematic emphasis as it might be used to raise the coherence of generated or adaptive narratives. However further work is needed to establish the ways in which these variables are dependent on each other.
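The Pearson's correlation between Theme(S) and the other cohesion metrics can be computed with a short stdlib-only function; the name and shape of the helper are illustrative:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two rating series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation denominators.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Applied pairwise to the per-participant Theme(S) scores and each non-thematic metric, this yields the r values reported in table 9.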
Within this work we have begun to understand how narrative cohesion may be modelled and captured. The experiments presented here have also shown that it is potentially viable to alter the coherence of a narrative through thematic emphasis using illustrations. While more work is necessary to build a complete understanding of the effect of thematic emphasis, significant steps have been made here to establish metrics, the effect on thematic cohesion (in particular thematic noise), and the relationship between different variables of cohesion.
Conclusions
We began this work by noting that the research on using narrative concepts for information retrieval and the automated generation of content often tends to ignore subtext or at least does not explore narrative themes. We have suggested that a way in which subtext can be explored is by modelling themes based on thematic structuralist theory [48], and have used these thematic models as the basis for a semiotic term expansion.
Our goal has been to see if a search strategy based on this semiotic term expansion will yield more thematically coherent results and lead to better automated remixing of online materials, in particular the automatic construction of photo montages, and illustration of short stories. We outlined three specific research questions: Question 1: Will using semiotic term expansion compose themed image montages that are more thematically consistent than term expansion based on co-occurrence?
In previous work we had shown that semiotic term expansion works, and is more effective than keyword search [17]. However, the thematic models required to drive semiotic term expansion are expensive to create and it was therefore important to show how our method compared to more established methods of term expansion, in particular co-occurrence. Our first experiment shows that our system using semiotic term expansion outperformed term expansion based on co-occurrence with statistical significance (p=0.0005). While the scale of the improvement is small in objective terms, when considered relative to the high and low base cases in our experiment it represents a more sizable improvement. We acknowledge that our conclusions here are limited to our own specific implementations (which we detail), that co-occurrence remains the basis of many state-of-the-art approaches, and that minor technical refinements might be made to both implementations. However, our results still demonstrate the value and potential of our approach.
Question 2: If we use these result sets for automatic illustration will it improve the perceived thematic coherence of a short story?
Improving thematic cohesion can be broken down into two parts: improving a chosen theme, and dampening unwanted themes. In our second experiment we have shown that using semiotic term expansion dampened unwanted themes significantly (p=0.005), but did not necessarily improve the perceived cohesion of the chosen theme. This may indicate that there is a certain ceiling to what can be achieved in terms of promoting a theme, but does show that thematic noise can be effectively reduced.
Question 3: Does improving the thematic coherence of a short story also improve the perception of other coherence factors (such as logical coherence)?
We have presented a number of coherence factors drawn from the literature and have been able to look at the correlations in the improvement of the different factors to see if making a change in one actually has an impact on the rest. We have shown using Pearson's correlations that improving perceived theme correlates moderately with perceived logical coherence (r=0.30, p=0.005), and weakly with genre cohesion (r=0.19, p=0.05). This is evidence that improving thematic cohesion gives readers the perception that the story is more coherent in other ways.
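The correlation statistics reported above are standard Pearson product-moment coefficients. For concreteness, a self-contained sketch of the computation is given below; the per-story ratings are invented for illustration and are not our study data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-story scores: improvement in perceived theme vs. in
# perceived logical coherence (values are made up for the example).
theme = [0.2, 0.5, 0.1, 0.9, 0.4, 0.7]
logic = [0.1, 0.4, 0.2, 0.8, 0.3, 0.5]
print(round(pearson_r(theme, logic), 2))
```

In practice a p-value would accompany r (e.g. via a t-test on r with n-2 degrees of freedom), as in the figures reported above.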
Our research has therefore shown that semiotic term expansion based on a thematic model is effective at making search results more thematically relevant and we believe that it might be utilised in conjunction with other models of narrative to improve narrative generators or other re-mixing systems.
The success of our semiotic term expansion is reliant on the quality of the thematic definitions built for it. Due to the subjective nature of the model, this is in turn reliant on human authors. In previous work we have shown that it is possible to provide a guide that leads to the creation of effective models, and this does provide some systematic structure to the creation of thematic models. However, more work is needed to explore whether models could be constructed automatically, for example by using clustering techniques to derive coherent terms and concepts from social media streams [20].
Our work is unusual in that it focuses on subtext and narrative themes, rather than the primary media content or structural elements. Our results show that subtext, in particular thematic subtext, can be successfully manipulated by a machine.
Semiotic approaches provide a way for us to model the underlying meanings and intentions of authors and creators, leading to opportunities for improving both search and automatic content generation. This work represents a contribution towards those goals, but requires further development in how semiotic structures could be created and how they could be applied. The ultimate goal is that systems will begin to understand and utilise the subtler aspects of narrative in as meaningful a way as we do, and that their ability to search, analyse, or generate narratives becomes correspondingly more powerful. Future work in this space might seek ways to accelerate the construction of thematic definitions (which is time-consuming), explore the application of the model for coherence in other domains, or probe the limitations of the approach in other ways, such as with other media (e.g. video), a broader selection of stories from a wider range of genres, or with even larger and more varied collections of themes.
2021-06-01 | 235268292 | CC-BY | https://journals.biologists.com/jcs/article-pdf/134/13/jcs253484/2088190/jcs253484.pdf | pes2o/s2orc
An optogenetic method for interrogating YAP1 and TAZ nuclear–cytoplasmic shuttling
ABSTRACT The shuttling of transcription factors and transcriptional regulators into and out of the nucleus is central to the regulation of many biological processes. Here we describe a new method for studying the rates of nuclear entry and exit of transcriptional regulators. A photo-responsive LOV (light–oxygen–voltage) domain from Avena sativa is used to sequester fluorescently labelled transcriptional regulators YAP1 and TAZ (also known as WWTR1) on the surface of mitochondria and to reversibly release them upon blue light illumination. After dissociation, fluorescent signals from the mitochondria, cytoplasm and nucleus are extracted by a bespoke app and used to generate rates of nuclear entry and exit. Using this method, we demonstrate that phosphorylation of YAP1 on canonical sites enhances its rate of nuclear export. Moreover, we provide evidence that, despite high intercellular variability, YAP1 import and export rates correlate within the same cell. By simultaneously releasing YAP1 and TAZ from sequestration, we show that their rates of entry and exit are correlated. Furthermore, combining the optogenetic release of YAP1 with lattice light-sheet microscopy reveals high heterogeneity of YAP1 dynamics within different cytoplasmic regions, demonstrating the utility and versatility of our tool to study protein dynamics. This article has an associated First Person interview with Anna M. Dowbaj, joint first author of the paper.
In 1C (and this applies actually also to 2B,C), why does the fluorescence signal on mitochondria rise above the initial values during the recovery phase? It might be useful to also plot the total cell intensity.
In figures 1F and G, it would be good to spell out mCherry above the graphs.
Figure legend for 1F is weird. Figure 3C lacks the r and p values for correlation. Figure 4 contains a lot of data, and it is slightly difficult to identify the most meaningful data. I suggest making a much clearer distinction between the correlations that were statistically significant and those that were not. In addition, one parameter that should have been included here is the intensity of the utilized construct, in order to check for possible overexpression problems. It might be interesting to also think about other possible parameters, such as the ratio of import and export, or the sum of import and export (overall shuttling speed), to draw out possible biologically meaningful correlations.
The use of lattice light sheet in Figure 6 is interesting, and especially the use of repeated pulses of release as "technical replicates" within the same cell is exciting. However, the present analysis of the results does not really add much to the story, and there is no attempt to study nuclear transport with this method. The images in figures 6B and C could be clearer, and the channels should be shown separately, because the mCherry signal masks the mito signal. How were the regions in 6E chosen? Figure 6F might be clearer if shown only with the lines.
Discussion line 336: it is stated that the peptide does not interfere with the endogenous activity of YAP1 and TAZ with reference to figure S1E, which is actually a Western blot showing the expected sizes of the proteins. S1F shows a luciferase assay, which demonstrates that the constructs can activate transcription, but with the presented data it is too strong a statement to say that the peptide does not interfere with endogenous activity.
Reviewer 2
Advance summary and potential significance to field
This manuscript describes the development of a new optogenetic tool for interrogating nuclear/cytoplasmic shuttling, as well as a model and software package to analyze the associated data. These tools are then used to study the shuttling of YAP and TAZ. This is an exciting tool and should be used widely by the field. However, there are some missing controls and unjustified assumptions that prevent the manuscript from being suitable for publication at this time. Also, in places the manuscript is poorly written and there are many small errors in the text and figures.
MAJOR CONCERNS
1. The optogenetic tool is based on the LOV-TRAP system and is initially validated with mCherry. A natural control to verify proper functionality would have been to use mCherry with a nuclear localization and/or export sequence. Seeing the expected differences in the import and export rates of these constructs would further establish that the system is working as expected and show that accurate rates can be determined with the overall procedure. The authors should either add such controls or justify in the text why they were not completed.
3. A major portion of the work is the development of a model to describe the observed data; however, the role of diffusion is ignored in this model. This omission is confusing, as the authors have done this type of modelling before (Ege, Cell Sys, 2018). The justification for ignoring diffusion in this work should be established quantitatively and stated in the text. Additionally, the data acquired with the lattice light sheet demonstrating variability of import/export rates throughout the cell would be most easily explained by local differences in YAP diffusivity.
4. The data obtained with the lattice light sheet seem very preliminary, as the number of measurements seems quite low. Also, it is challenging to interpret these results as presented with a model that does not contain diffusion. Additionally, experiments with mCherry should be included to establish that the observed spatial variation is related to YAP functionality and not a physical process, such as molecular crowding. The authors should either substantially increase the quality of these data or consider removing them. Their inclusion does not bolster the main points of the paper, so this reviewer recommends removal.
5. The "Differential equation to model nuclear import and export" section should be rewritten, or the title of this section should be changed to reflect the fact that there is substantially more in this section than just the equations. This reviewer suggests the comparison to FLIP be given its own section, as this is key to the validation of the approach.
6. The authors should discuss how the estimates of YAP1 import/export and TAZ import/export compare with previous measurements in the "Application of opto-release methodology to YAP and TAZ" section. The consistency with FLIP demonstrates internal consistency of the study, but consistency with previous measurements should also be established.
7. As transient transfections were used to create the system, large variations in expression levels between cells in the population are likely. The authors should show data demonstrating that the results are not dependent on the absolute expression levels of the transfected components.
8. The maximal nuclear accumulation of YAP using this system (for example in Fig 2) is quite low throughout the experiments in the manuscript. It cannot be determined whether this was due to low levels of YAP nuclear localization or to incomplete release of YAP from the mitochondria due to a defect in the optogenetic tool. The authors should perform experiments distinguishing between these two possibilities.
9. The text in the "Import and Export Rates" section is vague and unorganized. It should be rewritten to provide a more precise explanation and interpretation of the data. Also, the title needs adjusting; I believe the main point is that the import and export rates are correlated for YAP but not for other constructs.
10. The development of the semi-automated software is presented as a significant part of the work. Including a supplemental figure that demonstrates the proper functionality of the software on simulated data would provide definitive proof that the code is functioning properly.
11. Figure captions generally lack key details, like the number of cells in each experiment and the number of experimental days. More detail should be added to these captions.
12. The availability of the MATLAB code is not stated.

MINOR CONCERNS
1. The time courses in all figures should be converted from frame number to time.
2. Is the mCherry data repeated from Figure 1 in Figure 2? If so, this should be stated. Also, how does this affect the statistical comparisons?
3. In Figure 3C, is the distribution of YAP_S94A bimodal?
4. In Figure 5, the positions of the various ROIs should be shown. Are the 5 regions equidistant from the nucleus?
5. Figure 1 shows YAP1/TAZ in the schematic, but all of the data regard mCherry.
6. What is shown in Supp. Fig. 8, and its relevance to the manuscript, is unclear.
7. Image size/quality is generally low throughout the manuscript; ideally larger images (in both manuscript footprint and quality) should be included.
8. Page 11, line 276: It might be noted here that YAP does not have a canonical NLS ("Importin alpha1 mediates Yorkie nuclear import via an N-terminal non-canonical nuclear localization signal." J Biol Chem 291, 7926-7937).
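The validation on simulated data requested in major concern 10 could take roughly the following shape: generate a trace from a model with known rate constants, run it through the fitting code, and check that the ground truth is recovered. The sketch below is a deliberately simplified two-compartment version (nucleus and cytoplasm only, no mitochondrial pool, no diffusion) with hypothetical rate constants; it is not the authors' MATLAB pipeline.

```python
def simulate(k_in, k_out, n0=0.0, total=1.0, dt=0.01, steps=2000):
    """Euler integration of dN/dt = k_in*C - k_out*N, with C = total - N.

    N is the nuclear fraction, C the cytoplasmic fraction."""
    ns = [n0]
    for _ in range(steps):
        n = ns[-1]
        ns.append(n + dt * (k_in * (total - n) - k_out * n))
    return ns

def fit_rates(ns, dt=0.01, total=1.0):
    """Least-squares recovery of (k_in, k_out) from a simulated trace,
    by regressing the finite-difference dN/dt on C and -N (2x2 normal
    equations solved by Cramer's rule)."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for i in range(len(ns) - 1):
        y = (ns[i + 1] - ns[i]) / dt          # finite-difference dN/dt
        x1, x2 = total - ns[i], -ns[i]        # regressors for k_in and k_out
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        b1 += x1 * y;   b2 += x2 * y
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Ground truth k_in = 0.5, k_out = 1.0 should be recovered near-exactly.
k_in_hat, k_out_hat = fit_rates(simulate(0.5, 1.0))
print(round(k_in_hat, 3), round(k_out_hat, 3))
```

A supplemental figure built this way, sweeping ground-truth rate constants and noise levels, would demonstrate the operating range of the analysis software.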
Reviewer 3
Advance summary and potential significance to field
See below.
Comments for the author
The paper "An optogenetic method for interrogating…" authored by Dowbaj and colleagues reports on the development and use of an AsLOV-based optogenetic tool to control the cellular localization (including mitochondria, cytoplasm and nucleus) of the YAP transcription factor. Using this approach, the authors quantify, using a MATLAB-based app, the rates of nuclear entry/exit under a variety of conditions. Finally, they combine the optogenetic tool with the use of light-sheet microscopy to measure the dynamics of the transcription factor within the cell in 3D. I think the paper has very interesting and novel elements (such as the use of a YAP optogenetic tool and the capability of its 3D tracking); however, the quantification part suffers from multiple fundamental mathematical flaws that unfortunately massively impact the quality of the work. I honestly hope that the points below help the authors with re-analysis.
Major points
1. The model has 2 different rate constants for protein unbinding from the mitochondria, depending on whether blue light is on or off, i.e. whether the LOV domain is excited or relaxed. However, the rate constant for protein binding to the mitochondria must also take on 2 different values depending on the LOV domain conformation. Without taking this into account, the model is unphysical and, because all obtained rate constants are interdependent (since they are fit simultaneously), all rate constants obtained with the model in its current form suffer from this.
2. On lines 756-758, the authors state that relaxation of AsLOV2 domains occurs on the timescale of seconds. Therefore, for illumination to be considered constant, light should be supplied at a rate at least an order of magnitude higher than the relaxation rate, ideally continuously (simply achieved using widefield illumination). If light is supplied to LOV domains at a rate equal to their relaxation rate, between illumination pulses the fraction of stimulated domains will drop exponentially to exp(-1) = 0.37. This is particularly important given the interconnected nature of the measured rate constants.
3. A particularly worrying point is that, after analysis (in Fig. 1F for mCherry, and in other figures for other constructs), there is no significant difference in the rate constant of mitochondrial release when the light is on ("mito light") or off ("mito dark"). The former should be many times greater than the latter; as they are, these values would state that stimulation does not work.
4. The rate constants (k's) found here have an inherent dependence on cellular parameters such as cytoplasmic/nuclear volume. This would be clear if the differential equations were derived from first principles. E.g. Timney et al. JCB 215, 57 (2016) use differential equations derived from first principles, so measured rate constants can be transformed into quantities independent of nuclear/cytoplasmic volumes, number of NPCs, etc. Only after a transformation such as this can correlations be investigated. A similar transformation needs to be applied to the rate constants measured in this study before any correlations between rate constants and cellular parameters, or between import and export rate constants, can be performed fairly.

Other major points
5. The methods of "bleaching intensity normalisation" and "non-conserved intensity correction" are overcomplicated and introduce a troubling number of free parameters into the data processing. If c(t), m(t) and n(t) are simply the cytoplasmic, mitochondrial and nuclear intensities, normalised by their combined intensity (i.e. the whole-cell intensity), then photobleaching will automatically be accounted for and c(t)+m(t)+n(t)=1 at all times; thus an outflow-inflow function is not required.
6. Fig. 1B supposedly shows H2B-mTurquoise labelling the nucleus. Can the authors also show this channel individually (not merged), as it has a sparse, speckled appearance? In addition, with reference to lines 760-762, it is hard to see how imaging of H2B-mTurquoise does not interfere with optogenetic activation despite using the same wavelength. Fluorescent imaging typically needs much higher intensity than optogenetic stimulation.
7. There are 2 examples of data being processed inequivalently: lines 160-162, constant thresholding is applied to some cells and dynamic thresholding to others; lines 1016-1020, an inflow-outflow function is applied to some cells and not others. Making any comparison or assimilation of data that has not been processed in exactly the same manner is difficult.
8. Almost every instance of "rate" throughout the paper, in the text and figures, should be "rate constant": the model used finds rate constants. This distinction is very important and conceptually crucial.
9. Throughout this work, data has been normalised, but it is never made clear with respect to what. For example, in Fig. 1C and similar graphs, the vertical axis is called intensity, but it is clearly normalised (I assume to the intensity of the entire cell, so that these numbers represent the fraction of intensity that comes from the mitochondria, cytoplasm, and nucleus, but this is not made clear). Since these values are input into quantitative modelling, it is vital that they are clearly explained. In particular, I wonder whether the data has been normalised by the cellular area (in confocal microscopy) or the cellular volume (in lightsheet microscopy).
10. Several statements are not supported by the evidence provided: lines 111-113, neither figure shows information on expected localisation; line 120-122, Fig. S1C does not show enrichment to mitochondria; lines 125-127, Fig. 1B does not show an increase in cytoplasmic fluorescence; lines 228-231, YAP1_5SA has a low import rate as well as a low export rate, so the claim that nuclear persistence is a result of a low export is not justified; lines 247-250, there is no data/figure to support this claim.
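The modelling issue the reviewer raises in points 1-3 (dark and lit states must each have their own binding AND unbinding rate constants, and stimulation should visibly shift the occupancy) can be made concrete with a toy simulation. All rate constants and fractions below are invented for illustration and are not fitted values from the manuscript.

```python
def simulate_lovtrap(t_end=300.0, dt=0.1, light_on=100.0, light_off=200.0):
    """Toy LOV-TRAP sequestration model with DIFFERENT dark and lit values
    for BOTH the mitochondrial binding (k_on) and unbinding (k_off) rate
    constants. All numbers are illustrative only."""
    rates = {"dark": (0.05, 0.01), "lit": (0.02, 0.04)}  # (k_on, k_off) in s^-1
    m, c = 0.8, 0.2         # mitochondrial and free fractions, m + c = 1
    traj = []
    for i in range(int(t_end / dt)):
        t = i * dt
        k_on, k_off = rates["lit" if light_on <= t < light_off else "dark"]
        flux = (k_on * c - k_off * m) * dt   # net binding this step
        m, c = m + flux, c - flux            # conservative exchange
        traj.append((t, m, c))
    return traj

traj = simulate_lovtrap()
# The bound fraction relaxes toward k_on/(k_on + k_off): about 0.83 in the
# dark and 0.33 under blue light, so protein is released only while lit.
```

In this formulation the two illumination states differ in both rate constants, and the normalisation concern of point 5 is satisfied automatically since m(t) + c(t) = 1 by construction.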
First revision
Author response to reviewers' comments
Reviewer 1
The manuscript by Dowbaj et al. describes an optogenetic method to investigate nucleocytoplasmic shuttling of proteins, and applies it to study the YAP/TAZ transcriptional regulators. The tool is based on LOV-TRAP, which is an optogenetic system for light-induced protein dissociation (Wang et al. 2016, Nature Methods). In the manuscript, the tool is utilized to recruit the proteins of interest to mitochondria, followed by light-induced release of the protein to the cytoplasm and then measurement of nucleo-cytoplasmic shuttling rates. This manuscript is very interesting, and it has several potentially important points. The first is the use of optogenetics as a tool to measure the transport rate of proteins, which is a cool idea. Second, the ability to measure transport rates of two proteins simultaneously from the same cell is exciting, since this cannot be easily achieved with the current photobleaching-based methods. Third, recording the fluorescence fluctuations in the whole cell with the use of lattice light sheet opens novel possibilities for studies on intracellular heterogeneity.
However, there are several profound issues, starting with the functionality of the optogenetic tool and lack of controls, that undermine the impact of the manuscript in the present state.
We are pleased that the reviewer finds that our approach is a 'cool idea' and the ability to measure the dynamics of two proteins simultaneously 'exciting'. Nonetheless, we also note his/her concerns and thank him/her for the constructive critique.
Major concerns
Comparing the release rates of the constructs in Figures 1F (for mCherry) and 2D-E (for mCherry vs. YAP/TAZ) indicates that the release rate is not significantly different between dark and light conditions. This indicates that the tool is not functioning properly, e.g. the protein is not efficiently released under the imaging conditions utilized here. The cell images also seem to indicate the same; the increase in cytoplasmic intensity is very small. Why is there any release in dark conditions, and why would this depend on the utilized constructs (mCherry vs. YAP/TAZ in 1D)?
The reviewer points out the information originally presented was insufficient to be confident that our tool was working in the intended way. Specifically, he/she queries the small degree of release of the Zdk-tagged protein from the mitochondrial LOV anchor. We agree that the data provided in the original submission was not optimal and have now taken several steps to address this.
1.
We have implemented a new orthogonal method to analyse the on and off rate constants of the Zdk-LOV interaction. In collaboration with the School of Mathematics at the University of Nottingham, we recently developed an analytical method for simultaneously deriving diffusion and on and off rate constants from confocal FRAP data. The theoretical part of this work is now accepted for publication, following peer review, in the Journal of Mathematical Biology. Using this approach, we have measured the on and off rate constants for Zdk-LOV interactions in both the dark and the light, and Zdk on and off rate constants in the absence of any LOV protein (in this context it would not be expected to have any high-affinity binding partners). In the absence of any LOV domain protein, the inferred on and off rate constants are 500 times higher, indicating no long-lived interaction with any immobile proteins (Figure 1G). Crucially, there is an increase in the off rate constant in blue light with no change in Zdk-mCherry diffusion (Figure 1H and Figure 1J). These data clearly support that our construct is working. The reviewer may additionally query why the magnitude of the change in off rate constant under blue light that we observe is only 2-4 fold, and not greater. The reason for this is that we deliberately chose a LOV/Zdk combination with a lower dynamic range; the original analysis of the different LOV and Zdk mutants, and the differences depending on which component (LOV or Zdk) is tethered to mitochondria, is reported in Wang et al., Nature Methods 2016. In contexts where one wishes to control a biochemical function with light, a large dynamic range between dark and light is desirable. However, our goal is to release only a small amount of protein so that we don't overwhelm the normal regulatory mechanisms controlling nuclear import or export, hence our choice of a LOV/Zdk pairing with a lower dynamic range.
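For intuition on how an off rate constant can be read from FRAP data: in a purely reaction-dominant regime the recovery is a single exponential, and k_off follows from a log-linear regression. The sketch below uses synthetic data with an invented k_off, and is far simpler than the diffusion-reaction treatment described above; it illustrates the principle only.

```python
import math

def koff_from_recovery(ts, intensities, plateau):
    """Estimate k_off from a reaction-dominant FRAP recovery curve,
    I(t) = plateau * (1 - exp(-k_off * t)), by log-linear regression.
    Toy estimator: real FRAP fitting must also handle diffusion."""
    ys = [math.log(1.0 - inten / plateau) for inten in intensities]  # = -k_off * t
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return -slope

# Synthetic recovery with k_off = 0.02 s^-1, in the range reported for the dark.
k_true, plateau = 0.02, 1.0
ts = [5.0 * i for i in range(1, 40)]
data = [plateau * (1.0 - math.exp(-k_true * t)) for t in ts]
print(round(koff_from_recovery(ts, data, plateau), 4))  # → 0.02
```

With noisy data and appreciable diffusion, the single-exponential assumption breaks down, which is why the full diffusion-reaction analysis is needed in practice.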
2. In addition to the new orthogonal FRAP data, we have also re-analysed and re-plotted the original data (shown in the figure below for the reviewer's benefit). Although there was no significant difference between the means of the off rate constant in dark and light conditions in the original submission, when the data are plotted as paired measurements, which is entirely appropriate as the off rate constant is measured in the same cell in the dark and in the light, then light always leads to an increase in the off rate constant (top left panels below). Following a very insightful suggestion from reviewer #3, we now allow different on rates in the dark and light. Thus, in the new analysis the magnitude of the increase in the off rate constant in the light is greater. It also leads to tighter clustering of the rate constants in the dark. In all cases, blue light shifts the ratio of off and on rate constants in favour of the off rate constant (shown on the right below).
Reviewer Figure: The right-hand plot shows the change in ratio between off and on rate constants for cells in the dark and in the light. Wilcoxon paired non-parametric statistical testing is reported.
The reviewer may also consider the absolute values determined by the two methods. These are reassuringly similar, with the FRAP methodology reporting off rate constants of 0.01-0.02 s-1 in the dark and 0.03-0.075 s-1 in the light, and the opto-release methodology reporting 0.005-0.07 s-1 in the dark and 0.024-0.9 s-1 in the light. The main difference is the wider range of values reported using the opto-release methodology. Overall, it is striking how concordant the results are given the differences in methods.
3.
We have replaced the images with clearer higher magnification images in Figures 1, 2, and 5.
Regarding the more general question about release in the dark, even high-affinity interactions between two proteins are associated with on and off rate constants. It is not the case that, once bound, two proteins that exhibit a high-affinity interaction remain bound forever. Hence, there are measurable on and off rates even in the dark. In the new fitting of the data there are no significant differences in the off or on rate constants between mCherry, YAP1, and TAZ (either in the dark or the light; Kruskal-Wallis multiple testing). The increase in the off rate constant in the light is consistently highly significant, whereas the increase in the on rate constant is of varying significance. Once again, this confirms that the major effect of blue light illumination is to increase the off rate constant.
The experimental conditions, especially regarding the transient transfection of the constructs, are not fully explained, and there is no indication of whether/how this was optimized. The ratio of the different components of the optogenetic system is bound to be critical for the functioning of the system, and possible overexpression artifacts (and the presence of endogenous proteins in the system) should be controlled for, but these are not addressed or discussed. Different expression levels could be a source of the high cell-to-cell variability.
The reviewer is correct that the relative ratio of constructs is an important factor. To achieve the maximum number of cells transfected with both constructs, we typically transfected 1:1 or 1:2 ratios of TOM-LOV and Zdk-fusion constructs. Additionally, we now explain that cells were selected for low to moderate levels of fluorescent protein expression and effective sequestration on mitochondria in the dark (Methods lines 757-758). We have now performed new immunofluorescence analysis to address the relative stoichiometry of the TOM-LOV and Zdk-fusion proteins in cells that met the criteria for analysis. Briefly, we stained in parallel wells transfected with either Zdk-Flag-mCherry-YAP1 and untagged TOM20-LOV, or Zdk-mCherry-YAP1 and Flag-tagged TOM20-LOV. Anti-Flag immunostaining was then performed. This was followed by identification of cells with levels and sequestration of mCherry consistent with their selection for optogenetic analysis. These cells were then imaged for the intensity of the anti-Flag signal, which enabled comparison of the levels of expression of the TOM20-LOV construct and the Zdk-mCherry fusion in cells that met the selection criteria for optogenetic analysis. This analysis revealed a two-fold excess of the TOM20-LOV construct, now shown in Supp. Fig. 1D.
In addition to the analysis above of relative LOV and Zdk expression levels, we also compared the level of Zdk-FP-YAP1 over-expression in cells meeting the criteria for opto-genetic analysis with endogenous YAP1 levels in neighbouring un-transfected cells. These data are now shown in Supp. Fig. 3B and reveal a 2-4 fold level of over-expression. When this is considered with the additional information that roughly 10-15% of the sequestered protein is released by light, we can deduce that our opto-genetic methodology releases an amount of protein equivalent to 20-50% of the endogenous. This level is well-suited for imaging and, crucially, is unlikely to overwhelm the regulatory mechanisms that govern endogenous YAP1 or TAZ localisation.
The reviewer rightly asks whether expression level of the construct bears any relationship to the values that are measured. Supplementary Figure 2B now shows there is no relationship between the mean fluorescent intensity and the import or export rate constants. Similar results were obtained if plots were generated considering the total fluorescent intensity, but these are not shown due to space constraints.
As mentioned above, the ability to measure the transport rates of two proteins simultaneously in the same cell is very exciting. However, it seems that this system gives different results compared to system measuring only one protein. In figure 2G, TAZ is imported significantly faster than YAP, when measured separately. In figure 5B, YAP and TAZ import rates do not differ, when measured from the same cell.
The reviewer raises an important issue about whether different methodologies yield the same results. In the original submission, although the rates measured for YAP and TAZ import and export were broadly concordant between the single opto-release, double opto-release, and cyto-FLIP methods, there were some small discrepancies (e.g. the difference in YAP vs TAZ import was significant in the single, but not the double or cyto-FLIP, measurements). As outlined above, we have now re-fitted all the original data with the modelling, using starting mitochondrial off and on rate constants determined by FRAP and a variable on rate constant in the lit condition. This has reduced the spread of the data, with some of the outlying values for TAZ being reduced. The result is that the originally reported difference between YAP and TAZ is smaller and not statistically significant. We had not placed much emphasis on this result in the original submission, using the phrase 'slightly faster' to describe the difference between TAZ and YAP import and export rates. We have now removed that claim.
More generally, the reviewer is quite right that the methods should yield the same data. Therefore, we now present overlaid plots of the single, double, and cyto-FLIP import and export measurements for YAP and TAZ (single and cyto-FLIP plot is shown in Supp. Fig. 3H and single and double plot is shown in Figure 6D). For the benefit of the reviewer, all three are overlaid below and reveal good concordance between the different methods.
Reviewer Figure 2: Comparison of the import and export rate constants derived using single channel opto-release (grey), double channel opto-release (light blue), and cytoplasmic FLIP (red).
Specific points
In figure 1B, the cell images could be considerably bigger, since this is the proof-of-principle for the experiment, and it would be nice to easily see the effects. Also, the nuclear intensity appears very low.
The reviewer makes a good point; we have replaced the images throughout the manuscript with higher magnification examples. The movies have also been improved. Figure 1B&C now concentrate on mCherry and on demonstrating that the system is functioning, with quantification of the changes shown in Figure 1C provided in Figure 1D. Figure 3 now contains the data for YAP1. The relatively low nuclear signal reflects the fact that, for confluent HaCaT cells, the equilibrium position of both endogenous and exogenous YAP1 is in the cytoplasm, and that Zdk-FP-YAP1 is released into the cytoplasm.
It would be good to show the excitation/emission spectra for all the constructs, since the advantage of utilizing specifically these fluorophores is mentioned several times in the manuscript. Also, it is not clear from the materials and methods, how the excitations were done on the confocal, although this is very clearly explained for the lattice light sheet.
We agree that the excitation and emission information should be provided for the confocal imaging and now do so in the Methods (lines 769-780). We also explain that if the goal is to only image one protein, then Venus excited at 514nm, Mitotracker Red excited at 561nm, and DRAQ5 excited at 633nm is a good combination. If the ultimate goal is to image two proteins simultaneously, then Venus and mCherry are a good pairing.
In 1C (and this applies actually also to 2B,C), why does the fluorescence signal on mitochondria rise above the initial values during the recovery phase? It might be useful to plot also the total cell intensity.
The original plots showed the average of all the cells, and the elevation of the recovery signal above the starting phase was caused by a couple of cells with outlier signals following normalization. In the re-submission we show exemplars for single cells, and the improved fitting and normalization methodology described in reviewer #3 point #5 means that these slightly erroneous fits are no longer generated.
In figures 1F and G, it would be good to spell out the mCherry above the graphs.
We have now made sure that we use consistent and clear labelling across all figures. Figure 1 now shows Zdk-Venus images. For space reasons, we still use abbreviated forms of fluorophore names in some places with 'Ven' used to connote Venus and 'mCh' used to connote mCherry.
Figure legend for 1F is weird.
We apologise for this and have re-written all the figure legends.

Figure 3C lacks the r and p values for correlation.

We have now added these to Figures 4B (which was previously 3C) and 6E.

Figure 4 contains a lot of data, and it is slightly difficult to identify the most meaningful data. I suggest making a much clearer distinction between those correlations that were statistically significant compared to those that were not. In addition, one parameter that should have been included here is the intensity of the utilized construct, in order to examine for possible overexpression problems. It might be interesting to also think about other possible parameters, such as the ratio of import and export or the sum of import and export (overall shuttling speed), to draw out possible biologically meaningful correlations.
We thank the reviewer for these good suggestions. We now provide a plot showing the relationship between intensity and various metrics in Supplementary Figure 2B. The plots showing the correlations have been simplified to focus on possible associations between import and export and morphological features of the cells. This means that trivial correlations, such as between cell area and perimeter, are no longer included. Regarding the statistical significance, we have modified the presentation to make the scaling of the circles vary more dramatically as a function of statistical significance. We are reluctant to make simple binary distinction between significant and non-significant because the choice of cut-off is always somewhat arbitrary. By including the graphical representation of the p values, the reader has the best overview of the relative strengths of different correlations. Finally, we have streamlined the whole section to focus on two observations, the correlation between import and export and the correlation between the import/export ratio and nuclear/cytoplasmic area.
The use of lattice light sheet in Figure 6 is interesting, and especially the use of repeated pulses of release as "technical replicates" within the same cell is exciting. However, the present analysis of the results does not really add too much to the story, and there is no attempt to study nuclear transport with this method. The images in figure 6B and C could be clearer, and the channels should be shown separately, because the mCherry signal masks the mito signal. How were the regions in 6E chosen? Figure 6F might be clearer, if shown only with the lines.
The reviewer requests greater depth and improved presentation of the light-sheet data. We now present both the merged image showing Zdk-mCh-YAP1, mitochondria, and the nucleus in Figure 5B and only the Zdk-mCh-YAP1 in Figure 5C. We provide a supplementary panel (Supp. Fig. 5B) showing the different cytoplasmic regions that are analysed in Figure 5E. To integrate the light-sheet data more thoroughly, we now additionally use confocal methods to explore the variation in cytoplasmic dynamics in more detail. Finally, the reviewer comments that we do not study nuclear transport using the light-sheet method. The original purpose of the light-sheet analysis was to study the distribution of the Zdk-fusion construct throughout the whole cell. Our analysis of the data revealed greater variation in the Zdk-fusion protein in the cytoplasm, hence we focused on this.
We have also moved the light-sheet data so that it follows the 'double opto-release' experiments. This makes it more integrated with the rest of the manuscript.
Discussion line 336: it is stated that the peptide does not interfere with the endogenous activity of YAP1 and TAZ with reference to figure S1E, which is actually a Western blot showing the expected sizes of the proteins. S1F is showing a luciferase assay, which demonstrates that the constructs can activate transcription, but with the presented data, it is too strong a statement to say that the peptide does not interfere with endogenous activity.
The reviewer is correct that this is an over-statement based on the data presented. We have modified this to state that the Zdk-mCherry-YAP1 was able to efficiently activate transcription from a TEAD reporter (lines 231-236, data are now in Supp. Fig. 3D). Furthermore, similar to the endogenous YAP1, the Zdk-FP-YAP1 fusion is dependent on TEAD binding and is inhibited by phosphorylation (analysis of mutants in Supp. Fig. 3D).
Reviewer 2
Advance Summary and Potential Significance to Field: This manuscript describes the development of a new optogenetic tool for interrogating nuclear/cytoplasmic shuttling as well as a model and software package to analyze the associated data. These tools are then used to study the shuttling of YAP and TAZ. This is an exciting tool and should be used widely by the field. However, there are some missing controls and unjustified assumptions that prevent the manuscript from being suitable for publication at this time. Also, in places the manuscript is poorly written and there are many small errors in the text and figures.
We are delighted that the reviewer comments that 'this is an exciting tool and should be used widely by the field'. We also note a wide range of thoughtful suggestions to improve the work.
MAJOR CONCERNS
1. The optogenetic tool is based on LOV-TRAP system and is initially validated with mCherry. A natural control to verify proper functionality would have been to use mCherry with a nuclear localization and/or exportation sequences. Seeing the expected differences in the import and export rates of these constructs would further establish that the system is working as expected and show that accurate rates can be determined with the overall procedure. The authors should either add such controls or justify why they were not completed in the text.
The reviewer proposes a logical test for the verification of the system. While we agree that this should show a difference in import and export rates between mCherry with or without an NLS, it would not confirm that the values derived were quantitatively correct. We consider this latter point a priority and therefore pursued verification via the route of comparing our optogenetic method with more conventional FLIP measurements. In the revised version of the manuscript, we have now extended this approach to independently measure the changes in mitochondrial binding rates in the dark and light (overlay plot is shown in Supp. Fig. 3H). The high level of concordance between independent FRAP measurements, in some cases performed by different individuals on different microscopes, provides confidence that our system is providing reliable data.
In addition, we have added new data in Supplementary Figure 3E&F showing the exit of the Zdk-FP-YAP1 from the nucleus is slower in the presence of leptomycin B, which blocks Crm1-mediated nuclear export of protein containing either canonical or non-canonical NES sequences. This observation confirms that the method can report on well-established mechanisms of nuclear export and is consistent with previous reports indicating that YAP1 nuclear export depends on Crm-1 acting on a non-canonical export signal (Ege, Dowbaj, et al., Cell Systems 2018 and references therein).
COVID note - In normal circumstances, we would have sought to generate and test an NLS version of the mCherry construct and directly address the reviewer's comment. However, since receiving the reviewers' comments in October, London has had very high COVID rates and three lockdowns of varying severity. This has meant that the lab has only been allowed to function at 25-30% of normal capacity until March and we have had to prioritise experiments very carefully. Following interaction with the editor, we prioritised the experiments showing that the system was responding appropriately to light in terms of mitochondrial binding, diffusion, heterogeneity in cytoplasmic YAP1 dynamics, and the relative levels of LOV, Zdk-YAP1, and endogenous YAP1.
2. Key controls/analyses are missing, or at least not clearly presented, in the multicolor experiments. Do YAP-Venus and YAP-mCherry report the same import and export rates when imaged individually? If the YAP-Venus and YAP-mCherry constructs are imaged simultaneously, do they show the same import and export rates? Similarly, does TAZ-mCherry report the same import/export rates when imaged alone as well as when imaged with YAP-Venus? Without these simple controls the efficacy of this experimental set-up cannot be verified.

The reviewer makes a good point. We now present data in Supplementary Figure 6D that the same rates are measured in both the single and double experiments. For the benefit of the reviewer we present an overlay of single and double opto-release measurements with cytoplasmic FLIP.
Reviewer Figure 2: Comparison of the import and export rate constants derived using single channel opto-release (grey), double channel opto-release (light blue), and cytoplasmic FLIP (red).
3. A major portion of the work is the development of a model to describe the observed data; however, the role of diffusion is ignored in this model. This omission is confusing as the authors have done this type of modelling before (Ege, Cell Sys, 2018). The justification for ignoring diffusion in this work should be established quantitatively and stated in the text. Additionally, the data acquired with the lattice light sheet demonstrating variability of import/export rates throughout the cell would be most easily explained by local differences in YAP diffusivity.
The reviewer is correct that diffusion could be an important factor. To address this we have implemented a method that directly measures diffusion and the on and off rates of binding to an immobile partner. The measured diffusion rates are 20-40 μm²s⁻¹ (Figure 1F, 1J, & Supp. Fig. 5F). These rates are sufficiently fast relative to nuclear import and export that diffusion will have only minimal impact on the rate constants that we measure. A mathematical justification of this is now provided in the supplementary information. Also pertinent to this point, we see limited variation in diffusion across different regions of cells and between different cells. In contrast, the measured rates of YAP1 binding to an immobile partner show much greater variability. These new data, generated using a novel analysis method applied to conventional FRAP data, are presented in Figure 1 with the mathematical basis of the analysis explained in a paper accepted at the Journal of Mathematical Biology. Together, these data provide support to the light-sheet data. Moreover, they suggest that the regional differences in YAP1 behaviour represent the variable distribution of an unknown binding partner, not diffusion.
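For the benefit of the reviewer, the scale separation underlying this argument can be illustrated with a back-of-the-envelope calculation (a hypothetical sketch: the ~10 μm length scale is an illustrative assumption, and D is taken from the middle of our measured range):

```python
# Compare the characteristic time for a protein to diffuse across the
# cytoplasm with the timescale of nuclear import/export (1/k).
D = 30.0   # um^2/s, mid-range of the measured 20-40 um^2/s
L = 10.0   # um, assumed cytoplasm-to-nucleus distance (illustrative)

t_diffusion = L**2 / D  # ~3.3 s to explore a ~10 um region

for k in (0.0025, 0.015):  # s^-1, range of measured import/export constants
    t_transport = 1.0 / k  # 67-400 s
    print(f"k = {k} /s: transport takes ~{t_transport:.0f} s, "
          f"~{t_transport / t_diffusion:.0f}x slower than diffusion")
```

Because mixing by diffusion is one to two orders of magnitude faster than the transport steps, treating the cytoplasmic and nuclear pools as well mixed introduces only a small error in the fitted rate constants.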
4. The data obtained with the lattice light sheet seem very preliminary, as the number of measurements seems quite low. Also, it is challenging to interpret these results as presented with a model that does not contain diffusion. Additionally, experiments with mCherry should be included to establish that the observed spatial variation is related to YAP functionality and not a physical process, such as molecular crowding. The authors should either substantially increase the quality of this data or consider removing it. Its inclusion does not bolster the main points of the paper, so this reviewer recommends removing it.
The reviewer raises valid points regarding the light-sheet data. We have now performed additional experiments using confocal methodology, both FRAP and optogenetic, to study the cytoplasmic dynamics of YAP1. As now presented in Supp. Fig. 5F, we have now measured diffusion of YAP1 and find that it shows low variation. In contrast, we observe rather high levels of variation in the inferred proportion of YAP1 bound to an immobile partner (Supp. Fig. 5G). We additionally show that Venus and mCherry are not subject to such clear differences in their dynamics ( Figure 5G & 6C). Together, these analyses support the findings presented based on the light-sheet data and make it more integrated. We are keen to keep the light-sheet data in the manuscript as they demonstrate the applicability of the method in 3D, whereas FRAP and FLIP methodology are not well suited to 3D imaging.
COVID note -The pandemic precluded trying to travel to Janelia Farm to run further light-sheet experiments.
5. The "Differential equation to model nuclear import and export" section should be re-written, or the title of this section should be changed to reflect the fact that there is substantially more in this section rather than just the equations. This reviewer suggests the comparison to FLIP be given its own section, as this is key to the validation of the approach.
The reviewer makes a good point and we have re-titled the section (lines 176-177).
6. The authors should discuss how the estimates of YAP1 import/export and TAZ import/export compare with previous measurements in the "Application of opto-release methodology to YAP and TAZ" section. The consistency with FLIP demonstrates internal consistency of the study, but consistency with previous measurements should also be established.
The reviewer raises a good point. The nuclear import and export rate constants that we measure in this study of epithelial HaCaT cells are mostly in the range 0.0025-0.015 s⁻¹. This is slightly slower than those that we measured in fibroblastic cells, which are in the range 0.01-0.075 s⁻¹ (Ege, Dowbaj et al., Cell Systems 2018). This is now mentioned in lines 415-417.
7. As transient transfections were used to create the system, large variations in expression levels between cells in the population are likely. The authors should show data demonstrating that the results are not dependent on the absolute expression levels of the transfected components.
This is a good point. We have now plotted the relationship between metrics and expression level, shown in Supplementary Figure 2B. Moreover, we have undertaken new analysis to determine the level of YAP1 over-expression. We stained cells transiently transfected with Zdk-FP-YAP1 with an antibody that recognises both endogenous and exogenous YAP1. The exogenous Zdk-FP-YAP1 was also stained for the Flag epitope tag, thereby enabling quantification of the YAP1 levels in transfected vs untransfected cells. These measurements reveal that the exogenous construct is over-expressed between 2-5 fold (Supp. Fig. 3B). When combined with the measurements of the proportion of Zdk-FP released from the mitochondria (~10%), this leads us to the conclusion that we are releasing between 20-50% of the endogenous YAP1 levels. This modest level of release compared to the endogenous protein further suggests that our measurements are unlikely to be artefactual due to high expression levels.
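The arithmetic behind the 20-50% figure can be made explicit (a trivial sketch; the 2-5 fold and ~10% values are taken from the measurements described above):

```python
# Released Zdk-FP-YAP1, expressed as a fraction of endogenous YAP1:
# (fold over-expression of the exogenous construct) x (fraction released)
fraction_released_from_mito = 0.10        # ~10% released on illumination
for fold_overexpression in (2, 5):        # measured range (Supp. Fig. 3B)
    released_vs_endogenous = fold_overexpression * fraction_released_from_mito
    print(f"{fold_overexpression}-fold over-expression -> "
          f"{released_vs_endogenous:.0%} of endogenous YAP1 released")
```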
8. The maximal nuclear accumulation of YAP using this system (for example in Fig 2), is quite low throughout the experiments in the manuscript. It can't be determined if this was due to low levels of YAP nuclear localization or incomplete release of YAP from the mitochondria due to a defect in the optogenetic tool. The authors should perform experiments distinguishing between these two possibilities.
The reviewer asks a good question. We are confident that the low levels of nuclear accumulation observed are due to the real behaviour of YAP1, not a defect in release from the mitochondria. Indeed, we designed the system so that only a relatively small amount of protein would be released (also discussed in relation to reviewer #1 point #1). We show below for the benefit of the reviewer that YAP1 is largely cytoplasmic in confluent islands of HaCaT cells. Moreover, the optogenetic measurements of YAP1 import and export are in agreement with those measured using FLIP (shown in Supp. Fig. 3H), which does not rely on the release of the FP-YAP1 protein from sequestration.
Reviewer Figure 3: Images show endogenous YAP1 localisation (grayscale) and F-actin and DAPI staining (green and magenta, respectively) in HaCaT cells. Scale bar is 25μm.
9. The text in the "Import and Export Rates" section is vague and unorganized. It should be rewritten to provide more precise explanation and interpretation of the data. Also, the title needs adjusting. I believe the main point is the import and export rates are correlated for YAP but not other constructs.
We apologise for the lack of clear structure in this section and have now re-written and re-titled it.
10. The development of the semi-automated software is presented as a significant part of the work. Including a supplemental figure that demonstrates the proper functionality of the software on simulated data would provide definitive proof that the code is functioning properly.
We thank the reviewer for this good suggestion. The Reviewer Figure below shows the distribution of import and export rate constants inferred by our app with increasing levels of noise added to the simulated data. When no noise is added (standard deviation = 0), the import and export rate constants are correctly inferred, giving a value of 1. As noise is added, the inferred rate constants start to differ slightly from those used in the simulation. However, even with added noise of 0.08, the vast majority of values returned by the app are between 0.7 and 1.4 of the value used in the simulation. The dashed line at 0.023 indicates the level of noise typically found in our data. Thus, we are confident that the model is capable of accurately determining import and export rate constants. If the reviewer and editor think it is appropriate, we can add this analysis to Supplementary Figure 2. In addition to this approach, in the revised manuscript we provide independent FRAP measurements of the mitochondrial off and on rate constants (Figure 1H&I) and confirm that the opto-release inferred nuclear import and export measurements are consistent with FLIP data (Supp. Fig. 3H). Thus, there is orthogonal data to support the validity of the measurements derived using the app.
Reviewer Figure 4: Analysis of simulated data. A) Left panel shows the simulated levels of protein in the nucleus (magenta), cytoplasm (lilac), and mitochondria (cyan) based on the ODE system described in Figure 2. The other panels in (A) show the simulated data with increasing levels of noise added. B) 500 simulations for each noise level were analysed using the model-fitting part of the app and the inferred import and export rate constants were divided by the value used in the simulation. The plots show the distribution of inferred import (left panel) and export (right panel) values in simulations with increasing noise. The vertical line indicates the level of noise consistent with the residuals from the fitting of our YAP1_WT data, which provides a good indication of the noise levels in our data.
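A minimal sketch of this validation strategy (not the actual MATLAB app; for brevity it uses a simplified two-compartment nucleus/cytoplasm model without the mitochondrial compartment, with illustrative rate constants):

```python
# Simulate nuclear/cytoplasmic exchange, add Gaussian noise at the level
# seen in our data (sd ~0.023), re-fit, and compare inferred vs true rates.
import numpy as np
from scipy.optimize import curve_fit

k_in_true, k_out_true = 0.010, 0.005   # s^-1, illustrative values
t = np.linspace(0, 600, 121)           # 10 min sampled every 5 s

def nuclear_signal(t, k_in, k_out, total=1.0, n0=0.0):
    # Closed-form solution of dN/dt = k_in*(total - N) - k_out*N
    k = k_in + k_out
    n_eq = k_in * total / k
    return n_eq + (n0 - n_eq) * np.exp(-k * t)

rng = np.random.default_rng(0)
noisy = nuclear_signal(t, k_in_true, k_out_true) + rng.normal(0, 0.023, t.size)

(k_in_fit, k_out_fit), _ = curve_fit(
    lambda t, ki, ko: nuclear_signal(t, ki, ko),
    t, noisy, p0=(0.02, 0.02), bounds=(0, 1))

print(k_in_fit / k_in_true, k_out_fit / k_out_true)  # both close to 1
```

As with the full app, the inferred/true ratios stay close to 1 at noise levels comparable to those in our data.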
11. Figure captions generally lack key details, like the number of cells in each experiment and number of experimental days. More detail should be added to these captions.
We have added much greater detail to the figure legends.
12. The availability of the MATLAB code is not stated.
The code is available on github via the following link: https://github.com/RobertPJenkins/opto_analyser We have not publicised its availability yet, but will do so in the publication.
1. The time courses in all figures should be converted from frame number to time.
We agree with this suggestion. The frame rates are not always the same for every experiment as they depend on the microscope used and sometimes on the size of the opto-release area. Therefore, stating the time is more informative and this is what we now do throughout the manuscript.
2. Is the mCherry data repeated from Figure #1 to Figure #2? If so, this should be stated. Also, how does this affect the statistical comparisons?

The reviewer is correct; we now make that clear in the figure legend for the new Figure 3D (line 565). The statistical comparisons are not affected.

3. In Figure #3C, is the distribution of YAP_S94A bimodal?

In the revised manuscript, we have now re-fitted all the data and the YAP_S94A data no longer suggests a bimodal distribution.

4. In Figure #5, the positions of the various ROI should be shown. Are the 5 regions equidistant from the nucleus?

We have now generated a figure with the location of the ROIs indicated (Supp. Fig. 5B). The reviewer's question regarding relative distance from the nucleus is a good one. We do not observe any clear relationship between the different YAP1 behaviours and distance from the nucleus. If there was a simple relationship with distance from the nucleus, then this would be visible in the new 'maps' of the heterogeneity in YAP1 increase in Figure 5G.

5. Figure #1 shows YAP1/TAZ in the schematic but all of the data regard mCherry.

We have now changed Figure 1A to be more generic and it refers to a 'protein of interest'. In response to this and other comments (such as Reviewer #2 comment #7), we have now generated new high-resolution exemplar images for all the figures. Figure 1 now shows an exemplar using Zdk-Venus. The choice of Venus, rather than mCherry, was informed by the desire to simultaneously use Mitotracker Red and DRAQ5.
6. What is shown in Sup Fig #8 and its relevance to the manuscript is unclear.
We have now extensively re-organised the presentation of the light-sheet data and added new analysis of confocal data regarding the different YAP1 behaviours in the cytoplasm. These changes should have addressed the issue with the previous Supplementary Figure 8.
7. Image size/quality is generally low throughout the manuscript and ideally larger-size images (in both manuscript footprint and quality) should be included.

We have now replaced the images in Figures 1, 2, and 5 with images of higher quality and also given them more space on the page.

This is a good point that we now discuss, with appropriate citation of the paper (lines 424-425).
Reviewer 3 Advance Summary and Potential Significance to Field:
The paper "An optogenetic method for interrogating-…" authored by Dowbaj and colleagues reports on the development and use of an AsLOV-based optogenetic tool to control the cellular localization (including mitochondria, cytoplasm and nucleus) of the YAP transcription factor. Using this approach, the authors quantify, using a MATLAB-based app, the rates of nuclear entry/exit under a variety of conditions. Finally, they combine the optogenetic tool with the use of light-sheet microscopy to measure the dynamics of the transcription factor within the cell in 3D. I think the paper has very interesting and novel elements (such as the use of a YAP optogenetic tool and the capability of its 3D tracking), however the quantification part suffers from multiple fundamental mathematical flaws that unfortunately massively impact the quality of the work. I honestly hope that the points below help the authors with re-analysis.
We were gratified that the reviewer found the manuscript to contain 'very interesting and novel elements'. We also note the significant concerns around some of the mathematical aspects and genuinely thank the reviewer for several suggestions that proved very useful in improving the manuscript.
Reviewer 3 Comments for the Author:
Major points

1. The model has 2 different rate constants for protein unbinding from the mitochondria, depending on whether blue light is on or off, i.e. whether the LOV domain is excited or relaxed. However, the rate constant for protein binding to the mitochondria must also take on 2 different values depending on the LOV domain conformation. Without taking this into account, the model is unphysical and, due to all obtained rate constants being interdependent (since they are fit simultaneously), all rate constants obtained with the model in its current form suffer from this.
The reviewer makes a good point about allowing for the mitochondrial binding rate constant to vary depending upon blue light illumination. We had initially been reluctant to do this as it increases the degrees of freedom; however, the reviewer's point is correct and we now allow for the mitochondrial binding rate to be different in the lit condition ( Figure 2A). The model fitting is now initiated with rates for mitochondrial binding and unbinding starting at the values measured using FRAP methodology ( Figure 1H&I). This yields rate constants that are in good agreement with the FRAP measurements ( Figure 2C&D). For the benefit of the reviewer, we show in the Reviewer Figure 1 how the old data relates to that in the revised version. The new method also reveals that the mitochondrial binding rate is faster during blue light illumination. However, this is more than compensated for by the faster unbinding rate. This is demonstrated by the ratio of LOV-Zdk binding to unbinding rate constants dropping for all cells when illuminated (right hand panel in the figure below and represented in a different way in Figure 2E).
We additionally explored the benefit of permitting two on rates using Akaike Information Criterion model probabilities. This revealed that when using two free on rates 77.6% of all fits had an AIC probability >0.9, which compared very favourably with using a single free on rate which yielded only 12.9% of fits having an AIC probability >0.9. We thank the reviewer for his/her thoughtful suggestion and we believe that its incorporation has strengthened the manuscript.
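For clarity, the Akaike comparison can be sketched as follows (the residual sums of squares and parameter counts below are invented for illustration, not values from our fits):

```python
# Compare a one-on-rate model with a two-on-rate (dark/lit) model using
# AIC for least-squares fits and the resulting Akaike model probability.
import math

def aic(rss, n, k):
    # AIC for a least-squares fit: n*ln(RSS/n) + 2k (k = free parameters)
    return n * math.log(rss / n) + 2 * k

n = 120                                       # number of time points
aic_one_on_rate = aic(rss=0.080, n=n, k=4)    # single free on rate
aic_two_on_rates = aic(rss=0.050, n=n, k=5)   # separate dark/lit on rates

# Akaike weight (model probability) of the two-on-rate model
best = min(aic_one_on_rate, aic_two_on_rates)
w1 = math.exp(-(aic_one_on_rate - best) / 2)
w2 = math.exp(-(aic_two_on_rates - best) / 2)
prob_two = w2 / (w1 + w2)
print(f"AIC probability of two-on-rate model: {prob_two:.3f}")
```

The extra parameter is only favoured when the reduction in residuals outweighs the 2k penalty, which is the criterion summarised by the percentages quoted above.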
Reviewer Figure 1: Comparison of the original data with the rate constants obtained using the revised fitting (Figure 2). Right-hand plot shows the change in ratio between Off and On rate constants for cells in the Dark and Light. Wilcoxon paired non-parametric statistical testing is reported.
2. On lines 756-758, the authors state that relaxation of AsLOV2 domains occurs on the timescale of seconds. Therefore, for illumination to be considered constant, light should be supplied at a rate at least an order of magnitude higher than the relaxation rate, ideally continuously (simply achieved using widefield illumination). If light is supplied to LOV domains at a rate equal to their relaxation rate, between illumination pulses the fraction of stimulated domains will drop exponentially to exp(-1) ≈ 0.37. This is particularly important given the interconnected nature of the measured rate constants.
The reviewer is correct that there are indeed many pertinent things occurring that impinge upon the release and re-binding of the Zdk-fusion to the LOV domain anchored on mitochondria. This includes the rate at which light induces the conformational change of LOV from its high Zdk affinity state to its lower Zdk affinity state and the reverse relaxation in the dark, which the reviewer raises in his/her comment. In addition, there is the complexity of the illumination regime, which is discontinuous at two levels: the blue light is off during the image acquisition, and during the opto-release illumination it is raster scanned across the region of interest. The rather severe challenges of fitting such a complex oscillating regime are well articulated by Pitt and Banga (BMC Bioinformatics 2019). On top of this, there are potential complications caused by rapidly changing mitochondrial shape. Thus, the question about the validity of reducing this to a single binding/unbinding rate constant is a good one. The importance of this matter prompted us to implement an entirely distinct method to measure on and off rate constants in the dark and the light. Crucially, this method used uniform LED illumination to trigger release, which is in line with the reviewer's suggestion. It also accounted for diffusion. Reassuringly, the results of this analysis are concordant with the measurements using our opto-release modelling (Figure 1H&I and Figure 2C&D), with FRAP methodology reporting the off-rate constants to be 0.01-0.02 s⁻¹ in the dark and 0.03-0.075 s⁻¹ in the light, and the opto-release methodology reporting 0.005-0.07 s⁻¹ in the dark and 0.024-0.9 s⁻¹ in the light. The main difference is the wider range of values reported using the opto-release methodology. Overall, these analyses provide orthogonal validation of our approach.
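The reviewer's exp(-1) point can be made concrete with a short calculation (τ = 2 s is an assumed AsLOV2 relaxation time, consistent with "the timescale of seconds"):

```python
# Fraction of excited LOV domains remaining just before the next blue-light
# pulse, for a given pulse interval dt and dark-state relaxation time tau.
import math

tau = 2.0  # s, assumed AsLOV2 relaxation time (illustrative)
for dt in (0.2, 2.0):  # s, interval between blue-light pulses
    fraction_remaining = math.exp(-dt / tau)
    print(f"pulse interval {dt} s: {fraction_remaining:.2f} of excited "
          f"LOV domains remain at the next pulse")
```

Pulsing ten times faster than the relaxation rate keeps over 90% of domains excited, whereas pulsing at the relaxation rate lets the excited fraction fall to ~37%; this is one reason we cross-validated the rate constants with the continuous-LED FRAP measurements.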
The reviewer suggests widefield illumination, which has a logic to it. However, when we considered the practicalities of this we ran into several issues. These included the lack of optical sectioning, which is a problem when working with epithelial cells with significant amounts of cytoplasm above and below the nucleus, and the challenges of capturing all the different channels simultaneously using filters and dichroics. Sequential capture would be a possibility, but this would lead to discontinuous blue light illumination, which is exactly the problem that the reviewer poses. For our new FRAP analysis, we removed the condenser from an inverted confocal and manually positioned a blue LED array on top of the stage. The LEDs were then manually controlled. While this approach was successful for the FRAP analysis, without major engineering it is not possible to integrate the control of an external LED array with the manufacturer's software for confocal microscopes. In the absence of such integration, toggling the blue light on and off during confocal acquisition in a precise manner is not possible. Thus, to conclude this long response, we provide evidence supporting the validity of our approach and would like to highlight that our methodology is designed to be implemented on regular confocal microscopes without bespoke modification.

3. A particularly worrying point is that, after analysis (in Fig. 1F for mCherry and other figures for other constructs), there is no significant difference in the rate constant of mitochondrial release when the light is on ("mito light") or off ("mito dark"). The former should be many times greater than the latter; as they are, these values would state that stimulation does not work.
The reviewer raises an important point (similar to that made by reviewer#1 point#1) that we address in several ways. 1. We have implemented a new orthogonal new FRAP data to analyse the difference in binding and unbinding of LOV to Zdk in the dark and light (described in the response to the point above). 2. We also realised that the presentation of data in the original Figure 1F was sub-optimal.
It should have been plotted as paired measurements, which is entirely appropriate as the on and off rates are measured in the same cell in the dark and light. As shown in response to reviewer #3, point #1, when plotted and analysed in this manner a highly significant increase in off rate is seen in the light relative to the dark. 3. Thanks to the reviewer's great suggestion, in our new fitting methodology both the on and off rate constants can vary between the light and the dark states. These are now plotted in Supp. Fig. 3G, 4B, & 6. In all cases, there is a highly significant increase in the off rate constant under blue light. The on rate constant also increases, but this is less dramatic and not always statistically significant. Figure 2E shows that the overall effect of the changes in both off and on rate constants under blue light always favours the release of the Zdk peptide. The reviewer may perhaps expect a bigger difference in the off-rate constant between dark and light; however, we selected the Zdk2 / LOV domain pairing for stability, not dynamic range. The incomplete release of the Zdk2 / LOV domain pairing is shown in the original Wang et al., Nature Methods 2016 publication. The images in this paper suggest that it takes a couple of seconds for the dissociation to occur, which is only marginally faster than the timing predicted based on the rate constants that we measure. This work also reports that optimal dissociation is achieved when the Zdk is anchored to the mitochondria, but in this work we swap the Zdk peptide onto the protein of interest. This is to avoid adding the larger LOV domain to the protein of interest.
4. The rate constants (k's) found here have inherent dependence on cellular parameters such as cytoplasmic/nuclear volume. This would be clear if the differential equations were derived from first principles. E.g. Timney et al. JCB 215, 57 (2016) use differential equations derived from first principles, so measured rate constants can be transformed into quantities independent of nuclear/cytoplasmic volumes and number of NPCs etc. Only after a transformation such as this can correlations be investigated. A similar transformation needs to be applied to rate constants measured in this study before any correlations between rate constants and cellular parameters, or between import and export rate constants, can be performed fairly.
The reviewer is correct that there are a multitude of factors that will influence the rate constants that we derive experimentally. As he/she states, these have been investigated in elegant detail by Timney et al. Our goal in this work is to develop a set of molecular tools that can be used to interrogate the transit of two proteins simultaneously between cellular compartments, to provide a simple analytical tool to derive rate constants relating to nuclear entry and exit, and to demonstrate the utility of the method using YAP1, YAP1 mutants, and TAZ. Our goal is not to study transit through the nuclear pore complex in great detail. While we agree that there would be value and interest in applying the methods of Timney et al. using our tools, this is not our goal in this report and it would require the acquisition of considerable additional information about nuclear surface area, the volume of the cytoplasm that is inaccessible to YAP1 due to the presence of other organelles, the number and occupancy of YAP1 binding sites on chromatin in the nucleus, and so on. This sort of analysis would constitute an entire manuscript in its own right.
We agree with the reviewer that it is important to discriminate which of the correlations that we report in Figure 4 are trivial, such as between area and perimeter, and which are unexpected and potentially interesting. In the revised manuscript we have made several changes in our presentation of the correlations. 1. We have moved them to the supplementary figures (Supp. Fig. 4) to reflect that the analysis of the correlations is not the primary objective of this manuscript. 2. We have re-plotted them such that trivial correlations between metrics (such as area and perimeter) are no longer shown. 3. We have concentrated our focus on the correlation of import and export, which is repeatedly observed and confirmed in multiple experiments. We believe that this is of interest because of the wide range of import and export values. This does not follow from first principles and likely reflects differences in either the functionality or integrity of the nuclear envelope and its pore complexes between cells. Sources of such variation could include transient rupture of the nuclear envelope, which has been reported by the Piel, Lammerding, and Petronczki groups (to highlight a few), or differential levels of mechanical stress of the nucleus, which was reported by the Roca-Cusachs group. 4. The other correlations that are evident involve the import/export ratio and the Nuc/Cyto area and Nuc/Cyto concentration. The construction of the ODE system dictates that the import/export ratio will equal the Nuc/Cyto total protein ratio. The latter is the product of the concentration and area. Thus, at one level the correlation is entirely trivial and simply a reflection that our experimental data fit our model well. However, the model and system of ODEs do not indicate whether changes in the Nuc/Cyto total protein ratio are driven by changes in concentration or changes in the relative areas of the compartments.
Our analysis reveals that the Nuc/Cyto area ratio correlates with the import/export ratio, but does not correlate with the Nuc/Cyto concentration. These experimental measurements, which are independent of the model fitting, indicate that cells are able to maintain relatively stable concentrations of proteins in the nucleus and cytoplasm even if the relative size of the compartments varies. This is not something that is predictable from our system of ODEs. Therefore, we believe that this is of biological interest. Clearly, a further investigation of this homeostatic mechanism would be of interest, but is beyond the scope of this work.
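The steady-state equivalence invoked in point 4 above can be made explicit with a minimal two-compartment exchange model (generic symbols, not the manuscript's own notation): for nuclear amount N and cytoplasmic amount C,

```latex
\frac{dN}{dt} = k_{\mathrm{import}}\,C - k_{\mathrm{export}}\,N = 0
\quad\Longrightarrow\quad
\frac{k_{\mathrm{import}}}{k_{\mathrm{export}}} = \frac{N}{C}.
```

At steady state, the import/export rate-constant ratio therefore necessarily equals the Nuc/Cyto total-protein ratio, regardless of how that ratio is partitioned between concentration and compartment area.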
Other major points. 5. The methods of "bleaching intensity normalisation" and "non-conserved intensity correction" are overcomplicated and introduce a troubling number of free parameters into the data processing. If simply c(t)/m(t)/n(t) are the cytoplasmic/mitochondrial/nuclear intensities, normalised by their combined intensity (i.e. whole cell intensity), then photobleaching will automatically be accounted for and c(t)+m(t)+n(t)=1 at all times, thus an outflow-inflow function is not required.
The reviewer raises the issue of correcting for bleaching and other variations in the total signal during analysis. We have now completely overhauled the normalization and re-analysed all the original data. Of particular note, we have done the following: 1. Removed the outflow-inflow function, as the reviewer suggested. 2. Clarified the normalization process. In brief, we normalize to the combined intensity of the cytoplasm, nucleus, and mitochondria. To account for photobleaching and to smooth out noise in the total intensity value, we fit two functions: one that describes the bleaching during blue light illumination and another that describes bleaching in its absence. These functions are used to generate a 'smoothed' value of the total intensity at each time point, and this is used for the normalization.
3. We have analysed how robust the metrics that we are inferring are to different normalisation methods. For the benefit of the reviewer, we present below the export and mito off values for Zdk-mCherry and Zdk-mCherry-YAP1 using a range of different normalisation methods. This shows that the signal extraction was robust to different segmentation and normalisation methods, with minor differences in the distribution of each of the above metrics. Visual inspection suggested that using a moving percentile window size of half the total movie length alongside bi-linear normalisation may lead to marginally superior results. The bi-linear normalisation reflects the possibility that there might be a subtle increase in photobleaching as a result of blue light illumination. The inflow-outflow function or resizing the mitochondria (to possibly take account of Zdk binding to only the surface of mitochondria, not the whole area that is measured) made very little difference to the model fit. As the reviewer suggests, we did not implement them because of the cost of adding extra degrees of freedom to the model fitting. Nonetheless, the figure below indicates that the use of slightly different methods in the original submission would not have yielded wildly different results.
Reviewer Figure 4. Plots show the normalised (to the median of all points) off rate constants when illuminated and export rate constants for Zdk-mCherry-YAP1 when subject to different normalisation methods. The labelling is as follows: 1, 2, or 4 refers to the number of spatial anchor points for segmentation; S or D refers to whether photobleaching was fitted at a single rate for the whole experiment (S) or with different rates for the blue light and dark phases (D); IC or INC refers to intensity conservation (IC) or non-conservation (INC, inflow/outflow function); MR or MNR refers to mitochondrial rescaling (MR) or signal kept at extracted levels (MNR). The red rectangle indicates the method that we now employ throughout the manuscript.
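To illustrate the two-regime bleaching correction described in point 2 of our response above, the procedure can be sketched as follows. This is a minimal sketch on synthetic mono-exponential data; the function names, parameters, and traces are illustrative, not taken from our actual analysis code.

```python
import numpy as np

def fit_exponential(t, y):
    """Least-squares fit of y ~ A * exp(-k * t) via log-linear regression."""
    slope, intercept = np.polyfit(t, np.log(y), 1)
    return np.exp(intercept), -slope

def smoothed_total(t, total, light_on):
    """Fit separate mono-exponential bleach curves for the blue-light and
    dark phases and return a smoothed total-intensity trace."""
    smooth = np.empty_like(total, dtype=float)
    for mask in (light_on, ~light_on):
        if mask.any():
            A, k = fit_exponential(t[mask], total[mask])
            smooth[mask] = A * np.exp(-k * t[mask])
    return smooth

# Synthetic traces: bleaching accelerates once blue light comes on at t = 50 s.
t = np.linspace(0.0, 100.0, 101)
light_on = t >= 50.0
total = np.where(light_on,
                 100.0 * np.exp(-0.02 * t),
                 100.0 * np.exp(-0.005 * t))
cyto, nuc, mito = 0.5 * total, 0.3 * total, 0.2 * total

# Normalise compartment traces to the smoothed total intensity.
norm = smoothed_total(t, total, light_on)
frac_cyto = cyto / norm                 # fraction of signal in the cytoplasm
frac_sum = (cyto + nuc + mito) / norm   # ~1 at every time point
```

In the real pipeline, the fitted curve replaces the raw total intensity in the denominator, which corrects for photobleaching while also smoothing frame-to-frame noise in the total-intensity value.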
6. Fig. 1B supposedly shows H2B-mTurquoise labelling the nucleus. Can the authors also show this channel individually (not merged), as it has a sparse, speckled appearance? In addition, with reference to lines 760-762, it is hard to see how imaging of H2B-mTurquoise does not interfere with optogenetic activation despite using the same wavelength. Fluorescent imaging typically needs much higher intensity than optogenetic stimulation.
The reviewer is correct that mTurquoise excitation could interfere with the optogenetic release. To achieve the imaging of the nucleus, mitochondria, and two different Zdk-tagged fluorophores requires four channels, and we solved this by using mTurquoise with very low excitation and high gain. This level of illumination did not interfere with the intentional opto-release mechanism. However, the result of very low illumination and high gain for H2B-mTurquoise is low quality images, hence the speckled appearance that the reviewer comments upon. In the methods, we explain that nuclear segmentation was based upon integrating the nuclear images over time to generate a smaller number of higher quality images that were used for image segmentation. In addition to the explanation provided above, we have also replaced the images referred to with higher quality images. In the case of single Zdk-fusion experiments, we now show images that use the near infra-red fluorophore DRAQ5 to label the DNA in the nucleus. Finally, we have expanded the text regarding the choice of fluorophores and articulate that if only a single protein is to be tracked then Venus/Mitotracker Red/DRAQ5 is probably the optimal choice (lines 753-756).
7. There are 2 examples of data being processed inequivalently: lines 160-162, constant thresholding is applied to some cells, dynamic thresholding to others; lines 1016-1020, an inflow-outflow function is applied to some cells, and not others. Making any comparison or assimilation of data that has not been processed in exactly the same manner is difficult.
As articulated in response to point#5, we have now repeated all the analysis using consistent fitting methods. Reassuringly, all the main conclusions of the analysis presented in the initial submission still stand.
8. Almost every instance of "rate" throughout the paper, in the text and figures, should be "rate constant" -the model used finds rate constants. This distinction is very important and conceptually crucial.
The reviewer makes a correct point and we now refer to rate constants. 9. Throughout this work, data has been normalised, but it is never made clear with respect to what. For example, in Fig. 1C, and similar graphs, the vertical axis is called intensity, but it is clearly normalised (I assume to the intensity of the entire cell, and these numbers represent the fraction of intensity that comes from the mitochondria, cytoplasm, and nucleus, but this is not made clear). Since these values are input into quantitative modelling, it is vital that they are clearly explained. In particular, I wonder whether the data has been normalised by the cellular area (in confocal microscopy) or the cellular volume (in lightsheet microscopy).
We have replaced the plots for all the confocal analysis. They now show the normalised total fluorescent intensity for different compartments. For the light-sheet data, the mean fluorescent intensity is shown for the regions of interest, normalised for photobleaching at each individual timepoint. This is appropriate because we are primarily seeking to compare different regions of the same cell, not to fit import and export rate constants.
10. Several statements are not supported by the evidence provided: lines 111-113, neither figure shows information on expected localisation; line 120-122, Fig. S1C does not show enrichment to mitochondria; lines 125-127, Fig. 1B does not show an increase in cytoplasmic fluorescence; lines 228-231, YAP1_5SA has a low import rate as well as a low export rate, so the claim that nuclear persistence is a result of a low export is not justified; lines 247-250, there is no data/figure to support this claim.
We thank the reviewer for the careful reading of the work and have clarified the issues he/she raised. In particular, we have done the following: replaced all the images in Figure 1 and Supp. Fig. 1 with better ones; plotted the increase in cytoplasmic fluorescence for the example now shown in Figure 1C. We now say that the 'most prominent effect' of the 5SA mutations is reduced nuclear export, which is an accurate reflection of the data and does not make a specific assertion about YAP_5SA nuclear import. For clarification, although the median is lower than that for wild-type YAP1, the reduction is only on the edge of being statistically significant (p = 0.0544). More generally, the reviewer is completely correct that the nuclear import/export rate constant ratios are not sufficient to explain the variation in nuclear/cytoplasmic distribution. We now discuss this explicitly for YAP1_WT and YAP1_S94A, and speculate that this is likely due to a lack of sequestration in long-lived interactions with partners in the nucleus (lines 430-433).
We have now re-written the section on correlations between metrics. The original claim was based on the figure summarising the Pearson correlation metrics and their statistical significance. In the re-submission, this claim is supported by the lowest right circles in the plots in Supp. Fig. 4.

We have now reached a decision on the above manuscript.
To see the reviewers' reports and a copy of this decision letter, please go to: https://submitjcs.biologists.org and click on the 'Manuscripts with Decisions' queue in the Author Area.
(Corresponding author only has access to reviews.) As you will see, one of the reviewers was unavailable, one is completely supportive, and one is 98% supportive and asks that you attend to two small details to improve clarity for readers. I hope that you will be able to carry these out because I would like to be able to accept your paper, depending on further comments from reviewers. If you disagree with either point you can instead rebut and explain why your current approach is preferable.
We are aware that you may be experiencing disruption to the normal running of your lab that makes experimental revisions challenging. If it would be helpful, we encourage you to contact us to discuss your revision in greater detail. Please send us a point-by-point response indicating where you are able to address concerns raised (either experimentally or by changes to the text) and where you will not be able to do so within the normal timeframe of a revision. We will then provide further guidance. Please also note that we are happy to extend revision timeframes as necessary.
Please ensure that you clearly highlight all changes made in the revised manuscript. Please avoid using 'Tracked changes' in Word files as these are lost in PDF conversion.
1. The authors have done an excellent job justifying and explaining why only a small fraction of the YAP signal changes with release. However, the accumulation of YAP in the nucleus upon release is still hard to see in the images. In Fig 1B and 1C, adding an inset (perhaps in the bottom right corner of the images) displaying the nuclei at a different brightness scale would enable visualization of increased intensity in the nucleus. Currently the changes in the images are dominated by the larger variations in the mitochondria.
2. There is large variability in the data. To accommodate this, the authors have broken the axis in Fig 2f. Perhaps using a log-scale axis would be a more quantitative means to display the data.
Second revision
Author response to reviewers' comments Reviewer 1 Advance Summary and Potential Significance to Field: This manuscript describes a novel approach to study protein dynamics, and especially nucleocytoplasmic shuttling of proteins, with the help of optogenetics.
Short- and Long-Term Outcomes in Elderly Patients Following Hand-Assisted Laparoscopic Surgery for Colorectal Liver Metastasis
(1) Background: Hand-assisted laparoscopic surgery (HALS) has garnered growing attention as a safe procedure for the resection of metastatic liver disease. However, there is little data available regarding the outcomes of HALS for colorectal liver metastasis (CRLM) in patients over the age of 75. (2) Methods: We compare the short- and long-term outcomes of patients ≥75 years old (defined in our study as “elderly patients” and referred to as group 1, G1) with patients <75 years old (defined in our study as “younger patients” and referred to as group 2, G2). (3) Results: Of 145 patients, 28 were in G1 and 117 were in G2. The most common site of the primary tumor was the right colon in G1, and the left colon in G2 (p = 0.05). More patients in G1 underwent laparoscopic anterior segment resection compared with G2 (43% vs. 39%, respectively) (p = 0.003). 53% of patients in G1 and 74% of patients in G2 completed neoadjuvant therapy (p = 0.04). The median size of the largest metastasis was 32 (IQR 19–52) mm in G1 and 20 (IQR 13–35) mm in G2 (p = 0.001). The rate of complications (Dindo-Clavien grade ≥ III) was slightly higher in G1 (p = 0.06). The overall 5-year survival was 30% in G1 and 52% in G2 (p = 0.12). (4) Conclusions: Hand-assisted laparoscopic surgery for colorectal liver metastasis is safe and effective in an elderly patient population.
Introduction
Colorectal cancer (CRC) is the third most common malignancy worldwide. It accounts for 1.8 million newly diagnosed cases and 900,000 deaths annually, with metastatic tumors being the most common cause of death. The majority of CRC cases are observed in patients over the age of 65 [1].
Colorectal liver metastases (CRLM) occur in almost half of all patients with colorectal cancer and surgical resection for colorectal liver metastases has a 60% long-term survival rate at 5 years [2,3].
Over the past 3 decades, the development of minimally invasive surgery revolutionized the landscape of abdominal surgery, especially surgery of the hepatobiliary tract. Minimally invasive liver resection has become a standard practice and a good alternative to open liver resection in many instances. In 2008, the Louisville Consensus Conference divided hepatic laparoscopic procedures into three main categories: pure laparoscopy, hand-assisted laparoscopic surgery (HALS), and a hybrid technique [4]. The advantages of HALS for the surgeon are improved intraoperative bleeding control, detection of deeper intraparenchymal lesions, and better exposure of difficult tumor locations. Studies have shown HALS for CRLM to have similar outcomes to pure LLR and open liver surgery, including the early and late oncological outcomes, blood loss, conversion to open rate, operative time, overall morbidity and mortality, and length of hospital stay [5]. However, there is very little data regarding the safety and feasibility of HALS for CRLM in patients over the age of 70 years [6]. With an increased patient load in the geriatric age group, it has become increasingly relevant to assess whether elderly patients might benefit from a minimally invasive surgical approach. There is no consistent definition of elderly age in the literature in the context of liver resection. Few studies used 70 years as the cutoff [6], and others divided their cohort into 3 subgroups based on age (70-74, 75-79, and >80 years old) [2]. We decided to use age 75 as the cutoff to define elderly in our cohort. The aim of this study was to examine the perioperative and long-term outcomes of HALS for CRLM in patients over the age of 75.
Materials and Methods
We selected patients for our study from the surgical databases of the Rabin Medical Center (Petah Tikva, Israel) and the Carmel Medical Center (Haifa, Israel). These were patients who underwent HALS for CRLM between December 2004 and January 2019. The patients' records were examined retrospectively. Criteria for inclusion in our study included available pathology results, demographics, surgical history, and oncologic follow-up records. Patients ≥75 years old were assigned to group 1 (G1) and patients <75 years old to group 2 (G2). Primary outcomes were defined as the perioperative and histological results, and the secondary outcomes were defined as overall survival over the follow-up period.
The study was approved by the institutional review board (IRB) of both institutions. Surgical indications were determined during a weekly multidisciplinary conference. Preoperative workup included biochemical analysis for blood count, chemistry, and tumor markers. The patients also underwent imaging, which included MRI, CT, and PET-CT. This facilitated the identification of tumors, their size, number, location, and interrelation with the vascular and biliary anatomy.
Metastases with vascular contact were considered high risk for the HALS approach, and those patients underwent conventional laparotomy for liver resection. Patients with synchronous metastases were treated with a liver-first approach; colon-first or combined approaches were used in cases of complications of the primary tumor (bowel obstruction, perforation, bleeding). All patients underwent standard evaluation for major surgery by an attending anesthesiologist. They were informed about the procedure by the attending surgeon, including the risks and benefits, and written consent was obtained before surgery.
Surgical Technique
The approach for HALS for CRLM was performed as described by Sadot et al. [7]. In summary, patients were placed in a supine position. The surgeons inserted two 12 mm trocars and one 5 mm trocar in the upper abdomen at the midline. A hand-assisted device was placed in the right abdomen. We used a supraumbilical cut to establish pneumoperitoneum with a 12 mm port in the majority of patients. However, taking into consideration the possibility of peritoneal adhesions, the surgeons performed a right abdominal horizontal incision in any patient with a history of abdominal surgery. Any adhesions were lysed. Following that, the hand port and a 12 mm trocar were inserted. CO2 gas was used to generate a pneumoperitoneum with a pressure of 12-15 mmHg. The abdomen was then explored visually with a 30° laparoscope. Meticulous laparoscopic intraoperative ultrasonography of the liver was routinely performed. Using the LigaSure™ device, liver mobilization and lysis of adhesions were performed. Biopsies and resections of the liver were performed using LigaSure™, Endo GIA™ staplers, and the Cavitron Ultrasonic Surgical Aspirator. Following the resection, careful examination was performed to check for bile leakage and/or bleeding. An abdominal drain was placed through one of the port sites. Following the deflation of the pneumoperitoneum, the abdomen was closed. The specimens were sent to the pathology department for inspection of the surgical margins.
We defined metastasis by the presence of tumor cells at the time of diagnosis or during post-surgical follow-up. Blood loss was estimated using the volume of blood aspirated from the abdominal cavity during the procedure. Operative time was defined as the time elapsed from the skin incision until closure. Postoperative hospital stay was defined as the number of hospitalized days from the day of operation until the day of discharge, inclusive. We used the Clavien-Dindo grading system to characterize any post-operative complications occurring within 30 days of surgery [8]. Tumor size and resection margins were determined according to the pathological reports from the permanent sections of tissue samples. Any specimen with no tumor cells seen at the microscopic level was defined as R0.
After discharge, the patients were followed by our multidisciplinary team during the first month, every 4 months for the first 2 years, and twice a year thereafter. Follow-up included clinical examinations, blood work-up including carcinoembryonic antigen (CEA), and spiral CT of the chest-abdomen or PET-CT as indicated.
Statistical Analysis
All statistical analyses were performed using IBM SPSS Statistics version 24. Continuous variables were summarized with mean ± SD or median and IQR, as appropriate. Categorical variables were presented as numbers and proportions. Disease-free survival (DFS) and overall survival (OS) were estimated using Kaplan-Meier curves and compared between groups by the log-rank test. p < 0.05 was considered statistically significant.
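The Kaplan-Meier estimates reported here were produced with SPSS, but the mechanics of the estimator are simple enough to sketch. The following stand-alone implementation is purely illustrative (it is not the study's analysis code) and assumes right-censored survival times:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate for right-censored data.
    `events` is 1 for an observed death, 0 for censoring."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    # Sort by time; at ties, process deaths before censorings (the usual convention).
    order = np.lexsort((1 - events, times))
    times, events = times[order], events[order]
    surv = 1.0
    step_t, step_s = [0.0], [1.0]
    n = len(times)
    for i, (t, d) in enumerate(zip(times, events)):
        if d:
            # Each death with (n - i) patients at risk multiplies S(t) by (1 - 1/(n - i)).
            surv *= 1.0 - 1.0 / (n - i)
            step_t.append(t)
            step_s.append(surv)
    return np.array(step_t), np.array(step_s)

# Toy cohort of 5 patients, one censored at t = 3.
t_steps, s_steps = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])
```

In this toy cohort the censored patient leaves the risk set at t = 3 without producing a step; the log-rank test used to compare G1 and G2 then contrasts observed and expected deaths across such risk sets in the two groups.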
Results
From December 2004 until January 2019, HALS was performed in one hundred and forty-five patients for CRLM. Twenty-eight patients (19%) were ≥75 years old, and were assigned to group 1. Patient demographics and tumor characteristics are summarized in Table 1. The median age of group 1 was 80 (IQR 77-83) years and 68% were males. In 47% of the patients, the right colon was the origin of the primary tumor and in 39% of patients, the left colon was the origin of the primary tumor. 75% of patients had single liver metastasis, with a median size of 32 (IQR 19-52) mm for the largest metastasis. 82% had low calculated clinical risk score, and 53% completed neoadjuvant therapy.
Perioperative characteristics and outcomes are described in Table 2. 21% underwent formal lobectomy. Anterior segmental resection was performed in 43%.
The median operative time was 168 (IQR 147-235) minutes. In three patients, the surgery was converted to open resection. 29% required intraoperative blood transfusion. There was no patient mortality within the first 30 days post-resection. Surgical complications occurred in 10 patients (36%) (Dindo-Clavien grade ≥ III). Median hospital stay was 7 (IQR 5-10) days. R0 margins were achieved in 93% of the specimens. Adjuvant chemotherapy was successfully completed in 82% of the patients.
Comparison between Patients in G1 and G2
Group 1 consisted of 28 patients and was compared with a separate control group of 117 consecutive patients who underwent HALS for CRLM between December 2004 and January 2019, and were <75 years old (group 2, G2) (Tables 1-3).
There was no statistically significant difference between G1 and G2 in terms of gender, Fong clinical risk score, operative time, blood transfusion, hospital stay, adjuvant chemotherapy, 30-day mortality rate, and history of previous abdominal surgery for the primary tumor (Tables 1 and 2). In terms of conversion rates, there was no statistically significant difference between the groups. However, in 3 out of 28 patients (11%) in group 1, and 5 out of 117 (4%) in group 2, we had to convert due to technical difficulties and lack of progress. No emergent conversion for bleeding or vital instability was needed.
The most common site of the primary tumor was the right colon in G1, and the left colon in G2 (p = 0.05). 53% of patients in G1 and 74% of patients in G2 completed neoadjuvant therapy (p = 0.04). The median size of the largest metastasis was 32 (IQR 19-52) mm in G1 and 20 (IQR 13-35) mm in G2 (p = 0.001). The majority of G1 (75%) and a little less than half of G2 (47%) had one liver metastasis (p = 0.03) (Table 1). More patients in G1 underwent anterior segment resection, compared with more posterior segmental resections in G2 (43% vs. 39% and 18% vs. 49%, respectively; p = 0.003). Patients in G1 tended to have more complications (Dindo-Clavien grade ≥ III) than patients in G2, without reaching statistical significance (36% vs. 19%) (p = 0.060). In G1 and G2, the R0 resection rate was 93% and 91%, respectively (p = 0.78) (Table 2). The overall 1-year and 5-year survival was 96% and 30%, respectively, in G1, and 95% and 52%, respectively, in G2 (p = 0.12) (Table 3, Figure 1).
Discussion
Our study suggests that HALS for CRLM in patients over 75 years old is safe and effective, with comparable outcomes to other surgical approaches.
Advances in medical care have led to an increasing number of patients who are now in their eighth and ninth decades. Many of these patients have increasingly complex medical conditions [9,10].
Colorectal liver metastasis (CRLM) occurs in more than 50% of patients with colorectal cancer (CRC) [11]. Moreover, the majority of patients who have been diagnosed with CRC are older than 65 years and given the worldwide trend in population aging, more elderly patients will be presenting with potentially resectable CRLM [12,13].
Minimally invasive surgery (MIS) for liver neoplasms has become increasingly widespread. There are three approaches to minimally invasive hepatic surgery; standard laparoscopy, hand-assisted laparoscopy, and a combined approach. In the standard laparoscopic procedure, the entire operation is completed through laparoscopic ports. In hand-assisted laparoscopic surgery (HALS) a hand port is used to assist the procedure. Lastly, in the hybrid technique, the patient undergoes standard laparoscopy or HALS, but the liver resection is done through a mini-laparotomy incision [4].
Three consensus guidelines (Louisville [4], Morioka [14], and Southampton [15]) on laparoscopic liver resection (LLR) estimate that pure laparoscopic liver resection, HALS, and the hybrid technique have equivalent outcomes, making the choice simply a matter of surgeon preference and case selection.
In a previous study, we found that HALS is a safe and effective approach in a specific subset of patients with colon cancer and liver disease, with results comparable to the pure laparoscopic and open techniques [5]. We believe this is the first study to evaluate treatment outcomes of patients ≥75 years old who have undergone HALS for CRLM. To date, there have been only a few reports on minimally invasive surgery in an elderly patient population, without sub-analyses of the HALS group [6].
The results of this study suggest that HALS for CRLM in patients over the age of 75 is safe, effective, and does not increase the rates of morbidity or mortality. We had a lower mortality rate compared with a series of open hepatectomies [16,17], and the same rate compared with pure laparoscopic hepatectomies [18]. Our complication rate was slightly higher in the elderly patient group (p = 0.06). However, this still compares well with reports of both pure laparoscopic and traditional open hepatectomy [16][17][18]. The median operation time was approximately 2.8 h in both groups, with no significant difference. There was no statistically meaningful difference between the groups in conversion rates, and the main reason for conversion was technical difficulty and lack of progression. No emergent conversion for bleeding or vital instability was needed.
The traditional open liver resection may increase the risk of cardiopulmonary complications through several mechanisms, such as painful limitation of the thoracic cage, resulting in a 50-60% reduction of the vital capacity and a 30% reduction in functional residual capacity [19]. HALS is less traumatic to the abdominal wall and typically results in decreased postoperative pain and early postoperative rehabilitation. It therefore may provide improved cardiopulmonary function recovery and shorten the hospital stay [20]. The hospital stay was comparable in the two groups, despite the fact that the conversion rate, blood transfusion rate, and major complications rate were relatively higher in group 1.
Radical (R0) liver resection is the gold standard for CRLM and offers longer survival [21][22][23]. In the study by Martínez-Cecilia et al., which compared the perioperative and oncological outcomes of laparoscopic and open liver resection for colorectal liver metastases in the elderly, the R0 rate was 88% for both the laparoscopic and open approaches [2]. Nomi et al. provided an important bridge to this conclusion with results showing that laparoscopic surgery is indeed safe in elderly patients, with an 84% R0 resection rate [18]. In this study, we found that HALS combined with meticulous laparoscopic intraoperative ultrasonography was safe and did not compromise the oncological outcomes in elderly patients. This was evidenced by the 93% R0 resection rate in group 1, compared with 91% in group 2.
Our results showed shorter median long-term overall survival in group 1 (45 months vs. 71 months, p = 0.12). We believe that the shorter survival can be attributed to the difference between the median ages of the groups at liver resection (64 years vs. 80 years). It is likely due to more limited life expectancy, and not to the surgical technique or to oncological causes. These results compare well with those reported by Martínez-Cecilia et al.: their 1-year, 3-year, and 5-year survival rates were 93%, 68%, and 43%, respectively, vs. 96%, 67%, and 30%, respectively, in our study [2].
The retrospective nature of our present study and the relatively small sample size confer some limitations. We believe that a multi-center prospective randomized controlled trial, or propensity score matching in a larger cohort, would be the ideal study design to analyze the short- and long-term outcomes in elderly patients following hand-assisted laparoscopic surgery for colorectal liver metastasis.
Conclusions
This study demonstrates that HALS for CRLM in elderly patients is safe and effective with acceptable perioperative complications and long-term outcomes that are similar to those in younger patients. This suggests that advanced age itself should not be regarded as a contraindication for HALS for CRLM.
Identification of Tumor-Specific MRI Biomarkers Using Machine Learning (ML)
The identification of reliable and non-invasive oncology biomarkers remains a top priority in healthcare. Only a few biomarkers have been approved as diagnostic for cancer. The most frequently used cancer biomarkers are derived from either biological materials or imaging data. Most cancer biomarkers suffer from a lack of high specificity. However, the latest advancements in machine learning (ML) and artificial intelligence (AI) have enabled the identification of highly predictive, disease-specific biomarkers. Such biomarkers can be used to diagnose cancer patients, to predict cancer prognosis, or even to predict treatment efficacy. Herein, we provide a summary of the current status of developing and applying magnetic resonance imaging (MRI) biomarkers in cancer care. We focus on all aspects of MRI biomarkers, starting from MRI data collection, preprocessing, and machine learning methods, and ending with a summary of the types of existing biomarkers and their clinical applications in different cancer types.
Introduction
Imaging is routinely used for cancer diagnosis and staging, for monitoring treatment efficacy, for detecting disease recurrence, or generally for cancer surveillance [1][2][3][4]. Understanding the anatomical and physiological aspects of medical images allows experts to distinguish aberrant from normal appearance [5]. Advances in analytical methods and the application of machine learning methods enabled the use of medical images as biomarkers that can potentially optimize cancer care and improve clinical outcome [5]. The imaging biomarkers that are currently, and successfully, used for clinical diagnosis have attracted many researchers' attention as described in multiple publications [1,[5][6][7][8][9][10][11][12][13][14][15][16][17][18].
Magnetic resonance imaging (MRI) is a diagnostic imaging technique that applies strong magnetic fields and radio waves to generate high-quality scans of body organs, facilitating the diagnosis of tumors and other conditions such as brain and spinal cord diseases. Currently, MRI is one of the big data producers in biomedicine, and is being exploited as an important generator of cancer biomarkers. In essence, a biomarker is a characteristic that is measured as an indicator of a biological condition of interest (i.e., normal biological processes, pathogenic processes, or responses to a therapeutic intervention) [19,20]. The process of biomarker prioritization starts with a theory and ends with biomarker validation in an experimental setting. However, the current dogmas in biomedicine may hinder the process of unbiased hypothesis generation due to the complexity of cancer phenotypes and patient attributes, which makes this process harder for human experts.
MRI Biomarkers
MRI can be exploited to extract numerous variables according to diverse inherent tissue properties such as proton density, diffusion, and T1- and T2-relaxation times [1]. In addition, MRI can probe alterations in parameters due to the association of macromolecules and contrast agents [5]. For example, the apparent diffusion coefficient (ADC) is an extensively used criterion in cancer identification [16,62], diagnosis, and treatment assessment [63,64]. However, post-processing tools to derive absolute quantitation are widely disputed [65][66][67], although the protocol itself is versatile and reliable for cancer detection [68]. Quantification of T1 relaxation has an impact on cardiovascular MRI, as it relies on quantitative values rather than image contrast [69]. T1 values are significant in assessing cardiac inflammation [70], multiple sclerosis [71,72], liver fat and iron concentration [73,74], and endocrine glands [75].
MRI Data Preprocessing
Applying machine learning directly on raw MRI scans often yields poor results due to noise and information redundancy. Furthermore, machines read and store images in the form of number matrices. Raw MRI data are transformed into numerical features that can be processed by machines while preserving the information in the original data set.
Machine Learning for MRI Data
Machine learning (ML) algorithms are becoming useful components of computer-aided disease diagnosis and decision support systems. Computers seem to be able to recognize patterns that humans cannot perceive. Hence, ML provides a tool to analyze and utilize massive amounts of data more efficiently than conventional analysis carried out by humans. This realization has led to heightened interest in ML and AI applications to medical images. Recently, employing ML to analyze the big data resulting from medical images, including MRI data, has been useful in obtaining significant clinical information that can aid physicians in making important decisions regarding clinical diagnosis, clinical prognosis, or treatment outcome [55,85,86]. ML can also be used to prioritize MRI biomarkers. The workflow for prioritizing MRI biomarkers using ML is summarized in Figure 1.
Image Representation by Numeric Features
The success of machine learning relies on data representation [87]. MRI images are represented in terms of features, which are numeric values that can be processed by machines. These numeric values could be actual pixel values, edge strengths, variation in pixel values in a specific region of the MRI image, or any other value [88]. Non-image features can also be used in the machine learning process and may include patient age, sex, laboratory test results, and other available patient or laboratory attributes. Features can be combined to form a feature vector, which is also called the input vector [88].
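As a toy illustration of this idea, the sketch below assembles a feature vector from simple global pixel statistics plus non-image patient attributes. All names and values here are illustrative assumptions, not part of any real pipeline.

```python
# Sketch: building a feature (input) vector for one MRI slice from global
# pixel statistics plus non-image patient attributes (illustrative values).

def image_features(pixels):
    """Simple global features: mean, min, max, and variance of pixel values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return [mean, min(pixels), max(pixels), var]

def feature_vector(pixels, age, sex_male, lab_value):
    """Concatenate image features with non-image patient attributes."""
    return image_features(pixels) + [age, 1.0 if sex_male else 0.0, lab_value]

# A 4-pixel "image" stands in for a real MRI slice.
v = feature_vector([0.1, 0.4, 0.9, 0.6], age=67, sex_male=True, lab_value=2.3)
```

In practice, the image part of the vector would come from the feature-extraction step described next, and categorical attributes such as sex are encoded numerically, as above.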
Feature Extraction
Feature extraction, also known as feature engineering, is the process of identifying the most distinguishing characteristics in imaging signals that characterize MRI images and describe their behavior, allowing machine learning methods to process imaging data and learn from these data. Features can be referred to as descriptors. Feature extraction can be accomplished either manually or automatically.
Image features are usually classified into two main groups: global and local. Global features are generated as a d-dimensional feature vector which represents a specific pattern [89]. Global features usually describe the color, shape, and texture, and are commonly applied in content-based image retrieval (CBIR) systems [90][91][92][93][94][95][96]. Local features refer to certain patterns or specific structures on images that distinguish them from their surroundings. Examples of local features include blobs, corners, and edge pixels [97].
Data Set Division for Model Building, Model Tuning and External Validation
Many machine learning methods require model training with previously labeled MRI data. For generating these models, the data is divided into three sets: training set, test set and an external validation set that is not used in any way for model building. The modeling set (that remains after splitting out the validation set) is split additionally into training and testing (or tuning) sets. If models fail to predict the external validation set, such models are discarded and not used to make predictions. Additionally, other independent validation sets may become available after the completion of the modeling studies, and then can be used as additional validation sets. We have shown earlier that training-set-only modeling is not sufficient to obtain reliable models that are externally predictive [98,99]. Models that are highly predictive on training and testing data should be retained for the majority voting on external validation sets. Finally, only those models shown to be highly predictive on both testing and external validation sets are used as robust classifiers for MRI imaging data.
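The splitting scheme described above can be sketched as follows; the fractions and the use of a plain random shuffle are illustrative assumptions (real studies would typically also stratify by class label).

```python
# Sketch: hold out an external validation set first, then split the
# remaining modeling set into training and testing (tuning) sets.
import random

def split_dataset(items, val_frac=0.2, test_frac=0.2, seed=42):
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_frac)
    validation, modeling = shuffled[:n_val], shuffled[n_val:]
    n_test = int(len(modeling) * test_frac)
    test, train = modeling[:n_test], modeling[n_test:]
    return train, test, validation

train, test, val = split_dataset(list(range(100)))
```

The key property is that the validation items never influence model building: they are set aside before the training/testing split is made.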
Machine Learning Algorithms
Machine learning algorithms generate models that can classify MRI images into malignant and benign based on extracted local and global image features. The generated ML model is a mathematical model that can predict outcome by generalizing their learned experience on training set data, to deliver a correct prediction of new MRI images unseen by the developed models. The learning exercise can be supervised, semi-supervised or unsupervised. However, for imaging data we rely heavily on supervised methods that can be applied to class-labeled data.
There are three main challenges to applying machine learning in medical imaging for cancer diagnosis: classification, localization, and segmentation. We need ML methods to overcome all of these challenges. Herein, we review the most popular ML algorithms applied for MRI biomarkers, with results summarized in Figure 2. We also discuss the advantages and disadvantages of each method (Table 2).
Table 2. Diagnostic characteristics (advantages and disadvantages) of ML methods.

Artificial Neural Network (ANN): The mathematics behind the classification algorithm is simple; the non-linearities and weights allow the neural network (NN) to solve complex problems. Long training time is required for numerous iterations over the training data. Tendency for overfitting. Numerous additional tuning hyperparameters, including the number of hidden layers/hidden nodes, are required for determining optimal performance.

Convolutional Neural Network (CNN): Can perform both image analysis (deep feature extraction) and construction of a prediction algorithm, eliminating the need for separate steps of extracting radiomic features and using them to train a prediction model. Can learn from complex datasets and achieve high performance without requiring prior feature extraction. Permits massive parallel computations using GPUs. Requires additional hyperparameters to tune the model for better performance, including the number of convolution filters, the size of the filters, and parameters involved in the pooling. Requires large training sets and is not an optimal approach for pilot studies or internal studies with small datasets. Computationally expensive.

k Nearest Neighbor (kNN): Easy to implement, as it only requires the calculation of distances between points on the basis of their features. Computationally expensive for large datasets. Does not work well with high dimensionality, as distance must be calculated for each dimension. Sensitive to noisy and missing data. Requires feature scaling. Prone to overfitting.

Logistic Regression (LR): Constructs linear boundaries, i.e., it assumes linearity between dependent and independent variables; however, linearly separable data is rarely found in real-world scenarios.

Naïve Bayes: Models are fast to train and simple, with generally superior performance compared to the logistic regression classifier on smaller datasets and inferior performance on larger datasets. Less potential for overfitting. Shows difficulties with complex datasets due to being a linear classifier.

Random Forests (RFs): Less prone to overfitting; reduces the overfitting of individual decision trees and helps to improve accuracy. Outputs the importance of features, which is very useful for model interpretation. Works well with both categorical and continuous values, for both classification and regression problems. Tolerates missing values in the data by automating missing-value interpretation. Output changes significantly with small changes in the data.

Self-supervised Learning (SSL): Suitable for large unlabeled datasets, but its utility on small datasets is unknown. Reduces the relative error rate of few-shot meta-learners, even when the datasets are small and only images within the datasets are utilized.

Support Vector Machines (SVM): Simple mathematics behind the decision boundary; can be applied in higher dimensions. Time-consuming for large datasets, especially for datasets with a larger-margin decision boundary. Prone to overfitting. Sensitive to noisy and large datasets.
Artificial Neural Networks
Learning with artificial neural networks (ANNs) is one of the most famous machine learning methods; it was introduced in the 1950s and is being employed for classifying MRI data [103]. The generated neural network consists of a number of connected computational units, called neurons, which are arranged in layers. There is an input layer that allows input data to enter the network, followed by a hidden layer or layers transforming the data as it flows through, before ending at an output layer that produces the neural network's predictions. The network is trained to generate correct predictions by identifying predictive features in a set of labeled training data fed through the network, while the outputs are compared with the actual labels by an objective function [103]. Furthermore, the message passing neural network (MPNN) has distinguished morphological aspects in benign and malignant cancers [104]. Diverse morphological features have been recognized, including elliptic-normalized circumference (ENC), long axis to short axis ratio (L:S), abrasion size, and lobulation index (LI) [67]. Further features have been distinguished, such as branch form, nodule brightness, number of lobulations, and ellipsoid features [105].
The ANN method is composed of three learning schemas: (1) the error function which measures how good or bad an output is for a given input, (2) the search function which defines the direction and magnitude of the change required to reduce the error function, and (3) the update function which defines how the weights of the network are updated on the basis of the search function values [88]. This is an iterative process which keeps adjusting the weights until there is no additional improvement. ANN models are very flexible, capable of solving complex problems, but they are difficult to understand and very computationally expensive to train [103].
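The three schemas can be made concrete with a deliberately minimal single-neuron example trained by gradient descent; the data, learning rate, and squared-error objective are illustrative assumptions, not a full ANN.

```python
# Minimal sketch of the three learning schemas for one logistic neuron.
import math

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))       # neuron output

def error(w, b, data):                                # (1) error function
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

def gradient(w, b, data):                             # (2) search function
    gw = gb = 0.0
    for x, y in data:
        p = predict(w, b, x)
        d = 2 * (p - y) * p * (1 - p)                 # chain rule for squared error
        gw += d * x
        gb += d
    return gw / len(data), gb / len(data)

def update(w, b, gw, gb, lr=0.5):                     # (3) update function
    return w - lr * gw, b - lr * gb

data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b = 0.0, 0.0
for _ in range(500):                                  # iterate until error stops improving
    w, b = update(w, b, *gradient(w, b, data))
```

After training, the neuron separates the two toy classes, illustrating how repeated error-driven weight updates converge.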
Logistic Regression (LR)
Logistic regression is a statistical model that uses a logistic function to model a binary dependent variable (y) in MRI classification data. It models the probability that the MRI shows tumor versus normal tissue by using a linear model to predict the log-odds that y = 1, and then uses the logistic/inverse-logit function to convert the log-odds values into probabilities [106]. However, LR models tend to overfit high-dimensional data. Therefore, regularization methods are often used to prevent overfitting to training set data. Regularization is achieved by using a model that tries to fit the training data well, while at the same time trying not to use regression weights that are too large [107]. The most common approaches are L1 regularization, which tries to keep the total absolute values of the regression weights low, and L2 or ridge regularization, which tries to keep the total squared values of the regression weights low.
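The shrinking effect of L2 regularization can be sketched with a toy one-feature logistic regression trained on the log-loss; the data, learning rate, and penalty strength are illustrative assumptions.

```python
# Sketch: logistic regression by gradient descent on the log-loss, with an
# optional L2 (ridge) penalty that keeps the weight small.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(data, l2=0.0, lr=0.1, steps=2000):
    w = b = 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = sigmoid(w * x + b)
            gw += (p - y) * x          # gradient of the log-loss w.r.t. w
            gb += p - y
        gw = gw / len(data) + l2 * w   # L2 term penalizes large |w|
        w -= lr * gw
        b -= lr * (gb / len(data))
    return w, b

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w_plain, _ = fit(data, l2=0.0)
w_ridge, _ = fit(data, l2=0.5)
```

On separable data the unregularized weight keeps growing, while the ridge penalty pulls it toward a finite, smaller value, which is exactly the overfitting control described above.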
Contrastive Learning
Contrastive learning is a ML technique that can learn the general features of a dataset (i.e., the MRI dataset) without labels, by teaching the model which data points are similar or different. This can be formulated as a dictionary look-up problem. This algorithm is considered a particular variant of self-supervised learning (SSL) that is particularly useful for learning image-level representations [108]. One of the advantages of this method is that it can be applied for semi-supervised learning problems when clinical annotations are missing from MRI data. This method permits the use of both labeled and unlabeled data to optimize the performance and learning capacity of the classification model. A method that has gained popularity in the literature recently is the unsupervised pre-train, supervised fine-tune, knowledge distillation paradigm [109].
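The core idea, pulling an anchor toward its positive pair and away from negatives, can be illustrated with a toy InfoNCE-style loss over cosine similarities; the vectors and the temperature value are made-up assumptions.

```python
# Toy InfoNCE-style contrastive loss: low when the anchor is more similar
# to its positive pair than to the negatives.
import math

def cos(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return num / den

def info_nce(anchor, positive, negatives, tau=0.5):
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]        # tau is the temperature
    return -math.log(exps[0] / sum(exps))           # cross-entropy on the positive

good = info_nce([1, 0], [0.9, 0.1], [[0, 1], [-1, 0]])   # positive is close
bad = info_nce([1, 0], [0, 1], [[0.9, 0.1], [-1, 0]])    # positive is far
```

Minimizing this loss over many anchor/positive/negative triples is what teaches the model which data points are "similar" without any class labels.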
Deep Learning
Deep learning, also known as deep neural networks (DNNs) or deep structured learning, is a machine learning method based on artificial neural networks which allows computational models composed of multiple processing layers (typically more than 20) to learn representations of data with multiple levels of abstraction [110]. In deep learning, the algorithm learns useful representations and features automatically, directly from the raw imaging data. By far the most common models in deep learning are various variants of ANNs, but there are others as well [103]. Deep learning methods primarily differ from "classical" machine learning approaches by focusing on feature learning, i.e., automatically learning representations of data [103]. In medical imaging, the interest in deep learning is mostly triggered by convolutional neural networks (CNNs) [111]. Features are automatically deduced and optimally tuned for the desired outcome. Deep learning protocols have been applied in cancer prognosis, such as in melanoma, breast cancer, brain tumor, and nasopharyngeal carcinoma [112][113][114][115].
However, models based on deep learning are often vulnerable to the domain-shift problem, which may occur when image acquisition settings or imaging modalities are varied [108]. Further, uncertainty quantification and interpretability may additionally be required in such systems before they can be used in practice. Many strategies have been used to improve the performance of DNNs, including contrastive learning, self-organized learning, and others. Recently, FocalNet has become one of the preferred iterative information extraction algorithms to be used with DNNs. This algorithm uses the concept of foveal attention to post-process the outputs of deep learning by performing variable sampling of the input/feature space [116]. FocalNet is integrated into an existing task-driven deep learning model without modifying the weights of the network, and layers for performing foveation are automatically selected using a data-driven approach [116].
k-Nearest Neighbors (kNN)
The kNN method is based on the k-nearest-neighbors principle and a variable selection procedure for feature selection reviewed elsewhere [98,117]. The procedure starts with the random selection of a predefined number of features from all selected features. The generated model can then classify an input vector of a new MRI image (i.e., a collection of MRI image features) by assigning it to the most similar class based on a number of neighbors (i.e., k) with known class labels that vote on which class the input object belongs to. The predicted class is the result of majority voting among all k nearest neighbors.
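The neighbor-voting step can be sketched as follows, using Euclidean distance over toy feature vectors; the data and labels are illustrative (a real pipeline would also scale the features first).

```python
# Minimal kNN classifier: Euclidean distance plus majority vote.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: feature_vector."""
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]   # majority vote of the k neighbors

train = [([0.0, 0.0], "benign"), ([0.1, 0.2], "benign"),
         ([1.0, 1.0], "malignant"), ([0.9, 1.1], "malignant"),
         ([1.2, 0.8], "malignant")]
label = knn_predict(train, [1.0, 0.9], k=3)
```

Note that kNN has no training phase in the usual sense: the whole labeled set is consulted at prediction time, which is why it becomes expensive on large datasets.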
Support Vector Machines (SVM)
Support-vector machines (SVM) are supervised learning models that apply associated learning algorithms for data analysis; they can be used for classification and regression tasks [118,119]. They are named support vector machines because they transform input data in a way that produces the widest plane, or support vector, of separation between the two classes. SVMs gained popularity because they can classify data that are not linearly separable.
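In one dimension, the maximum-margin idea reduces to placing the decision threshold midway between the closest points of the two classes; the sketch below is a toy illustration of that principle, not a full SVM solver, and assumes separable data with all negative values below all positive values.

```python
# 1-D sketch of the maximum-margin principle behind SVMs: the closest
# points of each class act as the "support vectors", and the threshold
# sits midway between them, maximizing the separation width.
def max_margin_threshold(neg, pos):
    """neg/pos: 1-D feature values of the two classes (assumed separable)."""
    s_neg, s_pos = max(neg), min(pos)            # the support points
    return (s_neg + s_pos) / 2, (s_pos - s_neg)  # threshold, margin width

t, margin = max_margin_threshold([0.1, 0.4, 0.5], [1.1, 1.5, 2.0])
```

Only the two support points determine the boundary; moving any other point (without crossing them) leaves the threshold unchanged, which is the defining property of support vectors.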
Random Forests
The random forests algorithm is a ML technique that uses an ensemble model to make predictions [120]. It essentially uses a bundle of decision trees to make a classification decision. Since ensemble models combine the results from many different models to calculate a response or assign a class, they perform better than individual models and are increasingly being used for image classification [98,121]. The random forests algorithm can handle big data, can estimate missing data without compromising accuracy, is less prone to overfitting than decision trees, and works well for unbalanced datasets and for classification problems. However, it works like a black box with minimal control over what the model does, and the models are difficult to interpret.
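The bagging-plus-voting idea can be sketched with bootstrap samples and one-split "stump" trees standing in for full decision trees; the data and tree count are toy assumptions.

```python
# Sketch of the ensemble idea behind random forests: many weak trees
# (here one-feature decision stumps) fit on bootstrap samples, with the
# final class decided by majority vote.
import random
from collections import Counter

def fit_stump(sample):
    """Pick the feature/threshold pair that best separates the sample."""
    best = None
    for f in range(len(sample[0][0])):
        for x, _ in sample:
            t = x[f]
            pred = lambda v, f=f, t=t: int(v[f] > t)
            acc = sum(pred(xi) == yi for xi, yi in sample) / len(sample)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return lambda v: int(v[f] > t)

def fit_forest(data, n_trees=25, seed=0):
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        sample = [rng.choice(data) for _ in data]   # bootstrap sample
        trees.append(fit_stump(sample))
    return trees

def forest_predict(trees, x):
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]               # majority vote

data = [([0.1], 0), ([0.2], 0), ([0.3], 0), ([0.8], 1), ([0.9], 1), ([1.0], 1)]
forest = fit_forest(data)
```

Each bootstrap sample yields a slightly different tree; averaging their votes is what damps the instability (high variance) of any single tree.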
Self-Supervised Learning
Self-supervised learning (SSL) provides a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations, e.g., in clinical data, to yield high predictive performance [109,122]. However, extensive validation of the automated algorithms is essential before they can be used in critical decision making in healthcare. One of the self-supervised learning methods that showed improved performance on deep learning models applied a strategy based on 'context restoration' to handle unlabeled imaging data [122]. The context restoration strategy is characterized by: (1) its ability to learn semantic image features; (2) its use of the learned image features for subsequent image analysis tasks; and (3) its simplicity of implementation [122].
Naïve Bayes
The Naïve Bayes classifier is a probabilistic classifier based on applying the Bayes theorem under strong independence assumptions between features [123]. It is considered a supervised learner. A query image is represented by a set of features which are assumed to be independently sampled from a class-specific feature space. Then a kernel density estimation allows the Bayesian network models to achieve higher accuracy levels [123,124]. The Naïve Bayes Classifier can produce very accurate classification results with a minimum training time in comparison with conventional supervised or unsupervised methods.
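A toy Gaussian Naïve Bayes sketch follows: each class is modeled by per-feature mean and variance, features are assumed independent, and Bayes' rule picks the most probable class. The one-feature data are illustrative.

```python
# Tiny Gaussian Naïve Bayes: per-class priors plus per-feature Gaussians.
import math

def fit_gnb(data):
    """data: list of (feature_vector, label) -> per-class (prior, means, vars)."""
    model = {}
    for c in set(y for _, y in data):
        rows = [x for x, y in data if y == c]
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        vars_ = [sum((r[j] - means[j]) ** 2 for r in rows) / n + 1e-9
                 for j in range(d)]                       # small floor avoids /0
        model[c] = (n / len(data), means, vars_)
    return model

def log_gauss(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def gnb_predict(model, x):
    # Independence assumption: log-likelihoods of features simply add up.
    scores = {c: math.log(prior) + sum(log_gauss(x[j], m[j], v[j])
                                       for j in range(len(x)))
              for c, (prior, m, v) in model.items()}
    return max(scores, key=scores.get)

data = [([1.0], "benign"), ([1.2], "benign"), ([0.8], "benign"),
        ([3.0], "malignant"), ([3.3], "malignant"), ([2.7], "malignant")]
model = fit_gnb(data)
```

Fitting reduces to computing a handful of class-wise statistics, which is why training is so fast compared with iterative methods.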
Decision Trees
Decision trees use tree-like models of decisions and their possible effects, producing human-readable rules for the classification task [125]. Decision trees take the form of yes-or-no questions and are therefore easily interpreted by people. The learning algorithm applies a rapid search over the many possible combinations of decision points to find the points that will give the simplest tree with the most accurate results. When the algorithm is run, one sets the maximal number of decision points, i.e., the depth, and the maximal breadth to be searched. At the end, the algorithm determines how many decision points are required to achieve the best accuracy. A decision tree model has high variance and low bias, which leads to unstable output, and it is very sensitive to noise.
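The human-readable form of such rules can be shown with a depth-2 example; the feature names and thresholds here are chosen by hand purely for illustration, not learned from data.

```python
# Sketch: a depth-2 decision tree written out as yes/no questions.
# Thresholds and labels are hypothetical, not learned from real data.
def classify(lesion_size_mm, irregular_border):
    if lesion_size_mm > 15:          # question 1: is the lesion large?
        if irregular_border:         # question 2: is the border irregular?
            return "malignant"
        return "suspicious"
    return "benign"
```

Read top to bottom, the tree is exactly the kind of rule set a clinician could audit, which is the interpretability advantage noted above.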
Other Machine Learning Methods
New approaches such as federated learning, interactive reporting, and synoptic reporting may help to address data availability problem in the future; however, curating and annotating data, as well as computational requirements, remain substantial barriers to machine learning applications for MRI data [126].
Which ML Method Is Best for Identifying Diagnostic MRI Biomarkers
The best ML methods applied for MRI data analysis should be able to learn useful semantic features from MRI imaging data and lead to improved models for performing medical diagnosis tasks efficiently [122]. However, training good ML models requires large amounts of labelled data that may not be available; it is often difficult to obtain a sufficient number of labelled images for training models. In many scenarios, the dataset in question consists of more unlabeled images than labelled ones. Therefore, boosting the performance of ML models by using unlabeled as well as labelled data is an important but challenging problem [122].
Many ML methods, particularly deep learning, have boosted medical image analysis for disease diagnosis over the past years. Around 2009, it was realized that deep artificial neural networks (DNNs) were outperforming other established modeling methods on a number of important benchmarks [65]. Currently, deep neural networks are considered the state-of-the-art machine learning models across a variety of areas, from MRI image analysis to natural language processing, and are widely deployed in academia and industry [103]. However, there are many challenges to the introduction of deep learning in clinical settings, related to data privacy, difficulties in model interpretability, and workflow integration.
Despite the large number of retrospective studies (Figure 2), there are fewer applications of deep learning in the clinic on a routine basis [127]. There are three major use cases for deep learning in MRI diagnostics: (1) model-free image synthesis, (2) model-based image reconstruction, and (3) image- or pixel-level classification [127]. Hence, deep learning has the potential to improve every step of the MRI diagnostic workflow and to provide value for every user, from the technologists performing the scan, the physicians ordering the imaging, and the radiologists providing the interpretation to, most importantly, the patients who are receiving health care.
Assessment of Model Performance
For classification models, model performance is usually assessed by generating a confusion matrix and calculating several statistics indicative of model accuracy. In the case when MRI images belong to two classes (e.g., cancer and non-cancer), a 2 × 2 confusion matrix can be defined, where N(1) and N(0) are the numbers of MRI images in the data set that belong to classes (1) and (0), respectively. TP, TN, FP, and FN are the numbers of true positives (malignant MRI predicted as malignant), true negatives (benign MRI predicted as benign), false positives (benign MRI predicted as malignant), and false negatives (malignant MRI predicted as benign), respectively. The following classification accuracy characteristics associated with confusion matrices are widely used in classification machine learning studies: the true positive rate (TPR), also known as recall (R) or sensitivity (SE = TP/N(1)); specificity (SP = TN/N(0)); the false positive rate (FPR = 1 − SP); precision (P = TP/(TP + FP)); and enrichment, E = (TP × N)/[(TP + FP) × N(1)]. Normalized confusion matrices can also be obtained from the non-normalized confusion matrices by dividing the first column by N(1) and the second column by N(0). Normalized enrichment can be defined in the same way as E but is calculated using a normalized confusion matrix: En = (2TP × N(0))/[(TP × N(0)) + (FP × N(1))]. En takes values within the interval [0, 2] [98,128].
The receiver operating characteristic (ROC) curve is then created by plotting the TPR against the FPR at various thresholds. ROC and precision-recall (PR) analyses are usually performed side by side, and the area under the curve (AUC) is calculated to assess model performance in each case [129]. Both ROC-AUC (the area under the ROC curve) and PR-AUC (the area under the precision-recall curve) are widely used to assess the performance of ML methods for MRI biomarkers [100,129,130]. However, other model performance metrics have been proposed for the imbalanced datasets usually encountered in classification problems. One of these metrics is the correct classification rate (CCR), which has been suggested as a better measure of model accuracy [98,99]:

CCR = (1/2) × (Nc(1)/N(1) + Nc(2)/N(2)),

where Nc(j) and N(j) are the number of correctly classified and the total number of compounds of class j (j = 1, 2).
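These characteristics can be computed directly from the 2 × 2 confusion matrix; for two classes the CCR reduces to the mean of sensitivity and specificity. The counts below are illustrative, not from any real study.

```python
# Accuracy characteristics from a 2x2 confusion matrix (illustrative counts).
def metrics(tp, fn, fp, tn):
    n1, n0 = tp + fn, tn + fp                 # class sizes N(1) and N(0)
    sensitivity = tp / n1                     # TPR / recall / SE
    specificity = tn / n0                     # SP
    precision = tp / (tp + fp)                # P
    fpr = 1 - specificity                     # false positive rate
    ccr = 0.5 * (sensitivity + specificity)   # correct classification rate
    return sensitivity, specificity, precision, fpr, ccr

se, sp, pr, fpr, ccr = metrics(tp=40, fn=10, fp=5, tn=45)
```

Because CCR weights both classes equally regardless of their sizes, it stays informative on imbalanced datasets where plain accuracy can be misleading.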
Prognostic Biomarkers
Prognostic imaging biomarkers are used for cancer staging in order to divide patients into different risk groups [1]. MRI is considered the basic staging probe for diverse cancers, such as rectal cancer [1]. The TNM stage indicates overall 5-year survival: stage I (localized, T1/2, node negative), 95%, compared with stage IV (metastatic, any T or N), 11%. MRI also plays a predictive role, including for progression-free survival (PFS) and resection margin [139][140][141].
Response Biomarkers
Response biomarkers evaluate the tumor's response to treatment, which is classified into four categories: progressive disease, stable disease, partial response, and complete response. This classification depends on the magnitude of change in particular lesions (>1 cm) or nodes (>1.5 cm short axis) (Table 3) [1]. The RECIST protocol offers a structured and comprehensive measurement of response to treatment in clinical studies [32]. RECIST is a significant response biomarker in clinical studies and is employed as a surrogate marker [1].
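As an illustration only, the four response categories can be assigned from the change in the sum of target-lesion diameters, using the commonly cited RECIST 1.1 thresholds (a decrease of at least 30% for partial response; an increase of at least 20% plus at least 5 mm absolute for progressive disease). This is a simplified sketch: the full protocol measures progression against the nadir sum, which is omitted here.

```python
# Simplified RECIST-style response classifier (illustrative thresholds only;
# the baseline and current diameter sums below are hypothetical).

def recist_response(baseline_sum_mm, current_sum_mm):
    if current_sum_mm == 0:
        return "complete response"       # disappearance of all target lesions
    change = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "partial response"        # >= 30% decrease
    if change >= 0.20 and (current_sum_mm - baseline_sum_mm) >= 5:
        return "progressive disease"     # >= 20% and >= 5 mm increase
    return "stable disease"

print(recist_response(50, 30))   # 40% decrease in diameter sum
print(recist_response(50, 65))   # 30% (+15 mm) increase
```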
Semi-Quantitative Recording Systems
Semi-quantitative scoring systems are widely used because visual assessment is practical and correlates well with the scoring output [5]. MRI scoring systems for hypoxic-ischemic encephalopathy (HIE) in neonates, based on T1-weighted (W), T2-W, and diffusion-W images, showed that higher postnatal scores were associated with poorer brain function [142]. Similarly, higher T2-W scores in cervical spondylosis were linked to disease status and outcomes [143,144]. Imaging of osteoarthritis is important for diagnosis [145]. Internet-based knowledge-transfer methods employing well-established scoring protocols showed agreement between imaging and clinical specialties in interpreting T2-W findings [146]. Comparable scoring has been used in multiple sclerosis [147] and rectal wall assessment [148]. 18-Fluoro-2-deoxy-D-glucose (18FDG) positron emission tomography-computed tomography (PET-CT) imaging has been applied in lymphoma evaluation [149], and similar scoring has been used in breast, prostate, liver, thyroid, and bladder cancer imaging [150][151][152][153]. MRI scoring has been applied to the identification of gynecological malignancies [154] and to renal cancer [155]. Physical evaluation of lung nodule diameter and volume doubling time (VDT) has been widely used in diagnosis, screening, and response prediction [156,157].
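The volume doubling time mentioned above follows the standard exponential-growth formula, VDT = t·ln(2)/ln(V2/V1), where t is the interval between scans and V1, V2 are the nodule volumes. A minimal sketch (the nodule volumes and scan interval below are hypothetical):

```python
# Standard volume doubling time (VDT) formula for a growing lung nodule.
import math

def volume_doubling_time(v1_mm3, v2_mm3, days_between_scans):
    """VDT in days, assuming exponential growth between the two scans."""
    return days_between_scans * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Hypothetical nodule growing from 500 to 650 mm^3 over 90 days:
print(round(volume_doubling_time(500, 650, 90)))  # -> 238
```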
Quantitative Recording Systems
Quantitative assessment is most frequently used for size and/or volume measurement. Size contributes to the evaluation of both benign and malignant diseases [158]. Measurement of ventricular size on echocardiography is versatile and linked to clinical protocols [158,159], and left ventricular ejection fraction has been assessed by both ultrasound and MRI. In rheumatoid arthritis, aberrant bone features have been recorded with CT as an indicator of disease progression [160]. RECIST (1.0 and 1.1) [158] assesses cancer prognosis; RECIST measurements are simple, but can be ambiguous and not fully reliable [161,162]. Although diverse studies have related volume to disease diagnosis [163][164][165][166], volume has not been validated in clinical records owing to the requirement of segmenting irregularly shaped cancers. Volume is a surrogate for disease progression and response [167]. The metabolic tumor volume (MTV) measured by PET has been related to survival [168,169]. Furthermore, MTV is an indicator of lymphoma and is considered a biomarker of treatment response [170][171][172]. Ultimately, automated volume segmentation will be crucial for treatment approval [5].
Quantitative Imaging Biomarkers
Quantitative imaging biomarkers that delineate tissue hallmarks such as hypoxia, fibrosis, necrosis, perfusion, and diffusion describe the disease state and reflect histopathology [5]. Numerous quantitative hallmarks can be integrated into mathematical models to evaluate disease progression and changes over time [5]. Physiological databases are organized by disease presence and type and scored against clinical data to derive predictive models that serve as decision-support tools; such a model has been provided for brain data by querying approved and well-organized databases [173]. Exploiting quantitative data embedded in images, along with rigorous protocols for acquisition and scoring linked with machine learning algorithms, has been applied to neurodegenerative disease and treatment protocols [174,175].
Radiomic Signature Biomarkers
Radiomics refers to the extraction and measurement of quantitative features from radiographic images [24,176]. Radiomics captures abnormal physiology in a manner related to other "omics" disciplines such as proteomics, metabolomics, and genomics [177]. Numerous radiomic features can be derived from a region or volume of interest (ROI/VOI), delineated manually, semi-automatically, or automatically by computational algorithms [5]. The collection of all features constitutes the radiomic signature, which is distinct for a tissue, patient, patient group, or disease [85,178]. The radiomic signature depends on the imaging modality (PET, MRI, CT), the image parameters and implementation, the machine-learning method, and the VOI/ROI segmentation [179].
Although the radiomic signature is diverse and not tissue selective, it can identify treatment prognosis, resistance, and survival [180]. Radiomics assists in decision making for treatment protocols and risk stratification [5]. Notably, X-ray mammography, CT, MRI, PET, and single-photon emission computed tomography (SPECT) have demonstrated promising results in the interpretation of benign disease [181]. Improvements in image quality and data regulation are obligatory for wider adoption. Radiomic fingerprints are multi-component data well suited to computational strategies such as neural networks. Furthermore, the reliability of signatures derived from CT and MRI data is adequate [182,183].
MRI Biomarker Standardization
The reproducibility of radiomic studies remains a non-trivial challenge for prioritizing MRI biomarkers. The lack of standardized definitions of radiomics features has resulted in studies that are difficult to reproduce and validate [184]. Additionally, inadequate reporting by these studies has impeded reproducibility further. As a result, the Image Biomarker Standardization Initiative (IBSI) was established to address these challenges by fulfilling the following objectives: "(a) establish nomenclature and definitions for commonly used radiomics features; (b) establish a general radiomics image processing scheme for calculation of features from imaging; (c) provide data sets and associated reference values for verification and calibration of software implementations for image processing and feature computation; and (d) provide a set of reporting guidelines for studies involving radiomic analyses" [184]. Additionally, the methodologic quality of radiomic studies to produce stable features that can be linked to cancer biology can be evaluated using the radiomics quality scoring (RQS) [185].
To address the problem of inadequate reporting, the American College of Radiology (ACR) endorsed the Reporting and Data Systems (RADS) framework, which provides standardized imaging terminology and report organization to document the findings of imaging procedures [2,4]. Additionally, modern picture archiving and communication systems (PACS) [186] connect digital modalities via the digital imaging and communications in medicine (DICOM) protocol [187]. The DICOM header usually provides the information needed to interpret the body part examined and patient attributes such as position. The type of reported information can be adjusted in the machine settings before performing the imaging procedure.
MRI Biomarkers for Prostate Cancer
Prostate cancer (PCa) is one of the most prevalent cancers occurring in men. Early detection of PCa is essential for successful treatment and increased survival [188]. Lately, magnetic resonance imaging (MRI) has gained a progressively significant role in the diagnosis and early detection of PCa [189]. Multiparametric MRI (mpMRI) has proven to be a valuable procedure for the detection, localization, risk stratification, and staging of clinically significant prostate cancer (csPCa). Multiparametric MRI combines the morphological evaluation of T2-weighted imaging (T2WI) with diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) perfusion imaging, and spectroscopic imaging (MRSI) to better assess prostate morphology and identify tumor growth [190][191][192][193][194][195].
In addition, mpMRI-targeted biopsies have been shown to provide more accurate diagnosis of csPCa and to reduce the number of repeat biopsies needed for correct diagnosis relative to transrectal ultrasound-guided biopsies [196]. However, mpMRI still suffers from limited inter-reader agreement and variability in diagnostic accuracy depending on the specialist's experience [29,190,[197][198][199].
Numerous studies in the literature describe the potential role of employing MRI and ML for the analysis of prostate gland tissues and cellular densities to detect PCa. For example, McGarry et al. [200] established a model providing a stable fit for ML-based MRI detection of areas of increased epithelium and diminished lumen density indicative of high-grade PCa.
In addition, volumetric region-of-interest (ROI) analysis of index lesions on mpMRI [201], based on data available from T2-weighted, DWI, and DCE images in combination with a support vector machine (SVM) ML classifier, has been shown to significantly increase the diagnostic performance of PI-RADS v2 in clinically significant prostate cancer.
Another useful application of ML MRI has been reported for the accurate distinction of stromal benign prostatic hyperplasia from PCa in the transition zone, a challenging diagnosis particularly in the presence of small lesions. Using ML-based statistical analysis of quantitative features such as ADC maps, shape, and image texture, high diagnostic accuracy in differentiating small neoplastic lesions from benign ones was demonstrated [202].
The implications and feasibility of multiparametric machine learning and radiomics have been frequently discussed in the literature for the identification and segmentation of clinically significant prostate cancer [203]. A deep learning-based computer-aided diagnostic approach for the identification and segmentation of clinically significant prostate cancer in low-risk patients was recently reported by Arif et al. [204]. The average sensitivity was 82-92% at an average specificity of 43-76%, with an area under the curve (AUC) of 0.65 to 0.89 for lesion volumes ranging from >0.03 to >0.5 cc. In addition, supervised ML classifiers have been used to successfully predict clinically significant prostate cancer utilizing a group of quantitative image features and comparing them with conventional PI-RADS v2 assessment scores [205].
MRI Biomarkers for Brain Tumors
Brain tumors are graded as benign (grades I and II) or malignant (grades III and IV). Non-progressive (benign) tumors originate in the brain but grow slowly and tend not to metastasize to other parts of the body, while malignant tumors grow rapidly with poor differentiation. They may originate in the brain and metastasize to other organs (primary tumors) or arise elsewhere in the body and migrate to the brain (secondary tumors) [206,207].
Magnetic resonance imaging (MRI) is a universal method for the differential diagnosis of brain tumors. However, imaging with MRI is susceptible to human subjectivity, and early brain-tumor detection usually depends on the expertise of the radiologist [208]; thus, accurate diagnosis requires additional medical procedures such as brain biopsy. Unfortunately, biopsy of a brain tumor requires major brain surgery that puts patients at risk. The advancement of new technologies such as machine learning has had a substantial impact on the use of MRI as a diagnostic tool for brain tumors. In addition, imaging biomarkers are routinely used for prognosis and for following up on treatment approaches for brain tumors.
Cheng et al. developed databases to classify tumor types using an augmented tumor region of interest, image dilatation, and ring-form partition. Intensity histograms and gray-level co-occurrence matrices were used to extract features, achieving an accuracy of 91.28% [209]. Additionally, the convolutional neural network (CNN) has brought enormous improvement to the field of image processing, with particular impact on the segmentation and classification of brain tumors. Brain tumor segmentation methods can be generally classified into three groups: those based on traditional image algorithms, on machine learning, and on deep learning. CNN-based segmentation is widely used for lung nodules, retinal structures, liver cancer, and gliomas [210]. Milica et al. [211] recently reported a new CNN architecture for brain tumor identification, with good generalization capability and good execution speed, that was tested on T1-weighted contrast-enhanced magnetic resonance images.
The use of machine learning and radiomics has been suggested for various applications in the imaging and diagnosis of meningiomas, with promising outcomes [212]. Differentiating between meningeal-based and intra-axial lesions using MRI can be challenging in some cases. Banzato et al. [213] reported the use of a CNN to extract and analyze complex sets of data to discriminate between meningiomas and gliomas in pre- and post-contrast T1 images and T2 images. In their study, an image classifier combining CNN and MRI was developed to distinguish between meningioma and glioma lesions with an accuracy of 94% (MCC = 0.88) on post-contrast T1 images, 91% (MCC = 0.81) on pre-contrast T1 images, and 90% (MCC = 0.8) on T2 images.
Assigning and Interpreting Proper Imaging Biomarkers to Support Decision-Making
Computerized quantitative evaluations are convenient to implement in machine learning systems. Therefore, the limit values that determine the probability of disease occurrence versus no disease should be established [214]. Such established values enable the use of imaging as a computational biopsy. Biomarker selection depends on the treatment protocol and disease response. For non-selective treatments, tissue necrosis is the relevant endpoint, so biomarkers that evaluate increased free water (CT Hounsfield units) or decreased cell density (ADC) are beneficial. For selective treatments such as antiangiogenic therapy, however, perfusion measurements (CT, MRI, and US) are considered the appropriate biomarkers [215]. Both non-selective and selective agents disrupt cancer metabolism; therefore, in glycolytic cancers, fluorodeoxyglucose (FDG) assessments are reliable [216]. The deformity of tissues after surgery, changes in normal tissues after radiotherapy [217], and the decrease in quantitative differences between metastatic and non-metastatic tissue [218] should also be considered.
Progress in Quantitative Imaging Biomarkers as Decision-Making Tools in Clinical Practice
Biomarkers should be reliable and reproducible, in addition to being biologically, clinically, and cost effective [18]. While reproducibility is a necessity, it is not frequently observed in practice [219], because incorporating fundamental research into clinical studies is an arduous task for both patients and investigators. Technical validation determines whether a biomarker can be reproduced in different places on diverse platforms. Technical validation may take place alongside biological validation, especially for biological changes that modify imaging biomarker readouts, which endorses the values assigned to biomarkers. Correlation between clinical and technical validation precedes the assignment of a biomarker to a specific use. The implementation of an imaging biomarker in clinical diagnosis is assessed as a parameter in medical management, much as circulating tumor DNA is specific for cancer identification. The incorporation of imaging biomarkers alongside tissue and liquid biomarkers replaces older, simpler protocols. The cost of a biomarker is significant in economically limited medical systems [220], and imaging protocols are expensive compared with liquid- and tissue-derived biomarkers. Health-economic assessment is therefore beneficial when incorporating a new biomarker into clinical diagnosis. The use of imaging biomarkers is a key tool in supporting medical diagnosis protocols.
The Challenges for Prioritizing MRI Biomarkers
Despite major advancements in big data analysis and machine learning methods, the development of quantitative imaging biomarkers that can be exploited effectively in medical decisions is hampered by major challenges related to data availability, variability, and lack of reliability [3]. Data availability is impacted by limitations related to data sharing, data ownership, and patient privacy [221]. Furthermore, the absence of international standard protocols, along with quality assurance (QA) and quality control (QC) procedures, contributes to inadequate quantification and interpretation of MRI biomarkers [4,18,222]. This prevents physicians from extracting the clues required to interpret disease status [223] or to assess the efficacy of treatment protocols [22]. Additionally, it limits our capability to merge MRI biomarkers that have been extracted from different imaging methods [1].
Conclusions
In this article, we have provided an overview of ML and MRI data. We discussed the nature of MRI data, local and global features, and the ML methods most frequently used to build models that prioritize MRI biomarkers. These biomarkers have the potential to revolutionize cancer care, providing a platform for personalized, high-quality, and cost-effective health care for oncology patients. The application of ML methods to the analysis of MRI data has led to the development of disease-specific biomarkers for many cancers, including hematological, lymphatic, and solid tumors. Neural networks, contrastive learning, and deep learning are becoming the leading methods for prioritizing MRI biomarkers. The performance of MRI biomarkers now exceeds 80% for most methods and cancer types, and performance for disease classification (i.e., malignant vs. benign) exceeds 90% for deep learning, neural networks, and SVMs. Advances in deep learning and AI are expected to revolutionize MRI biomarkers and increase their utility for preclinical and clinical applications in oncology.
Conflicts of Interest:
The authors declare no conflict of interest.
Phosphorylation of Tyr-398 and Tyr-402 in Occludin Prevents Its Interaction with ZO-1 and Destabilizes Its Assembly at the Tight Junctions*
Occludin is phosphorylated on tyrosine residues during the oxidative stress-induced disruption of tight junction, and in vitro phosphorylation of occludin by c-Src attenuates its binding to ZO-1. In the present study mass spectrometric analyses of the C-terminal domain of occludin identified Tyr-379 and Tyr-383 in chicken occludin as the phosphorylation sites, which are located in a highly conserved sequence of occludin, YETDYTT; Tyr-398 and Tyr-402 are the corresponding residues in human occludin. Deletion of the YETDYTT motif abolished the c-Src-mediated phosphorylation of occludin and the regulation of ZO-1 binding. Y398A and Y402A mutations in human occludin also abolished the c-Src-mediated phosphorylation and regulation of ZO-1 binding. Y398D/Y402D mutation resulted in a dramatic reduction in ZO-1 binding even in the absence of c-Src. Similar to wild type occludin, its Y398A/Y402A mutant was localized at the plasma membrane and cell-cell contact sites in Rat-1 cells. However, Y398D/Y402D mutants of occludin failed to localize at the cell-cell contacts. Calcium-induced reassembly of Y398D/Y402D mutant occludin in Madin-Darby canine kidney cells was significantly delayed compared with that of wild type occludin or its Y398A/Y402A mutant. Furthermore, expression of the Y398D/Y402D mutant of occludin sensitized MDCK cells to hydrogen peroxide-induced barrier disruption. This study reveals a unique motif in the occludin sequence that is involved in the regulation of ZO-1 binding by reversible phosphorylation of specific Tyr residues.
Epithelial tight junctions (TJs) form a selective barrier to the diffusion of toxins, allergens, and pathogens from the external environment into the tissues in the gastrointestinal tract, lung, liver, and kidney (1). Disruption of TJs is associated with gastrointestinal diseases such as inflammatory bowel disease, celiac disease, infectious enterocolitis, and colon cancer (2)(3)(4) as well as with diseases of lung and kidney (5,6). Numerous inflammatory mediators such as tumor necrosis factor α, interferon γ, and oxidative stress (7)(8)(9)(10)(11)(12) are known to disrupt the epithelial TJs and the barrier function. Several studies have indicated that hydrogen peroxide disrupts the TJs in intestinal epithelium by a tyrosine kinase-dependent mechanism (11,12).
Four types of integral proteins, occludin, claudins, junctional adhesion molecules, and tricellulin are associated with TJs. Occludin, claudins, and tricellulin are tetraspan proteins, and their extracellular domains interact with homotypic domains of the adjacent cells (1,2,13). The intracellular domains of these proteins interact with a variety of soluble proteins such as ZO-1, ZO-2, ZO-3, 7H6, cingulin, and symplekin (14 -23); this protein complex interacts with the perijunctional actomyosin ring. The interactions among TJ proteins are essential for the assembly and the maintenance of TJs. Therefore, regulation of the interactions among TJ proteins may regulate the TJ integrity. A significant body of evidence indicates that numerous signaling molecules are associated with the TJs. Protein kinases and protein phosphatases such as protein kinase C (PKC), PKC/ (24), c-Src (25), c-Yes (26,27), mitogen-activated protein kinase (28), PP2A, and PP1 (29) interact with TJs, indicating that TJs are dynamically regulated by intracellular signal transduction involving protein phosphorylation. Additionally, other signaling molecules such as calcium (30), phosphatidylinositol 3-kinase (31), Rho (32), and Rac (33) are involved in the regulation of TJs.
Occludin, a ~65-kDa protein, has been well characterized to be assembled into the TJs. Although occludin knock-out mice showed the formation of intact TJs in different epithelia (34), numerous studies have emphasized that it plays an important role in the regulation of TJ integrity. Occludin spans the membrane four times to form two extracellular loops and one intracellular loop, and the N-terminal and C-terminal domains hang into the intracellular compartment (35)(36)(37). In epithelium with intact TJs, occludin is highly phosphorylated on Ser and Thr residues (38), whereas Tyr phosphorylation is undetectable. However, the disruption of TJs in Caco-2 cell monolayers by oxidative stress and acetaldehyde leads to Tyr phosphorylation of occludin; the tyrosine kinase inhibitors attenuate the disruption of TJs (39,40). Furthermore, a previous in vitro study demonstrated that Tyr phosphorylation of the C-terminal domain of occludin leads to the loss of its interaction with ZO-1 and ZO-3 (25).
In the present study we identified the Tyr residues in occludin that are phosphorylated by c-Src and determined their role in regulated interaction between occludin and ZO-1 and its assembly into the TJs. Results show that 1) Tyr-379 and Tyr-383 in chicken occludin and Tyr-398 and Tyr-402 in human occludin are the exclusive sites of phosphorylation by c-Src, and these Tyr residues are located in a highly conserved sequence of occludin, YETDYTT, 2) deletion of YETDYTT or point mutation of Tyr-398 and Tyr-402 in human occludin attenuates the phosphorylation-dependent regulation of ZO-1 binding, 3) Y398D/Y402D mutation of human occludin leads to loss of ZO-1 binding and prevents its translocation to the plasma membrane and cell-cell contact sites in Rat-1 cells, 4) Y398D/Y402D mutation of occludin delays its assembly into the intercellular junctions during the calcium-induced assembly of TJs, and 5) expression of Y398D/Y402D mutant occludin sensitizes cell monolayers for hydrogen peroxide-induced disruption of barrier function.
Chemicals
Cell culture reagents and supplies, G418, Lipofectamine-R, and Plus reagent were purchased from Invitrogen. FuGENE was purchased from Roche Diagnostics, and glutathione (GSH), leupeptin, aprotinin, pepstatin A, phenylmethylsulfonyl fluoride, protease inhibitor mixture, GSH-agarose, Triton X-100, and vanadate were purchased from Sigma. The QuikChange XL site-directed mutagenesis kit was from Stratagene, La Jolla, CA. Active c-Src (recombinant protein) was purchased from Upstate Biotechnology, Inc. (Lake Placid, NY). All other chemicals were of analytical grade and were purchased either from Sigma or Fisher.
Plasmids and Recombinant Proteins
cDNA for the C-terminal tail of chicken occludin (amino acids 358 -504) was a kind gift from Dr. James Anderson (University of North Carolina, Chapel Hill, NC); this was used to amplify and insert into pGEX2T vector. The C-terminal tail of human occludin 378 -522 was amplified from a full-length human occludin in pEGFP and then shuttled into pGEX2T vector. Site directed mutations were induced in both chicken and human occludin (C-terminal domain as well as full-length occludin). The sequences of the primers used for this are provided in Table 1. The mutations were confirmed by sequencing. pGEX2T constructs containing wild type occludin C-terminal domain (GST-cOcl-C and GST-hOcl-C) were transformed into BL21DE3 cells, and recombinant proteins were purified. The full-length human occludin (wild type and mutants) in pEGFP vector was used for transfection into Rat-1 and MDCK cells.
Mass Spectrometric Analysis
Trypsin Digestion-GST-cOcl-C WT , tyrosine-phosphorylated by c-Src, was suspended in 10% acetonitrile in 10 mM ammonium bicarbonate, pH 8.5, and incubated at 37°C overnight with TPCK-treated trypsin (enzyme to substrate molar ratio of 1:10). The digest was passed through 0.45-μm filters. Clear supernatant was lyophilized to dryness. Air-dried samples were equilibrated in an aqueous solution containing 0.1% trifluoroacetic acid and desalted by passing through C-18 ZipTip (Millipore, Bedford, MA) using the manufacturer's protocol. Peptides extracted from the ZipTip were subjected to phosphopeptide extraction.
Extraction of Phosphopeptides-The phosphopeptides from trypsin digestion of Tyr-phosphorylated GST-cOcl-C were isolated using a phosphopeptide isolation kit (Pierce). Phosphopeptides were bound to immobilized gallium matrix at acidic pH (<3.5) and eluted in 50 mM ammonium bicarbonate at pH 10. Phosphopeptide extracts were then subjected to MALDI and LC/MS/MS analysis.
MALDI-TOF-Phosphopeptide extracts were dried under vacuum, reconstituted in 2 μl of matrix (α-cyano-4-hydroxycinnamic acid), and spotted for crystallization. Crystals were analyzed for mass by MALDI-TOF using a Voyager Biospectrometry work station DE (delayed extraction technology) (Perseptive Biosystems Inc., Framingham, MA) and Data Explorer (Perseptive Biosystems). A Perseptive Biosystems MALDI time-of-flight instrument incorporating a nitrogen laser (Laser Science, Newton, MA) was used to obtain MALDI mass spectra. Samples solubilized in 85% acetic acid and mixed (1:3 v/v) with α-cyano-4-hydroxycinnamic acid matrix were spotted in 1-μl aliquots and air-dried. Typically, 100 -250 laser shots were used to obtain one mass spectrum. Mass scale was calibrated with peptide internal standards.
LC/MS/MS Analysis-Sequence analysis of tryptic peptides was performed by injecting 3 μl of the ZipTip-purified sample onto a capillary C-18 LC column on-line with a Finnigan LCQ DECA (Thermoquest, San Jose, CA) ion-trap mass analyzer that is equipped with a nanoelectrospray ionization source. The capillary C-18 column was prepared in-house using New Objective Pico Frit (360-μm outer diameter, 75-μm inner diameter, 15-μm tip, 10.4-cm length) and Magic C18AQ packing material (5-μm beads, 200 Å pores). The peptides were fractionated using 0.1% formic acid in water as solvent A and 90% acetonitrile as solvent B. The acquired spectra were visualized using Qual-browser in the X-Calibur software suite. Raw
TABLE 1. Sequences of primers used to generate various mutations. Sequences in bold substitute tyrosine residues in the wild type occludin.
data thus obtained were analyzed against a protein database generated from SwissProt using the Sequest software suite (Sequest Technologies Inc., Lisle, IL).
Cell Culture and Transfection
Caco-2, Rat-1, and MDCK cells were cultured in Dulbecco's modified Eagle's medium from Invitrogen and supplemented with 10% fetal bovine serum, 1 mM sodium pyruvate, and 2 mM glutamine as per the ATCC guidelines. MDCK cells were seeded on 6-well plates a day before transfection to achieve 50 -60% confluency. The cells were transfected using 1 ml of antibiotic-free Dulbecco's modified Eagle's medium containing 10% fetal bovine serum, 1 μg of plasmid DNA (empty vector pEGFP or vector carrying hOcl WT or its mutants), 1 μl of Plus reagent, and 3 μl of Lipofectamine-R for each well. After 20 h, the cell monolayers were trypsinized and seeded onto 100-mm plates. The cells were subjected to G418 selection (0.7 mg/ml) for 2 weeks. Resistant cells were sorted to obtain only GFP-expressing cells by fluorescence-activated cell sorter. Cells were maintained in the medium that was supplemented with 0.3 mg/ml G418. Rat-1 cells were transfected using FuGENE reagent as per the manufacturer's protocol, and the GFP-positive cells were sorted by fluorescence-activated cell sorter. Stably transfected cells were selected using G418 as described above.
Immunofluorescence Microscopy
Cell monolayers (12-mm transwells) were washed with phosphate-buffered saline and fixed in acetone-methanol (1:1) at 0°C for 5 min. Cell monolayers were blocked in 3% nonfat milk in TBST (20 mM Tris, pH 8.0, containing 150 mM NaCl and 0.5% Tween 20) and incubated for 1 h with primary antibodies (rabbit polyclonal anti-ZO-1 and mouse monoclonal anti-GFP) followed by incubation for 1 h with secondary antibodies (Cy3-conjugated anti-rabbit IgG and AlexaFluor 488-conjugated anti-mouse IgG). The fluorescence was visualized using a Zeiss LSM 5 laser scanning confocal microscope, and images from Z-series sections (1 μm) were collected by using Zeiss LSM 5 Pascal Confocal Microscopy Software (Release 3.2). Images were stacked using the software, Image J (NIH), and processed by Adobe Photoshop (Adobe Systems Inc., San Jose, CA).
Occludin Phosphorylation in Vitro
Recombinant GST-cOcl-C WT or GST-hOcl-C WT (5 μg) was incubated with 500 ng of active c-Src in 250 μl of kinase buffer (50 mM Hepes, pH 7.4, 1 mM EDTA, 0.2% β-mercaptoethanol, 3 mM MgCl 2 ) containing 100 μM ATP at 30°C for 3 h on a shaking incubator. Control reactions were done in the absence of ATP.
GST Pulldown Assay
To determine the interaction of occludin with ZO-1 and ZO-3, GST-hOcl-C or GST-cOcl-C (2.5-10 μg) was incubated with Caco-2 whole cell extract made in phosphate-buffered saline containing 0.2% Triton X-100, 1 mM sodium vanadate, and 10 mM sodium fluoride for 16 h at 4°C on an inverter. GST-occludin-C (GST-conjugated C-terminal tail of occludin) was pulled down with 20 μl of 50% GSH-agarose slurry at 4°C for 1 h. The amounts of ZO-1 and ZO-3 bound to GSH-agarose were determined by immunoblot analysis. Nonspecific binding was determined by carrying out the binding with GST.
Immunoblot and Densitometric Analysis
Proteins were separated by 7% SDS-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membranes. Membranes were blotted for ZO-1, ZO-3, and p-Tyr by using specific antibodies in combination with HRP-conjugated anti-mouse IgG or HRP-conjugated anti-rabbit IgG antibodies. HRP-conjugated anti-GST antibody was used for immunoblot analysis of GST or GST-occludin. The blot was developed using the ECL chemiluminescence method (Amersham Biosciences). Quantitation was performed by densitometric analysis of specific bands on immunoblots by using the software Image J.
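The normalization used throughout the densitometric analyses (the ZO-1 band density of the control lane designated 100, all other bands expressed as a percentage of it, as in the figure legends) can be sketched as follows. The band densities below are hypothetical stand-ins for Image J readouts, not measured values.

```python
# Sketch of the densitometric normalization used in the figure legends:
# the ZO-1 band density for the WT sample incubated without ATP is set
# to 100, and all other bands are expressed as a percentage of it.
# Densities are hypothetical Image J readouts, not measured data.

def normalize_to_control(densities, control_key):
    """Express each band density as a percent of the control band."""
    control = densities[control_key]
    return {key: 100.0 * value / control for key, value in densities.items()}

band_density = {
    "WT -ATP": 1840.0,      # control lane: designated 100
    "WT +ATP": 920.0,
    "Y379F +ATP": 1250.0,
    "Y383F +ATP": 1790.0,
}

normalized = normalize_to_control(band_density, "WT -ATP")
for sample, percent in normalized.items():
    print(f"{sample}: {percent:.0f}% of control")
```

Normalizing within each experiment in this way allows band intensities from independent blots to be averaged and compared.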
Hydrogen Peroxide Treatment and Paracellular Permeability
MDCK cell monolayers that stably express GFP-hOcl WT, GFP-hOcl Y398A/Y402A, or GFP-hOcl Y398D/Y402D were exposed to varying concentrations (20-2500 µM) of hydrogen peroxide for 2 h, and paracellular permeability was evaluated by measuring the unidirectional flux of inulin as described before (11).
TJ Assembly by Calcium Switch
MDCK cell monolayers that stably express GFP-hOcl WT , GFP-hOcl Y398A/Y402A , or GFP-hOcl Y398D/Y402D were incubated overnight with low calcium medium followed by calcium replacement as described before (29). TJ assembly was evaluated by measuring transepithelial electrical resistance, inulin permeability, and confocal microscopy.
Immunoprecipitation
GFP was immunoprecipitated from cells under native or denatured conditions as described before (29). Anti-GFP immunocomplexes at native conditions were immunoblotted for ZO-1, whereas complexes under denatured conditions were immunoblotted for p-Tyr.
Statistics
Comparison between two groups was made by Student's t tests for grouped data. Significance in all tests was set at 95% or greater confidence level.
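The grouped-data comparison described above can be sketched as below. Student's t test with pooled variance is implemented from the standard library; the flux values are hypothetical stand-ins, and the 95% confidence criterion corresponds to a two-tailed critical value of 2.776 for four degrees of freedom (n = 3 per group).

```python
# Sketch of the two-group comparison (Student's t test for grouped data,
# significance at the 95% confidence level). Flux values are hypothetical
# stand-ins for inulin permeability measurements (n = 3 per group).
from statistics import mean, stdev

def students_t(group_a, group_b):
    """Two-sample t statistic with pooled variance (equal-variance Student's t)."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

control = [0.025, 0.020, 0.030]
treated = [0.070, 0.065, 0.075]

t = students_t(treated, control)
t_crit = 2.776  # two-tailed critical value for df = 4 at the 95% level
print(f"t = {t:.2f}, significant: {abs(t) > t_crit}")
```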
RESULTS

Tyr-379 and Tyr-383 in Chicken Occludin Are the Sites of Phosphorylation by c-Src-A previous study showed that Tyr phosphorylation of the occludin C-terminal domain by c-Src resulted in the loss of its interaction with ZO-1 (25). In the present study we identified the phosphorylation sites in the occludin C-terminal domain by mass spectrometric analysis. A GST-fused C-terminal region (150 amino acids) of chicken occludin (GST-cOcl-C) was prepared and phosphorylated by incubation with c-Src and ATP. Tyr-phosphorylated GST-cOcl-C was digested with trypsin. Generation of five different tryptic peptides containing Tyr residues was predicted (Fig. 1A); the mass of these peptides was expected to increase by 80 Da with phosphorylation of each Tyr residue. MALDI mass spectrometric analysis of the phosphopeptide extracts from the tryptic digest detected several phosphopeptides with masses slightly deviating from the predicted values (Fig. 1B). Three different Tyr-phosphorylated peptides were identified (Fig. 1D). All three peptides were derivatives of the tryptic peptide P1 with single or double phosphorylation of Tyr residues. These results establish that the two Tyr residues in the occludin C-terminal region corresponding to the sequence of P1 were singly or doubly phosphorylated. These two residues correspond to Tyr-379 and Tyr-383 in chicken occludin.
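The mass-shift reasoning above (each phospho-Tyr adds roughly 80 Da, so a doubly phosphorylated peptide appears roughly 160 Da above its predicted mass) can be sketched as follows; the predicted peptide mass used here is a hypothetical stand-in, not the actual P1 mass.

```python
# Sketch of the mass-shift reasoning: each phosphorylation adds ~80 Da
# (HPO3, monoisotopic 79.966 Da), so a doubly phosphorylated tryptic
# peptide appears ~160 Da above its predicted mass. The peptide mass
# below is a hypothetical stand-in for the predicted P1 fragment.

PHOSPHO_DA = 79.966  # monoisotopic mass added per phosphate group

def phospho_count(predicted_mass, observed_mass, tolerance=0.5):
    """Infer how many phosphate groups explain the observed mass shift."""
    shift = observed_mass - predicted_mass
    n = round(shift / PHOSPHO_DA)
    if n >= 0 and abs(shift - n * PHOSPHO_DA) <= tolerance:
        return n
    return None  # shift not explained by whole phosphate additions

predicted = 2755.3            # hypothetical predicted peptide mass (Da)
observed = predicted + 2 * PHOSPHO_DA

print(phospho_count(predicted, observed))  # 2 -> doubly phosphorylated
```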
Sequence alignment of occludin from different species (Fig. 2A) demonstrated that Tyr-379 and Tyr-383 are located in a highly conserved sequence of occludin (YETDYTT) and that Tyr-398 and Tyr-402 are the corresponding Tyr residues in human occludin. Therefore, we introduced mutations into the chicken and human occludin C-terminal regions (cOcl-C and hOcl-C). The YETDYTT sequence was deleted, or the tyrosine residues in this region were subjected to point mutation, in cOcl-C and hOcl-C (Fig. 2B), and the constructs were inserted into the pGEX2T vector to generate GST-fused mutant proteins. Tyr-398 and Tyr-402 in full-length human occludin in the pEGFP vector were mutated to phenylalanine or aspartic acid and expressed in Rat-1 or MDCK cells as GFP fusion proteins.
Y379F and Y383F Mutation of Chicken Occludin Attenuates Its Phosphorylation and Regulation of ZO-1 Binding-GST pulldown assay for ZO-1 binding showed that GST-cOcl-C WT, GST-cOcl-C Y379F, GST-cOcl-C Y383F, and GST-cOcl-C Y379F/Y383F bind ZO-1 and ZO-3 in a dose-dependent manner (Fig. 4A). Incubation with c-Src in the presence of ATP showed partial phosphorylation of the single mutants, GST-cOcl-C Y379F and GST-cOcl-C Y383F, whereas phosphorylation was undetectable in the double mutant, GST-cOcl-C Y379F/Y383F (Fig. 4B). ZO-1 binding was not significantly different among the unphosphorylated occludins (Fig. 4, B and C), except that ZO-1 binding of GST-cOcl-C Y379F was slightly greater than that of GST-cOcl-C WT. Incubation in the presence of ATP and c-Src resulted in reduced ZO-1 binding by GST-cOcl-C WT and GST-cOcl-C Y379F (Fig. 4, B and C). However, incubation with c-Src in the presence of ATP did not alter ZO-1 binding by GST-cOcl-C Y383F and GST-cOcl-C Y379F/Y383F (Fig. 4, B and C).
Mutation of Tyr-398 and Tyr-402 in Human Occludin Prevents Phosphorylation and Alters ZO-1 Binding-The sequence analysis indicated that Tyr-398 and Tyr-402 are the residues in human occludin that correspond to Tyr-379 and Tyr-383 in chicken occludin. Therefore, we mutated Tyr-398 and Tyr-402 in hOcl-C. Similar to GST-cOcl-C WT, incubation with c-Src in the presence of ATP induced Tyr phosphorylation of GST-hOcl-C WT, whereas phosphorylation was undetectable in the GST-hOcl-C Y398F/Y402F and GST-hOcl-C Y398D/Y402D mutants (Fig. 6A). GST pulldown assay showed that GST-hOcl-C WT binds to ZO-1, and this binding was reduced by incubation with c-Src in the presence of ATP. GST-hOcl-C Y398D/Y402D showed only a trace amount of ZO-1 binding. ZO-1 binding to GST-hOcl-C Y398F/Y402F was lower than that to GST-hOcl-C WT; however, this binding was not further reduced by incubation with c-Src in the presence of ATP (Fig. 6, A and C). Unlike the reduced binding to GST-hOcl-C Y398F/Y402F, ZO-1 binding to GST-hOcl-C Y398A/Y402A was similar to that of GST-hOcl-C WT (Fig. 6B). Once again, GST-hOcl-C Y398A/Y402A showed no Tyr phosphorylation or regulation of ZO-1 binding when incubated with c-Src in the presence of ATP (Fig. 6B). Densitometric analysis (Fig. 6C) confirmed that ZO-1 binding to GST-hOcl-C Y398F/Y402F and GST-hOcl-C Y398D/Y402D is significantly lower than that of GST-hOcl-C WT, whereas the binding to GST-hOcl-C Y398A/Y402A was similar to that of GST-hOcl-C WT. Furthermore, incubation with c-Src in the presence of ATP significantly reduced ZO-1 binding to GST-hOcl-C WT but not to GST-hOcl-C Y398F/Y402F, GST-hOcl-C Y398A/Y402A, or GST-hOcl-C Y398D/Y402D when compared with the corresponding ZO-1 binding in the absence of ATP.
Y398D and Y402D Mutation in Human Occludin Prevents Its Localization at Plasma Membrane and Cell-Cell Contact Sites in Rat-1 Cells-The regulation of ZO-1 binding by Tyr-398 and Tyr-402 raised the question of whether phosphorylation of Tyr-398 and/or Tyr-402 of occludin affects its localization at the TJs. To determine the effect of mutation of Tyr-398 and Tyr-402 on the distribution of occludin at the plasma membrane and the intercellular junctions, we transfected Rat-1 cells (occludin null) with GFP-hOcl WT, GFP-hOcl Y398A/Y402A, and GFP-hOcl Y398D/Y402D and visualized the cells by confocal microscopy. GFP-hOcl WT and GFP-hOcl Y398A/Y402A were localized to both the plasma membrane and the intracellular compartment (Fig. 7). A greater level of occludin was found at the cell-cell contact sites, which was associated with the redistribution of ZO-1 to the cell-cell contacts and the plasma membrane. In contrast, GFP-hOcl Y398D/Y402D was localized exclusively in the intracellular compartment with no trace of distribution at the plasma membrane or cell-cell contact sites (Fig. 7).
Y398D/Y402D Mutation of Occludin Delays Its Assembly at the TJs and Sensitizes MDCK Cells for TJ Disruption by Hydrogen Peroxide-Unlike Rat-1 cells, in MDCK cell monolayers GFP-hOcl Y398D/Y402D appeared at the intercellular junctions. However, during the calcium switch-induced assembly of TJs, GFP-hOcl Y398D/Y402D localized predominantly in the intracellular compartment, whereas GFP-hOcl WT and GFP-hOcl Y398A/Y402A appeared at the intercellular junctions 1 h after calcium replacement (Fig. 8A). The inulin permeability in cell monolayers that express GFP-hOcl Y398D/Y402D was significantly greater than that in cell monolayers expressing GFP-hOcl WT or GFP-hOcl Y398A/Y402A (Fig. 8B). As reported before (11), hydrogen peroxide induced a dose-dependent increase in inulin permeability in MDCK cell monolayers that stably express GFP-hOcl WT (Fig. 9A). The hydrogen peroxide-induced increase in inulin permeability was significantly lower in cell monolayers that express GFP-hOcl Y398A/Y402A, whereas it was significantly higher in cells expressing GFP-hOcl Y398D/Y402D. Incubation of cell monolayers that express GFP-hOcl WT, GFP-hOcl Y398A/Y402A, and GFP-hOcl Y398D/Y402D with 500 µM hydrogen peroxide for 1 h increased inulin permeability (% flux/h/cm²) from 0.025 ± 0.005 to 0.035 ± 0.01, from 0.02 ± 0.01 to 0.025 ± 0.006, and from 0.07 ± 0.007 to 0.25 ± 0.03, respectively. These observations were confirmed by analyzing the junctional distribution of GFP and ZO-1 after hydrogen peroxide treatment. Hydrogen peroxide induced a slight redistribution of GFP in cells expressing GFP-hOcl WT, whereas the hydrogen peroxide-induced redistribution of GFP from the junctions was much more dramatic in cells that express GFP-hOcl Y398D/Y402D (Fig. 9B). ZO-1 distribution in hydrogen peroxide-treated cell monolayers paralleled the distribution of GFP.

FIGURE 3. Deletion mutation of occludin prevents phosphorylation and attenuates regulation of ZO-1 binding. A, varying amounts of GST-cOcl-C WT or GST-cOcl-C (Δ378-385) were analyzed for ZO-1 binding by GST pulldown assay using Caco-2 cell extract. GST pulldowns were immunoblotted (IB) for ZO-1, ZO-3, and GST. Binding to GST (5 µg) was performed as a control. The labels p44, p42, and p22 correspond to the molecular weights of GST-cOcl-C WT, GST-cOcl-C (Δ378-385), and GST, respectively. B, densitometric analysis of ZO-1 and ZO-3 binding to 5 µg of wild type (WT) or mutant (Δ378-385) occludin. Values are the mean ± S.E. (n = 3). Asterisks indicate values that are significantly (p < 0.05) different from the corresponding value for the WT group. C, GST-cOcl-C and GST-cOcl-C (Δ378-385) (2.5 µg) were incubated with c-Src in the absence or presence of ATP and analyzed for ZO-1 binding by GST pulldown assay. GST pulldown assays were immunoblotted for ZO-1, p-Tyr, and GST. D, densitometric analysis of ZO-1 bands from three different experiments described in panel B. In each experiment, the ZO-1 band density for GST-cOcl-C WT incubated without ATP was designated 100, and corresponding bands in other groups were normalized as a percentage of that value. Values are the mean ± S.E. (n = 3). The asterisk indicates the value that is significantly (p < 0.05) different from the corresponding value for the -ATP group.
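From the inulin flux values quoted in the text above, the relative sensitivity of the three cell lines can be expressed as fold increases; this short sketch computes them from the mean values only (the S.E. terms are ignored).

```python
# Fold increases in inulin flux after 500 µM hydrogen peroxide, computed
# from the mean values quoted in the text (% flux/h/cm^2).
flux = {
    "GFP-hOcl WT":          (0.025, 0.035),
    "GFP-hOcl Y398A/Y402A": (0.020, 0.025),
    "GFP-hOcl Y398D/Y402D": (0.070, 0.250),
}

fold = {line: after / before for line, (before, after) in flux.items()}
for line, f in fold.items():
    print(f"{line}: {f:.1f}-fold increase")
```

The comparison makes the point numerically: the Y398D/Y402D monolayers start out roughly three times leakier and respond to hydrogen peroxide with a much larger relative increase than the WT or Y398A/Y402A monolayers.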
DISCUSSION
A significant body of evidence suggests that Tyr phosphorylation of TJ proteins may play an important role in the regulation of epithelial TJs. Occludin is highly phosphorylated on Ser and Thr residues in the intact epithelium, and Tyr phosphorylation is undetectable (38). Previous studies, however, demonstrated that occludin undergoes Tyr phosphorylation during the disruption of TJs by hydrogen peroxide (11,39). Furthermore, a recent in vitro study demonstrated that Tyr phosphorylation of the C-terminal region of occludin reduces its ability to interact with ZO-1, a TJ plaque protein (25). In the present study we identified the Tyr phosphorylation sites in the C-terminal region of occludin and demonstrated their role in regulation of ZO-1 binding.
The Tyr phosphorylation sites in the C-terminal region of chicken occludin were determined by mass spectrometric analysis of phospho-occludin. Mass analysis of phosphopeptide extracts from tryptic digests detected the presence of one phosphopeptide corresponding to the predicted peptide fragment of chicken occludin (amino acids 371-393) with phospho-Tyr residues. There are two Tyr residues within this sequence (Tyr-379 and Tyr-383), which were found to be phosphorylated, as the mass of this peptide was 160 daltons greater than the predicted mass value. Another phosphopeptide detected by MALDI showed a molecular mass of 2915.7, which is 156 daltons greater than the predicted mass of the monophosphate peptide fragment. LC/MS/MS analysis determined that this peptide corresponds to the sequence 370-393 of chicken occludin with an extra Arg (Arg-370) at the N terminus compared with the predicted 371-393 fragment.

FIGURE 4. A, GST-cOcl-C WT, GST-cOcl-C Y379F, GST-cOcl-C Y383F, and GST-cOcl-C Y379/383F were analyzed for ZO-1 binding by GST pulldown assay. GST pulldown assays were immunoblotted (IB) for ZO-1, ZO-3, and GST. B, GST-cOcl-C, GST-cOcl-C Y379F, GST-cOcl-C Y383F, and GST-cOcl-C Y379/383F were incubated with c-Src in the absence or presence of ATP. Five µg of phosphorylated and non-phosphorylated occludins were analyzed for ZO-1 binding by GST pulldown assay. GST pulldown assays were immunoblotted for ZO-1, p-Tyr, and GST. Control binding was performed by using 5 µg of GST. C, densitometric analysis of ZO-1 bands from three different experiments described in panel B. In each experiment, the ZO-1 band density for GST-cOcl-C WT incubated without ATP was designated 100, and corresponding bands in other groups were normalized as a percentage of that value. Values are the mean ± S.E. (n = 3). Asterisks indicate values that are significantly (p < 0.05) different from the corresponding value for the -ATP group.
This is possibly caused by a misdigestion by trypsin due to the presence of sequential Arg residues in this region of the occludin sequence. LC/MS/MS analysis also detected two types of 2915.2-dalton peptides; one in which Tyr-379 was phosphorylated and another in which Tyr-383 was phosphorylated. Therefore, the sequences of all three phosphopeptides identified in the study demonstrate that Tyr-379 and Tyr-383 are the phosphorylation sites in chicken occludin; Tyr-398 and Tyr-402 are the corresponding tyrosines in human occludin. These two tyrosines are located in a highly conserved sequence of occludin HYETDYTT. BLAST analysis of this sequence demonstrated that this is a unique motif that is not present in other proteins, including claudins.
Deletion of HYETDYTT (378-385) from chicken occludin abrogated c-Src-induced Tyr phosphorylation of the occludin C-terminal region, confirming the mass spectrometric finding that Tyr-379 and Tyr-383 are the phosphorylation sites in chicken occludin. Point mutation of Tyr-379 or Tyr-383 to phenylalanine resulted in a partial decrease in c-Src-induced Tyr phosphorylation. The decrease in Tyr phosphorylation was greater in Y383F mutants than in Y379F mutants, suggesting that Tyr-383 is a preferred phosphorylation site. Double mutation of Tyr-379 and Tyr-383 abolished c-Src-induced Tyr phosphorylation. Similarly, mutation of Tyr-398 and Tyr-402 in the C-terminal region of human occludin also abrogated c-Src-induced Tyr phosphorylation. Previous studies showed that c-Src plays an important role in hydrogen peroxide-induced disruption of TJs and barrier dysfunction in Caco-2 cell monolayers (11). Expression of inactive c-Src significantly reduced hydrogen peroxide-induced Tyr phosphorylation of occludin and TJ disruption, suggesting that c-Src-induced Tyr phosphorylation may be involved in this process. The present study suggests that hydrogen peroxide may induce phosphorylation of Tyr-379 and Tyr-383 in Caco-2 cells. A previous study demonstrated that Tyr phosphorylation of the C-terminal region of occludin results in the loss of its interaction with ZO-1 (25). ZO-1, a major adaptor protein of TJs, interacts with the C-terminal region of occludin on one hand and with the actin cytoskeleton on the other (12,14).

FIGURE 5. Y379D and Y383D in chicken occludin attenuate its binding to ZO-1. A, GST-cOcl-C WT, GST-cOcl-C Y379D, GST-cOcl-C Y383D, and GST-cOcl-C Y379/383D were analyzed for ZO-1 binding by GST pulldown assay. GST pulldown assays were immunoblotted (IB) for ZO-1 and GST. B, GST-cOcl-C, GST-cOcl-C Y379D, GST-cOcl-C Y383D, and GST-cOcl-C Y379/383D were incubated with c-Src in the absence or presence of ATP. Ten µg each of phosphorylated and non-phosphorylated occludins were analyzed for ZO-1 binding by GST pulldown assay. GST pulldown assays were immunoblotted for ZO-1, p-Tyr, and GST. C, densitometric analysis of ZO-1 bands from three different experiments described as in panel B and selected ZO-3 bands in panel A. In each experiment, the ZO-1 (or ZO-3) band density for GST-cOcl-C incubated without ATP was designated 100, and corresponding bands in other groups were normalized as a percentage of that value. Values are the mean ± S.E. (n = 3). Asterisks indicate values that are significantly (p < 0.05) different from the corresponding value for the -ATP group, and # indicates values that are significantly (p < 0.05) different from the corresponding value for the WT group.

FIGURE 6. Mutation of Tyr-398 and Tyr-402 in human occludin attenuates its phosphorylation and alters regulation of ZO-1 binding. A, GST-hOcl-C WT, GST-hOcl-C Y398F/Y402F, and GST-hOcl-C Y398D/Y402D were incubated with c-Src in the absence or presence of ATP. Five µg of phosphorylated or non-phosphorylated occludin was analyzed for ZO-1 binding by GST pulldown assay. GST pulldown assays were immunoblotted (IB) for ZO-1, p-Tyr, and GST. B, five µg of phosphorylated or non-phosphorylated GST-hOcl-C and GST-hOcl-C Y398A/Y402A were incubated with c-Src in the absence or presence of ATP and analyzed for ZO-1 binding by GST pulldown assay. GST pulldown assays were immunoblotted for ZO-1, p-Tyr, and GST. C, densitometric analysis of ZO-1 bands from three different experiments described in panel B. In each experiment, the ZO-1 band density for GST-cOcl-C incubated without ATP was designated 100, and corresponding bands in other groups were normalized as a percentage of that value. Values are the mean ± S.E. (n = 3). The asterisk indicates the value that is significantly (p < 0.05) different from the corresponding value for the -ATP group. # indicates values that are significantly (p < 0.05) different from the value for non-phosphorylated wild type occludin.
The interaction between the C-terminal region of occludin and ZO-1 is crucial for the assembly and the maintenance of occludin at the TJs (14). Truncation of the C-terminal region of occludin resulted in a loss of its interaction with ZO-1 and prevented its assembly into TJs. In the present study we determined the role of phosphorylation of specific Tyr residues in the C-terminal region of occludin in the regulation of its interaction with ZO-1. GST pulldown assays demonstrated that the C-terminal regions of both chicken and human occludin bind to ZO-1. Deletion of the sequence HYETDYTT in chicken occludin resulted in a significant reduction in binding to ZO-1 at higher concentrations of occludin; however, at low concentrations the deletion mutant bound ZO-1 at a level similar to that of wild type occludin. When the wild type occludin C-terminal domain was incubated with c-Src and ATP, there was a significant reduction in ZO-1 binding; however, this was not observed with the deletion mutant, indicating that phosphorylation of Tyr-379 and Tyr-383 is important in the regulation of the interaction between occludin and ZO-1. This was confirmed by point mutations of Tyr-379 and Tyr-383. The Y379F mutation partially reduced c-Src-induced regulation of ZO-1 binding, whereas the Y383F or Y379F/Y383F mutation completely attenuated c-Src-induced regulation of ZO-1 binding. Similarly, the Y398F/Y402F mutation in human occludin attenuated c-Src-induced regulation of ZO-1 binding. However, the Y398F/Y402F mutation by itself resulted in a significant reduction in ZO-1 binding. On the other hand, the Y398A/Y402A mutation did not affect ZO-1 binding in the absence or presence of c-Src. Therefore, the results of this study demonstrate that Tyr-398 and Tyr-402 are important in the regulation of ZO-1 binding by human occludin.
The crystal structure of the occludin C-terminal region (383-522) has been determined recently (15). This coiled-coil region C-terminal to the Tyr phosphorylation sites binds ZO-1 quite well, indicating that the main function of phosphorylation is not to mediate ZO-1 binding but to regulate it. Similar to ZO-1 binding, ZO-3 binding to GST-hOcl-C was also altered by mutation of Tyr-398 and Tyr-402. Interestingly, ZO-3 binding to the Y398F mutant was greater than its binding to GST-hOcl-C WT. At present, the reason for this enhanced binding is not clear and requires further study.
To determine the effect of Tyr-398 or Tyr-402 phosphorylation on the assembly of occludin into intercellular junctions, we introduced point mutations into GFP-tagged full-length human occludin. GFP-hOcl WT and its mutants, GFP-hOcl Y398A/Y402A and GFP-hOcl Y398D/Y402D, along with the corresponding single mutants, were transfected into Rat-1 fibroblasts or MDCK cells. Confocal immunofluorescence microscopy demonstrated that GFP-hOcl WT and its Y398A/Y402A mutant were localized at the plasma membrane of Rat-1 cells, forming intercellular contact sites. Rat-1 cells express high levels of ZO-1, but it is predominantly localized in the intracellular compartment. However, transfection of GFP-hOcl WT or GFP-hOcl Y398A/Y402A induced a recruitment of ZO-1 to the plasma membrane and the intercellular contact sites. The Y398A/Y402A mutation did not alter the distribution of occludin at the plasma membrane. However, the Y398D, Y402D, and Y398D/Y402D mutants of occludin failed to localize at the plasma membrane or cell-cell contact sites; rather, they were distributed in the intracellular compartment. This indicates that mimicking the phosphorylation of Tyr-398 and Tyr-402 by mutation to aspartic acid results in the loss of occludin's ability to assemble at the plasma membrane and cell-cell contact sites. This may be due to the loss of its ability to bind ZO-1.
A significant portion of GFP-occludin in vesicular structures did appear near the plasma membrane, suggesting that the mutation did not impair its ability to integrate into the plasma membrane; rather, these occludin mutants are internalized due to the lack of interaction with ZO-1 and inability to integrate into the TJ structure. Both single mutants, Y398D and Y402D, similarly failed to localize at the intercellular junctions, indicating that phosphorylation of either Tyr residue is sufficient to alter the ability of occludin to bind ZO-1 and integrate into TJs. In vitro phosphorylation and ZO-1 binding studies indicated that Tyr-383 is more important than Tyr-379 in chicken occludin. This contrasts with the cell data, which show that both Tyr-398 and Tyr-402 in human occludin are important for its localization at the cell-cell contact sites. This may be explained by the lower preference of c-Src for Tyr-379 compared with Tyr-383; in cells, Tyr-379 may be phosphorylated by some other tyrosine kinase.
In MDCK cells, wild type and Y398A/Y402A mutant occludin were localized at the intercellular junctions. However, during the early stages of TJ assembly by calcium replacement, we saw a delay in the organization of GFP-hOcl Y398D/Y402D at the intercellular junctions. One hour after calcium replacement, the Y398D/Y402D mutant occludin was distributed predominantly in the intracellular compartment, whereas WT and Y398A/Y402A mutant occludin were organized at the intercellular junctions. This confirmed that phosphorylation of Tyr-398 and Tyr-402 of occludin does prevent its ability to integrate into the TJs. The organization of the Y398D/Y402D mutant at the intercellular junctions may be mediated by dimerization of the mutant with endogenous occludin. Expression of the Y398D/Y402D mutant of occludin did disrupt the junctional distribution of ZO-1. This dominant negative effect is evident only at higher levels of mutant expression. The level of mutant expression relative to endogenous occludin is difficult to assess; however, ZO-1 redistribution was seen only in cells with a higher level of mutant expression. Therefore, it is not clear whether such an effect can be seen at endogenous levels of phospho-occludin.
Furthermore, MDCK cell monolayers that express the Y398D/Y402D mutant of occludin were dramatically more sensitive to hydrogen peroxide-induced disruption of barrier function, whereas cell monolayers expressing the Y398A/Y402A mutant showed significant resistance to hydrogen peroxide compared with cell monolayers expressing wild type occludin. The present study also shows that hydrogen peroxide failed to induce Tyr phosphorylation of the mutant occludins in MDCK cell monolayers, demonstrating that Tyr-398 and Tyr-402 are the phosphorylation sites in hydrogen peroxide-treated cells. The loss of co-immunoprecipitation of ZO-1 with GFP-hOcl Y398D/Y402D in MDCK cells confirms our observation made in in vitro studies that phosphorylation of Tyr-398 and Tyr-402 does prevent its interaction with ZO-1.

FIGURE 9. Y398D/Y402D mutation of occludin sensitizes MDCK cell monolayers for hydrogen peroxide-induced disruption of TJs. A and B, MDCK cell monolayers that express GFP-hOcl WT, GFP-hOcl Y398A/Y402A, or GFP-hOcl Y398D/Y402D were exposed to varying concentrations of hydrogen peroxide. Inulin permeability was measured 1 h after hydrogen peroxide (A), and fixed cell monolayers were double-stained for GFP and ZO-1 by the immunofluorescence method (B). C, MDCK cell monolayers that express GFP-hOcl WT, GFP-hOcl Y398A/Y402A, or GFP-hOcl Y398D/Y402D were incubated with or without (control) hydrogen peroxide for 1 h. Anti-GFP immunocomplexes prepared under denatured conditions were immunoblotted (IB) for p-Tyr and GFP. The arrow with the p92 label corresponds to GFP-occludin. IP, immunoprecipitates. D, anti-GFP immunocomplexes prepared under non-denaturing conditions from MDCK cells expressing wild type or mutant occludins were immunoblotted for ZO-1 and GFP. Density of ZO-1 bands from three different experiments was measured. The arrow with the p92 label corresponds to GFP-occludin.
As shown in Fig. 9, the GFP-hOcl Y398D/Y402D eventually tends to organize at the intercellular junctions, although not as discretely as GFP-hOcl WT or GFP-hOcl Y398A/Y402A . This is possibly due to oligomerization of GFP-hOcl Y398D/Y402D with the endogenous occludin. The inulin flux in hydrogen peroxide-treated cell monolayers that express GFP-hOcl Y398D/Y402D is significantly higher than that in cells that express GFP-hOcl WT or GFP-hOcl Y398A/Y402A . The mechanism for greater sensitivity to hydrogen peroxide or delayed assembly is not clear at this point. However, we speculate that mixed oligomerization of GFP-Ocl Y398D/Y402D and endogenous occludin facilitates the dissociation of ZO-1 by hydrogen peroxide due to weak interaction between GFP-Ocl Y398D/Y402D with ZO-1; this may work synergistically with the hydrogen peroxide-induced Tyr phosphorylation of endogenous occludin and loss of its interaction with ZO-1. In summary, this study identifies Tyr-398 and Tyr-402 as the phosphorylation sites in human occludin and demonstrates that phosphorylation of these Tyr residues results in the loss of interaction between occludin and ZO-1 and attenuation of its integration into the epithelial TJs.
A novel structure of all-optical optimised NAND, NOR and XNOR logic gates employing a Y-shaped plasmonic waveguide for better performance and high-speed computations
All-optical devices have demonstrated a broad range of applications in the communication field. These devices serve as the fundamental building blocks of sophisticated integrated circuits. By integrating these devices into fields such as signal processing, chip design, and network computations, much more efficient devices can be achieved. This paper describes the design of all-optical logic gates such as NAND, NOR, and XNOR using a plasmonic-based Y-shaped power combiner. The combiner employs the concept of linear interference to generate the desired logic gates. The work is simulated and analysed using MATLAB and the finite-difference time-domain (FDTD) method. The current design fits within a 60 µm² area, which is smaller than existing structures. Parameters characterising the design, namely insertion loss, transmission efficiency, and extinction ratio, are calculated and compared with those of a variety of other designs.
Introduction
The demand for faster communication is surging at an unprecedented rate. While researchers are carrying out several experiments to meet the need for high-speed transmission, a new offspring called plasmonics has evolved from its predecessors (Gibbs 1985; Hu et al. 2008). In the earlier days, semiconductor-based transistors served the needs of electronics, but they were held back by limitations such as low speed, high power consumption, and heat dissipation (Cotter et al. 1999; Priya et al. 2021). Having eliminated these, photonics evolved to offer high-speed communication by replacing electrons with photons (Tang et al. 2010; Holmgaard and Bozhevolnyi 2007; Wu 2004). However, photonics is limited by diffraction when the device dimensions are close to the operating wavelength (Kumar et al. 2017). Also, optical sources are bulky and difficult to fabricate, making the devices larger. Plasmonics overcame these limitations by combining the advantages of electronics and photonics (Hayashi and Okamoto 2012). Surface plasmon polaritons arise from the surface plasmon resonance phenomenon and confine light at the nanoscale, thereby eliminating the diffraction limit (Barnes et al. 2003; Zhang et al. 2009; Talebi et al. 2008; Gramotnev and Bozhevolnyi 2010). Various configurations of plasmonic waveguides, such as MIM, IMI, and DLSPPW, are used to design plasmonic structures (Feng et al. 2007; Zia et al. 2006; Pile et al. 2005; Charbonneau et al. 2005; Jung 2010). All-optical logic gates have been proposed by many researchers around the world using several design techniques, including electro-optic (EO), semiconductor optical amplifier based Mach-Zehnder interferometer (SOA-MZI), photonic crystal (PhC), and plasmonic approaches (Rao et al. 2021; Fakhruldeen and Mansour 2018; Pal et al. 2020; Birr et al. 2015; Gogoi and Sahu 2015). Using these techniques, all the basic gates such as NOT, AND, XNOR, OR, and XOR, as well as the universal gates NAND and NOR, have been realized (Taflove and Hagness 2000; Anguluri et al. 2021; Kumar et al. 2015; Kim et al. 2006; Nozhat et al. 2017). These can be used to create a variety of other combinational and sequential circuits (Singh et al. 2019; Rao et al. 2020; Isfahani et al. 2009; Nozhat et al. 2015; Singh et al. 2014). Here, using a Y-shaped power combiner, all-optical universal logic gates such as NAND and NOR, as well as an all-optical XNOR gate, are presented and verified using the FDTD method (Kotb and Guo 2020; Moradi et al. 2019; Fu et al. 2012; Moniem 2017; Swarnakar et al. 2021). Surface plasmons can be observed in noble metals like silver and gold at visible and near-infrared wavelengths. Because of their absorption, scattering, and coupling properties, which depend on basic factors such as nanoparticle geometry, size, and position, plasmonic nanoparticles are appealing candidates for application in optical systems. The use of all-optical NAND, NOR, and XNOR (NNX) gates is demonstrated in sub-wavelength plasmonic MIM waveguides with slot cavity resonators at telecommunication wavelengths. Two-dimensional (2D) MIM waveguides were chosen for the proposed logic device because of, among other reasons, their simple configuration and their ability to support light confinement at the nanoscale, low crosstalk, and suitable propagation distances. It is feasible to perform a range of logic functions with the MIM logic device without changing the phase of the input signals in subsequent implementations. The present work is organized into several sections. Section 2 covers the design and operation of the NNX gates, which share the same structure. Sections 3 and 4 give the simulation results and the assessment of the all-optical NNX gates, respectively. Section 5 concludes the present work.
Design and operation of all-optical NAND, NOR and XNOR logic gates
The all-optical NNX logic gate is built using a Y-shaped plasmonic waveguide. The schematic consists of two Y-shaped power combiners cascaded such that the output of the first Y-combiner is given as input to the second. The two inputs of the first combiner serve as the ports for the input signals (A and B), while the third input signal is the reference signal (R). The second combiner's output serves as the output port of the designed structure. The layout of the design is depicted in Fig. 1. The plasmonic waveguide is designed with silicon oxynitride (refractive index 2.01) as the dielectric material. The Y-combiner operates on the linear interference principle. A linear waveguide of length L = 2.9 µm connects the two Y-combiners. The phase of each input is controlled by an external phase shifter. The entire schematic fits within a footprint of 12 µm × 5 µm. The two logic inputs are applied to the first and second ports of the first Y-combiner, whose output feeds the second combiner. The second Y-combiner's other input is connected to R, which is always high. The phase of R is adjusted in accordance with the desired output of the logic gate. Constructive or destructive interference occurs at the junction of each combiner. According to linear interference, if the path difference of the inputs is zero and their phase difference is an even multiple of π, constructive interference occurs, reinforcing the optical signal; conversely, if the path difference is zero and the phase difference is an odd multiple of π, destructive interference occurs, cancelling the optical signal. The design specifications of the structure are given in Table 1.
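The interference condition above follows directly from two-beam superposition: for waves of intensities I1 and I2 with phase difference Δφ, the combined intensity is I = I1 + I2 + 2·sqrt(I1·I2)·cos(Δφ). A minimal sketch of this textbook relation (illustrative only; this is not the FDTD model):

```python
import math

def interfere(i1, i2, dphi_deg):
    """Two-beam interference: combined intensity of two superposed
    waves with intensities i1, i2 and phase difference dphi_deg."""
    dphi = math.radians(dphi_deg)
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(dphi)

# Equal low-intensity inputs (1e9 W/m, as in Table 2):
print(interfere(1e9, 1e9, 0))    # in phase (even multiple of pi): 4e9, constructive
print(interfere(1e9, 1e9, 180))  # out of phase (odd multiple of pi): ~0, destructive
```

For equal intensities the output swings between four times the single-wave intensity and complete cancellation, which is the modulation mechanism the Y-combiner exploits.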
Simulation results and discussion
The device analysis employs the FDTD method. A continuous transverse electric (TE) optical wave with perfectly matched layer boundary conditions is considered as the input. In this design, low-intensity signals have an optical intensity of 1e9 W/m and high-intensity signals an optical intensity of 3e9 W/m; these are used to excite the input ports at a wavelength of 1550 nm (Anguluri et al. 2021). For each input state of the two-input NNX logic gate, the phase of the inputs is set to either 0° or 180° to satisfy the gate's output. The parameter specifications for designing the NNX gate are shown in Table 2.
NAND logic gate
The NAND gate operates as follows: when both inputs are high, the output is low; otherwise, it is high. To improve the feasibility of constructing complicated digital circuits, universal gates must be used. The simulation and analysis of the NAND gate are carried out using the FDTD approach. The practical outcomes are then compared with the theoretical outcomes derived using MATLAB. The NAND schematic's simulation parameters are shown in Table 2. For the 2-input NAND gate, four combinations of inputs are applied to the designed waveguide. The output of the NAND gate is obtained from the output port of the second combiner. The output normalized power (P out) along with the input phase for each input combination is shown in Table 3.
From Table 3, it is clear that the expected output behaviour of the NAND gate is satisfied. The high- and low-intensity output signal powers are noted. The transmission efficiency is calculated by multiplying the highest normalized output power by 100, as represented in Table 3. Figure 2 displays the timing diagram of the NAND gate verified using MATLAB, and Fig. 3 shows the light propagation across the NAND gate for all input signal combinations.
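The case-by-case behaviour described below can be reproduced with a toy scalar-interference model of the two cascaded Y-combiners. This is only a sketch under stated assumptions, not the FDTD simulation: fields add as complex amplitudes, each combiner is given an idealized 1/sqrt(2) normalization, input B is driven 180° out of phase with A except when both inputs are high, and R is always high at 180° (the function names and this encoding are illustrative):

```python
import cmath
import math

LOW, HIGH = 1e9, 3e9   # low/high input intensities (W/m), per Table 2
REF = 3.3e9            # reference-signal intensity

def field(intensity, phase_deg):
    """Complex field amplitude for a given intensity and phase."""
    return math.sqrt(intensity) * cmath.exp(1j * math.radians(phase_deg))

def combine(e1, e2):
    """Idealized Y-combiner: superpose fields with 1/sqrt(2) normalization."""
    return (e1 + e2) / math.sqrt(2)

def nand_intensity(a_high, b_high):
    """Output intensity of the cascaded two-combiner NAND arrangement."""
    # Phase encoding assumed from the NAND cases below: B is 180 deg out
    # of phase with A, except when both inputs are high (case iv).
    phase_b = 0 if (a_high and b_high) else 180
    e_ab = combine(field(HIGH if a_high else LOW, 0),
                   field(HIGH if b_high else LOW, phase_b))
    e_out = combine(e_ab, field(REF, 180))  # R: always high, 180 deg
    return abs(e_out) ** 2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, f"{nand_intensity(a, b):.2e}")
```

In this toy model the '11' input yields the lowest output intensity and every other combination yields several times more power, reproducing the NAND truth table qualitatively; the quantitative contrast (ER) of the actual device comes from the FDTD results.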
Case (i)
Here, a low-intensity signal of 1e9 W/m is applied to both input ports of the first Y-combiner, with a phase of 0° applied to input A and 180° applied to input B. A signal of intensity 3.3e9 W/m with a phase of 180° is applied to R. Since the first combiner receives inputs that are out of phase and of equal intensity, destructive interference occurs at its output, and the input to the next Y-combiner is low. The other port receives R as a high-intensity signal, which is driven towards the output, so a high signal is received at the output, as represented in Fig. 3a.
Case (ii)
Here, the first input port is given a signal of intensity 1e9 W/m with a phase of 0° and the second input port a signal of intensity 3e9 W/m with a phase of 180°. Due to the out-of-phase nature of these signals, destructive interference occurs at the output of the first combiner, resulting in a low signal at the next input. R, being a high signal at the other input, causes constructive interference at the output, leading to a high signal as represented in Fig. 3b.
Case (iii)
Similar to the above case, the first and second input ports are given high- and low-intensity signals of 3e9 W/m and 1e9 W/m, respectively. The phase shifts of the input ports remain 0° and 180° for the first and second ports, respectively. Due to destructive interference, a low signal reaches the next combiner, and R reaches the output port as a result of constructive interference, i.e., logic '1', as shown in Fig. 3c.
Case (iv)
In this case, both input ports are given a high-intensity signal of 3e9 W/m with a phase shift of 0°. By constructive interference, the first combiner's output is high and of greater intensity than the input signal. R, of intensity 3.3e9 W/m, is given as a compensating signal to cancel this output so as to obtain logic '0'. A low-intensity output is observed due to destructive interference at the output junction, as shown in Fig. 3d.
NOR logic gate
The NOR gate is examined with the same structure depicted in Fig. 1. The output of the NOR gate is low when any input is high and high when both inputs are low. The FDTD approach is used to simulate and analyse the NOR gate, and the practical results are compared with the theoretical results obtained from MATLAB. The simulation specifications of the NOR schematic are given in Table 2. The P out and input phases for the all-optical NOR gate are shown in Table 4, from which it is clear that the expected output behaviour of the NOR gate is satisfied. The transmission efficiency is also calculated and provided in Table 4. The timing diagram of the NOR gate is verified using MATLAB and is shown in Fig. 4.
Case (i)
In this case, a low-intensity signal of 1e9 W/m with a phase shift of 0° is applied to both input ports of the first Y-combiner. R, of intensity 3.3e9 W/m, is applied with a phase shift of 180°. Since the first combiner receives inputs of the same intensity and in phase, constructive interference occurs at its output, but since both inputs are low, the input to the next Y-combiner remains low. The other port receives R as a high-intensity signal, which is driven towards the output, so a high signal, i.e., logic '1', is received at the output as shown in Fig. 5a.
Case (ii)
In this combination, the first input port is given a low signal of intensity 1e9 W/m and the second input port a high signal of intensity 3e9 W/m, with a phase of 0° for both ports. A high signal is obtained at the output of the first combiner as a result of constructive interference. R, being high and out of phase with the output of the first combiner, causes destructive interference at the output, leading to a low signal as shown in Fig. 5b.
Case (iii)
The first and second input ports are provided with intensities of 3e9 W/m and 1e9 W/m, respectively, both with a 0° phase shift. Due to constructive interference, a high signal reaches the next combiner; R, a high-intensity signal that is out of phase with it, causes destructive interference, and a low signal reaches the output port, as in Fig. 5c.
Case (iv)
In this case, both input ports are given a high intensity of 3e9 W/m with a phase shift of 0°. The combiner's output gives a logic-high signal as a result of constructive interference. This high signal is combined with R of opposite phase. Due to destructive interference at the output junction, a low signal is received at the output, as shown in Fig. 5d.
XNOR logic gate
The same structure depicted in Fig. 1 can be used to obtain the output of the XNOR gate. The output is low when the inputs differ and high when they are the same. The simulation and analysis of the XNOR gate are carried out using the FDTD method, and the practical findings are compared with the theoretical results generated using the MATLAB Simulink tool. The simulation parameters of the XNOR schematic are given in Table 2. For the 2-input XNOR gate, four combinations of input values are applied to the design. The normalized P out along with the input phases is shown in Table 5 for the all-optical XNOR gate, from which it is clear that the expected XNOR behaviour is obtained. The high- and low-intensity output signal powers are noted, and the transmission efficiency is also calculated and provided in Table 5. The XNOR gate timing diagram, simulated using MATLAB, is presented in Fig. 6. The simulation results of the XNOR gate are shown in Fig. 7. The input combinations are applied based on the truth table of the XNOR gate.
Case (i)
In this combination, a low optical signal of intensity 1e9 W/m is applied to both input ports of the first Y-combiner, with phases of 0° and 180° applied to the first and second input ports, respectively. R, of intensity 3.3e9 W/m, is applied to the reference port. The output of the first combiner, the XOR of A and B, is given to the second combiner. The other port receives R as a high-intensity signal, which is driven towards the output, so a high signal is received at the output as shown in Fig. 7a.
Case (ii)
Here, the first port is given a low signal of intensity 1e9 W/m with a 0° phase shift and the second input port a high signal of intensity 3e9 W/m with a phase of 180°. Due to constructive interference, a high signal is obtained at the output of the first combiner. R, being a high signal that is out of phase with the output of the first combiner, causes destructive interference at the output, leading to a low output as shown in Fig. 7b.
Case (iii)
Similar to the above case, the first and second input ports are given high- and low-intensity signals of 3e9 W/m and 1e9 W/m, respectively, with 0° and 180° phase shifts. Due to constructive interference, a high signal reaches the next combiner, and R, which is out of phase with it, causes destructive interference, so a low signal, i.e., logic '0', reaches the output port as shown in Fig. 7c.
Case (iv)
In this case, the two input ports are given a high-intensity signal of 3e9 W/m with 0° and 180° phase shifts. The combiner output gives a low signal as a result of destructive interference. This signal is combined with R of opposite phase. Due to constructive interference at the output junction, a high signal is received at the output as shown in Fig. 7d.
Performance analysis of NAND/NOR/XNOR gate
The performance of the NAND/NOR/XNOR gate is analyzed by calculating the extinction ratio (ER), insertion loss (IL), and transmission efficiency. The P out matches the theoretical truth table of the NAND/NOR/XNOR gate, and the transmission efficiency is as high as 147% for the NAND gate in the '10' condition, 134% for the NOR gate in the '00' condition, and 143% for the XNOR gate in the '00' condition.
On the basis of the output data, performance measures such as IL and ER are determined. The IL is given by

IL (dB) = -10 log10(P out / P in)

where P out is the peak output power and P in is the peak input power. The ER is the ratio of the peak P out in the ON state (P out|ON) to the peak P out in the OFF state (P out|OFF). It is given by

ER (dB) = 10 log10(P out|ON / P out|OFF)

The IL and ER of the all-optical NAND gate are found to be 0.407 dB and 11.13 dB, respectively. The IL and ER of the all-optical NOR gate are 1.25 dB and 24.29 dB, respectively, and those of the all-optical XNOR gate are 1.55 dB and 21.82 dB. A comparison of the present work with existing work is shown in Table 6.
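The three figures of merit can be computed with small helper functions; the peak powers used below are illustrative placeholders, not the paper's raw FDTD data:

```python
import math

def insertion_loss_db(p_out_peak, p_in_peak):
    """IL (dB) = -10*log10(Pout/Pin): peak-power loss through the device."""
    return -10 * math.log10(p_out_peak / p_in_peak)

def extinction_ratio_db(p_out_on, p_out_off):
    """ER (dB) = 10*log10(Pout|ON / Pout|OFF): ON/OFF output contrast."""
    return 10 * math.log10(p_out_on / p_out_off)

def transmission_efficiency_pct(p_out_norm):
    """Transmission efficiency (%): highest normalized output power x 100."""
    return p_out_norm * 100

# Illustrative example: an ON/OFF contrast of 13:1 gives ER ~ 11.1 dB,
# comparable to the NAND gate's reported 11.13 dB.
print(round(extinction_ratio_db(1.3, 0.1), 2))
print(round(insertion_loss_db(0.91, 1.0), 2))
print(round(transmission_efficiency_pct(1.47), 1))
```

Note that a transmission efficiency above 100% (P out > P in) corresponds to a negative IL under this definition, reflecting the power contributed by the reference signal R.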
Conclusion
The present work proposes a miniaturised design of all-optical logic gates, namely NAND, NOR, and XNOR, using a Y-shaped plasmonic combiner. The design is simulated and analysed using the FDTD method. It fits within a 60 µm² area, which is smaller than the area occupied by existing structures. Performance parameters such as IL, transmission efficiency, and ER are calculated from the output results: the NAND, NOR, and XNOR gates have ERs of 11.13 dB, 24.29 dB, and 21.82 dB, respectively. Due to its compact structure, the size of digital circuits will be significantly reduced, resulting in improved performance in optical computing.
Potential and Pitfalls of Mobile Mental Health Apps in Traditional Treatment: An Umbrella Review
While the rapid growth of mobile mental health applications has offered an avenue of support unbridled by physical distance, time, and cost, the digitalization of traditional interventions has also triggered doubts surrounding their effectiveness and safety. Given the need for a more comprehensive and up-to-date understanding of mobile mental health apps in traditional treatment, this umbrella review provides a holistic summary of their key potential and pitfalls. A total of 36 reviews published between 2014 and 2022—including systematic reviews, meta-analyses, scoping reviews, and literature reviews—were identified from the Cochrane library, Medline (via PubMed Central), and Scopus databases. The majority of results supported the key potential of apps in helping to (1) provide timely support, (2) ease the costs of mental healthcare, (3) combat stigma in help-seeking, and (4) enhance therapeutic outcomes. Our results also identified common themes of apps’ pitfalls (i.e., challenges faced by app users), including (1) user engagement issues, (2) safety issues in emergencies, (3) privacy and confidentiality breaches, and (4) the utilization of non-evidence-based approaches. We synthesize the potential and pitfalls of mental health apps provided by the reviews and outline critical avenues for future research.
Introduction
Mobile mental health applications (i.e., apps) are virtual, smartphone-delivered platforms which provide self-directed or remotely facilitated mental health services in the areas of communication, self-monitoring, diagnosis, and treatment [1-3]. In order to circumvent user barriers associated with traditional treatment methods, including issues of poor availability, accessibility, and acceptability, these apps offer timely, cost-effective, and discreet channels for users to manage their mental health [3-6]. Specifically, help-seekers can overcome constraints of traditional clinical settings, such as long waitlists, restricted clinic hours, and living in regions with poor access to mental healthcare [3-5,7]. Instead of waiting an average of 14.5 days to consult a clinician [8], relevant information and interventions may be accessed in a timely manner and users may utilize apps for on-demand venting of thoughts and emotions [9,10]. Rather than seeking mental health support in face-to-face settings that require individuals to identify themselves, individuals may access support via apps anonymously and remotely, thus evading negative social evaluation [3-5,7].
Critically, as a reflection of the growing demand for mental healthcare [11], mental health apps have undoubtedly seen a rapid increase in their development and adoption. Between 2016 and 2018, they have grown threefold in number [5], offering help-seekers over 10,000 mental health apps to choose from [12]. Further, in a survey of 320 outpatient help-seekers from four clinics in the United States, 70% indicated interest in using apps to facilitate self-monitoring and management of mental health difficulties [13]. Considering their prominence and growing demand, therefore, it is important to inquire into how mental health apps may be utilized in conjunction with traditional interventions. While an emerging body of research has investigated the utilization of mobile mental health apps in traditional treatment, however, findings have been scant and somewhat polarized. For instance, Torous et al. [14] focused on examination of challenges generated by mental health smartphone apps, while Eisenstadt et al. [15] concentrated on possibilities created by apps. On one hand, several studies have revealed the utility of mental health apps in supplementing different stages of traditional intervention, such as by providing education about treatment techniques prior to enrolment, facilitating symptoms-monitoring during the treatment process, and ensuring continued access to interventions after the treatment period [4,7,16]. On the other hand, a growing body of research has highlighted risks associated with app usage, such as the lack of safeguards around the privacy of users' information as well as utilization of non-evidence-based approaches [3,5,6]. Given this equivocality, there is a need for a more comprehensive view of the current mobile mental health apps landscape, to guide interested researchers toward a holistic understanding of apps as an adjunct to traditional treatment.
As there is an increasing volume of reviews looking into the present mobile mental health landscape, we have chosen to conduct an umbrella review in the hope of presenting a big picture of the evidence base, as well as to discuss congruous or inconsistent findings. An umbrella review is a synthesis of systematic reviews, offering readers opportunities to look at a broad scope of factors investigated by scholars and consider whether consensus in the field has been met. Thus far, past investigations have provided an insightful outline of the current mobile mental health landscape, yet there is a relative lack of umbrella reviews that examined existing overviews. We aim to compile evidence from existing reviews to offer a higher level of summary.
Search Strategy and Selection Criteria
We included reviews of mental health apps that (1) reported on the effectiveness and pitfalls of mobile mental health intervention program(s), and (2) were quantitative or qualitative reviews, rather than individual studies, of interventions aimed at reducing subclinical or clinical mental health symptoms. Eligible reviews, up to 31 May 2022, were identified from the Cochrane library, Medline (via PubMed Central), and Scopus databases by two co-authors (J.K., G.T.), using the following search terms: ("mental health app *" OR "e%mental health" OR "mobile%based psychotherapy intervention *" OR "app%based mental health intervention *" OR "smartphone%based mental health intervention *" OR "digital mental health" OR "digital app * for mental health" OR "technology in psychotherapy" OR "mental health smartphone app *") AND ("review*" OR "synthesis" OR "meta-analysis" OR "meta-analytic").
Quality Assessment
We conducted a methodological quality assessment, using the JBI critical appraisal tool for systematic reviews [17], to evaluate the systematic reviews and meta-analyses included in our umbrella review. This critical appraisal tool comprises eleven items which are rated as "yes", "no", "unclear", or "not applicable". These include methodological evaluations of each review's inclusion criteria, search strategy, data synthesis, and strategies to minimize biases in data extraction and study appraisal. For each appraisal item, J.K. and G.T. conducted their evaluations independently and any disagreements were resolved through discussion after independent review. Assessments with at least five "yes" responses were included. In sum, the score (i.e., number of "yes" ratings) of the eligible reviews ranged from a moderate score of five or six (n = 4) to a high score of seven and above (n = 10). Our quality assessment identified that items four (i.e., "were the sources and resources used to search for studies adequate?") and six (i.e., "was critical appraisal conducted by two or more reviewers independently?") had the lowest proportion of "yes" ratings. This highlighted that (1) ensuring a comprehensive search strategy including grey literature; and (2) minimizing bias in critical appraisals are common methodological issues in systematic reviews and meta-analyses. Nevertheless, all fourteen eligible reviews for assessment had at least five "yes" ratings and, therefore, none were excluded from our umbrella review (see Table 1 for critical appraisal results).
Data Extraction
In line with Aromataris et al.'s [31] data extraction protocols for umbrella reviews, the following information was extracted from included reviews: (a) review details (author, year of publication, type of review, review objectives including interventions and outcomes assessed, total sample size, participant demographics, country), (b) search details (number of databases/sources searched, date range of included studies, number of studies included), and (c) analysis details (method of analysis, key findings). The extracted characteristics of included reviews are summarized in Table 2.
Data Synthesis
Due to the heterogeneity of the included reviews in the study aims, mental health interventions, and outcome variables investigated across the included reviews, it was unfeasible to synthesize our results statistically. Instead, we narratively synthesized evidence from various systematic reviews, meta-analyses, scoping reviews, and literature reviews based on the primary findings of each review.
Results
The main search string returned 103 unique articles (see Figure 1 for PRISMA diagram [51]); and two additional articles were identified via Google Scholar. Thereafter, the article review proceeded in two phases. First, two co-authors reviewed the title and abstract for all 105 articles to determine initial eligibility based on our aforesaid selection criteria, and 48 articles were removed at this phase as they were protocols for literature reviews or articles that did not constitute quantitative or qualitative reviews (e.g., individual studies). In the second phase of article review, the remaining 57 articles were reviewed in full by two co-authors. At this stage, we excluded 13 articles which examined web-based mental health interventions and 5 articles which focused on the implementation of mobile mental health services (e.g., role of therapeutic alliances or gamification elements) [52,53]. In addition, three other reviews were excluded because they focused on outcomes other than mental health (i.e., academic performance [54]), the assessment of mobile mental health services [42], and the types of e-mental health systems and their degree of technological advancement [55]. As a result, a total of 21 articles were excluded at this stage, and 36 articles were included in the final review.
Timely Support
In total, 16 out of 36 studies cited timely support as an advantage which mental health apps have provided, by transcending traditional help-seeking boundaries associated with waiting time and physical distance [9,12,15,25,29,33-36,39,40,42,44,45,49,50]. Given that mental health apps provide in-the-moment support at the user's convenience, help-seekers can overcome constraints of traditional clinical settings, such as long waitlists, restricted clinic hours, and living in regions with poor access to mental healthcare [8]. For example, Struthers et al.'s [29] systematic review of 24 studies found that time-associated flexibility and level of control over treatment encourage the use of e-mental health services among youths, their parents, and their healthcare providers. Further, Chan and Honey's [9] integrative review identified that users perceive mobile mental health apps as being "easy to use", since app usage may be accessed on demand and can be easily integrated into the user's daily routines. Considering that delayed treatment contributes to more severe and enduring mental health difficulties [4,8,56], the timely nature of mobile mental healthcare is especially helpful in situations when an in-the-moment experience of relief is needed and traditional support might not be as helpful by the time it becomes available [33].
Cost-Effective
Further, as cited in 11 reviews, mental health apps afford users the opportunity to access cost-effective treatment options according to their financial abilities [25,32,34-36,42,44,46,48-50]. For instance, Binhadyan et al.'s [35] literature review of 74 articles, which addressed e-mental health interventions for university students with ADHD, identified that the minimal (or no) fees for app-based interventions played a key role in enabling help-seekers to circumvent barriers to traditional mental healthcare. Echoing this, Oyebode et al.'s [46] thematic analyses of user reviews of 106 mental health apps found that the average price of 11 fee-based apps was USD 5.26, which is significantly lower than average psychotherapy fees ranging from USD 100 to USD 200 per session in the United States [57]. Hence, the lower cost of digital apps, compared to traditional psychotherapy, renders mental health apps a more accessible psychological tool for people of varying financial abilities.
Combat Stigma in Help-Seeking
Notably, eight studies noted that mental health apps provide the ability to access mental healthcare discreetly and thus circumvent the adverse stigma surrounding help-seeking [15,35,36,39,44,48,49]. For example, in Lal and Adair's [44] literature review of 115 articles about e-mental health interventions, it was highlighted that digital mental health interventions allow individuals who are uncomfortable with in-person treatment to receive help anonymously and bypass discomforts associated with identifying themselves and facing negative social evaluations. This may be especially helpful for people from collectivist cultures with prevalent "face" concerns, where conventional help-seeking has been found to be associated with poorer life satisfaction and lower positive affect [58]. Moreover, Wies et al.'s [50] review of 26 digital mental health treatments revealed that apps could serve as an initial point of contact and gradually facilitate transition to face-to-face interventions. In sum, mobile mental health apps potentially allow their users to overcome help-seeking barriers stemming from stigmatized attitudes toward conventional mental healthcare.
Enhance Therapeutic Outcomes
As highlighted in 25 studies, mobile mental health apps may also enhance therapeutic outcomes (see Table 3 for a summary of the target populations of the included reviews), including reducing symptoms of mood disorders [9,12,15,19,20,22-25,27-30,35-38,40-42,46-50]. For instance, in Firth et al.'s [19] meta-analysis of 18 randomized controlled trials, smartphone interventions had a small-to-moderate effect in reducing depressive symptoms in an overall sample of 3,414 adults from both clinical and nonclinical populations. Further, Petrovic and Gaggioli's [47] review of eight studies on mobile-based mental health tools showed that participants experienced reduced stress levels and improved coping skills after three weeks of app usage, suggesting that apps increase the likelihood of treatment success by providing opportunities to practice coping strategies in clients' natural environments. Harith et al.'s [41] umbrella review of seven studies also found significant evidence of the effectiveness of digital mental health interventions, including app-based programs, in alleviating depression, anxiety, stress, and eating disorder symptoms in university students.

Table 3. Target population of included reviews.
ADHD: Binhadyan et al. [35]
Anxiety/Depression/Stress/Well-being: Borghouts et al. [18]; Eisenstadt et al. [15]; Firth et al. [19]; Garrido et al. [20]; Harith et al. [41]; Lattie et al. [22]; Leech et al. [23]; Nicholas et al. [26]; Petrovic & Gaggioli [47]; Six et al. [28]; Thach [48]; Thach [49]
Anxiety, Depression, Schizophrenia spectrum and psychotic disorders: Chan & Honey [9]; Henson et al. [12]
Suicide/Self-harm: Larsen et al. [21]
Post-traumatic stress disorder (PTSD): Simblett et al. [27]

More specifically, mental health apps can amplify treatment outcomes by complementing different stages of traditional interventions in line with their specific purpose. For example, in Hwang et al.'s [42] scoping review, certain mental health apps (e.g., MoodPrism, mHealth), which track and monitor users' emotional state and psychological stress, were found to reduce symptoms of depression, anxiety, and stress. Hence, by providing on-the-go documentation of users' psychological well-being, these apps can tailor relevant goals for each user in real time and supplement traditional treatment. In addition, Oyebode et al.'s [46] thematic analysis of user reviews of 104 mental health apps revealed positive themes such as "reminder and notification", "in-app support", "logging", "analytics and visualization", "assessment", and "data export", which indicate the unique features of mental health apps that are valued by help-seekers. Together, this suggests that users could utilize mental health apps in conjunction with traditional treatment to achieve greater therapeutic success than with the traditional face-to-face intervention alone. Nonetheless, common themes for the pitfalls of mental health apps have been identified as well.
User Engagement Challenges
Six reviews referred to high attrition rates and poor rates of sustained engagement prevalent among mental health apps [14,20,22,24,29,30]. For instance, Garrido et al.'s [20] review of 32 digital mental health interventions found that 39% of studies reported attrition rates of over 20%, levels indicative of potential attrition bias. Further, in Struthers et al.'s [29] review of 24 studies on the acceptability of e-mental health for youths, the proportion of participants who completed the full intervention ranged widely across studies, from 29.4% to 87.5%, with two studies suggesting decreasing usage of e-mental health interventions over time. As theorized by Torous et al. [14], user engagement may be hindered by factors including unsatisfactory functionality of these apps and usability concerns (i.e., difficulties using apps).
Safety Issues in Case of Emergency
According to two reviews [14,21], mental health apps may also be poorly equipped to assist users through emergencies. For instance, Larsen et al. [21] reviewed publicly available apps which address suicide and found that none of these apps abided by the best practice of providing visible crisis support information within the app. Similarly, in Torous et al.'s [14] clinical review of challenges surrounding user engagement, it was suggested that the vast majority of apps are limited in their ability to respond effectively during emergencies related to suicide or self-harm, or to recognize anticipatory warning signs. In the event of a time-sensitive mental health emergency such as risk of suicide, therefore, help-seekers might not be able to access the critical support they need through mental health apps.
Confidentiality and Privacy Breaches
First, eight reviews found that mental health app users were commonly concerned with their confidential information being shared with third parties or used for unauthorized purposes such as marketing [9,14,18,29,39,45,46,50]. In Wies et al.'s [50] scoping review of ethical challenges in digital mental health, it was shown that mental health app users' main concerns centered on the consequences of confidential information being leaked to third parties, which would implicate professional, personal, and social domains of their lives. In particular, two reviews identified inadequate passcode protection (i.e., to prevent external access to users' data) as a privacy-related weakness of mental health apps [32,49]. For instance, a thematic analysis of user reviews of 106 mobile mental health apps revealed that mental health app users were dissatisfied with the lack of passcode protection (e.g., a unique PIN) to prevent external access to sensitive information [32].
A second concern was the lack of clear privacy policies which explain the protection of users' information, as highlighted by five reviews [21,26,32,46,49]. More specifically, only 22% of apps targeted at bipolar disorder and 29% of apps targeted at suicide or deliberate self-harm provided a clear privacy policy which informs users on how their data are used [21,26]. Moreover, Wies et al. [50] reported that there is insufficient clarity about the adequacy of consent obtained through digital mental health apps, in particular regarding the type of data processing or intervention that the user is consenting to. Taken together, therefore, the use of mental health apps is often accompanied by risks of being identified as a help-seeker or the leakage of personal information to third parties, thus endangering users' privacy and impeding trust and engagement with these apps.
Utilization of Non-Evidence-Based Approaches
Lastly, limited empirical and theoretical evidence has been found for both (1) the efficacy of mental health apps and (2) the basis of therapeutic interventions used in mental health apps.
First, 10 reviews found limited evidence for the effectiveness of mental health apps in reducing symptoms of psychological distress (e.g., depression, anxiety, stress) and improving socioemotional competency [14,23,24,36,38,40,42,45,47,50]. For example, Drissi et al.'s [38] systematic review of studies examining e-mental health interventions developed for healthcare workers found that only two studies (27%) included empirical evaluations of the reported interventions, and the empirical evaluations were based on a limited number of participants. Similarly, Gould et al.'s [40] review of mental-health-related apps created by the Veterans Affairs or the Department of Defense showed a pressing lack of evidence for the effectiveness of these apps, with the exception of two apps (PTSD Coach, Virtual Hope Box). Further, in studies examining the efficacy of app interventions, there has been a lack of empirical support for their long-term effectiveness. In Carter et al.'s [36] review of 37 digital mental health intervention studies, for instance, 23 studies (62%) reported results from less than 6 months of follow-up. In addition, in Leech et al.'s [23] systematic review of mental health apps for adolescents and young adults, all but four of the 11 randomized controlled trials examined only the immediate or short-term effects of app interventions; the remaining four incorporated 6-week to 6-month follow-up assessments. Together, this suggests that the long-term benefits of mental health app usage have not been established by empirical evidence, and help-seekers should not rely entirely on these platforms for mental health treatment.
Second, apart from the efficacy of mobile mental health apps, four reviews cited an insufficient theoretical and empirical basis for therapeutic techniques employed by mental health apps [14,42,47,50]. For example, Petrovic and Gaggioli [47] conducted a scoping review of digital mental health tools catered to informal caregivers in Europe, and found that only a small portion of their 16 reviewed papers defined a clear therapeutic rationale behind the interventions used, such as adopting principles of cognitive behavioral therapy or stress inoculation training. In addition, Hwang et al. [42] conducted a scoping review of 14 studies about mental-health-related apps for adults over 18 years of age, and identified two studies that did not provide theoretical evidence for their intervention methods, involving a breathing exercise app and a mood-monitoring app. Given that such unsupported practices could unintentionally pose serious risks to the well-being of help-seekers in dangerous situations, it is crucial that clinicians and researchers remain astute as to the scientific evidence informing app-based mental healthcare.
Key Findings
In sum, mobile mental health apps can potentially circumvent barriers of traditional mental healthcare to provide timely, cost-effective, and discreet support which facilitates various stages of treatment and improves outcomes. On the other hand, it is imperative that app users (clinicians and help-seekers) are mindful of the pitfalls surrounding app usage: these involve engagement challenges, safety issues, confidentiality breaches, and a lack of evidence-based practices (see Figure 2 for an overview).
Strengths and Limitations
Our review has several limitations that should be noted. First, given that our umbrella review provided a higher-level synthesis of a wide range of previous reviews, this introduced significant heterogeneity-regarding review methodologies (e.g., systematic reviews, narrative reviews, thematic analyses), primary focus of the reviews (e.g., efficacy, user engagement, ethical challenges), sample demographics (e.g., adolescents, caregivers, young adults), and outcome measures used (e.g., posttraumatic stress symptoms, depression symptoms, emotion regulation)-hence introducing difficulties with interpretation of common potential and pitfalls of mobile mental health apps. Nonetheless, this cross-review heterogeneity reinforces the need for the present umbrella review which identifies converging themes of mental health apps' advantages and downfalls despite varying aims and measures. Second, since we included only published peer-reviewed reviews written in English, unpublished work and reviews in other language mediums were not included in our search strategy; hence, this may have influenced the findings of this review. Further, as we searched three key databases, our search strategy may have excluded relevant reviews from other databases such as PsycINFO and EMBASE. Our inclusion criteria for reviews may have resulted in overlap of primary studies between reviews. Finally, due to the rapidly advancing nature of digital mental health interventions, it is possible that some of the mobile mental health apps assessed may now be outdated. In spite of these limitations, strengths of the present umbrella review include its strict adherence to methodology protocols for umbrella reviews (e.g., utilization of JBI critical appraisal checklist), holistic synthesis of evidence for both potential and pitfalls of mobile mental health applications, and inclusion of a broad evidence base including systematic reviews, meta-analyses, scoping reviews, and literature reviews.
Future Research Directions
To support the continued examination of app usage as an adjunct to traditional treatment, future research could inquire into three key areas.
App Functions
First, in terms of app functions, further research should examine the efficacy of mental health apps in supporting individuals with differing degrees of symptom severity. Considering that mental health apps are commonly designed and utilized to manage and relieve mild symptomatology [59], there is currently a lack of understanding regarding how these approaches may be applied to more severe symptoms. As app effectiveness may vary across the mild, moderate, and severe ranges of mental health conditions, future investigations could probe into how people with different levels of symptom severity (e.g., depression severity) respond to symptom relief provided by mental health apps.
App Regulation
Second, regarding app regulation, there is a need for further research to develop overarching evaluation guidelines for mental health apps. Due to the present lack of such guidelines, standardized criteria for "approved-for-use" apps remain unclear to both app developers and clinicians alike [5,60]. Hence, future studies should examine key elements for the regulation of mental health apps, such as the presence of evidence-based approaches, existing randomized controlled trials conducted to assess app efficacy, and the visibility of emergency services contacts. In so doing, both app developers and mental health professionals may achieve a shared understanding of the key elements guiding evaluation and regulation of mental health apps.
Individual Differences in App Usage
Finally, with regard to individual differences in app usage, future research should look into the role of individual differences, including demographic factors and individual needs and preferences, in modulating the effectiveness of mental health apps [61,62]. Research has suggested that trait-like demographic and usage factors, including socioeconomic background, individual motivations underlying digital technology use, perceptions of usefulness, and smartphone use preferences could potentially influence access to and well-being outcomes of digital technology, including mental health apps [5,62,63]. Given that the potential of mental health apps has primarily been examined in adolescents and young adults (see [23] for a review), however, there is currently a lack of understanding about the role of these individual differences, such as demographics (e.g., socioeconomic status, age) and other usage factors, in shaping engagement with and effectiveness of mental health apps. Therefore, subsequent research should inspect how these individual difference factors influence app effectiveness.
Conclusions
In sum, this umbrella review provided a comprehensive synthesis of existing quantitative and qualitative evidence regarding the potential and pitfalls of mobile mental health apps as an adjunct to traditional psychotherapy. Further, we offer three key areas for future research, concerning app functionality, app regulation, and individual differences in app usage. Our review highlights that mobile mental health apps' unique potential, such as providing timely support, being cost-effective, combating stigma surrounding help-seeking, and enhancing treatment outcomes, could be tapped into to supplement mental health interventions, although associated risks (i.e., user engagement challenges, safety issues, confidentiality breaches, and non-evidence-based approaches) need to be understood and managed. Specifically, one viable risk management strategy would be adhering to the American Psychiatric Association's hierarchical framework that emphasizes clinicians' responsibilities to examine stages of the framework with clients, discuss queries, and support shared decision-making on app usage [61].
Training needs assessment: tool utilization and global impact
Background: Global demand for standardized assessment of training needs and evaluation of professional continuing education programs across the healthcare workforce has led to various instrumentation efforts. The Hennessy-Hicks Training Needs Analysis (TNA) questionnaire is one of the most widely used validated tools. Endorsed by the World Health Organization, the tool informs the creation of tailored training to meet professional development needs. The purpose of this project was to describe TNA tool utilization across the globe and critically appraise the evidence of its impact in continuous professional development across disciplines and settings.
Methods: A systematic integrative literature review of the state of the evidence across PubMed, Scopus, CINAHL, and Google Scholar databases was carried out. Full-text, peer-reviewed articles and published dissertations/theses in the English language that utilized the original, adapted, or translated version of the TNA tool were included. Selected articles were appraised for type and level of evidence.
Results: A total of 33 articles were synthesized using an inductive thematic approach, which revealed three overarching themes: individual, team/interprofessional, and organizational level training needs. Included articles represented 18 countries, with more than two thirds involving high-income countries and one third middle-income countries. Four studies (12.1%) used the original English version of the instrument, 23 (69.7%) adapted the original version, and 6 (18.2%) translated and culturally adapted the tool. Twenty-three studies targeted needs at the individual level and utilized the TNA to determine job roles and responsibilities. Thirteen articles represented the team/interprofessional theme, applying the TNA tool to compare training needs and perceptions among professional groups. Last, three articles used the tool to monitor the quality of care across an institution or healthcare system, demonstrating the organizational training needs theme.
Conclusions: Overall evidence shows that the TNA survey is widely used as a clinical practice and educational quality improvement tool across continents. Translation, cultural adaptation, and psychometric testing within a variety of settings, populations, and countries consistently reveal training gaps and outcomes of targeted continuous professional development. Furthermore, the tool facilitates prioritization and allocation of limited educational resources based on the identified training needs. The TNA tool effectively addresses the “know-do” gap in global human resources for health by translating knowledge into action.
Background
Over the last 25 years, a trained workforce has been at the core of success for any organization or industry. Appropriate and systematic approaches to training have been shown to result in skills improvement, which in turn raises the quality of employees [1]. Assessing and understanding workforce training needs ensures confidence, know-how, and a variety of new skills that bolster preparedness on an individual and team-based level in any organization [2]. Given the rapid technological advances, persisting workforce shortages, increased disease burden, and shrinking resources, healthcare organizations must methodically survey existing and expected performance levels of their staff [3]. Yet, evaluations of the training and development processes in use until the mid-1990s showed that healthcare professionals were not acquiring the necessary skills to successfully perform their jobs [4]. Similarly, healthcare organizations often did not carry out adequate assessments of training needs, due to limited time and resources or failure to use research evidence to inform practice [5]. Consequently, training needs analysis (TNA) must be viewed and carried out in the context of existing healthcare systems to be consistent with the needs of employees and relevant to the ever-changing demands of organizations [6].
Literature on healthcare employee training needs has evolved considerably. Several TNA models were developed to understand and address training deficiencies in the workplace through data collection and analysis from both employees and employers [7]. The traditional model focuses on job behavior and task analysis, using surveys and formal interviews to gather data [8]. Its main drawback, besides being time-consuming, is its focus on predetermined outcomes, which precludes the possibility of unplanned learning taking place. As an alternative, the practical model considers a trainer-centered, demand-led or supply-led "pedagogical approach" to TNA [9]. Whilst this model helps the TNA coordinator select the appropriate approach for the desired outcome, it does not provide any guidance as to how to conduct an assessment that is both comprehensive and effective. As a result, this approach could be a waste of time and resources.
Acknowledging the above challenges and limitations, investigators from the United Kingdom (UK) in 1996 initiated efforts towards a cost-effective and psychometrically sound TNA tool for the healthcare industry. The Hennessy-Hicks Training Needs Analysis (TNA) Questionnaire, referred to as "TNA" hereafter, was developed to identify individual training needs, organizational requirements, and targeted training strategies [10,11]. Since its development, the TNA tool has been psychometrically tested and used for a variety of purposes among several settings and populations. It has a proven track record for use with primary healthcare teams, district and practice nurses, nurse practitioners (NPs), and health visitors in the UK [10][11][12][13]. The TNA has been shown to minimize response bias and provide reliable information about current performance levels, skill areas most in need of further development, and how to best achieve optimal results. This knowledge supports organizations in priority-setting and policy development, as well as in evaluating their continuous professional development (CPD) programs.
In 2011, the TNA developers licensed the tool to the World Health Organization (WHO) for online use and dissemination through the Workforce Alliance website [14]. With rising calls for evaluating training and competency to regulate nursing practice in the Americas [15], stemming from the "Global strategy on human resources for health: Workforce 2030" [16], our motivation for this project was twofold. First, the lead investigator's experience with translating, adapting, and applying the TNA instrument in another language, as part of an action research PhD dissertation. Second, the team's affiliation with a WHO Collaborating Center (WHOCC) that promotes global capacity building for nurses and midwives as well as educational quality improvement (QI). Therefore, this integrative review aimed to describe TNA tool utilization across the globe and critically appraise the evidence of its impact in CPD across disciplines and settings.
Methods
The Hennessy-Hicks TNA Questionnaire and Manual [14] was accessed through the WHO Workforce Alliance website and was carefully reviewed to determine initial intended use. The tool consists of a one-page demographics section, an open-ended question, and a 30-item questionnaire which covers core clinical tasks, arranged into five sub-sections: research/audit, administrative/technical, communication/teamwork, management/supervisory, and clinical activities. Respondents rate each item on a seven-point scale according to two criteria: "How critical the task is to the successful performance of the respondent's job" and "How well the respondent is currently performing the task." Ratings for criterion A (Criticality Index) provide an overall occupational profile of the job, and those for criterion B (Skill Index) the level of performance. Subtracting the scores (criterion A − criterion B) for each task provides a Training Needs Index. The accompanying manual offers instructions, data analysis, and customization for use in one's own environment.
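As a rough illustration of the scoring just described, the three indices can be computed per item. This is only a sketch, not the official scoring procedure from the manual; the task names and ratings below are hypothetical.

```python
# Sketch of TNA scoring: each item has two 1-7 ratings,
# criterion A (how critical the task is) and criterion B
# (how well it is currently performed). The Training Needs
# Index is the A - B gap. Task names/ratings are illustrative.

def training_needs(items):
    """Return per-task criticality, skill, and training-needs indices."""
    results = {}
    for task, (crit_a, perf_b) in items.items():
        results[task] = {
            "criticality_index": crit_a,
            "skill_index": perf_b,
            "training_needs_index": crit_a - perf_b,  # A - B
        }
    return results

responses = {
    "audit own practice": (6, 3),        # hypothetical ratings
    "communicate with patients": (7, 6),
    "supervise junior staff": (5, 5),
}

scores = training_needs(responses)
# Rank tasks by greatest training need (largest A - B gap).
ranked = sorted(scores, key=lambda t: scores[t]["training_needs_index"],
                reverse=True)
print(ranked[0])  # → audit own practice
```

Ranking tasks by the A − B gap is what allows the priority-setting described in the studies below: the widest gaps mark where scarce training resources are most needed.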
To allow for cross-country and cross-setting comparisons, the investigators adopted the Knowledge to Action (KTA) Framework developed by Graham et al. [17]. Knowledge translation is "a dynamic and iterative process that includes the synthesis, dissemination, exchange and ethically sound application of knowledge" [18]. The KTA process conceptualizes the relationship between knowledge creation and action as a "funnel". Knowledge is increasingly distilled before it is ready for application whereas, the action cycle represents the activities needed for knowledge application and integration [17]. By translating knowledge into action, researchers can effectively address the "know-do" gap in healthcare practice [19].
Search strategy and eligibility criteria
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were adhered to in the conduct and reporting of this systematic integrative review [20]. An electronic database search in PubMed, Scopus, CINAHL, and Google Scholar was performed using the following strategy: (("Surveys and Questionnaires"[Mesh] OR tool* OR measure* OR questionnaire* OR survey* OR scale* OR instrument*) AND (Hennessy OR Hicks OR Hennessy-Hicks OR "Hennessy-Hicks Training Needs Assessment Questionnaire") AND (nurs* OR training-needs)). All full-text, peer-reviewed articles, dissertations, or theses published in the English language since 1996 were included. Additionally, a targeted manual search of grey literature and listed references was carried out. A total of 289 articles were retrieved and duplicates were removed with the use of Sciwheel Reference Manager. The resulting 265 articles were first screened by title/abstract, and then, 97 full-text articles were assessed for eligibility. During screening and eligibility steps, inclusion was determined if any of the following content-specific criteria were met: a) study using the original TNA tool, b) psychometrics study carrying out translation and/or cultural adaptation of the TNA tool in other languages or countries, and c) study applying or integrating an adapted TNA version. The above search strategy, along with reasons for excluding articles, is depicted in Fig. 1.
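The screening flow reduces to simple bookkeeping over the stage counts reported above (289 retrieved → 265 after deduplication → 97 assessed in full text → 33 included). A minimal sketch, in which the stage labels are our own shorthand and only the numbers come from the text:

```python
# PRISMA-style screening funnel built from the counts reported above.
# The number dropped at each step is the difference between
# consecutive stage totals.
stages = [
    ("records retrieved", 289),
    ("after duplicates removed", 265),
    ("full-text articles assessed", 97),
    ("articles included", 33),
]

dropped = [
    (stages[i][0], stages[i][1] - stages[i + 1][1])
    for i in range(len(stages) - 1)
]
for stage, n in dropped:
    print(f"dropped after '{stage}': {n}")
```

This reproduces the figures implied by Fig. 1: 24 duplicates removed, 168 records excluded at title/abstract screening, and 64 full-text articles excluded.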
Data extraction and quality appraisal
Initial title and abstract screening was conducted by an independent investigator (SM), with full-text article assessment for eligibility and data extraction carried out by two independent investigators (SM, LT). Any conflicting votes on eligibility of an article were resolved by a third investigator (AM). Appraisal of level of evidence was based on the adapted Rating System for the Hierarchy of Evidence [21,22]. The World Bank Atlas was followed for country classification by income level (high, middle or low) [23]. An inductive thematic analysis of the literature sample was carried out. Interprofessional education (IPE) and collaborative practice were operationalized based on the Framework for Action on Interprofessional Education and Collaboration, developed by WHO [24]. Hence, both terms 'interdisciplinary' and 'multidisciplinary', often used interchangeably, were captured. Given the systematic review methodology of this study, no study approval was required.
Results
A total of 33 articles, published within the last 25 years (January 1996-May 2020), met our inclusion criteria and are presented in Table 1. A summary of their characteristics and appraised evidence is shown in Table 2. The majority of the articles (87.9%) were original research, 9.1% were dissertations or theses, and 3% posters. Thirty-two (97%) of the studies were descriptive or mixed methods (level VI evidence), and one (3%) was an expert opinion (level VIII). Study populations included nurses (72.7%), physicians (12.1%), other healthcare professionals (30%), and health insurance employees (3%). Settings ranged from primary healthcare (57.6%), to acute care (51.5%), and other organizations (6.1%) or companies (3%). As shown in Table 3, a total of 18 countries were represented, with the majority originating from the UK. More than two thirds (69%) focused on high-income countries (HIC), 28% on middle-income countries (MIC), and only 3% on low-income countries (LIC).
In terms of TNA tool use, 4 (12.1%) studies used the original English version of the instrument, 23 (69.7%) adapted the English version, and 6 (18.2%) translated and culturally adapted the tool. Translation and cultural adaptation of the original TNA was carried out in the following languages/countries: a) Bahasa Indonesian, tested among community nurses and midwives in Indonesia [31][32][33]; b) Greek, tested among rural primary care nurses, midwives, and health visitors in Greece [45,46]. In addition, a modified version of the TNA was translated and culturally adapted into the following languages/countries: a) Kiswahili, tested among reproductive, maternal, and newborn healthcare workers in Mwanza, Tanzania [48]; b) Bulgarian, Polish, Italian, Albanian, and Romanian, tested among formal and informal caregivers across European Union countries and immigrant communities [50]. Furthermore, a thematic analysis revealed the following three levels in training needs analysis: a) individual level; b) interprofessional or team/unit level; and c) organizational level.
Individual training needs analysis
As listed in Table 1, a total of 17 studies centered on identifying or targeting needs at the individual level within a specific population of interest across a variety of settings [8, 12, 13, 25, 30, 32, 33, 37-39, 41-44, 49, 50, 53]. Nurses, physicians, midwives, and other healthcare professionals were studied in primary healthcare, acute care, healthcare organizations, and other businesses. All 17 articles applied the TNA tool to determine specific job roles and responsibilities within the targeted setting. Hence, an individual's perceptions about which tasks were most important for performing their jobs were assessed and captured in column A of the TNA tool. For example, Hicks and Tyler [38] used the tool to determine the required education for family planning nurses in the UK, analyzing the tasks these nurses performed, while also allowing them to indicate their training needs. By comparing the roles of the family planning nurse with the family planning nurse prescriber, the investigators determined the nuances that distinguished the two roles in two consecutive studies [38,53].
Eight articles determined job profile differences in a variety of geographical locations [12,13,30,32,33,38,50,53]. Four of these articles compared training needs across different countries or regions; within the UK, USA, and Australia [30], across Indonesia [32,33], and across five European countries [50]. For example, Pavlidis et al. [50] determined the differences in caregivers' perceived training needs across the UK, Greece, Bulgaria, Poland, and Italy. Four main training needs were reported as contributing to quality care improvement: a) basic nursing skills; b) specialization in specific conditions (such as diabetes, stroke, dementia); c) training in advanced health care systems; and d) training in psychology-related skills, such as time management, emotion regulation, and communication [50]. Targeting those skills was deemed to improve European caregivers' capacity in the health and social services sectors.
Existing knowledge gaps, and the most effective training methods to meet individual training needs, were identified across all articles. For example, a study carried out in Ireland determined medical doctors' training needs to inform professional development courses [43]. Workload/time organization and stress management were identified as the most pressing needs, while doctor-patient communication was ranked highest for importance and level of current performance. The TNA tool allowed for setting priorities that best met individuals' training needs, and helped managers and local governments allocate scarce budgets and resources to improve the quality of health care [43]. Similarly, a dissertation study of South African managers in public hospitals revealed a need for a combination of formal and informal training [49].
Team/Interprofessional training needs analysis
A total of 13 articles exhibited the team/IP training needs theme, encompassing more than one professional group or focusing on a team or unit as a whole [26,28,29,31,34,35,40,45,46,48,51,52,54]. Target populations included nurses, physicians, and other healthcare professionals across acute and primary care, as well as healthcare institutions. Twelve of the articles were descriptive, qualitative, or mixed methods studies (level VI evidence), whereas one was an opinion article (level VIII evidence) [34]. Articles under this theme utilized the TNA tool for a wide variety of purposes, including monitoring and optimizing quality of care. For instance, Singh and colleagues [52] utilized the TNA tool at a tertiary care hospital in India to evaluate nurses' training needs. They identified patient care, research capacity, managerial/administrative, and communication as the highest priorities [52]. Other investigators used the tool to determine perception of job roles by nurses and their managers. The TNA developers, Hicks and Hennessy [35], used the tool to define the newly established NP role in the UK, providing an operational definition and specific training needs to be targeted. The researchers surveyed all nurses working at advanced clinical levels within an acute sector of the National Health Service (NHS) trust. Their triangulation results indicated overall consensus between the nurses and their managers regarding both the definition of the NP role and the essential training requirements, with somewhat differing opinions from the medical staff [35]. Subsequently, implications for regulating educational provision for NPs in the UK emerged.
The TNA tool was also used to compare training needs and perceptions between different professional groups in the clinical setting, thus allowing for greater insight into role differences. One study [46] used the tool in Greek primary healthcare centers to determine the occupational profile differences between nurse graduates from 2-year programs and graduates from 3- or 4-year programs. Collected data were then used to determine training gaps to be targeted by future interventions. Determination of training needs at all levels allowed for budgetary analysis and resource allocation for optimum results. Hicks and Hennessy focused on improving the capacity of the NHS trust and its employees by identifying the training needs of practice nurses [34]. In this 1997 study, practice nurses considered communication and teamwork to be the most important aspects of their job, and there was overlap in the training needs identified by the nurses themselves and by their managers. In 2005, Hicks and Thomas [40] analyzed the training needs among professionals delivering community sexual health services and used the data to recommend additional courses within the allocated budget. Similarly, Mwansisya et al. [48] surveyed reproductive, maternal, and neonatal healthcare workers within eight districts in Tanzania and provided a baseline of training needs in a low-middle-income country. Another study, carried out at a School of Dentistry in a Sudanese university, showed that faculty and staff prioritized academic student supervision, data analysis, and effective presentation skills [54]. The survey also revealed knowledge deficits related to legislation and community engagement. The investigators concluded that effective targeting of these group training needs would require development of university-wide policies for training [54].
Organizational training needs analysis
Three articles focused on improving outcomes for an organization, such as a healthcare system, hospital, or business [27,36,47]. Targeted populations were nurses, physicians, other healthcare professionals, and business employees. Barratt and Fulop [27] applied the TNA tool to improve use of research, and participation in knowledge generation, across healthcare and public health organizations in London and Southeast England. In doing so, the investigators identified key tasks, priorities, and barriers to building research capacity, such as assessing the relevance of research and learning about new developments [27]. An earlier study by Hicks and Hennessy explored the issue of evidence-based clinical care within the context of diminishing resources in the British NHS [36]. A TNA survey from seven NHS trusts showed training needs and skill deficits common across localities and clinical areas. The authors concluded that targeting the real skill deficits of the workforce, as well as the personnel most in need of training, was essential for effective integration of evidence-based care within routine practice [36]. Last, Moty [47] focused on technology improvement at a contract research organization, using the TNA tool to incorporate user feedback into a portal system. By incorporating end users' input to optimize portal design, more positive opinions about portal technology could be solicited, and desire to use technology could be increased [47]. All three articles demonstrated the tool's versatility in addressing organizational training needs at a systems level through an integrated approach.
Discussion
This integrative review synthesized evidence about TNA tool utilization across the globe, and critically appraised its impact in CPD across various disciplines, settings, and countries. The tool proved to be modifiable for different purposes and contexts, without compromising its high validity and reliability. Its flexible design allowed it to be easily adapted to various populations, settings, and cultures while retaining its psychometric characteristics. Hence, the tool's value as an international instrument for analyzing training needs in the healthcare and education sectors became evident.
TNA tool utilization across the globe
Following initial development and testing, the TNA instrument was successfully used in the UK to identify individual training needs and trends related to demographics, with an emphasis on development of the NP role at the early stages [12,13,30]. Carlisle and colleagues [8] summarized the latent factor structures that affect the occupational profile construct of the TNA scale and examined them within the Australian context. The investigators confirmed the original five-factor model and suggested that the underlying dimensions relating to the occupational profile were perceived to be important for high performance by nurses in Australia [8]. Nevertheless, the majority of studies reflecting individual training needs struggled with low response rates and self-report bias. Therefore, investigators cautioned about conclusions drawn from data that relied on individuals' own perceptions of their learning needs. At the team/IP level, perception of training gaps and competences among nurses, midwives, physicians, and public health staff in settings ranging from hospitals to rural health facilities emerged throughout 10 countries.
Specifically, these settings spanned Singapore [26], Australia [28], Saint Lucia [29], Indonesia [31], the UK [34], Greece [45,46], Tanzania [48], South Africa [51], India [52], and Sudan [54]. These studies used the TNA tool for primary data collection, with one translation and validation into Bahasa Indonesia [31] and another into Greek [45]. Training needs of a group of interest were compared to those of other professional groups or team members to tailor CE offerings and optimize IP operations. At the organizational level, UK healthcare institutions and NHS trusts [27,36] as well as a US contract research organization [47] used the TNA instrument to improve research capacity/utilization, identify key barriers, and mitigate resistance to change. By establishing the relationship between organizational factors (hospitals) and demographic variables, individual occupational competency profiles, as well as team professional development, can be planned and executed by HR departments. Further synthesis of sampled participating countries, depicted in Fig. 2, revealed that two-thirds of the studies occurred in HICs and one-third in MICs, with only one study stemming from an LIC. Out of 11 studies that used translated TNA versions, three were from Indonesia (MIC), two from Greece (HIC), and one study involved six European countries (4 HICs, 2 MICs). Cross-country comparison by income classification and TNA theme allowed for examination of challenges or limitations in usage of the original, adapted, or translated TNA tool version, and how these were addressed by the investigators. According to Gaspard and Yang [29], how a healthcare professional determines which tasks are essential, and how they perceive their actual performance of those tasks, may be influenced by several factors, for instance motivation for continuous learning, a special interest in a particular task, a specific education deficit, and satisfaction or dissatisfaction with unit management.
This limitation is addressed by allowing for two ranking systems, in which employers also rank employees, to cross-check motivation and establish the need. For example, to determine NP training that would satisfy NHS trust aims, a full training needs analysis of nurses and their immediate supervisors was carried out [34]. The nurses completed the analysis with their own perceptions of training needs, while their managers completed it on behalf of the identified nurse. The resulting mutually agreed training program enhanced understanding of both parties' agendas and could be achieved with minimum conflict [8,33,34,46]. For Greek nurses in rural PHC settings, appropriate training activities, along with organizational changes, had potentially equal impact on short-term staff development and long-term strategic planning programs [46].
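The dual-rating logic described above can be sketched as a simple gap analysis. Note that the item names, rating values, and scoring rule below are illustrative assumptions for this sketch, not the scoring procedure prescribed by the TNA manual.

```python
# Illustrative gap analysis for dual TNA-style ratings (self vs. manager).
# Items, ratings, and the scoring rule are hypothetical examples; the
# Hennessy-Hicks manual defines its own scoring procedure.

def training_need(importance, performance):
    """Gap score: how far current performance falls short of importance."""
    return max(importance - performance, 0)

def prioritise(self_ratings, manager_ratings):
    """Average the self- and manager-derived gaps and rank items descending."""
    gaps = {}
    for item in self_ratings:
        self_gap = training_need(*self_ratings[item])
        mgr_gap = training_need(*manager_ratings[item])
        gaps[item] = (self_gap + mgr_gap) / 2
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ratings on a 1-7 scale: (importance, current performance)
self_view = {"communication": (7, 4), "research": (5, 2), "audit": (4, 4)}
manager_view = {"communication": (6, 5), "research": (6, 2), "audit": (5, 4)}

ranked = prioritise(self_view, manager_view)
# ranked[0] is the highest-priority training need
```

Cross-checking the two rating sources in this way surfaces items where both the employee and the employer agree performance lags behind importance, mirroring the triangulation approach the studies describe.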
Reported flaws or limitations of the original or adapted/translated TNA tool included: a) small-scale studies [8,26,37,40,42,45,46]; b) polling only one organization or specific unit [26,29,37,42,49]; c) not surveying employers or stakeholders [28,29,48,51]; d) low participation or item completion rates [34,38,45,46,53]; e) lack of consensus at a national level regarding training content [45]; and f) focus on individual and team assessment, rather than organizational [25,39]. Last, one study included only 10 out of the 30 original TNA items, with the rest being newly introduced [50], which exceeds the developers' threshold for psychometrics modification [14]. Furthermore, some investigators recommended further studies to explore the TNA's: a) applicability in a wider healthcare system, b) feasibility as a large-scale survey instrument in secondary and tertiary care settings, and c) usability for collaborative activities, especially through global information technology network teaching programs [34,45,48].
The main advantage of the TNA instrument is the accompanying detailed instruction manual, made available by the developers through the WHO Workforce Alliance website [14]. According to this manual, the standard 30-item questionnaire can be tailored to a particular study focus. Up to 8 of the original items can be changed or omitted, and up to 10 new items can be added. To this end, the developers have included an example of how to adjust the questionnaire ([14], pp. 21-25). The additional items are to be devised according to an accepted psychometric process for developing questionnaires. For example, a literature review, focus groups, and interviews with relevant personnel should be conducted, with the information distilled into themes using an approved data-reduction method, such as Thematic Network Analysis [55]. This provides the core areas from which tailored items are to be constructed. Coverage of the themes (and subsequent items) should be comprehensive and appropriate. These themes will form the basis for new items, which should be in a format similar to that of the standard questionnaire. Modified questionnaires should be piloted with a small sample. The manual includes additional item banks that have been used in other studies, grouped as follows: "Extended nursing role", "Nurse prescribing", "Specialist care", "Child abuse / child protection", and "Management" ([14], pp. 50-55). This item bank allows for easy tailoring to an investigator's aim and unique context.
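The manual's modification thresholds (change or omit up to 8 of the 30 original items; add up to 10 new ones) can be expressed as a small check. The helper function below is a hypothetical illustration of that rule, not part of the published manual.

```python
# Thresholds stated in the developers' manual [14] for tailoring the TNA.
ORIGINAL_ITEMS = 30
MAX_CHANGED_OR_OMITTED = 8
MAX_NEW_ITEMS = 10

def modification_within_limits(retained_original, new_items):
    """Return True if a tailored questionnaire respects the manual's thresholds.

    Hypothetical helper: `retained_original` is how many of the 30 original
    items were kept unchanged; `new_items` is how many items were added.
    """
    changed_or_omitted = ORIGINAL_ITEMS - retained_original
    return (changed_or_omitted <= MAX_CHANGED_OR_OMITTED
            and new_items <= MAX_NEW_ITEMS)

# A modification at the allowed boundary (22 retained, 10 added):
ok_case = modification_within_limits(retained_original=22, new_items=10)

# Retaining only 10 original items (as in the study criticized above)
# means 20 items were changed or omitted, exceeding the threshold:
over_case = modification_within_limits(retained_original=10, new_items=20)
```

Such a check makes the psychometric rationale concrete: beyond the stated limits, the tailored questionnaire can no longer be assumed to inherit the original instrument's validity and reliability.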
Global impact in continuous professional development
Regardless of country and profession surveyed, the tool consistently revealed the perceived and assumed training needs, clarified roles, and facilitated CPD at the individual or team level. Sampled literature revealed that there were no universal trends in training needs according to locus of practice, and that training requirements were specific to the actual role performed within an organizational environment. For example, upon surveying the roles and training needs of family planning nurses and breast care nurses in the UK, Hicks & Tyler [38] and Hicks & Fide [39] demonstrated that a more targeted, less costly training program would be optimum. Organizational factors were shown to determine the occupational profile and training needs of nurses in primary and secondary care across Australia, the UK, and the USA [8,30,36]. Similarly, most surveyed caregivers in five European countries declared a need for training in the psychosocial aspects of caregiving [50]. Yet, when dealing with stress, caregivers from Italy and Greece had lower needs than those from Poland, the UK, and Turkey. Despite the socio-economic differences among countries, all participants faced increasing demands from caregiver burden [50]. For employees in the Nigerian health insurance industry, addressing the gaps through an on-the-job training course was deemed to be the optimum approach [25]. For each country seeking to minimize economic impact and streamline processes, the TNA tool provided an affordable, standardized approach to prioritize and implement an effective CPD program.
Fig. 2 Literature heat map: TNA tool use by country income level. *Income country classification by World Bank [23]
Several other studies concurred on the urgent need for flexible and tailored CPD as an outcome of a comprehensive TNA analysis [26,33,40,44,46,52-54]. Especially in middle-income and low-income countries, such as Saint Lucia, South Africa, and Sudan, strengthening university-level education for nurses and other healthcare professionals was a key recommendation for evidence-based decision making [29,51,54]. In the case of Indonesia, the introduction of TNA in a series of studies carried out by Hennessy and colleagues [31-33] motivated several junior nurse researchers to pursue and study CPD, as reflected in multiple citations of the instrument. Given the prevailing budget constraints and limited accessibility to research funds, it was not surprising that training needs in research competences emerged as top priorities across studies conducted in MICs and LICs. In Turkey, building research capacity among public health professionals to tackle prevailing noncommunicable diseases was seen as a national priority [42]. Following identification of several individual, team, and organizational barriers, a comprehensive CPD plan for junior researchers and a QI plan for governmental institutions were recommended. This latter plan provides a roadmap for addressing the lack of coordination between institutions and researchers, establishing research monitoring and evaluation, and strengthening routine health information systems. Similarly, in Sudan, the top QI priority for university faculty of Dentistry was research, followed by leadership, health professions management, community engagement, and teaching skills [54]. As demonstrated in a UK study, raising the ability of NHS organizations to use research and generate knowledge was tied to improved services and population health [27]. The above findings support the argument for research capacity as an essential component of PHC nursing [56].
Moreover, findings show the critical need for operational integration of standardized QI processes across units or institutions in order to close the gap between theory and practice [57]. Hence, implementation of a regionally based CPD and QI program, stemming from TNA application, could be the solution to a healthcare system in need of reform.
Strengths and limitations
Integrative reviews allow for the combination of diverse methodologies (i.e., experimental and non-experimental research) in order to more fully understand a phenomenon of concern [58]. By combining data from the theoretical as well as empirical literature, this review could potentially impact evidence-based practice for nursing and other healthcare professions. The ongoing worldwide interest in the Hennessy-Hicks TNA instrument and manual has been a strong incentive for this review. In personal communication, the developers confirmed receipt of many email enquiries, estimated at around 100, from around the world asking for permission or clarification about using the tool and reporting outcomes. Unfortunately, there is no cumulative record of requests during the past 25 years. This lack of usage data and of a repository was further compounded by the inability to obtain webpage metrics (i.e., 'hits' and 'downloads'), as the hosting Global Health Workforce Alliance (GHWA) webpage is no longer maintained [14]. Therefore, no information was available on who has been using the tool or who is accessing the archived tool on the GHWA website.
This review was based on an extensive search of four major electronic databases and a targeted manual search of grey literature and cross-listed references. The selected databases (PubMed, CINAHL, Scopus, and Google Scholar) capture a variety of international journals in the nursing, social sciences, and biomedical literature, with relevant hand-searched theses and dissertations also included. All articles were independently reviewed and appraised using an adapted hierarchy scale for level of evidence [26,27]. The main limitation stems from the exclusion of any relevant literature published in languages other than English. According to the developers, the TNA has been translated into many other languages, including Arabic and Chinese, but there was scarce evidence published in the English literature. Variation in how appraisal criteria were applied, along with cross-country cultural and linguistic differences, is also acknowledged. Last, World Bank rankings [23] were based on the 2020 index rather than the year when each study was conducted. Because of the relatively small sample size (n = 33), the investigators combined the "low-middle income countries" and "upper-middle income countries" categories under "middle-income countries" (MIC).
Implications
The TNA instrument allows for triangulation of a) assessment (identifying and triaging needs); b) needs (the gap between what exists and what is required); and c) training (acquiring knowledge and skills or changing attitudes). Our literature review synthesized the reported use and value of the Hennessy-Hicks TNA tool across settings, populations, and countries, identified the enablers and barriers to its use, and distilled best practice CPD recommendations. Viewed from the KTA Framework perspective, our findings explain how the scholarship of discovery (TNA tool development, psychometrics of adapted or translated versions) leads to the scholarship of integration (translation and cultural adaptation of TNA), and ultimately to the scholarship of application (using different versions of the tool across various settings, populations, or countries). As knowledge moves through each stage, it becomes more synthesized and, therefore, useful to end-users. Hence, for healthcare professionals, CE should be based on the best available knowledge, the use of effective educational strategies, and planned action theories to understand and influence change in practice settings [17]. A recent metasynthesis of CPD literature highlights nurses' belief in CPD, as fundamental to professionalism and lifelong learning, and its importance in improving patient care standards [59]. Yet, it shows a disconnect between nurses' CPD needs and expectations and the organizations' approaches to professional development. The authors conclude that access to CPD should be made more attainable, realistic, and relevant [59]. By translating TNA evidence into action, health policy makers, administrators, and educators can effectively design appropriate, cost-effective CPD programs with clear priorities to achieve the desirable knowledge, skills, and practice, tailored to local needs. There is also a high level of fit between the adopted KTA Framework and the affiliated WHOCC's Terms of Reference.
Completion of this review coincided with the start of a project by the Registered Nurses Association of Ontario, aiming to develop a repository of measurement tools that can be mapped to the KTA Framework, and to report on their pragmatic and psychometric properties. Given that the TNA measurement tool has been identified as mappable to the KTA framework, the potential synergy between the two groups is promising. Moreover, our findings are aligned with WHO recommendations based on the "Framework for Action on Interprofessional Education and Collaborative Practice" [24]. These call for health policy makers to systematically address training needs of the healthcare workforce in order to strengthen IPE and collaborative practice. As tasked by the Pan American Health Organization (PAHO), the affiliated WHOCC aims to enhance the use and dissemination of knowledge resources that build capacity and leadership for nurse and midwife educators. Hence, lessons learned will be used to promote TNA tool application and integration for individual, team/ IPE, and organizational improvement across the PAHO region (North/Central/South America and the Caribbean). These steps are both timely and relevant for evaluating training and competency, and for regulating nursing practice in the Americas [15] during the post-pandemic era.
Conclusion
Since its development in 1996, the TNA instrument has been widely used as a clinical practice and educational quality improvement tool across continents. Translation, cultural adaptation, and psychometric testing within a variety of settings, populations, and countries consistently reveal training gaps along the individual, team/interprofessional, and organizational themes. It is not only applied to identify training needs and demographic trends, but also to prioritize targeted training strategies and CPD programs. Furthermore, it facilitates triaging and allocating limited educational resources, especially in low and middle-income countries. These findings underscore the tool's effectiveness in addressing the "know-do" gap in global human resources for health by translating knowledge into action.
Cellular and animal models of skin alterations in the autism-related ADNP syndrome
Mutations in ADNP have recently been associated with intellectual disability and autism spectrum disorder. However, the clinical features of patients with this syndrome are not fully identified, and no treatment currently exists for these patients. Here, we extended the ADNP syndrome phenotype, describing skin abnormalities in both a patient with ADNP syndrome and Adnp haploinsufficient mice. The patient displayed thin dermis, hyperkeratotic lesions in periarticular areas, and delayed wound healing. Patient-derived skin keratinocytes showed reduced proliferation and increased differentiation. Additionally, detection of cell cycle markers indicated that mutant cells exhibited impaired cell cycle progression. Treatment of ADNP-deficient keratinocytes with the ADNP-derived NAP peptide significantly reduced the expression of differentiation markers. Sonography and immunofluorescence staining of epidermal layers revealed that the dermis was thinner in the patient than in a healthy control. Adnp haploinsufficient mice (Adnp+/−) mimicked the human condition, showing reduced dermal thickness. Intranasal administration of NAP significantly increased dermal thickness and normalized the levels of cell cycle and differentiation markers. Our observations provide a novel activity of the autism-linked ADNP in the skin that may serve to define the clinical phenotype of patients with ADNP syndrome and provide an attractive therapeutic option for skin alterations in these patients.
Autism spectrum disorders (ASD) vary in presentation and severity, and a number of ASD genes have been identified with the introduction of exome sequencing 1,2 . Analysis of patients with intellectual disability and their parents can diagnose at least 16% of cases, mostly involving de novo mutations 3 . Recently, a number of patients with ASD, intellectual disability, and various shared clinical features caused by mutations in Activity-Dependent Neuroprotective Protein (ADNP) have been reported 4-6 . All mutations were heterozygous frameshift or nonsense changes located in the last exon of the gene and gave rise to premature stop codons. ADNP may participate in chromatin remodeling, transcription, and microtubule/autophagy regulation 7,8 . According to the Genotype-Tissue Expression (GTEx) database, the ADNP gene is expressed in a wide variety of tissues and cell types, including transformed lymphocytes and fibroblasts, primary and secondary sex organs, the central and peripheral nervous systems, and the thyroid, among others.
In mice, Adnp is essential for brain formation, and Adnp haploinsufficiency is associated with cognitive and social deficits as well as tauopathy 9 . Interestingly, administration of NAP, an 8-amino-acid neuroprotective peptide derived from the ADNP protein, ameliorated the short-term memory deficits in ApoE knockout mice, a model for Alzheimer disease 10,11 , and in Adnp +/− mice 12 . Although classified as a neurodevelopmental disorder, the ADNP syndrome, also called Helsmoortel-Van der Aa syndrome (MIM: 615873), presents with a plethora of clinical symptoms, including hypotonia, growth retardation, recurrent infections, and hyperlaxity 4 , which reveals the multisystem character of this disorder. Studies on rare diseases are usually hampered by a lack of cellular models to analyze the response to potential treatments or decipher the molecular mechanisms underlying these conditions. Primary cell models are typically limited to renewable and easily accessible cell types, such as immortalized lymphocytes, fibroblasts, or keratinocytes. As an example, primary fibroblasts from skin biopsies have been used to establish cellular models of neurodegenerative diseases, such as Friedreich ataxia 13 and Parkinson disease 14 . We now present novel data on the alterations of ADNP-deficient skin cells from a patient with ADNP syndrome, and reproduce the skin phenotypic changes in an Adnp +/− mouse model. Moreover, treatment of mice with NAP reverted the altered phenotype of the skin.
Materials and Methods
Phenotype of the patient. Our patient is an 11-year-old girl who was born at term after a normal gestation and delivery, to non-consanguineous healthy parents. In the neonatal period the patient presented marked irritability, and at very early stages she exhibited autistic-like behavior, marked gastrointestinal problems, psychomotor retardation, sleep disturbances, teeth grinding, delayed growth, and cognitive disabilities. Mild dysmorphic features characteristic of the ADNP syndrome 4 , including prominent forehead, high hairline, notch of the eyelid, broad nasal bridge, and thin upper lip, were also present. During early infancy, gastrointestinal problems were mostly associated with recurrent intestinal parasitosis (Giardia lamblia) and candidiasis. No infections in other systems were ever detected. The patient was treated with steroids and antibiotics with an apparently good outcome. Different transient endocrine disorders, such as hypothyroidism with high TSH, hypocortisolemia, and hypoglycemia, were also identified when gastrointestinal disorders were present. She was initially diagnosed with a non-progressive neurodevelopmental disorder that resembled a mild form of Rett-like syndrome with mental retardation, learning disabilities, and autism.
Sonography. Thickness of the dermis at the volar forearm was determined by high-resolution ultrasound visualization with high-frequency probes (18-22 MHz). A standard echographic gel was used as the coupling medium between the transducer and the patient's skin. Minimal pressure was applied to preserve the thickness and echogenicity of the dermis.
Primary cell cultures. Primary skin cells were isolated from a forearm-skin biopsy. This study was approved by the ethics committee of Valdecilla University Hospital (Santander, Spain), and informed written consent from the parents of the child was obtained. All research was performed in accordance with relevant guidelines and regulations. Keratinocytes were cultured in the presence of a feeder layer of mitomycin C-inactivated J2-3T3 fibroblasts in Ham's F12 medium/Dulbecco's modified Eagle medium (DMEM) (Invitrogen, Carlsbad, CA), supplemented with 10% fetal bovine serum, 1.8 × 10⁻⁴ M adenine, 0.5 µg/ml hydrocortisone, 5 µg/ml insulin, 10⁻¹⁰ M cholera enterotoxin, and 10 ng/ml epidermal growth factor. Primary fibroblasts were cultured in DMEM supplemented with 10% fetal bovine serum and 0.5% L-glutamine. Primary keratinocytes and fibroblasts were treated with 100 nM or 600 nM NAP peptide (amino acid sequence, NAPVSIPQ) (GenScript, Piscataway, NJ) for different times as indicated. Cells at low passages (between 1 and 5) were used for all experiments.
For the wound-healing assay, fibroblasts were grown to confluency in 24-well plates and the monolayer of cells was scratched with a needle. The percentage of scratch closure was determined following 24 h of culture.
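The percentage of scratch closure is conventionally computed from the wound area at 0 h and 24 h. The formula below is the standard calculation; the area values shown are illustrative, not data from the study.

```python
def percent_closure(area_t0, area_t24):
    """Scratch closure (%) from wound areas measured at 0 h and 24 h."""
    if area_t0 <= 0:
        raise ValueError("initial wound area must be positive")
    return 100.0 * (area_t0 - area_t24) / area_t0

# Hypothetical wound areas (arbitrary units) from image analysis:
closure = percent_closure(area_t0=1.00, area_t24=0.35)  # 65.0% closed
```

Expressing closure relative to the initial wound area makes the readout comparable across wells even when the needle scratch varies slightly in width.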
About 10 days later, cell cultures were fixed with 3.8% formaldehyde in PBS for 10 minutes and stained with rhodanile blue as described previously 15 . Cell density was quantitated by measuring absorbance at 400-700 nm with a transmission optical microscope (Nikon, Tokyo, Japan) connected to a spectrograph (Andor, Belfast, UK). Data were obtained from three independent assays.
Proliferation assays. Keratinocytes (70,000 cells/well) were plated in 6-well plates and cultured in the medium described above, and proliferation was quantitated at different time points by cell counting with a Neubauer chamber. To assess fibroblast proliferation, 2,500 cells per well were seeded on conductive microtiter plates (E-plate 16) and monitored for 120 hours using an xCELLigence DP instrument (ACEA Biosciences, San Diego, CA).
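Neubauer-chamber counts are converted to a cell concentration with the standard hemocytometer formula: each large 1 mm × 1 mm square under the coverslip holds 0.1 µL, so the mean count per square is multiplied by 10⁴ (and any dilution factor) to obtain cells/mL. The count and dilution below are illustrative.

```python
def cells_per_ml(mean_count_per_square, dilution_factor=1):
    """Standard hemocytometer conversion: one large square = 0.1 uL = 1e-4 mL,
    so concentration (cells/mL) = mean count * dilution * 1e4."""
    return mean_count_per_square * dilution_factor * 1e4

# Hypothetical example: a mean of 70 cells per large square at a 1:2 dilution
concentration = cells_per_ml(70, dilution_factor=2)  # 1.4e6 cells/mL
```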
Flow cytometry. Keratinocytes were harvested, fixed with 3.7% paraformaldehyde or with cold 70% ethanol, and stained for involucrin or keratin 1. Isotype IgG was used as a negative control. Alexa Fluor-labeled secondary antibodies were then used, and labelled cells were analysed by flow cytometry (FACSCanto, BD Biosciences, Franklin Lakes, NJ) using the FACSDiva software (BD Biosciences).
Whole exome sequencing. Whole blood was obtained from the patient and her parents, and genomic DNA was extracted from mononuclear cells using the QIAamp DNA Blood kit (Qiagen, Hamburg, Germany). Whole exomes were sequenced using a HiSeq 2000 sequencer (Illumina, CA, USA). Sequencing reads were aligned against the human reference genome (hg19) using BWA with the default parameters. Several tools (SAMtools, GATK, Picard) for manipulating alignments, including sorting, merging, and indexing the BAM files, were used. Single nucleotide variant and indel calling was performed using the GATK UnifiedGenotyper. Variants were annotated using snpEff, and association studies were performed using Plink software. To confirm the mutation in cellular models, we used an allele-specific PCR from genomic DNA with two allele-discriminating forward primers, 5′ CACCTGTGAAGCGCACTTAC 3′ (for the wild-type allele) and 5′ CACCTGTGAAGCGCACTTAA 3′ (for the mutant allele), and a common reverse primer, 5′ GGGATAGGGCTGTTTGTTGAA 3′. Fragments (206 bp) were resolved by agarose gel electrophoresis.
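The allele-specific PCR works because the two forward primers differ only at their 3′-terminal base (C for wild type, A for the mutant), so polymerase extension proceeds efficiently only when that base matches the template. The check below uses the primer sequences given above; the helper function itself is illustrative.

```python
# Primer sequences as reported in the methods above.
WT_FWD     = "CACCTGTGAAGCGCACTTAC"   # wild-type allele-discriminating primer
MUT_FWD    = "CACCTGTGAAGCGCACTTAA"   # mutant allele-discriminating primer
COMMON_REV = "GGGATAGGGCTGTTTGTTGAA"  # shared reverse primer

def mismatch_positions(a, b):
    """Positions at which two equal-length sequences differ (0-based)."""
    assert len(a) == len(b), "sequences must be the same length"
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

# The two forward primers are identical except for the 3'-terminal base,
# which is what confers allele specificity:
diffs = mismatch_positions(WT_FWD, MUT_FWD)
```

Because the single mismatch sits at the very 3′ end of a 20-mer, a mismatched primer-template pair cannot be extended, yielding the 206 bp product only from the matching allele.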
Mouse model. All procedures involving animals were approved by the Animal Care and Use Committee of Tel Aviv University and the Israeli Ministry of Health. All experiments were performed in accordance with relevant guidelines and regulations. Two-month-old Adnp heterozygous and littermate control mice 12 , outbred with ICR 16 , were housed in a 12-h light/12-h dark cycle facility with free access to rodent chow and water. For intranasal administration, NAP peptide was dissolved in a vehicle solution and administered as described 12,17 . Nasal NAP application (0.5 μg NAP in 5 μl vehicle solution) was performed once daily, 5 days a week, for 6 weeks. Vehicle-treated mice were maintained until the age of 4.5 months. Primary fibroblasts were prepared from tail tips of 4.5-month-old Adnp +/− (n = 3) and Adnp +/+ (n = 3) female mice. Tail tips were incubated in a solution containing 0.1% collagenase at 37 °C for 2 h following standard procedures 18 . Isolated fibroblasts were plated at 5,000 cells/well on 96-well plates in triplicate. When indicated, 180 nM NAP was added to fibroblast cultures. For tissue analyses, skins were frozen and embedded in optimal cutting temperature compound as described 19 .
Characterization of skin cells from a patient with ADNP syndrome.
The clinical phenotype of the ADNP syndrome is not fully characterized. This syndrome is a neurodevelopmental disorder, and it has also been shown that children with ADNP syndrome exhibit early primary tooth eruption 5 . The nervous system and tooth enamel, along with the epidermis, have a common embryonic origin, the ectoderm. Thus, we aimed to study the skin of a patient diagnosed with ADNP syndrome in our hospital. This tissue has the advantage that a biopsy can be easily performed and skin cells can be obtained and cultured to generate a cellular model of the disease. The patient carried a causal heterozygous truncating ADNP mutation, p.Tyr719* (Fig. 1a), identified in 23 other children with ADNP syndrome among more than 100 cases diagnosed worldwide 6,20 . ADNP protein is mainly localized in the nucleus by means of its nuclear localization signal. However, the truncated protein shows a cytoplasmic distribution, likely due to a partially deleted nuclear localization signal 21 . Thus, the p.Tyr719* variant lacks the nuclear activities of ADNP. A sonographic study revealed that the dermis of the patient was thinner compared with healthy controls (Fig. 1b). The dermis is the major contributor to the variation in skin thickness and is mainly determined by its collagen content 22 . Dermal thickness in the ventral forearm of the patient was 0.9 mm, whereas the skin of 8 controls of the same sex, weight and age had a thickness of 1.2 ± 0.1 mm, which is in the range of normal values as previously described 23 . Additionally, the epidermal layers beneath the stratum corneum were thinner in our patient than in a normal control. However, the fully differentiated stratum corneum of the epidermis was about 2-fold thicker in the patient than in the control (Fig. 1c-e), as determined by tissue staining with hematoxylin and eosin and immunolabeling of KRT10, which is expressed in all suprabasal cell layers.
This result is consistent with the hyperkeratotic lesions observed in the skin of periarticular areas (Fig. 1f). Although all sections from the patient's skin were processed in parallel with controls, we cannot rule out an expansion of the stratum corneum thickness due to non-intrinsic factors. During the course of this study, we obtained ultrasound data of the skin of another patient, a six-year-old boy carrying a novel truncating mutation (c.138_139del, p.Phe46Leufs*52). Sonography revealed that dermal thickness was 0.73 mm, confirming the previous result and extending this skin alteration to other mutations in the ADNP gene. This second mutation generates a short fragment at the N-terminal end of the protein that, according to recently published data, should not be expressed, because fragments up to residue 447 show little or no expression due to degradation by the proteasome 21 . A survey among 23 ADNP families was conducted through the ADNP parent support group on Facebook 5 . Nine children presented the p.Tyr719* mutation, and in seven of these cases parents reported skin alterations including delicate skin, poor wound healing compared with their healthy siblings, eczema and rashes. Assessment of whether the skin phenotype reported here is specific for particular mutations will require clinical and echographic evaluation of the skin in more patients and in vitro skin studies whenever possible. Both keratinocytes and fibroblasts were obtained from a forearm skin biopsy of the patient and a healthy control of the same age and maintained in culture for a limited number of passages (usually fewer than 5). These skin cells, carrying the ADNP mutation (Figs 2a and S6), displayed reduced proliferation compared to wild type cells (Fig. 2b,c). Mutant keratinocytes represented a high-scatter population of larger (forward scatter) and more complex (side scatter) cells as determined by flow cytometry analysis ( Supplementary Fig. 
S1a,b) and contained larger nuclei (Supplementary Fig. S1c). These features are consistent with terminal differentiation of keratinocytes 24 . Mutant keratinocyte colonies contained rounded-up cells with larger intercellular spaces, suggesting reduced cell-cell adhesion (Fig. 3a). Mutant fibroblasts were less fusiform and displayed shorter and more disordered filopodia than their control counterparts (Fig. 3a). Clonogenic assays further confirmed that cell proliferation was notably impaired in keratinocytes carrying mutant ADNP (Fig. 3b,c). Reduced proliferation was accompanied by increased differentiation, as assessed by quantitating the proportion of keratinocytes that expressed the epidermal differentiation markers Keratin 1 (KRT1) and Involucrin (IVL) (Fig. 3d,e). To further analyze the effect of mutant ADNP on proliferation, we studied the expression of cell cycle regulatory proteins in the patient's epidermal tissue. Cyclin A (CCNA), which peaks at G2 phase and is abruptly destroyed at the beginning of mitosis, accumulates following G2 arrest. Interestingly, Cyclin A accumulated in the peribasal epidermal layer of the patient, lining the suprabasal compartment containing Keratin 10-positive differentiated cells (Fig. 4a,b). In addition, phospho-histone H3 marked highly condensed chromatin, corresponding to metaphase chromosomes, in basal cells of control skin, whereas it associated with open chromatin (interphase chromosomes) in the patient (Fig. 4c,d). In contrast, cyclin E (CCNE), needed for suprabasal keratinocyte growth 25 , was barely detected in the patient's suprabasal epidermis ( Supplementary Fig. S2a). Cycling cells in the epidermal basal layer, frequent in normal controls, were also drastically reduced in the patient, as determined by labeling with the proliferation marker Ki67 ( Supplementary Fig. S2b). These data are consistent with a growth alteration due to G2 or mitosis defects.
In vitro response of skin cells to neuroprotectant peptide NAP. NAP (NAPVSIPQ), the shortest active peptide of ADNP, has been shown to increase ADNP activity 26 . Treatment of keratinocytes with 600 nM NAP showed a reproducible trend toward increased proliferation of mutant cells, but it did not reach statistical significance (Fig. 5a). However, NAP treatment reduced the number of mutant and also wild-type cells expressing the differentiation markers KRT10, KRT1 and IVL, as assessed by flow cytometry (Fig. 5b-d) and immunofluorescence staining (Supplementary Fig. S3). Similar data were obtained when mutant fibroblasts were exposed to NAP, showing a small but significant (p = 0.005) increase in cell proliferation in a time-dependent manner, whereas wild type cells did not show any response to the peptide (Fig. 5e). NAP peptide specifically binds tubulin and stimulates microtubule assembly 27 . This interaction might affect processes where tubulins play key roles, including proliferation and differentiation. We immunostained both wild type and mutant keratinocytes with antibodies against αTubulin and consistently observed a decrease in αTubulin protein levels in mutant cells that was recovered at least in part by adding NAP (Supplementary Fig. S4), suggesting that NAP might have a tubulin-stabilizing role. Migration is a required step for differentiating keratinocytes to reach the upper layers of the skin, and it requires remodeling of both cell-cell and cell-substrate contacts to allow cell detachment from the basement membrane. Mutant keratinocytes showed a higher capacity to detach from the substrate after 2 days of culture compared with wild type cells (p < 0.001). However, exposure to NAP drastically reduced the number of detached mutant cells to levels similar to those of wild type keratinocytes (Fig. 5f,g). Keratinocytes depend on cell adhesion to maintain their proliferation 15 . Thus, our data suggest a cell adhesion defect caused by a deficiency of ADNP. 
In order to analyze the reduced healing capacity of ADNP mutant carriers and the effect of NAP, we performed a wound healing assay, widely used to mimic cell migration during wound healing in vivo 28 . We observed reduced migration of mutant fibroblasts compared with wild type cells (Fig. 5h). Moreover, exposure of both mutant and wild type cells to NAP significantly increased the repopulation of the scratch area.
Effects of NAP on the skin of an Adnp-deficient mouse model. Dermal skin fibroblasts were isolated from transgenic mice carrying only one functional copy of the Adnp gene (Adnp +/− ) 12 . Adnp +/− cells exhibited significantly decreased proliferative activity compared with wild type (Adnp +/+ ) fibroblasts (Fig. 6a). Although there was a tendency toward increased proliferation of Adnp +/− cells following exposure to NAP, it did not reach the threshold for statistical significance. The effect of NAP on the skin of transgenic mice was also studied following intranasal administration of the peptide. The epidermis was thinner in Adnp +/− mice compared with Adnp +/+ controls, and treatment with NAP significantly increased dermal thickness (Fig. 6b). Immunostaining of skin samples revealed that Ccna1 protein levels were significantly higher in Adnp +/− mice compared to controls and were fully normalized after NAP administration (Fig. 6c,d). Similar to the pattern observed in the patient, Ccna1 was expressed in the peribasal layer, lining the suprabasal area (Fig. 6c). Also consistent with the patient's skin data, Keratin 10, localized in the suprabasal compartment, showed increased levels in Adnp +/− mice, which were normalized following treatment with NAP ( Supplementary Fig. S5a,b).
Discussion
We have described the biological alterations of skin cells in a patient with ADNP syndrome. On the basis of the current knowledge of this syndrome, epidermal defects have not been included among the symptoms caused by ADNP gene mutations. However, when parents were consulted by means of a survey of ADNP families through the ADNP parent support group on Facebook 5 , seven out of nine children carrying the p.Tyr719* mutation presented, as reported by their parents, skin alterations, mainly delicate skin, poor wound healing, eczema and rashes. The cellular alterations found are focused on keratinocytes, the most prevalent cell type in the epidermis, and fibroblasts, the predominant cell type in the dermis. It has been described that ADNP is essential for brain formation 29 , and mutations in ADNP, similar to the one described here, are associated with delayed brain development 6 .
In line with this, ADNP mutations also resulted in thinner tooth enamel 5 . Interestingly, we have shown that a truncating mutation in ADNP resulted in thinner skin. It is noteworthy that brain, tooth enamel and epidermis have a common embryonic origin, as they all derive from the ectoderm. In vitro culture of mutant keratinocytes and fibroblasts revealed reduced proliferative activity compared with cells from healthy donors. Fibroblast proliferation and keratinocyte differentiation are among the key steps during wound healing 30 . The reduced proliferation may underlie the thinner skin observed in the patient by ultrasound imaging, and the increased differentiation potential of keratinocytes may explain the hyperkeratotic lesions in the skin of the patient as a result of a thicker stratum corneum. ADNP protein contains a neuroprotective peptide, NAP, that increases ADNP's activity at the cellular level 26 and reverts abnormal behavior in Adnp haploinsufficient mice 12 . In vitro treatment of keratinocytes or fibroblasts with NAP partially reverted their proliferation and differentiation deficiencies. Interestingly, while we did not observe a significant proliferative response to NAP in mouse fibroblasts (Adnp +/− ), there was a small but significant effect on the patient's fibroblasts, suggesting that the response may be mutation-specific. Alternatively, intrinsic biological differences between mouse and human fibroblasts may account for the differential response to NAP. In line with this, a different cellular response to genotoxic agents between mouse and human fibroblasts has been described 31 . Cell attachment, which is associated with wound healing, was impaired in ADNP-mutated keratinocytes and partially corrected by NAP treatment. Consistent with this result, NAP has been shown to antagonize ethanol inhibition of cell adhesion in different cell lines 32 . Previous data suggested that NAP was neuroprotective by controlling microtubule dynamics 26 , promoting neurite outgrowth 33 . 
It is likely that a similar mechanism could increase proliferation of skin cells, as reorganization of the microtubule network for chromosomal segregation is required for cell division. Failure to separate the chromosomes can trigger a G2/M phase arrest, blocking entry into mitosis. In line with this, we have demonstrated that cyclin A, which is upregulated at G2 and destroyed before mitosis, accumulates in the patient's epidermis, whereas cyclin E, which appears at G1 to regulate the G1/S phase transition, is barely detected. These data suggest that deficient microtubule reorganization in epithelial cells carrying mutant ADNP may promote blockade at the G2/M phase of the cell cycle, preventing cells from progressing through mitosis. Interestingly, a prolonged mitotic blockade triggers keratinocyte differentiation 25 , which might explain the increased differentiation of mutant keratinocytes. A mechanistic explanation for these NAP activities may reside in the interaction of NAP with microtubule-associated proteins. It has been described that the SIP motif in the NAP peptide (NAPVSIPQ) interacts with microtubule end-binding proteins such as the EB1 protein family 26 . EB1 interacts with other microtubule-associated proteins to regulate Golgi dynamics and vesicle transport in different cell types including epithelial cells 34 , and it is also important for positioning the mitotic spindle 35 . Thus, NAP may have both nuclear and cytoplasmic activities through microtubule dynamics to facilitate cell division and cell migration in heterozygous carriers of a functionally defective ADNP variant. In line with this, we showed reduced protein levels of αTubulin in mutant cells that were partially recovered with NAP. A likely explanation is that part of the αTubulin may remain as monomers in ADNP-deficient cells and be degraded by the proteasome. 
Degradation of monomeric tubulin has been previously described in different cell types due to misfolded proteins or destabilization of microtubules 27,36 . Consistently, NAP stimulates microtubule assembly in vitro 37 . Our in vitro data were reproduced in an Adnp heterozygous mouse model 12 . In Adnp +/− mice, dermal thickness increased following intranasal administration of NAP, and this was accompanied by normalization of the levels of cell cycle and differentiation markers. The biological activity of NAP in this mouse model has been described, and data showed that NAP treatment partially ameliorated cognitive deficits 12 . These in vivo results have fostered a number of clinical trials in tauopathies and schizophrenia 9 . Recently, it has been described that premature primary tooth eruption in children with ADNP syndrome may enable early and simple diagnosis 5 . Our data pave the way to considering skin alterations as another feature of the ADNP syndrome that can be easily identified by sonography and could be a potential simple surrogate marker for future clinical trials. In summary, we have shown that skin cells with an ADNP mutation exhibit reduced proliferation and increased terminal differentiation, which may lead to a thinner epidermis and a delay in wound healing, and to a thickening of the cornified layer that may cause hyperkeratotic lesions.
miR-21a-5p Contributes to Porcine Hemagglutinating Encephalomyelitis Virus Proliferation via Targeting CASK-Interactive Protein1 In vivo and vitro
Porcine hemagglutinating encephalomyelitis virus (PHEV) is a highly neurovirulent coronavirus that causes nervous symptoms in piglets, including muscle tremors, hind limb paralysis, and nystagmus. The factors that affect virus replication and proliferation during the nerve damage caused by PHEV infection are not fully understood. In recent years, some reports have suggested that miRNAs might play a key regulatory role in viral infection. In this study, we found that miR-21a-5p is notably up-regulated in the brains of mice and in N2a cells infected with PHEV, and that it down-regulates the expression of CASK-interactive protein 1 (Caskin1) by directly targeting the 3′-UTR of Caskin1, as shown by a Dual-Luciferase reporter assay. Over-expression of miR-21a-5p or Caskin1 knockdown in the host significantly contributes to PHEV proliferation. Conversely, silencing of miR-21a-5p with miR-21a-5p inhibitors suppressed virus proliferation. Taken together, our results indicate that Caskin1 is a direct target gene of miR-21a-5p, and that its down-regulation is advantageous to virus proliferation. These findings may help in the development of strategies for therapeutic applications.
INTRODUCTION
Porcine hemagglutinating encephalomyelitis is an acute and highly contagious disease in pigs, mainly affecting piglets within 3 weeks of age, causing vomiting and wasting disease as well as obvious neurological symptoms. No effective prevention or treatment measures are currently available for this disease (Chen et al., 2011; Gao et al., 2011; Li Z. et al., 2016). The mortality rate ranges from 20 to 100% (Lan et al., 2012, 2013). The disease is caused by a member of the Coronaviridae family known as porcine hemagglutinating encephalomyelitis virus (PHEV) (Dong et al., 2014); it is an enveloped virus containing a non-segmented, single-stranded, positive-sense RNA genome of approximately 30 kb. Pigs are the natural host of PHEV, but the virus has been adapted to replicate in mice and in mouse neuroblastoma N2a cells (Chen et al., 2011). PHEV is a highly neurovirulent virus that spreads to the central nervous system via peripheral nerves (Dong et al., 2015), but the mechanism by which it induces nerve injury is unclear. It is of great scientific interest to study the pathogenesis of PHEV from the point of view of virus-host protein interactions for the development of new antiviral drugs and treatment programs. microRNAs (miRNAs) are non-coding ssRNAs that are 19-25 nt in length and post-transcriptionally regulate the expression of multiple genes by binding the 3′-untranslated region (UTR) of their target messenger RNAs, and they have thus become crucial regulators in complex gene regulatory networks (He and Hannon, 2004; Varnholt, 2008; Chi et al., 2009; Mallick et al., 2009). Accumulating evidence indicates that miRNAs play an important role in infection by coronaviruses and neurovirulent viruses (Hasan et al., 2014; Lai et al., 2014; Song et al., 2015; Zhu et al., 2015; Piedade and Azevedo-Pereira, 2016). For example, during the SARS coronavirus infection process, miR-17*, miR-574-5p, and miR-214 were up-regulated, while miR-98 and miR-223 were down-regulated. 
Among these miRNAs, miR-17* and miR-574-5p inhibited the replication of SARS coronavirus, whereas miR-214 contributed to immune escape of the bronchial alveolar stem cells (BASC) (Mallick et al., 2009). miR-15b modulates the inflammatory response during JEV infection by negatively regulating RNF125 expression (Zhu et al., 2015). Our previous research revealed, by DNA microarray analysis, that miR-21a-5p, which is highly homologous with miR-21, was significantly increased in the process of PHEV infection (data not published).
miR-21 is a multifaceted microRNA regulating the expression of target genes involved in several cellular programs, such as cell proliferation, migration, invasion, and metastasis (Krichevsky and Gabriely, 2009; Zhang et al., 2013). The regulatory role of miR-21 in the process of viral infection has been confirmed by a number of studies, and it can be used as a target for the treatment of viral diseases. For example, in the murine coxsackievirus B3 (CVB3)-induced myocarditis model, the expression of miR-21 was significantly reduced. The recovery of miR-21 expression significantly relieved CVB3-induced myocarditis, as shown by an increased body weight, reduced myocardial injury, a lowered myocarditis score and an increased survival rate. Further study showed that miR-21 protects against myocardial apoptosis by specifically inhibiting the expression of its target, programmed cell death 4 (PDCD4). These data proved that miR-21 might be a novel target for the treatment of CVB3 infection and other apoptosis-mediated cardiovascular diseases (He et al., 2013).
In this study, we sought to investigate the regulatory role of miR-21 in PHEV proliferation and provide theoretical basis for the development of a new therapeutic regimen for PHEV infection.
Cells, Virus, and Mice
Mouse neuroblastoma N2a cells (N2a) and a human cervical carcinoma cell line (HeLa) were obtained from Professor Xia (Military Veterinary Institute, Academy of Military Medical Sciences, Changchun, China). N2a cells and HeLa cells were maintained in Dulbecco's Modified Eagle's medium (DMEM) (Gibco, USA) containing 10% fetal calf serum, 1% streptomycin and 1% penicillin, and were incubated at 37 °C in a humidified chamber supplemented with 5% CO2. The PHEV strain HEV 67N (GenBank: AY048917) was propagated in N2a cells. BALB/c mice (3 weeks old) were obtained from the Laboratory Animal Centre, Jilin University.
The Choice of Housekeeping Genes
Generally, U6 and GAPDH are expressed at relatively constant levels in normal and pathological conditions. These genes may be used as housekeeping genes in brain damage (Chi et al., 2014; Alarcon et al., 2015; Li G. et al., 2016; Shen et al., 2016). There were no significant differences in the expression of U6 and GAPDH in the gene expression patterns of the cerebral cortex of mice infected with PHEV detected using microarray in our previous study, so we used them as internal reference genes for the relative quantification of other genes in this study.
RT-PCR for miRNA and mRNA Expression
miRNA-enriched total RNA was extracted from N2a cells, HeLa cells and brain tissues of mice infected with PHEV using a miRNApure Mini Kit (cwbio, China). To analyze Caskin1 (GenBank: NM_027937.2) mRNA expression, RNA was extracted using Trizol (tissue: 50-100 mg tissue/ml; cells: 10 cm³/ml). The concentration of RNA was measured by spectrophotometer (Thermo Scientific). RNA was reverse transcribed into cDNA using a reverse transcription kit (Takara, Japan). The quantification of miRNAs was performed using the Bulge-Loop TM miRNA qRT-PCR Primer Set (RiboBio, China). miR-21a-5p and Caskin1 expression was determined by RT-PCR using a SYBR Green Master Mix kit as described previously (Shen et al., 2016). Relative expression was analyzed using the 2^−ΔΔCt method. U6 and GAPDH were used for normalization of miR-21a-5p and Caskin1 expression, respectively (Shen et al., 2016). The cycle conditions and the PCR system were set according to the manufacturer's protocol. U6 and mmu-miR-21a-5p primers were purchased from RiboBio. The primers for Caskin1 and GAPDH were designed as follows: mouse Caskin1 sense primer, 5′-GTGGGTCGGAGCCATTCA-3′; anti-sense primer, 5′-GCCGAGCTGGAGCGTTT-3′; mouse GAPDH sense primer, 5′-CTCAACTACATGGTCTACATGTTC-3′; anti-sense primer, 5′-ATTTGATGTTAGTGGGGTCTCGCTC-3′; HEV sense primer, 5′-AGCGATGAGGCTATTCCGACTA-3′; and anti-sense primer, 5′-TTGCCAGAATTGGCTCTACTACG-3′. The PCR reaction volume was 20 µL and the reaction conditions were: pre-denaturation at 95 °C for 3 min, then denaturation at 95 °C for 30 s, annealing at 60 °C for 30 s, and extension at 72 °C for 30 s, for a total of 40 cycles. The amplification efficiency of the PCR was verified (Supplementary Data Sheet 1).
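The 2^−ΔΔCt relative quantification used above first normalizes the target Ct to the reference gene (U6 for miR-21a-5p, GAPDH for Caskin1), then to the control condition. A minimal sketch of the calculation (the Ct values are made-up illustrations, not data from the study):

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: Ct values are first
    normalized to the reference gene (dCt), then to the control condition
    (ddCt); fold change = 2 ** -ddCt."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** -dd_ct

# Illustrative Ct values (not from the paper): target vs U6 reference in
# infected vs mock-infected cells; ddCt = -3 gives an 8-fold up-regulation.
fc = fold_change(22.0, 18.0, 25.0, 18.0)
```

Note that a lower Ct means more template, which is why the exponent carries a negative sign.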
Cell Transfection
HeLa cells were plated in six-well plates at a density of 3 × 10 5 cells/well in DMEM containing 2% fetal bovine serum and were grown overnight. X-tremeGENE HP DNA Transfection Reagent (Roche, Sweden) was used to co-transfect HeLa cells with 50 nM miR-21a-5p mimic or 100 nM inhibitor, or their respective non-targeting negative control oligonucleotides (RiboBio), and 2 µg of Caskin1-WT or Caskin1-MUT. The empty plasmid pmirGLO group was used as the negative control, and non-transfected HeLa cells were used as the blank control. After 48 h of transfection, luciferase activity was detected using a dual luciferase reporter assay system (Promega). Renilla luciferase activity was used for normalization. N2a cells (3 × 10 5 cells per well) were seeded into six-well culture plates, incubated overnight and transfected with 50 nM of the miR-21a-5p mimics, 100 nM of the miR-21a-5p inhibitor, the Caskin1 siRNAs or the siRNA NC using X-tremeGENE HP DNA Transfection Reagent (Roche). Their respective non-targeting negative control oligonucleotides and a scrambled siRNA (siNC) were used as the negative controls. The cells were inoculated with virus 12 h after the transfection. All the transfection experiments were repeated at least three times.
Western Blotting Analysis
The cells in 6-well plates or brain tissues were washed once with phosphate-buffered saline (PBS), followed by lysis using Radio Immunoprecipitation Assay (RIPA) Lysis Buffer with a phenylmethanesulfonyl fluoride protease inhibitor (Beyotime) on ice for 30 min. The protein concentration was determined with the BCA Protein Assay kit (Pierce). The protein samples (50 µg/lane) were separated on 10% polyacrylamide gels and transferred to 0.22 µm polyvinylidene fluoride membranes using the Bio-Rad wet transfer system. After blocking overnight at 4 °C with 5% non-fat dry milk in PBS, the membranes were probed with antibodies against Caskin1 (Synaptic Systems, Göttingen, 1:2000), β-actin (Proteintech, USA, 1:2000) and PHEV (a laboratory-prepared polyclonal antibody to PHEV, 1:500) overnight at 4 °C. Next, the membranes were washed four times with PBS containing Tween-20 (PBST) and incubated with horseradish peroxidase-linked secondary anti-rabbit or anti-mouse IgG antibodies (Proteintech) for 1 h at 37 °C. After washing with PBST, the signal was visualized using an ECL detection kit (Proteintech). β-actin was used as a loading control.
TCID 50 Analysis
To determine the TCID 50 of the virus cultures collected at different passages, the cell culture supernatants were serially diluted from 10 −1 to 10 −8 , and 100 µL of each diluted virus was inoculated onto the N2a cells in each well of 96-well culture plates, with eight wells per dilution. The plates were incubated for 3 days at 37 °C in 5% CO2 and were scored for a cytopathic effect. The infectious titer was calculated by the Reed and Muench method (Biacchesi et al., 2005).
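The Reed and Muench endpoint calculation pools wells across dilutions (infected wells are accumulated toward the concentrated end of the series, uninfected wells toward the dilute end) and then linearly interpolates the dilution at which 50% of wells would be infected. A minimal sketch, using a classic textbook-style well-count example rather than data from this study:

```python
def reed_muench_log10_tcid50(log10_dilutions, infected, wells_per_dilution, step=1.0):
    """Return log10 of the 50% endpoint dilution by the Reed-Muench method.
    log10_dilutions: e.g. [-1, -2, ..., -8], most concentrated first.
    infected: number of CPE-positive wells at each dilution."""
    n = len(infected)
    uninfected = [wells_per_dilution - x for x in infected]
    # A well positive at a high dilution would also be positive at every
    # lower dilution, so cumulative infected sums from the dilute end upward.
    cum_inf = [0] * n
    running = 0
    for i in range(n - 1, -1, -1):
        running += infected[i]
        cum_inf[i] = running
    # Conversely, cumulative uninfected sums from the concentrated end down.
    cum_uninf = [0] * n
    running = 0
    for i in range(n):
        running += uninfected[i]
        cum_uninf[i] = running
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            # Proportionate distance between the two bracketing dilutions.
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            return log10_dilutions[i] - pd * step
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# Illustrative series (10 wells per dilution): endpoint falls near 10^-3.76.
endpoint = reed_muench_log10_tcid50(
    [-1, -2, -3, -4, -5, -6], [10, 10, 8, 4, 1, 0], 10)
```

The titer is then 10^(−endpoint) TCID50 per inoculated volume.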
PHEV Infection and miR-21a-5p Antagomir Administration
The mice were randomly divided into four groups of six mice each, as follows: group 1 was the control group; group 2 was the PHEV-infected and PBS-treated group (PBS); group 3 was the PHEV-infected and antagomir control group (NC); and group 4 was the PHEV-infected and miR-21a-5p antagomir-treated group (antagomir). The miR-21a-5p antagomir used in this study, purchased from RiboBio, contains chemically modified single-stranded RNA molecules that prevent the complementary pairing of the miRNA and its target gene mRNA by competing strongly with the mature miRNA in vivo. The mice in the antagomir group were injected intraperitoneally with 2 nmol antagomir per mouse. The mice in the other groups were injected with the same volume of control solution or left untreated. Brain tissues were analyzed 24 h after an intracerebral injection to study the expression of miR-21a-5p and Caskin1 by RT-PCR or Western blotting. After 24 h of injection, mice were inoculated intranasally with 100 µl of PHEV solution (TCID 50 = 10 −4.5 /0.1 ml). Brain tissues were analyzed 5 days after the inoculation with PHEV to analyze the expression of miR-21a-5p, Caskin1 and viral RNA by qRT-PCR or Western blotting. The weight of the mice was measured every day. Permission to work with laboratory animals was obtained from the Animal Welfare Ethical Committee of the College of Veterinary Medicine, Jilin University, China.
Indirect Immunofluorescence
After inoculation with PHEV, the mice were sacrificed, and the brain tissues were cut into frozen sections. The frozen sections, or the cells grown in 6-well plates after transfection, were washed with PBS, fixed with 4% paraformaldehyde for 15 min at room temperature, permeabilized with 0.1% Triton X-100 for 15 min at room temperature and blocked with 5% non-fat milk powder for 1 h at 37 °C before being washed with PBS and incubated overnight at 4 °C with a PHEV polyclonal antibody. After washing with PBS three times, the sections or cells were incubated with FITC-conjugated Affinipure Goat Anti-Mouse IgG (H+L) secondary antibodies (Proteintech) at 37 °C for 1 h. Hoechst was used to stain the nuclei. After washing with PBS three times, the coverslips were mounted onto glass slides with Antifade Solution (Solarbio) before visualization on a confocal microscope.
Statistical Analysis
Values are presented as arithmetic mean ± standard error. All data were analyzed with SPSS 17.0 software (Chicago, USA). Histograms were generated with GraphPad Prism 5.0 software (San Diego, CA, USA). Western blot images were analyzed with Tanon Gis software (Shanghai, China). Fluorescence intensity was analyzed with ImageJ software (National Institutes of Health, USA). All results were considered statistically significant at p < 0.05.
miR-21a-5p Up-Regulation during the PHEV Infection Process
To determine the differential expression of miR-21a-5p during the PHEV infection process, we collected N2a cells at 24, 48, and 60 h post-infection, and mouse brain tissue at 3 and 5 days post-infection, prior to RT-PCR. The results revealed that the relative expression level of miR-21a-5p was significantly higher after infection than in the control (Figures 1A,B). Thus, we speculated that miR-21 might play a role in the process of viral infection.
miR-21a-5p Contributes to PHEV Proliferation In vitro
To determine whether miR-21a-5p affects PHEV replication, we tested the effect of up-regulating or blocking miR-21a-5p on PHEV replication in N2a cells. To verify the efficacy of the miR-21a-5p mimics and the inhibitor, the N2a cells were transfected with the miR-21a-5p inhibitor or the miR-21a-5p mimics for 24 h, and the expression level of miR-21a-5p was analyzed. A significant increase or decrease was observed in the miR-21a-5p level in the N2a cells transfected with the miR-21a-5p mimics or the miR-21a-5p inhibitor, respectively, compared to the cells transfected with the negative control ( Figure 1C). The N2a cells were then transfected with the mimics or the miR-21a-5p inhibitor (50 or 100 nM), followed by infection with PHEV. The cells were collected 24 h post-infection to determine viral propagation. According to the RT-PCR, Western blotting, IFA and TCID50 results, the over-expression of miR-21a-5p significantly increased the production of PHEV progeny, and conversely, transfection of the miR-21a-5p inhibitor demonstrated the opposite effects (Figures 1D-G). These data suggest that miR-21a-5p induction contributes to PHEV replication.
The Prediction of miR-21 Target Genes
To characterize the molecular components of miR-21a-5p activity in facilitating PHEV replication, we next predicted miR-21a-5p targets using bioinformatics prediction software.
TargetScan predicted 210 target genes, MicroCosm 836, and miRanda 4990. Of these, 203 target genes were predicted by all three systems. The target genes were then functionally analyzed and found to be involved in a variety of physiological processes, such as cell differentiation, proliferation, apoptosis, and synaptic function (Supplementary Data Sheet 2). A subset of the results is shown in Table 1. Caskin1, a newly discovered post-synaptic density protein in mammalian neurons, was selected for further study.
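The filtering step above reduces to a set intersection: only genes predicted by all three tools are retained. The gene names in this toy sketch are placeholders, not the study's actual 203-gene list.

```python
# Toy illustration of keeping only targets predicted by all three tools
# (the study reports 210, 836, and 4990 predictions with 203 in common).
# Gene names here are hypothetical placeholders.
targetscan = {"Caskin1", "Pten", "Pdcd4", "Spry1"}
microcosm = {"Caskin1", "Pten", "Timp3", "Spry1"}
miranda = {"Caskin1", "Pten", "Reck", "Spry1", "Timp3"}

common = targetscan & microcosm & miranda
print(sorted(common))  # genes predicted by all three systems
```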
miR-21a-5p Modulates Caskin1 Expression in PHEV-Infected N2a Cells
The time-dependent expression pattern of Caskin1 mRNA and protein in N2a cells and mouse brain tissue following PHEV infection was studied. A significant down-regulation of Caskin1 mRNA and protein expression was observed at 24, 48, and 60 h post-infection (Figures 2A,B). The results from mouse brain tissue were consistent with these findings (Figures 2C,D). Caskin1 mRNA and protein expression was also determined after transfection with the miR-21a-5p mimics, which overexpress miR-21a-5p; Caskin1 expression was significantly decreased (Figures 2E,F). Furthermore, the expression of Caskin1 in N2a cells following transfection with the miR-21a-5p inhibitor was analyzed: inhibiting miR-21a-5p enhanced the expression of Caskin1 mRNA and protein (Figures 2E,F). These results show that miR-21a-5p modulates Caskin1 expression.
miR-21a-5p Directly Regulates Caskin1 Expression by Targeting the 3′-UTR of Caskin1
To test whether miR-21a-5p directly regulates the expression of Caskin1 during PHEV infection, we prepared a dual-luciferase miRNA target expression vector carrying the 3′-UTR of mouse Caskin1, which contains an exact match to the miR-21a-5p target sequence (Caskin1-WT-UTR) (Figure 2G). As a control, we created a dual-luciferase miRNA target expression vector carrying a version of the mouse Caskin1 3′-UTR with a mismatched miR-21a-5p target site (Caskin1-MUT-UTR) (Figure 2G). Co-transfection of the vector containing the Caskin1-WT-UTR (Caskin1-WT) with the miR-21a-5p mimics in HeLa cells resulted in an approximately 90% loss of dual-luciferase reporter expression compared with the control (Figure 2H). However, dual-luciferase expression was not affected by co-transfection with the miR-21a-5p mimics when the Caskin1-WT-UTR was replaced with the Caskin1-MUT-UTR in the reporter system (Figure 2H). Conversely, luciferase activity was significantly increased when HeLa cells were transfected with the miR-21a-5p inhibitor to reduce endogenous miR-21a-5p levels (Figure 2H). Taken together, these results indicate that miR-21a-5p negatively regulates Caskin1 expression during PHEV infection by directly binding the 3′-UTR of the Caskin1 gene.
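Dual-luciferase readouts are conventionally normalised to the Renilla internal control and then expressed relative to the negative-control transfection. The raw counts are not given in this excerpt, so the numbers below are hypothetical.

```python
# Hedged sketch: conventional dual-luciferase normalisation. Firefly counts
# are divided by the Renilla internal control, then expressed relative to
# the negative-control transfection. All numbers are hypothetical.
def relative_activity(firefly, renilla, firefly_ctrl, renilla_ctrl):
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Caskin1-WT-UTR + miR-21a-5p mimics vs. negative control
print(relative_activity(120.0, 1000.0, 1150.0, 980.0))  # ~0.10, i.e. ~90% loss
```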
miR-21a-5p Promotes PHEV Replication by Targeting Caskin1 in the N2a Cells
We then tested whether Caskin1 expression affects PHEV replication. First, endogenous Caskin1 was knocked down by transfecting Caskin1 siRNA into N2a cells; Caskin1 mRNA and protein levels were reduced by more than 80% (Figures 3A,B). We then examined the effect of reduced Caskin1 expression on PHEV replication. PHEV replication was significantly increased in cells with reduced Caskin1 expression compared to cells transfected with a negative-control siRNA (Figures 3C-E), and Caskin1-silenced cells showed significantly increased PHEV titers compared to control cells (Figure 3F). In conclusion, these results indicate that Caskin1 is a regulatory factor of PHEV replication and that PHEV exploits miR-21a-5p induction to decrease Caskin1 levels, which is conducive to its replication.
miR-21a-5p Antagomir Treatment Reduces Symptoms in PHEV-Infected Mice
To determine whether miR-21a-5p inhibition was feasible in normal mice in vivo, the miR-21a-5p antagomir was injected intracerebrally three times, at 3-day intervals (days 0, 3, and 6). We then detected miR-21a-5p expression in the brain tissue of the different groups of mice. In the miR-21a-5p antagomir group, miR-21a-5p expression was significantly down-regulated 24 h after injection compared with the control (Figure 4A), indicating that the antagomir efficiently entered the mouse brain tissue and knocked down miR-21a-5p there. On day 5 post-infection, mice in the control, PBS, and NC groups exhibited typical symptoms of PHE, including generalized muscle tremors and hyperesthesia, whereas the antagomir group did not. At 7 dpi, the antagomir group began to show the typical symptoms. All of the mice in the control, PBS, and NC groups eventually died; in contrast, the antagomir-treated mice survived about 2 days longer. In addition, compared with the other groups, the antagomir group did not show obvious weight loss (Figure 4B).
FIGURE 4 | miR-21a-5p antagomir treatment reduces symptoms in PHEV-infected mice. (A) The expression of miR-21a-5p in brain tissue 24 h after injection. (B) Weight loss rate in the mice after treatment during PHEV infection. (C) The expression of miR-21a-5p in brain tissue 5 days after PHEV inoculation. (D) The expression of Caskin1 mRNA in brain tissue 5 days after PHEV inoculation. (E) The expression of Caskin1 protein in brain tissue 5 days after PHEV inoculation. (F) The relative expression of the viral RNA in brain tissue after treatment. (G) The expression of PHEV protein in brain tissue after treatment. (H) IFA was performed to examine the expression of PHEV in brain tissue after treatment. All of the data are representative of at least three independent experiments. *P < 0.05, **P < 0.01 vs. normal controls.
At 5 dpi, the mice were sacrificed, and brain samples were collected and processed for subsequent experiments. The results were similar to those obtained in vitro: there was a negative relationship between the expression patterns of miR-21a-5p and its target Caskin1 in the brain tissue of PHEV-infected mice, with higher miR-21a-5p expression associated with lower Caskin1 levels (Figures 4C-E). In the PHEV-infected mice, treatment with the miR-21a-5p antagomir caused a significant reduction in miR-21a-5p expression and rescued the alterations in Caskin1 levels (Figures 4C-E).
We used RT-PCR, IFA, and Western blotting to determine the effects of the miR-21a-5p antagomir on viral proliferation after injection. The expression of viral RNA and protein was down-regulated after injection of the miR-21a-5p antagomir (Figures 4F,G), and the IFA results confirmed that the antagomir reduced viral proliferation (Figure 4H). These findings affirm that the miR-21a-5p antagomir inhibits viral proliferation by up-regulating Caskin1 and has a therapeutic effect in animals.
DISCUSSION
Many studies show that miRNAs and viral infection are closely related, for example in Epstein-Barr virus, herpesviruses, and some retroviruses, because virus-encoded miRNAs can regulate host-cell endogenous miRNA expression (Pfeffer et al., 2004). Host-cell endogenous miRNAs can inhibit viral replication, but they can also facilitate it or regulate cellular immune function (Sarkies et al., 2013). For example, host-cell miR-145 negatively regulates replication of the oncolytic herpes simplex virus-1 by targeting AP27i145 (Li et al., 2013), and in influenza virus infection, multiple proteases play a key role, and host-cell miRNA regulation of these proteases facilitates influenza virus replication (O'Connor et al., 2014). In addition, miR-29b was up-regulated during JEV-induced microglial activation in a JEV-infected mouse microglial cell line and contributes to the pro-inflammatory response; its action is mediated by inhibition of the anti-inflammatory protein TNFAIP3, resulting in sustained activation of NF-κB followed by pro-inflammatory cytokine secretion (Li et al., 2013).
Multiple miRNAs modulate the virus infection process, but the roles of miRNAs during PHEV infection are not fully understood. Neurotropic viruses such as JEV, human immunodeficiency virus 1, herpes simplex virus 1, and vesicular stomatitis virus can alter host-cell miRNA expression during infection; the expression of miRNAs that normally regulate viral replication was up-regulated by 1.5-4-fold (Ashraf et al., 2016). Previous research from our lab demonstrated an obvious change in miR-21a-5p expression during PHEV infection, suggesting that miR-21a-5p might play a very important role in the process of virus infection. PHEV mainly causes obvious nerve injury, and miR-21 also plays an important role in nerve injury; for example, in traumatic brain injury, miR-21 expression was up-regulated 1.5-fold in the brain cortex and hippocampus and might affect the pathophysiology of traumatic brain injury (Redell et al., 2011). In this study, we found that miR-21a-5p expression in N2a cells was up-regulated after PHEV infection, increasing up to 2.5-fold at 60 h. miR-21a-5p expression was also up-regulated in PHEV-infected mice, increasing to 3.35-fold at 5 days. This indicates that the expression level of miR-21a-5p is significantly increased in the PHEV-infected host, suggesting that miR-21a-5p may play a role in PHEV-induced neurotoxicity. In addition, up-regulation of miR-21a-5p expression promotes viral proliferation, and knock-down of miR-21a-5p reduces viral proliferation, suggesting that miR-21a-5p affects PHEV proliferation. Unlike other cells, nerve cells are very sensitive to injury, especially to some neurotropic virus infections such as PHEV; small changes in the amount of virus in host neurons may cause significant changes in the course of disease.
Although siRNA knockdown of Caskin1 had only a modest effect on PHEV replication (twofold) in neuronal cell cultures, and even less of an effect in mice, its impact on viral nerve injury may be very large. We therefore speculate that miR-21a-5p might play an important role in PHEV pathogenesis. Whether other targets of miR-21a-5p affect viral replication during PHEV infection remains unclear. miRNAs post-transcriptionally regulate the expression of multiple genes by binding to the 3′-UTRs of their target messenger RNAs (Chi et al., 2009). To determine the mechanism by which miR-21a-5p affects virus proliferation, we predicted its target genes. There are many known targets of miR-21, such as PTEN, PDCD4, RECK, TPM1, TIMP-3, Maspin, and Sprouty (Spry-2, Spry-1) (Buscaglia and Li, 2011), which are involved in processes such as cell proliferation and apoptosis. In this study, we chose Caskin1 as the target gene to investigate. Caskin1 is a brain-specific multi-domain scaffold protein that binds Lar and Dock through its different structural domains and plays a key role in motor axon targeting through interaction with the Lar-dependent signaling pathway (Weng et al., 2011). In the CNS, Caskin1 and Dock have overlapping roles in axon outgrowth; together, these studies identify Caskin1 as a neuronal adaptor protein required for axon growth and guidance in the CNS (Weng et al., 2011). Time-course analysis of the expression of miR-21a-5p and its target gene Caskin1 showed an inverse relationship up to the 24 h time point. The up-regulation of miR-21a-5p after PHEV infection was sustained at 24, 48, and 60 h; however, Caskin1 mRNA and protein levels decreased at 60 h post-infection compared to the 24 and 48 h time points.
It is possible that miR-21a-5p reduces the expression of its target mRNA and protein (Caskin1). These results were consistent with the in vivo experiments described above. However, further studies are required to resolve the differential kinetics of miR-21a-5p and its target. In this study, we used a luciferase reporter assay to evaluate the interaction of miR-21a-5p with the 3′-UTR of Caskin1. The assay showed that miR-21a-5p binds to this region of Caskin1 and suppresses its expression. Knock-down of Caskin1 expression in N2a cells promotes virus proliferation. Taken together, our findings demonstrate that miR-21a-5p positively regulates PHEV replication by targeting Caskin1.
To study the role of miRNAs in vivo and their influence on viral disease, miRNA antagomirs have been used to reduce miRNA concentrations. For example, after miR-19b-3p antagomir treatment, 40% of JEV-infected mice became asymptomatic, and the expression of miR-19b-3p showed a reciprocal pattern with its target gene RNF11 in JEV-infected mouse brain tissue. The miR-19b-3p antagomir inhibits cytokine secretion and activation of astrocytes and microglia, and reduces neuronal damage in JEV-infected mice (Ashraf et al., 2016). In this study, miR-21a-5p antagomir treatment delayed the onset of symptoms and weight loss in mice, and the lifespan of the mice was extended by about 2 days. After miR-21a-5p antagomir treatment, virus multiplication decreased in the brain tissue of PHEV-infected mice. miR-21a-5p exhibited a negative expression relationship with Caskin1 in the brains of PHEV-infected mice, which further supports a functional interaction between the miRNA and its target mRNA in vivo. These findings indicate that miR-21a-5p antagomir treatment reduces symptoms in PHEV-infected mice.
In this study, we identified a new mechanism regulating the proliferation of PHEV, mediated by the interaction between miR-21a-5p and Caskin1, which may be exploited to reduce the proliferation of PHEV for therapeutic applications.
ETHICS STATEMENT
All of the mouse experiments in this study were approved by the Animal Welfare Ethical Committee of the College of Veterinary Medicine, Jilin University, China (permission number 2012-CVM-12) and were performed in accordance with the guidelines of the Council for the International Organization of Medical Sciences on Animal Experimentation (World Health Organization, Geneva, Switzerland).
Milestones in music: Reputations in the career building of musicians in the changing Dutch music industry
This study addresses the role of reputation in the career building strategy of early-career musicians in a transforming music industry. Drawing from interviews with 21 musicians, we find that musicians continue to believe that building their reputations within the established music industry is important for career success, despite technological changes that could lead them to focus instead on alternative career strategies. Our analysis proceeds in two stages that broadly reveal how market culture shapes workers' strategies. First, we discuss how musicians put considerable effort towards achieving particular career milestones that they believe will signal success to industry intermediaries. Second, we show that new technologies that connect artists directly to audiences without the need for intermediaries have allowed musicians to pursue new career building strategies. However, they have not eliminated musicians' belief in appealing to industry insiders through milestones. Even though achieving industry milestones may not lead to immediate economic benefits, musicians pursue them because (1) they believe that backing from industry intermediaries may result in later success and (2) they value the symbolic appeal and romance of being part of the industry.
Introduction
The music industry has undergone major transformations due to (illegal) downloading and streaming, causing a shift in revenues from recorded music towards revenues from live performances (Naveed, Watanabe & Neittaanmäki, 2017; Young & Collins, 2010). In addition, the arrival of social media and other technological innovations allegedly democratized the means of production, promotion and distribution of music (Fox, 2004). Recent scholarship has examined the implications of these transformations for the work practices of pop musicians, focusing on the rise of Do-It-Yourself (DIY) or entrepreneurial business models (Bennett, 2018; Threadgold, 2018) and (new) required tasks and skills, such as aesthetic labour (Hracs & Leslie, 2014) and social media skills (Haynes & Marshall, 2017). As in other labour markets such as 'sports, the arts, academia, knowledge work and fashion modelling' (Dumont, 2018, p. 515), reputation is an important commodity for musicians to build a career, as it helps them to receive support and opportunities from market actors (Lingo & Tepper, 2013). However, because of these transformations, previous key signals of reputation (e.g. releasing an album) may no longer hold as much weight in the current music industry. Yet, few studies have examined how musicians use their reputation to build a career in a context marked by technological changes. Therefore, drawing on in-depth interviews with 21 Dutch early-career pop musicians, this article addresses the role of reputation in the career building strategy of early-career musicians in a transforming music industry. 1 In our paper, we break this research problem down into two parts. First, we investigate how musicians attempt to create and signal a favourable reputation to build their career.
Ample research has been done on how cultural intermediaries cope with uncertainty due to the lack of formal evaluation standards, relying on reputation to select and promote artistic products, for instance in television (Bielby & Bielby, 1994; Zafirau, 2008), visual arts (Velthuis, 2013) and literature (Franssen & Kuipers, 2013). However, it remains unclear how cultural workers themselves can create a favourable reputation to increase their chances of being selected by these intermediaries (Dumont, 2018). Moreover, the way in which musicians, and artists in general, build their careers by promoting themselves is currently underexplored (Lingo & Tepper, 2013; Zwaan, Ter Bogt and Raaijmakers, 2009), which is especially pressing in light of changing working conditions. To address this issue, we argue that musicians can be thought of as workers in a 'status market' (Aspers, 2011) where cultural intermediaries decide which workers they will offer business opportunities to (Bielby & Bielby, 1994) by ranking them on the basis of their reputations (Aspers, 2011). Here, according to Podolny (2010), reputation 'denotes an expectation of some behaviour or behaviors based on past demonstrations of those same behaviors' (p. 13). Therefore, workers may want to perform practices which help to create a favourable reputation (Dumont, 2018). In our analysis we show that musicians attempt to achieve milestones; these create such a reputation and can be used to signal past and future success to music industry representatives. In this way, we introduce milestones as a mechanism through which market culture shapes reputational practices.
Second, we examine musicians' beliefs about the ways in which new technologies impact their career building strategies. As the music industry is changing, strategies that meet 'traditional' market demands (work in line with the market culture) might not guarantee immediate economic success anymore, and musicians might want to leave the existing market. While we can expect that reputational practices, such as the collection of milestones, are shaped by how artists like musicians interpret these transformations, i.e., which processes affect reputations and how the value of this reputation may be changing, it remains unclear how such artists are navigating technological changes in the creative industries and how they adjust their career building strategy accordingly. Therefore, we draw from the sociology of markets and cultural sociology to explain why artists may (or may not) be resistant to technological changes and continue to follow (or deviate from) established industry practices, pointing out structural factors such as power dynamics (Beckert, 1999) and cultural factors such as 'cultural lag' (Swidler, 1986). In our analysis we show that musicians experience a continuing dependency on the traditional music industry, leading to a situation where they collect milestones that meet the demands of that industry and are expected to help achieve long-term career success, but that do not translate into short-term economic profits. This contributes to our understanding of why the effects of transformations on the career building strategies of workers may be mediated.
The role of reputations in the career building of musicians
To understand how artists find work in uncertain labour markets (Menger, 1999), previous research investigated 'the symbolic work that artists do to build reputations, convince others of their legitimacy as artists and professionals' (Lingo & Tepper, 2013, p. 338). For artists, it is especially important to convince cultural intermediaries, because in art markets they function as gatekeepers, connectors, marketers, distributors and more (Janssen & Verboord, 2015). This is also the case in the music industry, which has been conceptualized as a network of intermediaries, where 'the manager, record firm or bookie introduces artists to the industry (input), while media, retail and concert promoters present the artist to a public (output)' (Keunen, 2014, p. 26). Because of this role of matching supply and demand, artists try to influence decisions of these intermediaries to increase 'the probability that a given new release will be selected for exposure to consumers' (Hirsch, 1972, p. 648).
Yet, as art markets lack a standard to assess the quality of artists and their work, and demand uncertainty causes high economic risks (Hirsch, 1972; Negus, 1992), intermediaries face high levels of uncertainty when selecting artists (Franssen & Kuipers, 2013; Velthuis, 2013). For this reason, intermediaries look for ways to assess and value the quality of artists (Smits, 2016), in an attempt to filter the oversupply of candidates (Hirsch, 1972). Most importantly, intermediaries create a circuit of commerce, i.e. a network that 'reinforces credit, trust, and reciprocity within its perimeter but organizes exclusion and inequality in relation to outsiders' (Zelizer, 2010, p. 315). For instance, research on the (Dutch) music industry has shown that intermediaries use their professional networks to obtain information about acts. Furthermore, these circuits share evaluation repertoires that help to value art works (Zelizer, 2010) based on a combination of institutional culture such as 'shared values, norms and conventions' (Mears, 2011, p. 159) and expertise consisting of professional standardized knowledge and personal dispositions (Smith Maguire & Matthews, 2012). For example, Dutch A&R (artist and repertoire) managers select musicians on the basis of 'the live performance, quality of the music, musical skills, appearance, motivation as well as potential media and audience appeal' (Zwaan & Ter Bogt, 2009, p. 97).
Intermediaries rely on these evaluation repertoires to judge, as a proxy for quality (Podolny, 2010), the reputations of artists and rank them relative to each other in the market (Aspers, 2009). As such, art markets such as the music industry can be understood as status markets: based on this hierarchical order, intermediaries award artists with a certain amount of status, and higher levels of status translate into increased rewards (Aspers, 2011). In this way, reputation (based on past behaviours of artists in the market) is converted into status (the position one has in this hierarchical order) (Podolny, 2010), and business opportunities are awarded accordingly. 2 For example, having a 'favourable' reputation (Zafirau, 2008, p. 102) is a strong predictor of products being picked up by intermediaries (Bielby & Bielby, 1994). For this reason, reputation is often used as a rhetorical strategy to legitimize choices for certain artistic products (Bielby & Bielby, 1994). Therefore, to take advantage of the career opportunities that intermediaries offer, musicians need to perform practices that help to create a favourable reputation (Dumont, 2018; Zafirau, 2008) by performing actions in the market that signal their qualifications (Jones, 2002). Performing reputational practices is especially important for early-career artists, as this helps them to acquire support in a market where educational credentials do not function as closing mechanisms (Eigler & Azarpour, 2020; Jensen & Kim, 2020; Skaggs, 2019). Indeed, research has shown that pop musicians engage in a process of capital mobilization and conversion to draw attention from intermediaries (Scott, 2012). Moreover, music industry actors attempt to increase the symbolic value of musicians, for example by promoting their work and using their network (Lizé, 2016). Furthermore, music industry actors tend to 'aggrandize' their businesses, which is to acquire or inflate their reputation (Schreiber & Rieple, 2018).
Yet, it remains unclear how musicians themselves attempt to create favourable reputations.
In sum, artists rely on a circuit of commerce of intermediaries to build their career. These intermediaries look at their reputations to evaluate them and therefore artists perform practices to create a favourable reputation to meet industry expectations. Based on these reputations, intermediaries rank artists in a hierarchy, and in this way convert the reputations into status and offer business opportunities accordingly.
Building a career in a changing market
As new technologies provide career opportunities that do not depend on intermediaries, one might expect that musicians may abandon their orientation towards traditional intermediaries (e.g. record labels, radio stations) and the corresponding career building strategy. As mentioned, a shift in revenues from recorded music to live performances and a series of technological innovations allegedly democratized the means of music production and distribution. As a result, some intermediaries such as record labels, music retailers and media outlets lost their central role in the industry. Most importantly, record labels lost economic power (Rogers, 2013), making them less inclined to take risks with regard to offering contracts to new acts (Frith, 2014). At the same time, other and new intermediaries such as live venues and streaming services became more important for musicians (Naveed et al., 2017). Moreover, new technological opportunities can help musicians to monetize direct contact with their fans, which holds the promise (or requirement) to bypass labels, record and distribute one's own music, reach audiences directly and create new revenue streams (Haynes & Marshall, 2017; Young & Collins, 2010). To understand the way in which musicians navigate these changes, the combination of the sociology of markets and cultural sociology can help to analyse how market culture and structure shape career building strategies and why market change may affect such strategies.
To start, market sociologists distinguish three ingredients that influence the practices of workers: structure, agency and culture (Aspers, 2011). First, the position that a worker has, and the structural conditions of that position, affect the opportunities for action. For example, research in music has shown that the strategies of musicians can be understood as a response to the specific configuration of the local industry (Everts & Haynes, 2021; Tarassi, 2018). Furthermore, workers strategize their actions by reflecting on their current and desired position. Lastly, market culture creates order by providing a set of rules for 'how market actors are allowed and expected to cooperate and compete in the market' (Aspers, 2011, p. 94). Here, institutionalized decision rules enable actions because they make outcomes predictable, while constraining other actions because these would violate those rules (Beckert, 1999).
Research in the sociology of markets has shown that these transformations can have two effects on the practices of workers. First, practices can remain resistant to change 'the more they enjoy high levels of social legitimacy and the more they have the backing of powerful agents' (Beckert, 1999, p. 791). This shows that agents with more capital can continue to promote practices they understand to be appropriate, which tends to happen especially when changes affect the distributional outcomes in the market (Beckert, 2010). Under these circumstances, workers experience normative pressures to resist new practices even if those practices are more efficient (Beckert, 1999). A second effect is that markets may become destabilized, changing power dynamics and altering opportunities (Beckert, 2010). This then results in a process in which what actors want to trade is re-established (Aspers, 2011). For example, Ryan and Peterson (1993) argue that in the market for pop music, new technologies can lead to shifts in the power balance, causing a re-evaluation of the skills of musicians.
Cultural sociological approaches inform a conceptual framework for understanding how an established market culture creates possibilities and constraints for actors, by highlighting the effects of explicit and implicit culture (Lizardo & Strand, 2010). First, according to Swidler (1986, 2001), explicit culture functions as a 'tool kit' (Swidler, 1986) which structures the actions of actors and the goals they have, as 'action and values are organized to take advantage of cultural competences' (Swidler, 1986, p. 275, original emphasis). These practices are experienced as taken-for-granted, which has been demonstrated in research investigating the way in which the risk coping strategies of theatre actors are shaped by local institutional contexts (Kleppe, 2017). Second, according to Bourdieu (1993), practices are affected by implicit culture, as actors have a habitus shaped by their position in a field that structures their practices. In pop music, Threadgold (2018) has shown that for musicians the DIY culture of their music scene has a symbolic appeal which informs their actions. Moreover, this habitus influences the way actors perceive the field and their chances for success, or how the field 'presents itself to each agent as a space of possibles' (Bourdieu, 1993, p. 64, original emphasis). For example, this model has been used to explain how musicians choose from a range of creative possibilities when writing or playing music (Toynbee, 2016).
Moreover, cultural sociology helps us to understand that how musicians interpret industry transformations might also affect their career building strategy. Cultural sociology has shown that implicit and explicit culture can mediate the effects of market change. First of all, change disrupts the influence of implicit culture on actors, as it causes 'temporary disjunctions between habitus and field' (Sweetman, 2003, p. 541, original emphasis), forcing actors to be more reflexive (Bourdieu & Wacquant, 1992). In addition, explicit culture can both hinder and stimulate innovations of actions in times of change. If the existing cultural scaffolding breaks down, the old culture might lose its influence and actors can experience 'unsettled times' (Swidler, 1986). In those cases, actors need to draw from new cultural repertoires to create new actions (Swidler, 1986), which is most often done by new 'institutional generations' (Lizardo & Strand, 2010, p. 223) such as early-career musicians. At the same time, 'cultural lag' may occur where actors fail to take advantage of new opportunities because it requires them to change their way of doing things (Swidler, 1986). In these situations, the 'old' cultural scaffolding is retained and continues to be taken for granted, even though it may no longer be functional.
Overall, to understand how musicians build their career by creating a favourable reputation in the changing music industry, we should understand their career building strategy as an outcome of 1) the interplay between their structural position and agency within the dynamics of a status market, 2) the explicit and implicit culture of the market they operate in, and 3) the way market change may affect this.
Data and methods
For this study, musicians were targeted who participated in the 2018 edition of the Noorderslag festival, an influential showcase festival in the Netherlands. The focus on this festival helped to identify pop musicians who were in the same phase of their career, as acts who perform here are promising early-career artists (Kamer, 2016). Furthermore, targeting this population helped to identify musicians who aimed to build their career in the Dutch music industry, as Noorderslag is widely perceived as the place where the new generation of pop-rock acts presents itself to the intermediaries of the Dutch music industry (Keunen, 2014; Van Vugt, 2018). More specifically, musicians and intermediaries that partake in this festival are active in what Keunen (2014) has called the 'alternative mainstream' part of the Dutch music industry. This circuit is situated between the underground and the mainstream and contains musicians active in a multitude of substyles such as indie pop, punk or folk that all rely on the same network of intermediaries (ibid.).
To reach this population, we employed a purposeful sampling strategy: first, we left artists performing at the two main stages of the festival out of the sample, as these were more established acts. Second, to achieve a geographical spread, acts from various cities in the Netherlands were selected. Third, as musicians might have different roles in an act, we aimed to speak with musicians of each act who were involved in reputational and career building practices. Fourth, as gender significantly affects music careers (Berkers & Schaap, 2018), we aimed for a gender balance in our sample. However, because of rejections of interview requests, the sample consists of fourteen respondents who identify as men and seven who identify as women. 3 Of the 54 acts participating in the festival, 21 musicians were interviewed. 4 Musicians were approached via e-mail addresses mentioned on their website or via their booker or manager. An overview of the interviewees with more background information can be found in table 1. 5 To ensure their privacy, interviewees have been anonymized and their age is reported in categories.
The interviews were semi-structured to obtain information in light of the research question, while allowing the possibility to ask follow-up questions (Kvale, 2007). During the interviews, questions were asked about: 1) their goals and motivations in music, 2) how they build their careers and created favourable reputations, and 3) their reflections on the transforming music industry and how their work practices are shaped by this. In addition, a more general set of questions was asked for context, for example about the financial aspects of their work.
The interviews were performed face-to-face by the first author and took place in a location convenient for the interviewees, ranging from cafes to rehearsal spaces, between 18 June 2018 and 11 January 2019. The conversations lasted on average 66 min. Audio was recorded and afterwards transcribed verbatim. After transcription, the data have been thematically analysed in ATLAS.ti version 8 (Braun & Clarke, 2006), producing 750 initial codes capturing the interesting features of the data in light of the research question. Afterwards, in the process of searching for, reviewing, and defining themes, fifteen code groups were created containing patterns found in the data.
Results
In our analysis, first we explore how musicians attempt to build their career by creating a favourable reputation. Then, we investigate the musicians' beliefs about the ways in which new technologies impact their career building strategies.
3 31 musicians were contacted, of which ten declined the invitation to participate. The reasons for refusal were diverse: from a busy schedule due to touring and recording, to a more general refusal. Except for our effort to sample more women, upon comparison the musicians who refused did not differ substantially from the included interviewees regarding age, label type, music education and style.
4 While not large enough to represent all early-career musicians in the Dutch music industry, the sample does have a sufficient size to reach saturation (Small, 2009).
5 In the analysis each interview is referred to by the number of each interviewee.
Performing success by obtaining milestones
For most respondents, the central goal is a sustainable career in music in terms of workflow and finances. Of course, as shown in earlier research (Umney & Kretsos, 2015), the interviewees express a passion for music: they want to continue to create and perform music, with an emphasis on playing live gigs for audiences, and to increase the opportunities to do so.
In addition, they try to earn money and increase their revenues. Notably, to reach these goals, all interviewees feel that they depend on the traditional intermediaries within the Dutch music market, such as media, labels, bookers, managers, pop venue and festival programmers: It is difficult to earn a steady income in music … this depends on whether you are counted in based on whether people think you're cool or believe that you belong in the media and music world. … It is not public opinion that informs this, it is the people from the music industry who decide. (15) Interviewees understand these intermediaries to be linked together in a powerful network, which they characterize as 'cliquish' (7, 11, 15). According to the interviewees, this network shares information and draws boundaries to outsiders (Zelizer, 2010) and they believe that a positive judgement of this small circuit can lead to new business opportunities: 'bands … get hyped because they know the right people' (12). Surprisingly, Spotify was the only new intermediary mentioned often by the interviewees: 'I am releasing singles. … I hope that they get on playlists on Spotify, because that is so important these days' (13). In particular, curated playlists seem to increase musicians' visibility in the Dutch music industry and can provide additional revenues.
This dependency on the established market and its intermediaries can to some extent be understood as a forced marriage. As musicians do not see a lot of opportunities to reach their goals outside the market (see Section 4.2.1), they experience a dependency on the industry to reach their goals. At the same time, musicians are pessimistic about their chances that they will reach these goals within the industry, as they consider the market to be characterized by heavy competition and difficulties with establishing an income. As one musician argued about creating a sustainable income: That is probably never going to happen. … If you aim for a minimum: then you should pay yourself 1 000 euro per month. That is 4 000 per month for the whole band, and the booker and manager go on top of that. And the additional fixed costs like the van, your stuff, the rehearsal space, gas, that kind of shit, and you have big expenses such as the studio etcetera. Then you need to earn 10 000 euro per month. How? That is never going to happen. (7) Nevertheless, the interviewees believe that they rely on the mentioned intermediaries to reach their goals. However, according to the interviewees, because these intermediaries are confronted with an oversupply of new musicians and because it is uncertain who will become successful, intermediaries are selective about whom they work with. Therefore, to receive support, interviewees pursue a strategy where they try to create a favourable reputation and signal this reputation to intermediaries, in the hope of convincing them of their suitability. To create a reputation, musicians engage in a career strategy consisting of a pursuit to reach what some interviewees refer to as milestones. According to the musicians, milestones are ritualized practices that they believe function as signals of prior success and predictors of future success to intermediaries.
By achieving those milestones of which they believe that they will be evaluated positively by intermediaries, musicians aim to reflect the evaluation repertoires of these intermediaries. Musicians hope that collecting milestones and signalling them to intermediaries will increase their status and lead to new business opportunities. As one musician said: 'the more you achieve, the more milestones you collect, the higher your fee' (8), and another: Yes, successful singles are sort of milestones that you have. Like: 'o, we released this single, that did well and got airplay on this radio station and was picked up by them', and that will give you some leverage for the next one, so to speak. (4) In other words, while the core of being a pop musician revolves around their material performances for musical audiences (on-stage and in recordings), musicians also give symbolic (offstage) performances for an organizational audience of intermediaries where they, together with their bookers and managers, act out their success story (or prior hits, see Bielby & Bielby, 1994) to create a favourable reputation for their act.
The milestones that were mentioned can be categorized on the basis of the different cues they signal (Jones, 2002). First, several milestones signal the competencies and experiences of these musicians in the music industry, such as releasing EPs or albums, participating in pop competitions, organizing tours, playing a lot of gigs, or playing abroad. Collecting such milestones can be used to signal that an act has acquired a level of success in the market that enables them to perform such activities, indicating that they have the capacity to reach similar or bigger success in the future. For example, one musician argued that touring abroad had a signalling function for the Dutch industry, rather than that it helped to build an audience in those countries: 'Of course you don't play very big venues there, so you really don't get a lot of fans there, but it's good for the people here to see that you play abroad' and 'when people see you come there, they know that you're busy' (20). Second, other milestones signal social relationships with high-status industry actors. Because of this association, the reputation of musicians is confirmed (Bielby & Bielby, 1999) and the status of these actors rubs off on them (Podolny, 2010). Examples of these milestones are playing at prestigious festivals and venues, getting attention from blogs, television, newspapers, magazines, and radio stations such as 3FM and signing with established (international) bookers or managers. For example, playing at a prestigious showcase festival signals to the industry that your band is promising and is worth investing in, or as one musician reported: 'playing at Noorderslag holds a certain value for people, like 'okay, thát band played at Noorderslag'' (9).
As said, according to the interviewees, collecting these different kinds of milestones is key for creating a favourable reputation. In the field, having a 'favourable' reputation means that musicians display a capacity for commercial success, for example that they have potential audience appeal and can be 'ticket sellers' (2), and have the necessary qualities to do well, such as having a good live performance and appearance, and music that 'is good and preferably something unique' (20) and 'poppy enough to reach a more mainstream audience' (13) (cf. Zwaan & Ter Bogt, 2009, p. 97). As such, collecting and signalling milestones has a performative quality as it helps to build such a reputation of potential success, because it creates the expectation with intermediaries that they could reach similar successes in the future, or to rephrase Podolny's discussed definition of reputation, it is this past behaviour that causes an expectation of similar future behaviour (2010, p. 13). For example, one musician told how good reviews on a showcase gig led her to a small tour as it built a reputation that she was able to give 'very good shows' (6). In a similar manner, for one musician a successful single on a streaming platform led to attention from labels, because, as the musician explained, these labels hoped that: 'maybe they can make another one, then we can earn something with that' (20).
In an attempt to convince intermediaries of their suitability, musicians signal these milestones to intermediaries in multiple ways. First, musicians send out PR packages and press releases to the industry. For example, one musician discussed how they used the milestone of releasing their first EP to acquire new gigs: 'Well when we released our first EP … We did a lot of promotion then. So, we made promotional packages and sent them to all kinds of people' (9). Furthermore, achieved milestones are mentioned in social media posts to show to the industry (and audience) that they are busy doing 'interesting' things (11) because 'you need to keep them warm, otherwise they leave' (7). Most importantly, musicians together with their bookers and managers contact intermediaries and use these milestones to signal the musicians' reputation. For example, one interviewee explained how his band managed to get a gig at a big festival by obtaining milestones which their booker used to pitch them to representatives of that festival. They asked themselves: How can we reach this with a minimum of resources? We thought we can do that with 20, 25 gigs. That was the strategy. … In addition, we did two weekends in Germany, because it was interesting for our Dutch booker. (21) Of course, a lot of these milestones such as releasing an album or playing in a small pub also are experienced as pleasurable activities by these musicians, which is reflected in other research that has argued that passion and the pursuit of creative interests are central motivations in creative and musical work (Bhansing, Hitters, & Wijngaarden, 2018; Threadgold, 2018; Umney & Kretsos, 2015). Consequently, these milestones are not solely achieved to appeal to intermediaries (see Section 4.2.2).
However, even though musicians do have other motivations for performing these practices, they are part of a deliberate strategy to build their career as well: While you want to try to live in the now as much as possible, especially as a musician, because you want to make music now, you just have to look at what we will do in the future, and how can you make sure that you can be at festivals again and so on. (11) As such, these milestones have a double function: they often are pleasurable activities for the musicians themselves, while at the same time they are strategically collected and used to signal the act's quality to the industry to reach their goals. Therefore, according to several interviewees milestones should not be pursued just for the pleasure of musicians but should always be incorporated in a long-term strategy: 'for example, if you perform a lot, but there is no good plan behind it, if you have no good reason why you are doing it… You should not forget the end goal…' (6). As one musician said, the collection of various milestones must therefore strategically 'tie into each other' (10). For example, this musician came up with a plan so that the 'autumn tour comes after the release of the EP' so that his act then 'can send out a press release again and promote those songs again' (10).
As shown, the collected milestones function as symbolic capital (Bourdieu, 1993) and musicians hope that a positive evaluation of their reputation can help them to convert this acquired capital into more and other forms of capital necessary to build their career (Scott, 2012), such as economic (e.g. new gigs), cultural (media attention) or social capital (management deals), which in turn can be used to obtain new milestones. For example, one musician explained how a representative of a radio station said to her that 'if you release this single, it will not get in our day rotation, so that would not be smart' (4), after which she chose to release an alternative single. However, when that new single did well on the radio, the station became interested in the single they rejected earlier. In addition, milestones can be used to negotiate higher fees. One musician described how their booker could increase the fees after they scored a hit on the radio, another musician could do this when their live show was well-received, and a third one did just the opposite and accepted a lower fee offer from a prestigious concert organizer because playing that gig would provide their booker leverage in future negotiations with that and other concert organizers: We make certain choices so our booker can slowly increase our fee. That is why we did that gig for less, because it was from that organizer. We did a show for 150 euro … our booker said that it was the right thing to do. (20) These examples show that initial success can influence your chances for future success, or as Bielby and Bielby state 'success breeds success' (1999, p. 80).
In sum, the interviewed musicians see the pathway to reach their goals within the boundaries of the traditional music industry and experience a dependency on traditional intermediaries. As a result, they create a favourable reputation by means of achieving milestones and signalling those milestones to these intermediaries to obtain status and corresponding business opportunities. In this way, their career building practices are shaped by their attempt to reflect the evaluation repertoires they perceive to exist in the existing market (Zafirau, 2008).
Transformations of the music industry
To understand how musicians account for the changing conditions in the discussed career building strategy, we first discuss the new technologies and new roles that have been incorporated in the existing career strategy. Then, we show how due to cultural lag several work practices have remained resistant to change.
Incorporating changes
To start, musicians have incorporated work practices made possible by new technological innovations. All musicians are active on social media and several musicians record their music themselves or release their songs independently on Spotify. Moreover, multiple musicians have a web shop through which they sell merchandise. Nevertheless, the interviewees believe that these technological possibilities only make a modest contribution to reaching their career goals and they are not optimistic about their economic potential. For example, musicians report that it is difficult to build a following online: 'smaller artists are suffering, because it is very difficult to reach the mainstream social media' (11). Moreover, an online fanbase is difficult to monetize: 'having 100,000 followers does not translate into higher ticket sales' (19). As a result, musicians continue to feel dependent on the traditional career path that the music industry offers, in part because they perceive a lack of economic opportunities outside the music industry, indicating that power dynamics have remained unchanged. In addition, interviewees report that some of these new practices, such as getting selected in highly rated Spotify playlists and having a strong presence on social media, have become understood by intermediaries as milestones, revealing that the evaluation repertoires of the intermediaries too have adapted to the new situation. As one musician expressed: I think that a lot of labels base their choice on what stands out with regard to Spotify plays. We released something independently and that did pretty well for something that was released without a label, and that led to attention from labels because these were plays that we could generate ourselves.
(16) In other words, these new work practices are also incorporated in the discussed career strategy within the traditional music market, creating additional market demands that these musicians must try to meet by adding supplementary reputational opportunities.
Second, while musicians adopted entrepreneurial and DIY approaches before, the decreased support of labels and the increased technological opportunities have transformed these models from a niche alternative into a dominant approach for new aspiring musicians (Haynes & Marshall, 2018; Hracs, 2015): Everyone can release music on the internet. Earlier you had the whole process of recording and pressing. And you had to manage to sell it. Now, everyone can record in their bedroom and put it online. (9) As a result, interviewees take on managerial tasks such as developing long-term strategies, business tasks such as networking and finance, and technical tasks such as recording or selling merchandise, reflecting research on protean careers in creative industries (Bridgstock, 2005). However, they remain pessimistic about reaching their goals relying on the new technological opportunities and they continue to depend on traditional intermediaries, for example to reach bigger audiences. In addition, they acknowledge the limitations of a DIY approach, most importantly that it limits the possibilities to build a network and create a favourable reputation with the circuit of intermediaries. Here, building an alliance with managers, bookers, labels and other professionals in the industry can help to get access. As one musician reflected on the benefits of collaborating with industry partners: A large part of music is networking. So yes, if your manager is friends with programmers that can help a lot. And labels can get you on television. We played Noorderslag twice, and at other prestigious festivals, but we don't get to do that. … These days, it is about who knows who. (20) Therefore, in their organization of work they choose the middle ground between a DIY approach and collaborating with traditional actors in the industry.
Whereas previous research predicted a shift towards DIY and entrepreneurial approaches, this study only partially confirms these findings as musicians opt for a hybrid approach.
In short, while these musicians also draw from new repertoires to create new capacities, they continue to depend on the largely unchanged structure of the music industry. Their traditional career strategy based on creating a favourable reputation within the industry, and its corresponding toolkit, maintains its influence, suggesting that new tools are merely added to the toolkit instead of replacing outdated ones. As a result, this generation of musicians experiences a moderate adjustment of their work practices rather than a fundamentally unsettled time (Swidler, 1986).
Resisting changes
The lasting importance of the traditional industry structure also means that several milestones have remained resistant to change.
However, whereas in the traditional industry these milestones perhaps would predict revenues from sales and reaching an audience, due to the transformations this is no longer necessarily the case. Here, the two most striking examples are releasing albums and getting radio airplay. First, musicians continue to release albums, even though they believe that audiences listen more to individual songs in playlists: Playlists are doing well. People don't listen to albums. The medium of the album … is becoming less relevant. People go for one specific track. In the '80s you had more physical sales and people had to listen to your whole record to hear that one single that they really wanted to hear. (10) Moreover, releasing one album does not generate enough attention ('buzz') to tour for a year. For example, by regularly releasing songs it becomes easier to capture the attention of the audience over a longer period: The music industry is volatile. You want to be in the spotlights all the time because those are the moments where you can build an audience. Releasing an album … only gives your audience one release moment … and then they have to remember your album for the next couple of months. Whereas if you release tracks every three, every two months, you are in the picture more often. (8) Second, due to changing media consumption the prominent 3FM radio station is becoming less influential. The musicians believe that receiving attention from this radio station does not have the same effect anymore with regard to reaching an audience. Nevertheless, musicians continue to pursue airplay on 3FM: 'right now, 3FM is a difficult brand to aim for I think (…) but our management doesn't doubt that we should focus on them, so we trust them in that' (10). According to the musicians, because of the transformations these traditional milestones are suboptimal strategies as they have become (at least partially) decoupled from immediate economic success.
To understand why these musicians continue to pursue such practices, we have to look at the way the existing culture of the music industry mediates the market change. First, musicians believe that intermediaries continue to use several milestones as part of their evaluation repertoire, and therefore these practices still appear to receive backing from these powerful actors (Beckert, 1999). According to the musicians, getting radio airplay remains an important way to signal the potential quality of an act and get picked up by the industry: 'radio airplay really is a factor that can bring you success' (5). In the same manner, releasing albums continues to be considered essential, because dominant music critics still focus on them: 'That is how the industry works. … [I]f you want to get a review in De Volkskrant (Dutch newspaper) or in Oor (Dutch pop music magazine), then you have to release an album' (9). In other words, because musicians continue to rely on intermediaries, and these intermediaries continue to rely on these milestones, the cultural scaffolding of the traditional industry seems to have remained stable, causing cultural lag (Swidler, 1986). Even though the milestones have little immediate economic benefit, they remain important steps for the accumulation of reputation over time and hence are expected to contribute to later economic success. Therefore, musicians have come to perceive these milestones primarily as useful for building a favourable reputation. However, as discussed earlier, because they feel that they have such a small chance to reach their goals in the industry, collecting these milestones carries the risk of becoming an 'empty' story to hold on to, without much guarantee that this investment will pay off in the long run.
A second reason why milestones are resistant to change is because musicians keep drawing from the traditional market culture as a toolkit to shape their work practices. When asked why a musician wanted to release an album with his act, he responded: 'it is a band thing I guess. Bands will always release albums' (11). Here, the market culture continues to provide these practices with a certain taken-for-grantedness (Swidler, 1986). In addition, several practices also have a symbolic appeal for musicians, which also causes them to continue to perform these practices. This symbolic appeal relates to romanticized connotations that musicians attach to the archetypical image of the pop artist. As one musician captured this appeal: I think that everybody secretly wants to be a rock star. As in: travel a lot, see a lot of places, people think you're cool, a lot of crazy parties, crazy people, yes that is very cool. It's just fun. (12) Musicians enjoy being part of the traditional music industry (see also Crossley & Bottero, 2015), and continue to be attracted to its symbolic appeal (Threadgold, 2018). As a result, they orient themselves toward the opportunities, or space of possibles (Bourdieu, 1993), that the music industry offers and they shape their work practices in accordance with this. As a result, musicians still aspire, for example, to play at prestigious festivals, 'our ultimate goal is to play at Glastonbury' (10), or tour abroad, 'we want to play [abroad] more often. These are very small pub shows, but that is a lot of fun' (9), even if they lose money with it, because, in addition to their function as milestones, these activities correspond with the romantic myth of the musician.
Of course, the experiences of these musicians have been altered by the transformations of the music industry: they earn less from record sales and depend more on performing, they act less as industry workers and more as entrepreneurs, and they try to take advantage of the new technological opportunities. Nevertheless, because of their continuing dependency on traditional intermediaries and because they continue to value the symbolic appeal of being part of the music industry, musicians continue to perform reputational practices with low immediate economic impact.
Conclusion
In this article we investigated the role of reputation in the career building strategy of early-career musicians in a transforming music industry. In the first part of our analysis, we showed how the interviewees create such a favourable reputation. Here, we argued that to build a sustainable career in music they experience a dependency on the intermediaries within the Dutch music market. Therefore, to improve their status and acquire rewards, they aim to create a favourable reputation by collecting milestones to signal a track record of prior successes and their capacity for future success (cf. Bielby & Bielby, 1994). Together, these findings provide insights into how early-career musicians solicit the support of intermediaries when entering the music industry (Lingo & Tepper, 2013; Zwaan, Ter Bogt and Raaijmakers, 2009) and the role that reputation plays in this process (Dumont, 2018), by showing how musicians attempt to manipulate the decision processes of intermediaries by means of these milestones.
In the second part of our analysis, we studied musicians' beliefs about the ways in which new technologies impact their career building strategies. While digital optimists were hopeful about the opportunities that the transformations held for musicians (Frost, 2007; McLeod, 2005), in line with other recent scholarly work critiquing the prediction that digitization has democratizing effects (Haynes & Marshall, 2017; Young & Collins, 2010), for our interviewees the potential effects of the technological transformations on their career strategy have been largely muted. Results indicate that musicians have only moderately implemented new technologies and roles, and they continue to see the music industry as the most viable pathway to reach their goals. In addition, several milestones remained resistant to change even though their immediate economic impact is limited, creating cultural lag. The reason for this is that the evaluation repertoires of intermediaries continue to function as a cultural scaffolding (Swidler, 1986) and that musicians believe that the intermediaries still back traditional milestones, showing that agents with high levels of capital can continue to promote practices, even if change occurs (Beckert, 2010). In addition, as musicians continue to value the symbolic appeal of being part of the music industry, they continue to experience traditional milestones as meaningful within the context of that market (Bourdieu, 1993).
Overall, our study offers a framework to help explain how culture structures reputational practices. We introduce milestones as a mechanism to illustrate that workers perform certain reputational practices because they believe these reflect the evaluation repertoires of intermediaries. In this way, evaluation repertoires function as a cultural scaffolding for cultural workers, and milestones serve as signposts structuring their careers. Similar analyses in other creative industries may yield comparable patterns of institutionalized practices used by workers to be valued and selected by intermediaries, thereby increasing their capital volume and capital types. At the same time, systematic comparisons of different markets can show how the importance of milestones may differ based on the degree to which finding audiences for artists is 'contingent on gatekeepers' actions' (Hirsch, 1972, p. 655).
Furthermore, our findings contribute to our understanding of why reputational practices may be influenced by industry transformations, and they help explain the circumstances under which workers may resist technological changes and continue to follow established industry practices. First, at this point in time, the technological changes discussed here were not in themselves enough to destabilize the market, as power relations appear to have remained stable. This confirms Hesmondhalgh's (2009) point that changes due to technological innovations in the music industry are recurring patterns and should not be understood as market disintegration. Second, when change occurs, workers do not immediately turn into reflexive entrepreneurs who can 'envision alternative modes of getting things done' (Beckert, 1999, p. 786, original emphasis), because they may continue to take the cultural scaffolding for granted, and leaving the market altogether might undermine the very reason they chose to participate in the first place, i.e. the romantic appeal of being part of the industry.
Of course, this paper tells only one part of the story of careers in music. The practice of gradually accumulating reputation, and its value for early-career musicians as described here, is also reflected in the discourse in the Dutch music industry when the importance of a 'chain approach' is discussed, in which musicians work their way up in small steps (e.g. Bussemaker, 2013; Gielen, Van der Veen & Van Asselt, 2017; Van Vugt, 2018). Yet, as we focused on the perspective of musicians, we cannot provide evidence that this strategy of creating a favourable reputation actually appeals to intermediaries beyond what the experiences of these musicians tell us. Moreover, not all musicians want to build an act in the music industry: some opt for careers as a music teacher, session musician or songwriter. Others try to build an act completely outside the traditional music industry, where different business models exist (e.g. cover bands or resident DJs). For example, the new generation of hip-hop musicians relies 'only on informal DIY channels for the production, performance and consumption of rap and hip hop to make their names' (Reitsamer & Prokop, 2017, p. 13). Therefore, it remains important to investigate other forms of work in music, and the role of reputation in them, as well.
Nevertheless, the perceived importance of the strategy of investing in milestones shows the necessity for new acts to accumulate reputation in the music industry in order to stand out amongst their peers. Yet, chances for success are low, as the music industry has been characterized as a winner-takes-all market where many musicians struggle to make a living, a situation that has only intensified due to the industry transformations. Consequently, these practices may cause 'value slippage', as other industry actors may benefit more from the investments made by these musicians than the musicians themselves (Hoeven et al., 2021). Even so, acquiring a competitive advantage in this way can very well make the difference between sold-out tours and the margins of rehearsing in your parents' garage.
|
v3-fos-license
|
2018-04-03T02:29:17.995Z
|
2010-08-01T00:00:00.000
|
5153256
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1053/j.gastro.2010.04.055",
"pdf_hash": "c58bdbe1895b7115a5f22b5937fd4628848f2bc0",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2636",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "f6425eb6ab41c40431074166f239193bef55bcb6",
"year": 2010
}
|
pes2o/s2orc
|
LKB1 haploinsufficiency cooperates with Kras to promote pancreatic cancer through suppression of p21-dependent growth arrest
We investigated the role of Lkb1 in a model of pancreatic cancer, both in terms of disease progression and at the molecular level. To test the relevance of our findings to human disease, we investigated levels of LKB1 and its potential targets in human pancreatic cancer. RESULTS: We definitively show that Lkb1 haploinsufficiency can cooperate with oncogenic Kras to cause pancreatic ductal adenocarcinoma (PDAC) in the mouse. Mechanistically, this was associated with decreased p53/p21-dependent growth arrest. Haploinsufficiency for p21 (Cdkn1a) also synergizes with Kras G12D to drive PDAC in the mouse. We also found that levels of LKB1 expression were decreased in around 20% of human PDAC and significantly correlated with low levels of p21 and a poor prognosis. Remarkably, all tumors that had low levels of LKB1 had low levels of p21, and these tumors did not express mutant p53. CONCLUSIONS: We have identified a novel LKB1-p21 axis that suppresses PDAC following Kras mutation in vivo. Down-regulation of LKB1 may therefore serve as an alternative to p53 mutation to drive pancreatic cancer in vivo.
Pancreatic cancer is the fourth most common cause of cancer deaths worldwide, with an estimated 5-year overall survival of <5%. 1 The highly aggressive nature of this disease, combined with the anatomical location of tumors, results in 90% of patients having surgically unresectable disease at the time of diagnosis. 2 The pancreas consists of 3 main cell types: islet cells, acinar cells, and duct cells. Tumors can arise from any of these cell types, but approximately 90% of cases are pancreatic ductal adenocarcinoma (PDAC). PDAC arises from precursor lesions called pancreatic intraepithelial neoplasms (PanINs). 3 The formation of PanIN lesions and the progression to invasive adenocarcinoma are driven by activation of the KRAS oncogene in about 90% of cases, 4 accompanied by loss of function of tumor suppressors, most commonly the Ink4a, p53, and Smad4 tumor suppressors. 3 Certain inherited genetic lesions have also been shown to confer a predisposition to pancreatic cancer. Mutations in the LKB1/STK11 tumor suppressor gene result in the Peutz-Jeghers syndrome, 5,6 an autosomal-dominant condition characterized by hamartomatous polyps of the gastrointestinal tract and a dramatically increased risk of epithelial malignancies at other sites, including a >100-fold increased risk of pancreatic cancer. [7][8][9] Restoration of silenced LKB1 in human pancreatic carcinoma cells induces apoptosis in vitro. 10 Furthermore, LKB1 gene inactivation has been observed in intraductal papillary mucinous neoplasms of the pancreas. 11 Lkb1 knockout mice are not viable, and embryos survive only until embryonic day E9.5 because of neural tube defects and vascular abnormalities. 12 However, Lkb1+/− mice are viable and mirror human Peutz-Jeghers syndrome in that they develop benign intestinal polyps (hamartomas) and have an increased risk of a range of cancers later in life.
[13][14][15][16] However, the consequences of Lkb1 deficiency in the pancreas have not been well-studied thus far, and the mechanisms by which its loss may contribute to pancreatic cancer are unknown.
Lkb1 encodes a serine/threonine kinase that activates a number of downstream kinases, including the adenosine monophosphate-activated protein kinase (AMPK), which responds to energy stress by negatively regulating the mammalian target of rapamycin kinase. 17 Lkb1 is also able to regulate cell growth and apoptosis, potentially through interaction with the tumor suppressor p53. 18 Ectopic expression of Lkb1 in cells lacking the endogenous protein induces p21 expression and cell-cycle arrest in a p53-dependent manner, and chromatin immunoprecipitation analysis has revealed that Lkb1 is recruited to the p21 promoter by p53. [19][20][21] Lkb1 deficiency has also been shown to prevent culture-induced senescence, although paradoxically it renders cells resistant to subsequent transformation by Ha-Ras. 13 Using Cre-lox technology to target endogenous expression of Kras G12D to the mouse pancreas through the Pdx1 pancreatic progenitor cell gene promoter results in the formation of PanINs. 22 However, these lesions fail to rapidly progress and only develop into invasive pancreatic adenocarcinoma at low frequency unless additional genetic lesions are introduced. In this study, we have assessed whether Lkb1 loss can promote tumorigenesis in this model and found a dramatic acceleration of tumorigenesis in mice carrying a single conditional knockout allele of Lkb1. We have also demonstrated that this is associated with decreased p21-dependent growth arrest.
Immunohistochemistry
Immunohistochemical analysis was performed on formalin-fixed paraffin-embedded sections according to standard protocols. For detailed protocols, see Supplementary Materials.
Senescence-Associated β-Galactosidase Staining
We stained cryosections of mouse pancreas or tumor for senescence-associated β-galactosidase activity according to the manufacturer's protocol (Cell Signaling Technology, Danvers, MA) and counterstained them with nuclear fast red solution.
Laser Capture Microdissection and RNA Isolation
Frozen tissue was sectioned (at 15-20 μm) onto PALM-PEN membrane slides and lightly stained with hematoxylin. Laser capture microdissection was performed using the P.A.L.M. MicroLaser System. RNA was isolated with the RNA easy extraction kit (Qiagen, Hilden, Germany).
Reverse-Transcription Polymerase Chain Reaction
Total RNA was reverse transcribed to complementary DNA using the Superscript III kit (Invitrogen, Carlsbad, CA) according to manufacturer's instructions. For further information and primers, see Supplementary Materials.
Tissue Microarray Analysis
The human pancreatico-biliary tissue microarray was created within the West of Scotland Pancreatic Unit, University Department of Surgery, Glasgow Royal Infirmary. For further information, see Supplementary Materials.
This increased pancreatic cancer predisposition was not limited to invasive tumors; the number of PanINs observed in 6-week-old Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) mice was significantly increased when compared with Pdx1-Cre, Kras G12D/+ (KC) mice (Figure 1B, P = .007). In addition, we also observed an increase in PanIN 2 and PanIN 3 lesions compared with KC mice (Figure 1C). Histological sections of tumors arising in Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) animals were analyzed to ascertain the phenotype of these PanINs and tumors. PanIN lesions exhibited characteristic histologic changes of the normal duct, including expansion of the cytoplasm with associated mucin accumulation, which was confirmed by Alcian blue staining (Figure 1D, right), formation of papillary architecture, loss of polarity, appearance of atypical nuclei, and luminal budding (Figure 1D). A majority of Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) tumors were PDAC (Figure 1E); however, some tumors exhibited a more cystic morphology (Figure 1E, middle panel), and enhanced immune cell infiltration was apparent in some tumors (Figure 1E, right panel), compared with the small number of tumors observed in older Pdx1-Cre, Kras G12D/+ (KC) mice. Our results show that Lkb1 deficiency can synergize with activated Kras to induce pancreatic tumor formation.
Homozygous Loss of Lkb1 Is Sufficient to Initiate Pancreatic Tumorigenesis
We also investigated whether loss of Lkb1 as a sole initiating genetic event was sufficient to induce pancreatic tumor formation in the mouse. We crossed Lkb1 flox/+ mice to Pdx1-Cre mice and interbred the offspring to generate cohorts of Pdx1-Cre Lkb1 flox/+ (LC) and Pdx1-Cre Lkb1 flox/flox (LLC) mice. We found that Pdx1-Cre Lkb1 flox/flox (LLC) mice develop pancreatic tumors with an incidence of 100% and a median survival of 68 days, while Pdx1-Cre, Lkb1 flox/+ (LC) mice remained disease-free for 500 days (Figure 2A). Pdx1-Cre Lkb1 flox/flox (LLC) mice presented with abdominal distention, and tumors arising in these mice were mucinous cystadenomas characterized by the presence of multiple large cysts, in some cases at the expense of most of the normal pancreas tissue (Figure 2B-E).
Tumors also exhibited excessive mucin production, as confirmed by Alcian blue staining (Figure 2E). We conclude that Lkb1 loss as a sole event is sufficient to initiate pancreatic tumor growth; however, those tumors are benign mucinous cystadenomas, and Lkb1 loss alone is not sufficient to drive formation of PDAC. These results agree with a previous analysis of mice lacking Lkb1 specifically within the pancreas, in which mice developed pancreatic serous cystadenomas. 27
Laser capture microdissection was also performed to isolate tissue from preneoplastic PanIN lesions and tumors arising in Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) mice, and reverse-transcriptase polymerase chain reaction analysis showed transcription of wild-type Lkb1 in the resulting tumors (Figure 3D). Further, immunoblot analysis revealed only a decrease in Lkb1 levels, and a reduction in phospho-AMPK levels (Supplementary Figure 1). These results demonstrate that Lkb1 is a haploinsufficient pancreatic tumor suppressor, and that lack of only 1 allele is sufficient, when combined with Kras mutation, to cause PDAC.
Lkb1 Deficiency Limits Expression of the Tumor Suppressors p53 and p21 in PanIN Lesions
We sought to further delineate the mechanism by which Lkb1 haploinsufficiency synergizes with activated Kras to promote pancreatic tumorigenesis. Consistent with its in vivo tumor suppressor function, Lkb1 deficiency has been shown to prevent culture-induced senescence. 13 Re-expression of Lkb1 in cancer cell lines deficient for Lkb1 has also been shown to result in p53-dependent cell-cycle arrest and enhanced expression of p21. 19,20 On the basis of these results, we wondered whether Lkb1 might act to suppress pancreatic tumorigenesis by promoting growth arrest in vivo through transcriptional activation of p21, because preneoplastic pancreatic lesions in Elas-tTA/tetO-Cre, Kras G12V mice have previously been reported to undergo oncogene-induced senescence, as indicated by positive staining for a number of senescence markers. 29 We hypothesized that preneoplastic lesions in our Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) mice would exhibit diminished p21 and p53 expression compared with those lesions found in Pdx1-Cre, Kras G12D/+ (KC) mice. We performed immunohistochemical analysis for both p21 and p53 in PanIN lesions in these mice. High levels of both p21 and p53 were observed in PanINs arising in Pdx1-Cre, Kras G12D/+ (KC) mice (Figure 4A, Supplementary Figure 2), compared with normal ducts in these mice, as expected (data not shown). Significantly, however, in PanIN lesions arising in Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) mice, we observed a considerable reduction in levels of both p21 and p53 (Figure 4B, Supplementary Figure 2).
We quantified the proportion of cells staining positive for p21 and p53 expression in PanINs from both Pdx1-Cre, Kras G12D/+ (KC) and Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) mice and confirmed that expression of both was significantly reduced in Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) PanINs, with a median of 14.3% p21-positive cells and 12.0% p53-positive cells, compared with 32.2% and 36.1%, respectively, in Pdx1-Cre, Kras G12D/+ (KC) PanINs (Figure 4C, P < .002; Figure 4D, P < .004). Quantitative real-time polymerase chain reaction analysis performed on microdissected tissue demonstrated that transcription of p21 is also decreased in Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) mice, to 0.64% of the levels observed in Pdx1-Cre, Kras G12D/+ (KC) PanINs (data not shown).
Decreased Lkb1 Expression in Human PDAC Correlates With Low p21 Expression and Reduced Survival
We next sought to investigate whether this Lkb1/p21 pathway was relevant to human PDAC development. Lkb1 and p21 immunohistochemistry was performed in a tissue microarray containing 114 cases of primary human PDAC. As expected, we observed Lkb1 staining primarily in the cytoplasm of epithelial cells (Figure 7A), while p21 staining was evident in the nuclear compartment (Supplementary Figure 5). Lkb1 staining was present in 98% of stained normal ductal tissue. In PDAC, 19% of cases expressed Lkb1 at a low level (histoscore <100). Expression levels of Lkb1 did not differ in terms of lymph node status or tumor size; however, high tumor grade and stage were significantly associated with lower median Lkb1 expression level (Figure 7B; P = .01 and P = .02, respectively). In univariate analysis, low Lkb1 expression (n = 20) was associated with significantly decreased survival compared with high expression (n = 86) after resection (Figure 7D, left panel; 95% CI: 13.5-20.6; P = .006). Most importantly, in a multivariate Cox proportional-hazards regression analysis, low Lkb1 expression remained an independent predictor of poor survival, with a hazard ratio of 1.87 (95% CI: 1.09-3.22; P = .022).
Given our preclinical data suggesting that low Lkb1 levels caused low levels of p21, we next investigated the expression of p21 on the same human PDAC tissue microarray. Expression levels of p21 were not significantly altered in relation to any clinicopathological parameter; however, low expression of p21 (n = 78) was associated with decreased cumulative survival after surgical resection, compared with high expression (n = 28) (Figure 7D, right panel). Moreover, p21 expression was significantly correlated with Lkb1 expression (Figure 7C, Spearman's correlation coefficient 0.34; P < .001). Significantly, high expression of both Lkb1 and p21 identified a group of patients with a more favorable outcome and a median survival of 25.7 months (Figure 7E, 95% CI: 12.9-40.3). Other predictors of poor survival were higher tumor stage, high histologic grade, larger tumor size, and positive resection margin; however, p21 status did not independently influence outcomes (Supplementary Table 1).
Because the TP53 tumor suppressor gene is frequently mutated in human pancreatic cancer (40%-70%) 30 and LKB1 is down-regulated in around 20% of PDAC, we hypothesized that loss of Lkb1-mediated p53/p21 induction might be able to circumvent the need for p53 mutation in human PDAC, and thus should not be down-regulated in those tumors with p53 mutation. We therefore investigated levels of p53 accumulation, indicative of p53 mutation, by immunohistochemical staining of the human PDAC tissue microarray. Strikingly, in those tumors that had low levels of LKB1 and, hence, low levels of p21, we did not observe accumulation of mutant p53 (median histoscore = 4.08, n = 20). In contrast, in the subset of tumors that had low p21 with high LKB1 expression, we found significantly higher levels of p53, indicative of accumulation of mutant p53 (median histoscore = 71.3, n = 58, P = .05) (Figure 7F). In human pancreatic cancer, we have shown that Lkb1 deficiency correlates with loss of p21 expression and with poorer prognosis, and that Lkb1 deficiency may act as an alternative to p53 mutation in human pancreatic tumorigenesis. These results support the hypothesis that Lkb1 acts as a tumor suppressor in the pancreas, and that it functions, at least in part, by inducing p21 expression. Loss of Lkb1 can thus facilitate escape from Ras-induced, p21-mediated growth arrest, and promote Ras-induced tumorigenesis in the pancreas.
Discussion
These data show that Lkb1 haploinsufficiency synergizes with activated Kras in pancreatic tumorigenesis. Mechanistically, we believe this is because of reduced growth arrest/senescence through low levels of p21 in PDAC from these mice. Importantly, our study of human PDAC strongly supports this finding, as low levels of p21 and LKB1 are correlated in human PDACs. Our data are consistent with the previous findings that Lkb1 loss prevents culture-induced cellular senescence, 13 allows BRAF mutant melanoma cells to proliferate, 31 and cooperates with activating Kras mutations in a mouse model of lung cancer. 32 Indeed, our studies in both the pancreas and intestine suggest strong synergy with Kras signaling, with heterozygosity for Lkb1 sufficient to drive signaling downstream of Kras. 33 Overall, these data indicate that levels of Lkb1 are critical in determining the cellular response to Kras activation.
One important question that has been raised through our work and that of others 13 is whether biallelic mutations in LKB1 are required for tumorigenesis or whether they may in fact be limiting for tumor progression. Peutz-Jeghers syndrome patients develop benign hamartomas of the gastrointestinal tract and develop intraductal papillary mucinous neoplasm and cystadenomas.
Here we have confirmed the previous study of Hezel and colleagues, 27 who showed that complete loss of Lkb1 in the pancreas leads to formation of benign cystadenomas. Taken together, these data argue that complete loss of Lkb1 leads to formation of benign tumors, that a cooperating oncogenic event is required to drive carcinoma formation, and that the timing of the cooperating oncogenic event may be critical: if it occurs too late, the tumor may not progress from a benign state. From the data presented here, we suggest that in sporadic cancer, a single LKB1 mutation or down-regulation of protein expression would be sufficient to synergize with KRAS mutation to drive tumor progression. Analysis of human pancreatic cancers is consistent with this hypothesis; 20 of 106 tumors show a down-regulation of LKB1 compared to normal ductal epithelium and, remarkably, low levels of LKB1 can act as an independent prognostic indicator of poor outcomes of resected pancreatic cancer. In agreement with our findings in the pancreas, when the LKB1 gene sequence was determined in primary lung adenocarcinomas, only 8 of 27 tumors (of 80 cancers total) that had a mutation or deletion of LKB1 exhibited biallelic loss, 32 suggesting that a monoallelic mutation in LKB1 is sufficient to drive cancer progression. The lack of LKB1 mutations so far observed in human RAS-driven pancreatic tumors may instead be explained by down-regulation at the protein level, or inactivation of the gene by epigenetic means, because hypermethylation of Lkb1 in hamartomatous polyps and in tumors commonly associated with Peutz-Jeghers syndrome has been demonstrated in the absence of mutation of the gene. 34 We propose that the mechanism for the synergy between Lkb1 heterozygosity and Kras activation is an escape from Kras G12D-induced growth arrest by loss of p53-mediated p21 up-regulation.
The reasons for this are multiple, including increased numbers of PanINs in Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) mice, increased proliferation of PanINs, concomitant reduced expression levels of p53 and p21, reduced expression of senescence-associated β-galactosidase, rapid tumorigenesis in Pdx1-Cre, Kras G12D/+, p21+/− mice, and the human data showing the correlation between LKB1 and p21 levels. Remarkably, no human tumors that had low Lkb1 expression had high p21 expression. There was a subset of human tumors that had low p21 with high LKB1 expression, presumably because multiple different events can cause p21 down-regulation, for example, p53 mutation or TBX2 overexpression. 35 Indeed, this group of tumors exhibited high levels of p53, indicative of mutant p53 accumulation, suggesting that Lkb1 deficiency can substitute for p53 mutation in human pancreatic tumorigenesis.
Given the plethora of pathways that LKB1 impinges on, it is likely that other pathways may also contribute to the phenotype we see here. However, we failed to see clear up-regulation of phospho-mammalian target of rapamycin within the Pdx1-Cre, Kras G12D/+, Lkb1 flox/+ (KLC) PanINs and tumors when compared with the Pdx1-Cre, Kras G12D/+ (KC) PanINs and tumors, although we clearly see reduced levels of the target phospho-AMPK (data not shown). It is possible that within the pancreas, reduced AMPK activation is not sufficient to exert a clear phenotype; indeed, heterozygous AMPK knockout mice have no reported phenotype. 36 Other potential phenotypes of LKB1 deficiency, such as a loss of polarity and differentiation to mucus secretory lineages, could accelerate tumorigenesis in this system. 27,37,38 However, one of the characteristics of Kras-driven PanINs is an increase in mucin secretion and loss of polarity and, because heterozygosity for Lkb1 has never been sufficient to drive either of these 2 events, we believe that these are not major contributors to our phenotype, although they may act in synergy with Kras activation.
In conclusion, we have shown that Lkb1 heterozygosity can accelerate Kras G12D-induced PDAC formation. We have observed a marked reduction of p53 and p21 expression in PanIN lesions in these mice compared with mice bearing intact Lkb1. This correlation is borne out in human PDAC. We therefore propose that Lkb1 acts as a tumor suppressor in the pancreas through its ability to limit the p53/p21 pathway, thus allowing precursor lesions to more easily overcome the Ras-induced growth-arrest barrier to tumor formation.
Genetically Modified Mice and Animal Care
Animals were kept in conventional animal facilities and monitored daily. Experiments were carried out in compliance with UK Home Office guidelines. Mice were genotyped by polymerase chain reaction analysis. Tumor and metastatic burden was assessed by gross pathology and histology. Animals were sacrificed by cervical dislocation as per institutional guidelines. Organs/tumors were removed and either fixed in 10% buffered formalin overnight at room temperature or snap frozen in liquid nitrogen. Fixed tissues were paraffin-embedded, and 5-μm sections were placed on silanized/poly-L-lysine slides for immunohistochemical analysis.
Immunohistochemistry
Formalin-fixed paraffin-embedded sections were deparaffinized and rehydrated by passage through xylene and a graded alcohol series. Endogenous peroxidase activity was inactivated by treatment with 3% hydrogen peroxide, after which antigen retrieval was performed using microwave-heated antigen unmasking solution (Vector Labs, Burlingame, CA) or by incubation in citrate buffer in a pressure cooker. Sections were blocked in 5% serum for an hour and then incubated with primary antibody for an hour at room temperature or overnight at 4°C. Primary antibodies used were anti-Lkb1 (Abcam, Cambridge, UK) 1:200, anti-pAMPK (Cell Signaling Technology) 1:50, anti-mouse p53 (Vector) 1:100, anti-p21 (Santa Cruz Biotechnologies, Santa Cruz, CA) 1:500, anti-Ki67 (Vector) 1:200, and anti-human p53 (Dako, Carpinteria, CA). Sections were incubated in secondary antibody for an hour (Dako EnVision+ Kit, or Vectastain ABC system) and the staining was visualized with 3,3′-diaminobenzidine tetrahydrochloride. Alcian blue staining was carried out by incubation in Alcian blue solution (pH 2.5) for 30 minutes, followed by counterstaining in nuclear fast red solution for 5 minutes.
Reverse-Transcriptase Polymerase Chain Reaction
Polymerase chain reactions were performed on a PTC-200 DNA Engine (Bio-Rad Laboratories, Hercules, CA), using the GoTaq polymerase kit (Promega, Madison, WI) according to manufacturer's instructions. Polymerase chain reaction products were run on a 2% agarose gel, stained with ethidium bromide, and visualized using the GelDoc-It 300 imaging system (UVP, Cambridge, UK). Primer sequences used were: Lkb1 F: GGTCACACTTTACAACATCAC and R: CTCATACTCCAACATCCCTC.
Tissue Microarray Analysis
All patients gave written, informed consent for the collection of tissue samples, and the local Research Ethics Committee approved collection. All cases had undergone a standardized pancreaticoduodenectomy. A total of 1500 cores from 224 cases of pancreaticobiliary cancer (including 114 pancreatic ductal adenocarcinomas) with a full spectrum of clinical and pathological features were arrayed in slides. At least 6 tissue cores (0.6 mm diameter) from tumor and 2 from adjacent normal tissue were sampled. Complete follow-up data were available for all cases within the tissue microarray analysis. Lkb1, p21, and p53 expression levels were scored based on staining intensity and area of tumor cells using a weighted histoscore calculated from the sum of (1 × % weak staining) + (2 × % moderate staining) + (3 × % strong staining), providing a semi-quantitative classification of staining intensity. The cutoff for high and low expression of Lkb1 and p21 was a histoscore of 100 and 40, respectively. Statistical correlation between Lkb1 expression and p21 expression in human PDAC was determined by Spearman correlation coefficient analysis. Kaplan-Meier survival analysis was used to analyze overall survival from the time of surgery. Patients alive at the time of the follow-up point were censored. To compare length of survival between curves, a log-rank test was performed. A Cox proportional hazards model was used for univariate analysis to adjust for competing risk factors, and the hazard ratio with 95% CIs was reported as an estimate of the risk of disease-specific death. Variables that were found to be significant on univariate analysis at P < .10 were included in multivariate analysis in a backward stepwise fashion. Statistical significance was set at a P value of <.05. All statistical analyses were performed using SPSS version 15.0 (SPSS Inc, Chicago, IL).
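The weighted histoscore above is a simple linear combination of staining-intensity percentages. A minimal sketch of the calculation and the high/low classification follows; the function and marker names are ours, while the weights and cutoffs (100 for Lkb1, 40 for p21) are taken from the text:

```python
def weighted_histoscore(pct_weak, pct_moderate, pct_strong):
    """Weighted histoscore = (1 x %weak) + (2 x %moderate) + (3 x %strong).

    Percentages are the fractions of tumor cells staining at each
    intensity, so the score ranges from 0 (no staining) to 300 (all strong).
    """
    total = pct_weak + pct_moderate + pct_strong
    if not 0 <= total <= 100:
        raise ValueError("staining percentages must sum to at most 100")
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong


# Cutoffs stated in the text: histoscore 100 for Lkb1, 40 for p21.
CUTOFFS = {"Lkb1": 100, "p21": 40}


def classify(marker, score):
    """Label expression 'low' below the marker's cutoff, 'high' otherwise."""
    return "low" if score < CUTOFFS[marker] else "high"


print(weighted_histoscore(10, 20, 30))  # 10 + 40 + 90 = 140
print(classify("Lkb1", 140), classify("p21", 30))
```

With these cutoffs, a tumor scoring 140 for Lkb1 would be labeled "high", matching the paper's grouping of cases into low (<100) and high Lkb1 expression.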
|
v3-fos-license
|
2016-01-09T01:06:55.066Z
|
2013-11-15T00:00:00.000
|
10263665
|
{
"extfieldsofstudy": [
"Computer Science",
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.neuroimage.2013.04.120",
"pdf_hash": "0c43fe301dc52a008a7beddb3a51d8d2ad788ec4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2637",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"sha1": "0c43fe301dc52a008a7beddb3a51d8d2ad788ec4",
"year": 2013
}
|
pes2o/s2orc
|
Gearing up for action: Attentive tracking dynamically tunes sensory and motor oscillations in the alpha and beta band
Allocation of attention during goal-directed behavior entails simultaneous processing of relevant and attenuation of irrelevant information. How the brain delegates such processes when confronted with dynamic (biological motion) stimuli and harnesses relevant sensory information for sculpting prospective responses remains unclear. We analyzed neuromagnetic signals that were recorded while participants attentively tracked an actor's pointing movement that ended at the location where subsequently the response-cue indicated the required response. We found the observers' spatial allocation of attention to be dynamically reflected in lateralized parieto-occipital alpha (8–12 Hz) activity and to have a lasting influence on motor preparation. Specifically, beta (16–25 Hz) power modulation reflected observers' tendency to selectively prepare for a spatially compatible response even before knowing the required one. We discuss the observed frequency-specific and temporally evolving neural activity within a framework of integrated visuomotor processing and point towards possible implications about the mechanisms involved in action observation.
Introduction
When interacting with our environment, we face the problem of selecting among a host of action possibilities (Cisek and Kalaska, 2010) that are afforded by different agents and/or objects that attract our attention to varying degrees (e.g., Gibson, 1966; Grèzes et al., 2003). An effective way to resolve this challenge is through directed visuo-spatial attention, which has been repeatedly shown to improve processing of information falling within the locus of attention while reducing the interference from competing sensory information occurring elsewhere within the visual field (Desimone, 1998; Desimone and Duncan, 1995; Posner et al., 1980). Attention studies have frequently examined static stimulation conditions, however, and in part constrained by neuroimaging methods with limited temporal resolution (Culham et al., 1998), much less is known about the brain mechanisms supporting dynamic attention processes (e.g., as involved in attentional tracking of moving objects in space). This is particularly so in everyday contexts such as action observation, an activity known to engage both attention (Belopolsky et al., 2008) and motor systems (Koelewijn et al., 2008).
Insights into the role of brain oscillations involved in spatial attention processes have particularly been gained through human neurophysiological studies employing the endogenous pre-cuing paradigm (Posner et al., 1980). In this paradigm, a cue informs about the likely location of a laterally presented target stimulus. After a delay period (cue-target interval of ~1.0 to 2.5 s) the target stimulus then requires participants to make a perceptual detection (Thut et al., 2006) or discrimination response (Siegel et al., 2008). Cumulative research evidence has shown that lateralized parieto-occipital alpha (8-12 Hz) oscillations are strongly associated with the deployment of spatial attention (Capilla et al., 2012; Thut et al., 2006) as well as modulated by spatial certainty (Gould et al., 2011). Specifically, a decrease of alpha oscillation amplitude in the hemisphere contralateral to the attended visual hemifield has been related to enhanced processing of information at the attended location, whereas an increased (or sustained) alpha oscillation amplitude in the hemisphere ipsilateral to the attended visual hemifield has been related to the suppression of processing at the unattended location (Rihs et al., 2009).
In addition to the observation that alpha neural activity is associated with the functional inhibition of task-irrelevant brain areas (Händel et al., 2010; Jensen and Mazaheri, 2010; Kelly et al., 2006), other studies have also demonstrated that lateralized alpha amplitude is (inversely) related to perceptual performance (van Dijk et al., 2008; Wyart and Tallon-Baudry, 2009). Thus, improvement of both perceptual and motor performance correlates with the suppression of parieto-occipital alpha oscillations (Foxe and Snyder, 2011). Since occipital alpha oscillations have been linked to excitability changes of visual areas (Romei et al., 2008a, 2008b, 2010), modulations of perceptual performance are not surprising. However, response times depend on the activation state of motor areas, which is typically reflected in beta (16-35 Hz) oscillations (Engel and Fries, 2010). In fact, beta oscillations in cortical motor areas are known to have a causal effect on movement duration, force generation, and inhibition (Joundi et al., 2012; Pogosyan et al., 2009) and are modulated during action observation (Kilner et al., 2009; Press et al., 2011). Yet, the specific contributions of parieto-occipital alpha and motor beta oscillations to perceptually (e.g., spatially) invigorated action preparation remain unclear. Indeed, we know relatively little about how alpha oscillations (i) mediate the expectancy-related attention process, and (ii) influence the on-going neural dynamics involved in shaping our prospective actions when spatial "cue" information is dynamically provided rather than statically some time before target onset. Yet the former scenario mimics an everyday situation (e.g., driving through a busy crossroad where the movement of other vehicles and pedestrians requires simultaneous monitoring) wherein we are required to pay attention (overtly or covertly) to dynamically moving objects of interest before deciding which action will be taken shortly.
The present study investigates how attention is allocated in the presence of dynamic biological motion stimuli (i.e., the arm movement of an actor) and how on-going anticipatory motor activation is influenced. More specifically, we used magnetoencephalography (MEG) to monitor participants' alpha and beta oscillations while they observed an actor making pointing movements towards a lateral target (Fig. 1). At the end of the pointing movement the color of the target changed and cued the participant to perform a right or left hand response. Importantly, the actor's pointing movements were either targeted towards an endpoint in the same hemifield (straight movement) or one located in the opposite hemifield (crossed movement), thereby validly indicating the position of the response cue but not that of the to-be-executed response. The task as a whole requires participants (i) to detect/discriminate the pointing hand in the spatial periphery, (ii) to attentively track the changing spatial location of the pointing hand, and (iii) to discriminate the response cue after the pointing movement has reached its endpoint location. This modified 'Simon task' (Simon, 1969) provides a useful experimental paradigm for a number of reasons. First, it fulfills the attention processing criteria employed in previous visuo-spatial attention research. Second, it grounds the task within an ecological framework, as we deal with both static and dynamic stimuli in our daily interactions with others. Third, and importantly, our modified task version allows us to temporally segregate actual motor from sensory processing and to focus specifically on the attention-related processes prior to any actual movement onset.
Furthermore, presenting straight and crossed arm movements allows us to test whether covert motor activation induced during action observation (Koelewijn et al., 2008) relates to the moving limb of the actor (effector mirroring) or rather to the spatial position of the arm (position mirroring). Whereas behavioral evidence regarding this issue is mixed (Belopolsky et al., 2008;Bertenthal et al., 2006), to our knowledge, there is only one functional neuroimaging (MEG) study that addressed this issue (Kilner et al., 2009), finding evidence against the effector mirroring view.
Our study specifically focuses on the sensory and cognitive processes leading up to motor response anticipation. We hypothesize that the active allocation of attention to incoming visuo-spatial information serves to dynamically update predictions about the behaviorally relevant movement endpoint (which could be located either in observers' left or right visual hemifield). Consequently, the emerging alpha lateralization in parieto-occipital brain regions should sensitively index the associated shift in spatial attention. It is also known that beta oscillations in cortical motor regions are related to the temporal prediction of sensorimotor events (Fujioka et al., 2012; Saleh et al., 2010) and the state of motor preparation as a function of response certainty (Tzagarakis et al., 2010). Therefore, we further hypothesize to observe dynamic changes in beta lateralization reflecting anticipatory processes, e.g., response bias, and/or influences of sensorimotor information that can be expected to change during the attentive observation of the actor's pointing movement. Crucially, source-space analysis will allow us to gain first insights into the temporal interplay of alpha and beta oscillatory activity in brain areas concerned with sensory, attentional, and motor processing. Our time-resolved MEG sensor- and source-space analyses corroborated these hypotheses, demonstrating the dynamic updating of spatial attention by incoming sensory evidence together with priming of parieto-premotor areas. These results further inform current views regarding some of the key brain processes involved in the action observation network (e.g., Kilner, 2011; Thompson and Parasuraman, 2012).

Fig. 1. The sequence of events in an experimental trial. A gray background screen of equal size as the movie frames (13° × 13° visual angle) was presented for a randomized inter-trial interval (ITI) of 200 to 700 ms. The movie (60 fps) started with the presentation of the first frame at t = 0 ms. The last movie frame and the response cue appeared at t = 1000 ms, and remained on screen showing the actor with the pointing hand in its end position and the other hand in a resting position. A target color change from black to blue (yellow) necessitated a left (right) index response within 1500 ms post response cue onset. In trials requiring no response, the randomized ITI followed 1500 ms after the end of the pointing movement. Throughout the experimental session a white fixation cross was centered in the middle of the stimuli, and it was also present during the randomized inter-trial intervals when the movie still-frames were reset to a gray background. Participants were provided with response feedback during the ITI in the form of a color change in the fixation cross that corresponded to their actual response (i.e., blue for 'left', yellow for 'right', and red for a 'wrong' response, such as an incorrect or delayed response).
Participants
Twelve healthy right-handed (Oldfield, 1971) paid volunteers (6 females, mean age 24.4 years; SEM ± 1.8) with no history of neurological illness participated in the study after providing informed consent. The study was approved by the College of Science and Engineering Ethics Committee (University of Glasgow).
Stimuli and task
Participants were shown 1-s movies of an actor, seated forward-facing, making a pointing action to one of two lateral targets located in front of both the actor and participant (Fig. 1). The actor's body and arms/hands were visible but not the face. The movies began (t = 0 ms) with the actor resting both hands palms down on the table, prior to making a pointing action with either the left or right hand. The movement terminated with the actor's moving index finger ending on one of the lateral targets. Critically, on the last frame of the movie (t = 1000 ms), the target that the actor pointed to either changed in color from black to blue or yellow, or it remained black. This target color change provided participants with the relevant response cue. A target color change from black to blue (yellow) required a left (right) index finger response, while the response was to be withheld in trials with no target color change. The stimuli were presented with an inter-trial interval of 200 to 700 ms post response (or after 1500 ms in no-response trials) using the Psychophysics Toolbox (v3.0.8) (Brainard, 1997; Pelli, 1997) within MATLAB® (MathWorks™, MA, USA). Twelve different Experimental Conditions, from the factorial combination of (i) the actor's moving hand (left vs. right), (ii) the target location corresponding to the movement endpoint (left vs. right, relative to the observer) and (iii) the required response (left, right or none), were each presented 10 times in randomized order in each of the eight experimental blocks. Each block thus consisted of a randomized sequence of 120 trials.
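As a concrete illustration of this factorial design, the following Python sketch (ours, not part of the original Psychophysics Toolbox/MATLAB code) enumerates the twelve Experimental Conditions and the resulting 120 trials per block:

```python
from itertools import product

# Factorial combination of the three condition factors described in the text.
hands = ["left", "right"]               # (i) actor's moving hand
targets = ["left", "right"]             # (ii) movement endpoint, observer's view
responses = ["left", "right", "none"]   # (iii) cued response

conditions = [
    {"hand": h, "target": t, "response": r}
    for h, t, r in product(hands, targets, responses)
]

# Each condition is shown 10 times per block, in randomized order.
REPS_PER_BLOCK = 10
trials_per_block = len(conditions) * REPS_PER_BLOCK

print(len(conditions))   # 12
print(trials_per_block)  # 120
```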
Neuroimaging acquisition
Participants were tested sitting upright within an electromagnetically shielded room. MEG data were acquired using a 248-channel magnetometer system (Magnes 3600 WH; 4D Neuroimaging, San Diego, USA). Head position stability was assessed via five head-position indicator coils attached relative to the (left, right preauricular and nasion) fiducials which were co-digitized with head-shape (FASTRAK®, Polhemus Inc., VT, USA) for subsequent co-registration with individual MRI (1 mm³ T1-weighted; 3D MPRAGE). The MEG, index finger responses (LUMItouch™, Photon Control Inc., BC, Canada) and eye-tracker (EyeLink 1000; SR Research Ltd., Ontario, Canada) signals were sampled synchronously at 1017.25 Hz.
Behavioral analysis
Individual median response times (RT) were determined for the eight response-required Experimental Conditions (Fig. 2). We assessed the effects of (A) the Actor's Moving Hand (left vs. right), (T) the Endpoint Target Location on which the actor's pointing movement ended (left vs. right, relative to the participant's perspective), and (R) the Cued Response (left vs. right) on participants' response with repeated measures analysis of variance (PASW Statistics 18, SPSS Inc., IBM, IL, USA).
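The behavioral aggregation can be sketched as follows in Python (an illustration under an assumed trial layout, not the authors' PASW/SPSS analysis); the −1/+1 coding anticipates the coding used later in the sensor-level regression:

```python
import numpy as np

def median_rts(trials):
    """Median RT per response-required condition.

    trials: list of (A, T, R, rt) tuples with A (actor's hand), T (endpoint
    target) and R (cued response) coded -1 for left and +1 for right, and rt
    in ms. Returns {(A, T, R): median_rt}.
    """
    by_cond = {}
    for a, t, r, rt in trials:
        by_cond.setdefault((a, t, r), []).append(rt)
    return {cond: float(np.median(rts)) for cond, rts in by_cond.items()}

# Toy example: two conditions, three trials each.
trials = [(-1, -1, -1, 400), (-1, -1, -1, 420), (-1, -1, -1, 500),
          (+1, +1, +1, 380), (+1, +1, +1, 390), (+1, +1, +1, 450)]
meds = median_rts(trials)
print(meds[(-1, -1, -1)])  # 420.0
print(meds[(+1, +1, +1)])  # 390.0
```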
MEG data processing
All data processing, time-frequency and statistical analyses were performed using Fieldtrip (Oostenveld et al., 2011) within MATLAB®. During MEG acquisition, there were a few noisy channels that were invariant across subjects and some that were subject-specific. To standardize the whole signal pre-processing and to facilitate the subsequent source analysis, a common set of MEG sensors (N = 26, visually identified, and located primarily among the frontal sensors) with large signal variance was removed from the MEG data set. Next, for sensor-level analysis, we performed nearest-neighbor interpolation of the removed noisy channels using the Fieldtrip function ft_channelrepair. In subsequent sensor-level analyses, this allowed us to use the same set of 248 channels across all subjects. Raw MEG signals were epoched from −1000 to +3000 ms relative to stimulus onset (0 ms), with linear trends removed. Eye-blinks and movement artifacts were rejected through trial-by-trial visual inspection. The remaining epochs were 'de-noised' relative to reference MEG signals prior to Independent Component Analysis to isolate and reject the cardiac component from the MEG signal.

Fig. 2. Response-Congruency conditions (denoted as ~LR; ~RL; ~LL; ~RR; ~ signifies the irrelevance of the actor's moving hand), in which participants make a response that is spatially congruent (i.e., ~LL or ~RR) with the endpoint target location of the actor's movement or not (i.e., ~RL or ~LR). Median RTs ranged from 300 to 610 ms (mean ± SEM = 451 ± 10 ms) across subjects and response-required Experimental Conditions.
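The epoching and detrending steps can be sketched as follows (illustrative Python with toy data and an assumed array layout, not the authors' Fieldtrip pipeline):

```python
import numpy as np

def epoch_and_detrend(data, onsets, fs, tmin=-1.0, tmax=3.0):
    """Cut epochs around stimulus onsets and remove a linear trend per channel.

    data: (n_channels, n_samples); onsets: sample indices of stimulus onset
    (chosen so that every epoch lies inside the recording); fs: sampling rate.
    """
    n0, n1 = int(tmin * fs), int(tmax * fs)
    epochs = []
    for on in onsets:
        ep = data[:, on + n0 : on + n1].astype(float)
        # Remove a linear trend channel-wise via a least-squares line fit.
        x = np.arange(ep.shape[1])
        for ch in range(ep.shape[0]):
            slope, intercept = np.polyfit(x, ep[ch], 1)
            ep[ch] -= slope * x + intercept
        epochs.append(ep)
    return np.stack(epochs)  # (n_epochs, n_channels, n_times)

fs = 100  # toy sampling rate; the study sampled at ~1017 Hz
data = np.cumsum(np.random.randn(2, 1000), axis=1)  # drifting toy signals
eps = epoch_and_detrend(data, onsets=[200, 500], fs=fs)
print(eps.shape)  # (2, 2, 400)
```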
Artifact-free neuromagnetic time series (mean ± SEM = 541.25 ± 17.24 trials) corresponding to correct trials were transformed to planar gradient signals (Bastiaansen and Knosche, 2000) that entered subsequent time-frequency analyses. For each of the eight response-required Experimental Conditions (Fig. 2), time-frequency representations were computed from −1000 ms to 3000 ms using a Hanning-tapered 500 ms temporal window and a 20 ms time resolution. These time series were expressed as relative change from baseline.
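A minimal Python sketch of this sliding-window time-frequency computation follows (our illustration with a single-channel toy signal, not the Fieldtrip implementation; the study used a 500 ms Hanning window at 20 ms resolution):

```python
import numpy as np

def tf_power(signal, fs, freqs, win_s=0.5, step_s=0.02):
    """Hanning-tapered sliding-window DFT power at the requested frequencies."""
    n_win = int(win_s * fs)
    step = int(step_s * fs)
    taper = np.hanning(n_win)
    starts = range(0, len(signal) - n_win + 1, step)
    fft_freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    power = np.empty((len(freqs), len(starts)))
    for j, s in enumerate(starts):
        spec = np.fft.rfft(signal[s : s + n_win] * taper)
        for i, f in enumerate(freqs):
            k = np.argmin(np.abs(fft_freqs - f))  # nearest DFT bin
            power[i, j] = np.abs(spec[k]) ** 2
    return power

def relative_change(power, n_baseline_bins):
    """Express power as relative change from the mean of the baseline bins."""
    base = power[:, :n_baseline_bins].mean(axis=1, keepdims=True)
    return (power - base) / base

fs = 200
t = np.arange(0, 3, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)  # 10 Hz "alpha" toy signal
p = tf_power(sig, fs, freqs=[10.0])
rel = relative_change(p, n_baseline_bins=10)
print(p.shape)  # (1, 126)
```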
Sensor-level analysis
Four groups of MEG sensors were defined that covered left/right parieto-occipital and left/right motor areas (see Supplementary Methods section; Inline Supplementary Fig. S1). Subsequent analyses were performed using the relative power change spectra (ΔP(f, t)) obtained from these sensor subsets and their lateralized signals (i.e., subtraction of the relative change in power spectra for right-hemispheric sensors from that corresponding to the left-hemispheric sensors). Hemisphere-specific and lateralized neuromagnetic modulations in alpha (8-12 Hz) and beta (16-25 Hz) frequency (f) bands were derived for the response-required Experimental Conditions (c). The effect of (A), (T), and (R) on alpha and beta neural activity was examined using time-resolved regression analysis. We used condition-specific averaged motor or parieto-occipital spectra from the left (ΔP_left) or right hemisphere (ΔP_right), or the lateralized spectra, of each participant (s = 1 to 12) as dependent variables and specified independent variables as pseudo dummy values, −1 or 1 for left or right, respectively, with reference to the Actor's Moving Hand (A), Endpoint Target Location relative to the observer's perspective (T), and Cued Response (R) as defined in Behavioral analysis. This is generalized as:

ΔP_hem^(c,s)(f, t) = b_0 + b_1·A + b_2·T + b_3·R + ε(t)

where ΔP_hem^(c,s)(f, t) denotes the motor or parieto-occipital spectra from the left or right hemisphere, or the lateralized spectral time series of each participant (as defined above), b_0, b_1, b_2, b_3 are the regression coefficients and ε the residual error for each sample time point of interest (t) within −500 to +1650 ms relative to stimulus onset. The regression coefficients corresponding to the parameters of interest (A, T, R) were assessed using Bonferroni-corrected (p < 0.0004) t-tests (Manly, 2007).
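The per-time-point regression can be sketched as follows (illustrative Python under assumed array shapes, not the authors' code); on toy data driven purely by the target side T, the regression recovers b_T and leaves b_A and b_R at zero:

```python
import numpy as np

def lateralize(power_left, power_right):
    """Lateralized spectra: left-hemisphere minus right-hemisphere power.
    Both arrays: (n_conditions, n_times) relative power change."""
    return power_left - power_right

def timewise_regression(lat_power, A, T, R):
    """OLS fit of lat_power ~ b0 + b1*A + b2*T + b3*R at every time point.

    lat_power: (n_conditions, n_times); A, T, R: (n_conditions,) in {-1, +1}.
    Returns coefficients of shape (4, n_times): intercept, b_A, b_T, b_R.
    """
    X = np.column_stack([np.ones_like(A, dtype=float), A, T, R])
    coefs, *_ = np.linalg.lstsq(X, lat_power, rcond=None)
    return coefs

# Toy data: lateralized power driven purely by the target side T.
A = np.array([-1, -1, 1, 1, -1, 1, -1, 1])
T = np.array([-1, 1, -1, 1, 1, -1, -1, 1])
R = np.array([-1, 1, 1, -1, -1, 1, 1, -1])
n_times = 5
lat = 0.5 * T[:, None] * np.ones((1, n_times))
b = timewise_regression(lat, A, T, R)
print(np.round(b[2], 3))  # b_T recovered as 0.5 at every time point
```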
Lateralized beta power modulations and RT
Based on the significant interactions between endpoint location and required responses (see Supplementary Results section and Inline Supplementary Table S1), lateralized beta power time series for the eight Experimental Conditions were grouped by Response-Congruency (denoted as ~LR; ~RL; ~LL; ~RR; ~ signifies the irrelevance of the actor's moving hand; Fig. 2). We accounted for laterality effects of response-related neural activity by multiplying Experimental Conditions requiring left or right responses by −1 or 1, respectively. These power spectra were normalized by each subject's maximum spectral power across all Experimental Conditions during stimulus presentation (0 to 1000 ms). For each sampling time point (20 ms resolution) during the experimental trial (−500 to 2000 ms), the relation between normalized power spectra for each Response-Congruency condition and the corresponding normalized median RT was assessed by Pearson correlation.
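This sign-flip-and-correlate procedure can be sketched as follows (illustrative Python with toy data, not the authors' code; the per-subject maximum normalization described above is assumed to have been applied beforehand):

```python
import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def beta_rt_correlation(lat_beta, resp_side, rts):
    """Correlate response-laterality-corrected beta power with RT per time point.

    lat_beta: (n_obs, n_times), already normalized per subject;
    resp_side: (n_obs,) with -1 for left and +1 for right responses;
    rts: (n_obs,) normalized median RTs. Returns r at each time point.
    """
    signed = lat_beta * resp_side[:, None]  # account for response laterality
    return np.array([pearson_r(signed[:, t], rts)
                     for t in range(signed.shape[1])])

rng = np.random.default_rng(0)
rts = rng.uniform(0.3, 0.6, size=20)
resp = rng.choice([-1, 1], size=20)
lat = np.zeros((20, 2))
lat[:, 0] = rng.normal(size=20)   # unrelated to RT
lat[:, 1] = resp * rts            # signed amplitude tracks RT perfectly
r = beta_rt_correlation(lat, resp, rts)
print(round(r[1], 3))  # 1.0
```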
Source-level analysis
To further investigate the underlying neural sources contributing to the observed effects in the sensor-level analysis, we performed lateralized (left vs. right hemisphere) contrast analyses at source level within subjects prior to group comparisons and assessed the regional source maxima to determine brain regions (ROIs) significantly related to Stimulus-Type and Response-Congruency. Noisy or interpolated channels were excluded; that is, all source-level analyses were conducted using the 222 good channels. Time-frequency source signals (600 to 1100 ms; baseline −500 to 0 ms) were derived using DICS (Gross et al., 2001) with each individual's MRI (6 mm volume grid, normalized to MNI space) and experimental session-specific sensor locations to compute forward-modeling lead fields. Spatial filters for localizing alpha and beta activities were derived with a linear constraint allowing maximal gain of source signal power while maximally suppressing that of all other sources and minimizing overall output signal variance (Van Veen et al., 1997). Individual baseline-contrasted stimulus- and response-related statistical source maps were subsequently analyzed by performing group-level contrast analyses to investigate significant sources pertaining to 1) Stimulus-Type (i.e., straight (LR~ vs. RL~) vs. crossed (LL~ vs. RR~) pointing movements), and 2) Response-Congruency (i.e., congruent (~LL vs. ~RR) vs. incongruent (~LR vs. ~RL) responses). Results were bootstrap-resampled (N = 500) to estimate confidence intervals and FDR corrected (α = 0.05).
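The FDR step can be illustrated with a generic Benjamini-Hochberg implementation (a standard procedure; we do not know the authors' exact implementation):

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Return a boolean mask of p-values surviving Benjamini-Hochberg FDR."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * (np.arange(1, m + 1) / m)   # alpha * i / m
    below = p[order] <= thresh
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= alpha*i/m
        mask[order[: k + 1]] = True
    return mask

pvals = [0.001, 0.008, 0.039, 0.041, 0.60, 0.74]
sig = fdr_bh(pvals, alpha=0.05)
print(sig.tolist())  # [True, True, False, False, False, False]
```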
Significant source localizations were integrated within the same brain grid space (Fig. 5; see Inline Supplementary Fig. S2). From the local maxima of these significant sources we derived a set of bilateral ROIs known to be involved in sensory-motor integration (e.g., Donner et al., 2009; Ledberg et al., 2007) and, importantly, functionally related to the overall task, by identifying the regional cortical maxima (based on source comparison t-test statistics) with at least four significant connected surrounding voxels and a combined mean absolute t-value ≥ 2.6. These maxima and their corresponding contralateral locations constituted the set of bilateral ROIs (N = 10; see Inline Supplementary Fig. S2 and Table S2).
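The cluster criterion can be sketched as follows (our reading of the rule, using a simple 6-connected flood fill on a toy t-map; this is not the authors' code, and the threshold |t| ≥ 2.6 is applied here as a stand-in for voxelwise significance):

```python
import numpy as np

def clusters(mask):
    """6-connected clusters in a 3-D boolean mask, via flood fill."""
    mask = np.asarray(mask, bool)
    seen = np.zeros_like(mask)
    out = []
    for idx in zip(*np.nonzero(mask)):
        if seen[idx]:
            continue
        stack, members = [idx], []
        seen[idx] = True
        while stack:
            v = stack.pop()
            members.append(v)
            for d in range(3):
                for s in (-1, 1):
                    n = list(v); n[d] += s; n = tuple(n)
                    if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                            and mask[n] and not seen[n]:
                        seen[n] = True
                        stack.append(n)
        out.append(members)
    return out

def roi_clusters(tmap, t_sig=2.6, min_size=5):
    """Keep clusters with a maximum plus >= 4 connected voxels and mean |t| >= 2.6."""
    keep = []
    for c in clusters(np.abs(tmap) >= t_sig):
        if len(c) >= min_size and np.mean([abs(tmap[v]) for v in c]) >= 2.6:
            keep.append(c)
    return keep

tmap = np.zeros((5, 5, 5))
tmap[2, 2, 1:4] = 3.0   # 3-voxel line: too small
tmap[0, 0, :] = 4.0     # 5-voxel line: qualifies
rois = roi_clusters(tmap)
print(len(rois))  # 1
```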
Lateralized ROI source time-frequency power and RT
We derived lateralized baseline-corrected ROI power time series (Lat_ROI) in alpha and beta frequency bands. Power was calculated as the square of the absolute complex signals from DICS with fixed dipole orientations and baseline-corrected as relative change. For every 100 ms moving average (50 ms resolution) from stimulus onset (0 ms) until after the response (1650 ms), we tested the correlation between source Lat_ROI and RT. These moving average correlations were categorized according to their statistical significance (uncorrected) into five p-value threshold bins (n.s.; p < 0.05; p < 0.005; p < 0.0001; p < 0.00001; Fig. 5).
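The moving-average correlation and p-value binning can be sketched as follows (illustrative Python with toy data, not the authors' code; `scipy.stats.pearsonr` supplies the uncorrected p-values):

```python
import numpy as np
from scipy.stats import pearsonr

def moving_corr(power, times, rts, win=0.1, step=0.05, t0=0.0, t1=1.65):
    """Correlate 100 ms moving averages of power (50 ms steps) with RT.

    power: (n_obs, n_times); times: (n_times,) in seconds; rts: (n_obs,).
    Returns a list of (window_start, r, p) tuples.
    """
    out = []
    start = t0
    while start + win <= t1 + 1e-9:
        sel = (times >= start) & (times < start + win)
        r, p = pearsonr(power[:, sel].mean(axis=1), rts)
        out.append((start, r, p))
        start += step
    return out

def p_bin(p):
    """Categorize an uncorrected p-value into the five bins used in the text."""
    for label, cut in [("p<0.00001", 1e-5), ("p<0.0001", 1e-4),
                       ("p<0.005", 5e-3), ("p<0.05", 5e-2)]:
        if p < cut:
            return label
    return "n.s."

rng = np.random.default_rng(1)
times = np.arange(0.0, 2.0, 0.01)
power = rng.normal(size=(8, times.size))
rts = rng.uniform(0.3, 0.6, size=8)
res = moving_corr(power, times, rts)
print(len(res))        # number of 100 ms windows between 0 and 1650 ms
print(p_bin(0.03))     # p<0.05
```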
Attentive tracking of pointing movements is reflected in alpha and beta neural modulations
During participants' viewing of the Actor's different pointing movements (Stimulus-Types; Fig. 3A), parieto-occipital (PO) and motor (M) sensor-based (Fig. 3) neural power showed distinctive modulations in alpha and beta oscillations. A sharp and brief decrease in PO alpha oscillatory power manifested itself early in both hemispheres, falling ~20% below baseline ~275 ms following stimulus onset (Fig. 3A(i),(ii)). Thereafter, modulations related to Stimulus-Type were seen to deviate from the point where the early alpha oscillatory power reduction was maximal (~300 ms). Attentive tracking of straight pointing movements was associated with the strongest rebound of alpha oscillatory power (~40% relative power increase from initial minima, peaking at 700 ms) in either hemisphere contralateral to the endpoint target location. The Stimulus-Type related modulations were distinctly separated prior to response cue onset (1000 ms) in each hemisphere. This is clearly seen in the lateralized spectral time series (Fig. 3A(iii)). In particular, lateralized alpha activity reflected coding of the location of the target to which the actor's hand was moving ~500 ms after stimulus onset. Straight pointing movements, where the actor's hand stays within the observer's left or right visual field, elicited enhanced ipsilateral alpha power modulation with corresponding contralateral suppression. For crossed pointing movements, alpha modulation followed a similar but weaker trend.
Motor beta power showed a distinctive decrease (to ~50% below baseline) from stimulus onset until the period of response onset (at ~1450 ms) before rebounding, for all Stimulus-Types in both hemispheres (Fig. 3A(iv),(v)). From ~400 ms after stimulus onset, lateralized motor beta modulations (Fig. 3A(vi)) appeared to code the spatial position of the actor's hand, as reflected by lower beta amplitude in contralateral compared to ipsilateral motor areas. This is most obvious when the actor made crossed pointing movements, i.e., when the actor's left hand moved from the participants' right to the left hemifield (experimental condition LL). In this case, the participants' initially negative motor beta lateralization (L < R) gradually shifted to become positive (L > R).
Evolving 'saliency' of observed stimuli dynamically modulates neuromagnetic signals

Sequential unfolding of significant associations (p < 0.05, Bonferroni corrected) between lateralized spectral power changes (relative to baseline) and specific 'salient' features of Experimental Conditions can be observed between stimulus (0 ms) and response cue (1000 ms) onsets, and prior to the mean time of response onset (~1450 ms; Fig. 3A(iii),(vi); olive and lime green significance lines). Specifically, the regression analysis revealed that lateralized motor beta modulation (Fig. 3A(vi)) was significantly associated with the actor's moving hand, beginning at 427 ms post stimulus onset. Thereafter, lateralized PO alpha modulation (Fig. 3A(iii)) that was significantly related to both the actor's moving hand and the endpoint target location emerged at 508 ms post stimulus onset. Significant target-related associations were next observed in lateralized motor beta at 810 ms (Fig. 3A(vi)), that is, 190 ms before response cue onset. Subsequently, we observed a significant association between lateralized motor beta and the required hand response preceding its execution at 1355 ms (Fig. 3B; orange significance line).
The regression analysis revealed that, past the midpoint of the pointing movement, observers' lateralized PO alpha modulation was briefly, but significantly, related to the actor's moving hand before manifesting a significant relation to the target location (Fig. 3A(iii)). Lateralized motor beta power also revealed significant modulations that transited from reflecting the position of the actor's moving hand to that of the target location (Fig. 3A(vi)). Such a beta modulation is clearly observable for the (crossed) right hand pointing movement to the right target (Fig. 3B(iv), experimental condition RR). In this condition, prior to response cue onset, mean lateralized motor beta suppression indexed the engagement of motor areas for a right hand response, probably based on the inference of the actor's movement endpoint. When the response cue indicated a congruent right hand response, this decrease in beta oscillatory power was further enhanced. However, when an incongruent left hand response was cued, the lateralized motor beta power modulation showed a 'reversal', indexing the engagement of motor areas in the other hemisphere for the required left hand response. Such lateralized spectral modulation was similarly observed for the other Stimulus-Type and corresponding Response-Congruency conditions (Fig. 3B(i),(ii),(iii)).
Importantly, the observation of a stronger decrease in beta oscillatory power in the hemisphere contralateral (cf. ipsilateral) to the endpoint target location prior to response cue onset suggests that the lateralized beta modulation might reflect participants' bias towards a congruent response, based on their inference of the endpoint target location (or the actor's movement goal). Further investigation of the association between lateralized motor beta power and RT confirmed this hypothesized response bias, yielding a significant correlation at 100 ms before response cue onset (r = 0.57, p < 0.0004, Bonferroni corrected; Fig. 4), i.e., even before participants knew with which hand to respond.
The moving average correlation revealed an evolving strength of association between the frequency-specific lateralized power modulations from these paired ROIs and RT (Fig. 5; Inline Supplementary Table S3A). Predominantly, alpha oscillatory processes within perceptual brain regions (BA 18, BA 19) manifested early (at 50 and 150 ms, respectively) correlations with RT. Despite increases in correlation between RT and alpha power within PMd and PPC areas over time, these were non-significant. The strength and significance of the correlation between pMTG alpha and RT increased sharply from the start of the action observation (550-670 ms; p < 0.05-0.005) and peaked at 750 ms. In contrast, BA 19 manifested a relatively sustained correlation with RT in its alpha modulation, which was highly significant from the middle (530-950 ms; p < 0.005) of the actor's pointing movement until participants made their response (1410 ms). Significant RT-correlated beta oscillatory processes emerged later, towards the end of the actor's pointing movement, in visual areas (BA 18: 530-1170 ms; BA 19: 570-1130 ms).

By sorting the relative onsets of these significant associations (see Inline Supplementary Table S3B), we can further appreciate the interplay of both alpha and beta oscillatory processes across the ROIs over the course of attentive tracking and its influence on response preparation. This apparent interaction began within early visual areas (BA 18, BA 19), although mostly alpha activity dominated. Subsequently, pMTG alpha processes emerged while visual areas continued to be prominently involved. Decisively, the sequence of peak correlations beginning with alpha modulations within pMTG (750 ms), followed by beta modulations first in BA 19 (810 ms) and then BA 18 (830 ms), occurred prior to the late manifestations of significant beta-mediated associations in PPC (890 ms).
RT-related alpha modulations in BA 19 peaked (950 ms) prior to beta processes within BA 6 (970 ms) manifesting significant association, which peaked just after response-cue onset (1170 ms). Thereafter, RT-correlated beta modulations within PPC peaked swiftly (1210 ms) followed by RT-related alpha modulations in BA 18 (1290 ms) prior to participants' responses.
Discussion
There is accumulating consensus that oscillatory activity in the brain functionally contributes to the allocation of attention in space (e.g., Rihs et al., 2009), to motor preparation and execution (e.g., Fujioka et al., 2012; Pogosyan et al., 2009; Tzagarakis et al., 2010), as well as to action observation (e.g., Kilner et al., 2009; Press et al., 2011). However, while brain regions involved in attentive tracking of dynamic stimuli have been previously investigated (Culham et al., 1998), the frequency-specific neural processes induced during and after action observation within sensory, attention-related, and motor regions of the brain have not been systematically or jointly investigated. We addressed this issue by using advanced MEG analysis techniques that allowed us to demonstrate the dynamic unfolding of a complex temporal pattern of (alpha- and beta-band) oscillatory activity within different brain regions. Notably, these oscillatory neural responses were related to on-going changes in both the spatial allocation of attention and the activation state within the motor cortex, which in turn were predictive of participants' overt performance. We summarize the present findings for alpha- and beta-band oscillations and the implications regarding their functional interpretation separately, before discussing how these frequency-specific processes are dynamically engaged during the observation of biological motion stimuli.
Updating of spatial attention by incoming sensory evidence
We hypothesized that allocation of attention in space to different pointing movements (e.g., straight vs. crossed and/or left vs. right moving hand) is distinctively reflected in the online dynamics of the observers' neural modulations in both alpha and beta oscillations. The characteristic anticipatory activity of posterior alpha was evident very early on, in fact bilaterally, when it was equally uncertain as to which hemifield would be relevant for continued tracking before any discernible Stimulus-Type feature appeared. According to the view that alpha oscillations mirror an inhibitory process (Foxe and Snyder, 2011;Klimesch, 2012;Klimesch et al., 2007), the present stimulus-specific rebound following initial bilateral alpha oscillatory power decrease could reflect the active inhibition of information processing in the visual hemifield wherein the actor's arm remained stationary or started to move from its initial resting position. Specifically, decreased alpha oscillations in contralateral compared to ipsilateral PO areas represented the potential endpoint (i.e., target location) of the actor's hand movement. This effect is likely driven by the corresponding alpha power increase in the ipsilateral hemisphere (Kelly et al., 2006;Rihs et al., 2009), which endured until participants executed their response.
Based on these observations, we assume that the temporally evolving alpha power modulations reflect the observers' continuous extraction and prediction of the actor's movement endpoint from incoming dynamic stimulus information. Thus, incoming sensory information is constantly used to update predictions about the actor's movement endpoint (where the behaviorally relevant response cue would occur) and hence spatial certainty about the endpoint accumulates over time. Extending previous observations that attentional cues modulated anticipatory alpha activity depending on the degree of validity with which they indicated the location of the forthcoming target (Gould et al., 2011), we show that increasing spatial certainty during on-going attentive tracking modulates alpha activity. Certainly, motion energy differed between the actor's movement conditions (straight vs. crossed). That is, straight movements have more vertical relative to horizontal motion energy, while crossed movements would be predicted to exhibit stronger horizontal relative to vertical motion energy. In principle, these differences in motion energy might have also induced the observed modulations of hand-specific beta lateralization. However, it seems unlikely that motion energy differences alone can account for the differential alpha modulations observed in our study, for the following reason. If motion energy were responsible for the modulations in alpha, one would expect zero crossings in the lateralized oscillatory alpha activity for crossed pointing actions, because the actor's movement crosses the observer's hemifields. However, such a pattern was clearly not observed in lateralized alpha oscillatory activity. Together, our results indicate that lateralized parieto-occipital alpha oscillations sensitively reflect the changing allocation of attention dependent on the position of the actor's movement in space and the anticipated endpoint location.
These findings advance previous reports that showed alpha activity to be a crucial substrate of visual input regulation (Romei et al., 2010) and to be actively involved in the deployment of spatial attention (Rihs et al., 2009;Thut et al., 2006) with reference to retinotopic coordinates (Worden et al., 2000). Specifically, our observations demonstrate that changes in on-going alpha activity relate to attentive tracking of dynamic stimuli rather than to the likely target position indicated by a symbolic precue presented at fixation, typical of research paradigms used in previous studies. Finally, an exciting possibility that deserves further investigation is that alpha-mediated allocation of spatial attention influences anticipatory response processes, as reflected by motor-related beta activity, before the actual response is indicated by the imperative response cue. We will discuss these motor-related oscillatory activity changes next.
Priming of parieto-premotor areas by incoming sensory evidence
We assume that lateralized beta modulations reflect the observers' evolving response bias as a function of the actor's hand position in space. Thus, rather than the actor's moving effector, it was the visual hemifield within which the actor's hand was moving that determined the beta lateralization in motor areas, reflecting the dynamically changing response bias. Importantly, and as hypothesized, bias for a right-hand response was reflected by stronger beta suppression for contralateral left motor areas as compared to ipsilateral right motor areas, and this resulted in faster RTs when a right-hand rather than a left-hand response was required. Thus, the magnitude of this response bias (lateralized beta) predicted participants' RT; larger biases were associated with significantly faster responses, particularly those made in spatial congruence with the movement endpoint location. In brief, beta lateralization power just before the onset of the response cue reliably reflects the response bias and predicts response time.
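Lateralized power measures of the kind discussed here are conventionally summarized as a normalized contralateral-minus-ipsilateral difference. The sketch below is purely illustrative (the function and example values are assumptions, not taken from the study's actual pipeline):

```python
def lateralization_index(power_contra, power_ipsi):
    """Normalized hemispheric difference of band power per trial/window.

    Negative values indicate lower power (i.e., stronger suppression)
    contralateral to the prepared response hand.
    """
    return [(c - i) / (c + i) for c, i in zip(power_contra, power_ipsi)]

# Hypothetical example: beta power (arbitrary units) over left vs. right
# motor ROIs while a right-hand response is being prepared.
li = lateralization_index([0.8, 0.7], [1.2, 1.3])  # both values negative
```

An index of zero indicates no hemispheric bias; increasingly negative values correspond to the stronger contralateral beta suppression that, in the present data, predicted faster congruent responses.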
It is important to note that the response-related power modulation rode on top of the characteristic decrease of beta oscillatory activity that already began at stimulus onset. This power reduction manifested itself bilaterally and reflected a 'general state of movement preparation' (Pfurtscheller, 1981) for responding with either hand. This is in line with the overall task requirement that the precise response is only known upon presentation of the response cue, that is, at the very end of the actor's pointing movement. Since the required response is initially unknown, participants could refrain from activating a specific (left or right) response, guess and selectively activate one of the two responses, or simultaneously activate both potential responses (Jentzsch et al., 2004). In fact, and consistent with ERP findings of Jentzsch et al. (2004) in the response precuing paradigm, we observed an early and persistent bilateral beta oscillatory power decrease that we take to indicate parallel response activation. Moreover, with increasing spatial 'certainty' about the endpoint position, the spatially corresponding response becomes activated in the motor system, ultimately resulting in faster congruent than incongruent responses. Whereas ERP studies provided evidence for such location-based response priming following target onset (Stürmer et al., 2002), to our knowledge, this is the first study to demonstrate such a sensorimotor priming effect prior to response cue onset in beta oscillatory activity within parietal and premotor areas.
Finally, it is more commonly appreciated now that beta oscillatory activity is not solely motoric (Engel and Fries, 2010). For example, recent research showed that external entrainment of motor cortical activity at beta frequency (20 Hz) resulted in more slowly executed movements (Pogosyan et al., 2009) and a reduction in the number of unintended 'no-go' responses (Joundi et al., 2012). Existing studies have also shown that pre-response beta oscillatory activity is sensitive to experimental factors (Confais et al., 2012;Kilavik et al., 2012;Stančák et al., 1997;Tzagarakis et al., 2010), and could reflect predictive timing mechanisms (Fujioka et al., 2012;Saleh et al., 2010), or the degree of certainty in perceptual decision making (Donner et al., 2009). As such, it is also conceivable that beta modulations are influenced by alpha-mediated attention processes.
The interplay of alpha and beta modulations
Looking at the time-course of oscillatory activity changes in different brain regions, the present study revealed some intriguing possibilities regarding the apparent interplay of alpha- and beta-band modulations. It is evident from the moving-average correlations (100 ms windows; 50 ms resolution) between the frequency-specific lateralized ROI power modulations and behavioral response times that alpha and beta processes within visual (BA 18, BA 19), extrastriate (pMTG), parietal (BA 7; IPL) and premotor (PMd) areas participated in integrating relevant sensory information (i.e., spatial and response cues) during attentive tracking of the biological motion stimulus. Specifically, we observed that RT-related alpha oscillatory processes in visual brain areas BA 18, BA 19, and pMTG led the presumed accumulation of information (e.g., spatial likelihood to guide attention). The final contribution to action selection or activation appears to critically involve the emergence of stronger RT-related beta oscillatory processes within BA 18 and BA 19, just prior to that in the PPC (BA 7; BA 40), followed by PMd (BA 6), and the sustained and highly significant response-related alpha oscillatory processes in visual areas. The late onset of RT-correlated PMd mechanisms is consistent with the presumed active role this area plays in integrating incoming sensory and motor information from various brain sources (Pastor-Bernier and Cisek, 2011).
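The moving-average correlation analysis described above can be sketched as a sliding-window Pearson correlation between trial-wise power and response time. This is a simplified reconstruction under assumed data shapes and is not the study's actual code:

```python
import numpy as np

def moving_window_correlation(power, rt, times, win=0.100, step=0.050):
    """Pearson r between trial-wise mean power and RT in sliding windows.

    power : (n_trials, n_times) array of lateralized ROI power
    rt    : (n_trials,) array of response times
    times : (n_times,) array of sample times in seconds
    Returns window centers and the correlation coefficient per window.
    """
    centers, rs = [], []
    t = float(times[0])
    while t + win <= times[-1] + 1e-9:
        mask = (times >= t) & (times < t + win)
        mean_power = power[:, mask].mean(axis=1)   # one value per trial
        rs.append(np.corrcoef(mean_power, rt)[0, 1])
        centers.append(t + win / 2)
        t += step
    return np.array(centers), np.array(rs)

# Toy demo: power that scales linearly with RT yields r = 1 everywhere.
times = np.arange(0.0, 1.0, 0.01)
rt = np.array([0.45, 0.52, 0.60, 0.71])
power = np.outer(rt, np.ones(times.size))
centers, rs = moving_window_correlation(power, rt, times)
```

With a 100 ms window stepped every 50 ms, adjacent windows overlap by half, which is what yields the smooth correlation time-courses reported here.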
Crucially, the response-related frequency-specific signals within each ROI emerged, and were most prominently involved, at different time points during the sequence of action observation, cue onset, and response preparation. Of course, the proposed view regarding the interplay of alpha- and beta-band modulations in different brain regions should be considered preliminary. Nevertheless, as a working hypothesis, it can direct future research using analysis techniques that more directly assess cross-frequency signal interactions, such as phase-coupling analysis across different brain regions (cf. Siegel et al., 2008).
Implications for the mechanisms involved in action observation
A final relevant aspect of the present research concerns the perhaps unsurprising finding that the brain areas significantly involved in our task overlap with those often reported in fMRI studies on action observation (Caspers et al., 2010;Rizzolatti and Fabbri-Destro, 2008;Rizzolatti et al., 2006), specifically the PMd (BA 6) and the PPC (BA 7; BA 40). However, it is worthwhile to note that in contrast to the present work, previous MEG studies investigating oscillatory activity during action observation generally reported mu-rhythm (8-12 Hz) and beta attenuation in the primary motor cortex (Caetano et al., 2007;Hari et al., 1998;Kilner et al., 2009). In addition, these studies focused on oscillatory activity during action observation within isolated brain regions (e.g., primary motor cortex). In this respect, it is notable that an EEG study by Babiloni et al. (2002) reported a decrease of alpha oscillatory power during the observation of movements at electrodes placed over parietal-occipital brain regions as well as beta oscillatory power reduction over motor areas. Certainly, since Babiloni et al. did not perform source-based analysis of oscillatory activity, their inferences concerning the brain sources involved in action observation processes remained vague. However, their findings are nevertheless similar to the present observation of cascaded oscillatory activity starting in visual regions (BA 18, BA 19), then extrastriate regions (pMTG), followed by the PPC (BA 7, BA 40), and then PMd (BA 6). This cascade of oscillatory changes is consistent with insights from neurophysiological studies, demonstrating overlapping neural activity within an integrated visuomotor processing network in the brain (Ledberg et al., 2007). Our observations also accord with the known corticocortical connectivity of the above brain regions and their postulated computations (Wise et al., 1997).
One final question then concerns the more precise functional role of the integrated oscillatory brain activity during action observation. Building on the initial insights of Babiloni et al., we hypothesize that dynamic stimulus information is first processed and motion information accumulated within visual areas BA 18, BA 19, and pMTG. We further assume that these areas continuously transmit processed information along the dorsal stream to the posterior parietal cortex (BA 7; BA 40). The PPC has been taken to play a pivotal role in sensorimotor integration (Andersen and Cui, 2009;Buneo and Andersen, 2006;Colby and Goldberg, 1999;Cui and Andersen, 2007;Gottlieb, 2007;Gottlieb and Balan, 2010) and in visually guided movements (Buneo and Andersen, 2006;Desmurget et al., 2009). More recently, the PPC has also been proposed to be central to the signaling of the intention to perform a certain action (Desmurget et al., 2009;Quian Quiroga et al., 2006). Crucially, the PMd is also important for the integration of visuomotor information (Pesaran et al., 2006), and ultimately plays a key part in coming up with a decision about the to-be-executed action (Cisek, 2006;Cisek and Kalaska, 2010). With this in mind, we speculate that during action observation, the PPC in concert with the PMd integrates incoming motion (or spatial) information from areas BA 18, BA 19, and pMTG with the observer's own action plans, thereby presumably facilitating simulation-based action understanding. However, as indicated by motor-related beta oscillatory changes during action observation, these simulations are not based on the moving limb of the actor (effector mirroring; Bertenthal et al., 2006;Koski et al., 2003) but rather related to the (dynamic) spatial position of the arm, consistent with Kilner et al.'s (2009) findings using a different action observation paradigm.
Conclusions
Within the context of action observation, dynamic allocation of attention in space and the associated preparation of prospective responses are reflected in observers' alpha and beta neural activity. Incoming sensory information can provide relevant salience cues that seize our attention, sometimes more than just momentarily, and influence our anticipatory gear-up for prospective action. Our findings suggest that amidst the parallel and sequential neural frequency processes, beta activity within parieto-frontal areas simultaneously participated in integrating alpha-mediated sensory salience and anticipatory response activation.
A Phase 2, Randomized, Double-Blind, Placebo-Controlled Trial of CX-8998, a Selective Modulator of the T-Type Calcium Channel in Inadequately Treated Moderate to Severe Essential Tremor: T-CALM Study Design and Methodology for Efficacy Endpoint and Digital Biomarker Selection
Background: Essential tremor (ET) is a common, progressive neurological syndrome with bilateral upper-limb dysfunction of at least 3-year duration, with or without tremor in other body locations. This disorder has a negative impact on daily function and quality of life. A single oral therapy has been approved by the FDA for ET. Off-label pharmacotherapies have inadequate efficacy and poor tolerability with high rates of patient dissatisfaction and discontinuation. Safe and efficacious pharmacotherapies are urgently needed to decrease tremor and improve daily living. The T-CALM (Tremor-CAv3 modulation) protocol is designed to assess safety and efficacy of CX-8998, a selective modulator of the T-type calcium channel, for ET therapy. Methods/Design: T-CALM is a phase 2, proof of concept, randomized, double-blind, placebo-controlled trial. Titrated doses of CX-8998 to 10 mg BID or placebo will be administered for 28 days to moderate to severe ET patients who are inadequately treated with existing therapies. The primary endpoint will be change from baseline to day 28 of The Essential Tremor Rating Assessment Performance Subscale (TETRAS-PS). Secondary efficacy endpoints for clinician and patient perception of daily function will include TETRAS Activity of Daily Living (ADL), Quality of Life in Essential Tremor Questionnaire (QUEST), Clinical Global Impression-Improvement (CGI-I), Patient Global Impression of Change (PGIC), and Goal Attainment Scale (GAS). Kinesia One, Kinesia 360, and iMotor will biometrically evaluate motor function and tremor amplitude. Safety will be assessed by adverse events, physical and neurological exams and laboratory tests. A sample size of 43 patients per group is estimated to have 90% power to detect a 5.5-point difference between CX-8998 and placebo for TETRAS-PS. Efficacy analyses will be performed with analysis of covariance (ANCOVA) and a 2-sided test at the 0.05 significance level.
Discussion: T-CALM has a unique design with physician rating scales, patient-focused questionnaires and scales and objective motor measurements to assess clinically meaningful and congruent efficacy. Patient perception of ET debilitation and therapy with CX-8998 will be key findings. The overall goal of T-CALM is generation of safety and efficacy data to support a go/no-go decision to further develop CX-8998 for ET. Design of T-CALM may guide future clinical studies of ET pharmacotherapies. Clinical Trial Registration: www.ClinicalTrials.gov, identifier: NCT03101241
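The stated sample-size estimate can be roughly reproduced with the standard two-sample normal-approximation formula. Note that the protocol's assumed standard deviation is not given in this excerpt; SD = 7.8 TETRAS-PS points is a back-calculated assumption that is merely consistent with the reported figures:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(diff, sd, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation, two-sided test, 1:1 allocation)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / diff) ** 2)

# Protocol figures: 43 patients/group, 90% power, 5.5-point TETRAS-PS
# difference, two-sided alpha = 0.05. SD = 7.8 is an assumed value
# (not stated in this excerpt) that reproduces n = 43.
n = n_per_group(diff=5.5, sd=7.8)
print(n)  # -> 43
```

The normal approximation slightly underestimates the exact noncentral-t result, so a formal calculation could land a patient or two higher for the same inputs.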
INTRODUCTION

Background
Essential tremor (ET) is described as a progressive neurological disorder that elicits involuntary rhythmic trembling of the hands, head, larynx, legs or trunk. ET is considered a syndrome because it is not a single disease and has several possible etiologies (1). A task force of The International Parkinson's Disease and Movement Disorder Society has recently proposed a more succinct definition of ET. This society defined ET as a syndrome with isolated bilateral upper-limb action tremor of at least a 3-year duration, with or without tremor in other body locations (e.g., head, lower limbs). It was also pointed out that other neurologic symptoms, such as dystonia and Parkinsonism, are not associated with ET (2). ET is often inherited as autosomal dominant but tremor-inducing drugs and toxins are also implicated in the causality of the syndrome (3). The pathophysiology of ET is being investigated but several studies have identified multiple abnormally oscillating neuronal circuits connecting the cerebellum, inferior olive, thalamus and areas of the cortex as etiologies (1,3,4). T-type calcium channels (TTCC) are reported to regulate the inferior-olive-cerebellum and thalamocortical neuronal networks. Increased activation of these channels has been shown to promote excessive rhythmicity in these neuronal networks and has been identified as a key pathophysiology and potential therapeutic target for ET (5)(6)(7)(8)(9).
To meticulously evaluate the presence of ET, a comprehensive medical history and neurologic examination are recommended. Age of onset, family history, timing of progression, use of tremor-inducing drugs, and exposure to toxins are critical components of the medical history (1,4). The neurologic examination should include the body locations of tremor and arousal conditions for tremor such as rest, posture, and purposeful movements (1,4). Tremor rating scales are also employed to assess severity, degree of disability, quality of life, and effect on activities of daily living (10). The Essential Tremor Rating Assessment scale (TETRAS) was developed by The Tremor Research Group to replace older instruments such as the Fahn-Tolosa-Marin (FTM) scale and to improve psychometric properties, dynamic range, expediency, accuracy, and comprehensive quantification of ET severity (11). Electrophysiology techniques, such as electromyography and accelerometry, have also been utilized for differential diagnosis of ET (12).
ET is reported to be a widespread movement disorder with a 1% incidence worldwide (13). Population-based incidence studies of ET with U.S. census data of 2012 revealed that 2.2% of the U.S. population displayed ET (14). It is estimated that there may be more than 7 million ET patients in the U.S. (14). The frequency of ET is directly correlated with increasing age and is generally comparable in males and females (1,13). It is estimated that 8 times as many people have ET compared to Parkinson's Disease (14). As the diagnostic tools for ET become more sophisticated, the reported prevalence of the disorder may increase. ET has been detected as early as childhood with incidence peaks in the second and sixth decades (15).
Although ET does not seem to adversely affect life expectancy, the syndrome has a major effect on a patient's ability to adequately function on a daily basis. Activities at home and at work are disrupted and quality of life and social networking are compromised (16,17). ET is recognized as a multisymptomatic disturbance that impacts writing, dressing, eating, self-care, mood, memory, attention, communication, and sleep and is a source of anxiety, depression, and social isolation (18)(19)(20). Disability is reported in more than 90% of ET patients who seek medical care and may be significant enough to warrant invasive surgery such as deep brain stimulation (DBS) or thalamotomy (21). Severely affected patients are unable to feed or dress themselves (22). Due to uncontrollable shaking, 60% of ET patients choose not to apply for job promotions and 15-25% are forced to retire prematurely (15). In addition to ET, patients may also have intention tremor (23), rest tremor (24), and other motor abnormalities, including ataxia (25). This diverse group of tremor types is disabling and causes functional limitations (21,26). Resting and intention tremors are associated with illness of increased duration in ET patients (21,24,27) and suggest that the complexity of tremor phenomenology and severity of ET progressively increase with longstanding disease. ET cases with resting tremors had disease duration of 32.1 ± 24.5 years compared to those without resting tremors (19.6 ± 16.8 years) (24). Higher overall tremor scores were reported in ET patients with intention tremor (23). Disease duration correlated with intention tremor severity. Although frequency of ET tremors generally decreases over time, the amplitude of the tremors gradually increases (28).
ET is a common movement disorder with a major unmet medical need. Therapy for ET consists of drugs that often provide limited efficacy and/or poor tolerability. Neurosurgical interventions (DBS, gamma knife and focused ultrasound thalamotomy) are generally effective with acceptable tolerability but they are invasive and restricted to the most severe (1-3%) ET patients (1,29,30). There are a limited number of safe and efficacious pharmacotherapies for ET. Inconsistent data from clinical trials with small numbers of ET patients are partially responsible for the paucity of approved pharmacotherapies. Published ET clinical trials have been hampered by uncertain diagnosis of ET patients, cross-over designs with carryover effects, small numbers of patients, open-label designs, diverse controls, limited methods for assessment of tremor amplitude reduction, and short duration of treatment (1,31). Most of the available pharmacotherapies treat the symptoms rather than the cause of ET and were originally developed and approved for other indications (1,29,31). Propranolol, a non-selective β-adrenergic receptor antagonist, is the only FDA-approved (1967) pharmacologic agent for ET. This approval in 1967 was based upon a 2-week, randomized, double-blind, parallel, placebo-controlled trial with only 9 ET or familial tremor patients (32). At doses of 40-80 mg three times daily (TID), propranolol reduced tremor severity compared to placebo in this study of a small number of patients. More recent clinical trials of propranolol as a monotherapy in drug-naïve ET patients have confirmed that the response rate is 50 to 70% with an average tremor diminution of 50% compared to placebo (4). Propranolol has not generally been effective in patients with severe ET. Based upon clinical evidence, ET pharmacotherapies have been designated as first, second and third line agents (4). 
Propranolol and primidone are considered first line because they have been evaluated for safety and efficacy in randomized clinical trials of ET with class 1 evidence. Gabapentin, pregabalin, topiramate, clonazepam, alprazolam, and metoprolol are considered second line agents due to a dearth of class 1 evidence from randomized ET clinical trials. On the basis of open-label or case studies, nimodipine and clozapine are classified as third line treatments. Owing to their limited duration of effect, low rates of effectiveness against tremor-induced physical and mental impairments, and intolerable side effects, all three lines of ET pharmacotherapy show high rates of patient dissatisfaction and discontinuation (1,(33)(34)(35). Thus, in addition to standard of care, there is a definitive need for novel, durably effective ET pharmacotherapies with minimal side effects that have a beneficial impact on not only amplitude of tremor but also activities of daily living and other functional outcomes. This goal can be achieved with additional research for more clinically meaningful therapeutic targets and a better understanding of ET pathophysiology and clinical and genetic heterogeneity.
TTCC have been shown to function as low-threshold, voltage-gated calcium channels and are located primarily in neurons (36). TTCC activate upon weak depolarization of the neuronal cell membrane and permit calcium entry into excitable cells at the onset of an action potential. Under abnormal states, the TTCC CAv3 subtype is upregulated or has increased activity that makes it a prime target for neurologic disorders such as ET (37)(38)(39). CAv3 isoforms are expressed in neurons throughout the central and peripheral nervous systems. CAv3 has been shown to be a mediator of subthreshold oscillations and excessive rhythmicity in neurologic disorders such as tremor, neuropathic pain, epilepsy, and Parkinson's disease (7,(40)(41)(42). The inferior olive (IO) has been reported to function as the inducer and intrinsic pacemaker of tremor in animal models (43). CAv3 is highly expressed in the IO and cerebellum. Tremor-related oscillations in the olivocerebellar pathway are key abnormalities underlying ET and trigger onset of tremor-related rhythms (38). Harmaline, a plant alkaloid that affects the cerebellum and IO, induces tremor in animals. Harmaline-induced tremor in animals is comparable to some of the clinical manifestations of human ET and is used as an experimental model for evaluation of efficacy of pharmacotherapies (40,44).
Compounds that target TTCC have been studied for their beneficial effects on ET. Clinical studies of zonisamide and topiramate, two drugs with nonspecific TTCC inhibitory activity, have demonstrated effectiveness for ET. However, unacceptable side effects that result in premature discontinuation have limited further development of these agents for ET (1,6,45,46). CX-8998, a potent, highly selective and state dependent small molecule modulator of TTCC, is under development for treatment of ET (47). Robust efficacy of CX-8998 (and analogous TTCC modulators) has been shown in numerous rat models of CNS disorders including tremor, generalized epilepsy, neuropathic pain, psychosis, and insomnia (48)(49)(50)(51). Several unpublished, nonclinical safety pharmacology studies have documented selectivity, biologic activity and a wide safety margin of CX-8998. Four phase 1 single and multidose safety studies in healthy volunteers, a phase 2A trial in acute psychosis in patients with schizophrenia (52) and clinical pharmacokinetic and pharmacodynamic studies have been conducted. The data from these CX-8998 clinical studies (200 plus patients) have shown that single doses up to 18 mg and multiple doses from 2 to 12 mg for 7 days were well tolerated with transient and mild to moderate adverse events (AEs). In the phase 2A trial of acute psychosis, 8 mg twice daily (BID) was generally well tolerated. Clinically relevant patterns of abnormalities were not detected in blood and urine laboratory tests, electrocardiograms, physical examinations or pulmonary function in any of the clinical studies. Based upon the favorable safety data from these nonclinical and early clinical studies, CX-8998 was selected for a phase 2 proof of concept study to assess its safety and efficacy for reduction of the severity of ET at doses up to 10 mg twice a day for 4 weeks.
Protocol Outline and Specific Aims
CX-8998 will be evaluated for safety and efficacy in a phase 2, multicenter, randomized, double-blind, placebo-controlled, parallel-group, proof of concept trial in patients with moderate to severe ET. The overall goal of the T-CALM (Tremor-CAv3 modulation) trial is to provide safety and efficacy data that will support a positive or negative decision for late stage clinical development of CX-8998 for treatment of ET. Other goals are to demonstrate efficacy of CX-8998 through relevant and converging endpoints, evaluate performance scales and objective biometric methodologies underlying ET efficacy endpoints, and generate a better clinical understanding and definition of ET patients through clinically relevant trial selection criteria. Plasma exposures of CX-8998 and metabolites will also be evaluated.
If the T-CALM proof of concept trial demonstrates that CX-8998 is effective with a favorable safety and tolerability profile, late stage clinical development will be undertaken and potentially support regulatory approval as a novel, selective, durable, and potent ET pharmacotherapy. The T-CALM study may also generate meaningful guidelines for design, patient selection, and relevant and convergent efficacy endpoints for future clinical trials of novel ET pharmacotherapies. The primary aim of the main T-CALM study is to assess the efficacy of CX-8998, at doses up to 10 mg BID, for reduction of severity (amplitude) of ET. An optional additional component of the main T-CALM study is the T-CALM digital substudy that will evaluate the feasibility of three different digital monitoring platforms for accurate quantification of changes in motor function in ET patients.
STEPWISE PROCEDURES AND ENDPOINTS
A schematic of T-CALM study design is presented in Figure 1.
The main T-CALM study is designed as a proof of concept, multicenter, double-blind, randomized, placebo-controlled, parallel-group trial. The screening period will be up to 4 weeks. Before any study procedures are conducted, patients will read and sign the IRB-approved informed consent in the presence of the investigator or suitable designee. Primidone (a strong CYP3A4 inducer) use is excluded due to the potential of CX-8998 to be subject to CYP3A metabolism. Thus, patients taking primidone will be given 6 weeks of screening to allow for safe discontinuation of the drug. Stable doses of a single anti-tremor medication other than primidone as a standard of care will be allowed during the study.
The study will enroll a population of moderate to severe ET patients inadequately treated with standard of care approaches at 22 clinical sites in the U.S. The clinical sites are identified online at ClinicalTrials.gov under registration number NCT03101241.
Key eligibility criteria for the main T-CALM study are as follows:
1. Signed, informed consent will be obtained for all study participants.
2. Males and females 18-75 years of age will be enrolled.
3. Patients with moderate to severe ET and initial diagnosis prior to age 65 will be included.
4. Tremor severity score of at least 2 in at least one upper limb on the 3 maneuvers of TETRAS-PS.
5. TETRAS-PS score of at least 15 at screen.
6. Stable doses of up to one concurrent anti-tremor medication are permitted; use of the strong CYP inducer primidone is excluded.
7. Surgical intervention is excluded.
A complete list of inclusion/exclusion criteria for the main T-CALM study is available online at ClinicalTrials.gov under registration number NCT03101241.
Eligibility criteria for the optional T-CALM digital substudy are as follows:
1. Patients must meet all eligibility criteria of the main T-CALM study protocol.
2. Patients must be able to comply with user requirements as assessed by site personnel.
3. Patients will not be issued digital devices/downloads until they have consented to participate in the optional T-CALM digital substudy.

Patients will be randomized in a 1:1 ratio to receive CX-8998 or placebo with an interactive web response system (IWRS). Randomization will be stratified by concomitant use of anti-tremor medication and by type of site (main study vs. substudy).
The randomization code will be prepared by an unblinded statistician who is uninvolved in conduct of the study. Patients that meet screening criteria will be randomized to treatment group A or B. Group A will receive titrated doses of CX-8998 up to 10 mg BID. Group B will receive matching placebo.
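One common way to implement such a stratified 1:1 scheme is permuted-block randomization within each stratum. The sketch below is illustrative only: the block size, seed, and patient counts are hypothetical, and the actual assignment lists are generated by the IWRS, not by site code like this:

```python
import random

def stratified_blocked_assignments(strata_counts, block_size=4, seed=12345):
    """Generate 1:1 treatment assignments per stratum using permuted
    blocks (block_size must be even for exact 1:1 blocks).

    strata_counts: {stratum label: number of patients expected}
    Returns {stratum label: list of 'CX-8998'/'placebo' assignments}.
    """
    rng = random.Random(seed)
    lists = {}
    for stratum, n in strata_counts.items():
        seq = []
        while len(seq) < n:
            block = ["CX-8998"] * (block_size // 2) + ["placebo"] * (block_size // 2)
            rng.shuffle(block)       # randomize order within each block
            seq.extend(block)
        lists[stratum] = seq[:n]
    return lists

# Strata from the protocol (concomitant anti-tremor medication crossed
# with site type); the per-stratum counts are made up for illustration.
strata = {
    ("anti-tremor med", "main"): 24,
    ("anti-tremor med", "substudy"): 20,
    ("no med", "main"): 22,
    ("no med", "substudy"): 20,
}
assignments = stratified_blocked_assignments(strata)
```

Permuted blocks keep the two arms nearly balanced within every stratum at all times, which matters when enrollment per stratum is small.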
Randomized study participants will enter a 4-week, double-blind, dose titration period followed by a 1-week safety follow-up after the last dose of study medication. At baseline (day 1), patients will have safety and tremor evaluations prior to administration of study treatments. During the first week, patients will receive 4 mg of study drug or matching placebo twice daily (BID). On day 8 (week 2), patients will be assessed at the clinic for safety and dose titration to 8 mg (or matching placebo) BID. On day 15 (week 3), patients will report to the clinic for safety and efficacy evaluations and final dose titration to 10 mg (or matching placebo) BID. The final efficacy visit will be day 28 (week 4). The final safety visit will take place on day 35 (week 5). Blood samples will be collected predose on days 8, 15, and 28 and at approximately 4 h post-dose on day 28 for plasma concentration measurements of CX-8998. If intolerable adverse events (AEs) are evident at any of the doses, the dose may be decreased to the next lowest dose at day 8 or 15 or at any time prior to those scheduled visits. If the lowest scheduled dose (4 mg BID) is intolerable, it can be decreased to 2 mg BID. After dose reduction, an increase in dose will not be allowed. If patients do not tolerate dose reduction, they will be withdrawn from treatment.

Patients who have been screened and met the eligibility criteria for the main T-CALM study will have the option to additionally participate in the T-CALM digital substudy. Enrollment of patients in the substudy will be randomized to CX-8998 or placebo in the same proportion (1:1) as the main study. After informed consent is signed, patients will be given the option to use one or both of two digital tools, iMotor or Kinesia 360, or to conduct additional testing with Kinesia One for objective measurement of motor function. The dosing regimen and schedule of safety assessments for the substudy will be identical to that of the main study.
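For illustration, the titration ladder and down-titration rule described above can be encoded as follows. This is a hypothetical sketch for clarity, not part of the protocol materials:

```python
# Scheduled up-titration (mg BID) by study day, per the text above.
TITRATION = {1: 4, 8: 8, 15: 10}

# Permissible dose levels, mg BID; 2 mg BID is the rescue level below
# the lowest scheduled dose.
DOSE_LADDER = [2, 4, 8, 10]

def reduce_dose(current_mg):
    """Step down to the next lowest dose for intolerable AEs.

    Returns None when no lower dose exists (below 2 mg BID the patient
    is withdrawn from treatment). Per protocol, re-escalation after a
    reduction is not allowed.
    """
    i = DOSE_LADDER.index(current_mg)
    return DOSE_LADDER[i - 1] if i > 0 else None
```

For example, a patient not tolerating 10 mg BID at day 15 would step to 8 mg BID, and a patient not tolerating 2 mg BID would be withdrawn.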
At the screening visit, patients in the iMotor arm of the substudy will have evaluations completed in the presence of a study staff member. Patients in the Kinesia 360 arm of the substudy will wear the device to collect data for 2 days after screening and then return the device to the study site. Data collected for these 2 days will serve as baseline motor assessments for Kinesia 360. At baseline (day 1), baseline iMotor evaluations will be taken prior to dosing and after completion of the main T-CALM safety and tremor assessments. At the end of weeks 1 and 2, safety and Kinesia 360 and iMotor evaluations will be collected. The final efficacy measurements for both devices will be collected at the end of week 4 visit. The final safety visit will occur at the end of week 5.
The sponsor, patients, investigators, and any others involved in conduct of the study or analysis of data will be unaware of treatment assignments until the study is unblinded. An individual patient's treatment assignment will be unblinded only when knowledge of the treatment is necessary for medical management of the patient or if required for reportable safety events such as an unexpected serious adverse reaction. CX-8998 is formulated in size 4 hard gelatin capsules with 2 mg of hydrochloride salt of the active pharmaceutical ingredient mixed with a blend of excipients. Matched placebo is formulated in size 4 hard gelatin capsules with a blend of comparable excipients.
The primary, secondary, and exploratory efficacy endpoints for the main T-CALM study are as follows. The primary endpoint is the change from baseline to day 28 of the TETRAS performance subscale (TETRAS-PS) rated by investigators (in person) and by 5 independent video raters of patient videotapes. The secondary endpoints are the change from baseline to day 28 for the TETRAS Activity of Daily Living schedule (TETRAS-ADL) and for the Kinesia One score. There are several exploratory endpoints. The change from baseline to days 15 and 28 for the total TETRAS (PS plus ADL) score and for Kinesia One will be measured. The change from baseline to day 15 for the TETRAS-PS (investigator and independent video rater) and Kinesia One score will be evaluated. Treatment success at the end of therapy will be measured by the Patient Global Impression of Change (PGIC), Clinical Global Impression of Improvement (CGI-I), Goal Attainment Scaling (GAS), and Quality of Life in Essential Tremor Questionnaire (QUEST).
There are two exploratory endpoints for the T-CALM digital substudy. Kinesia 360 will measure the change in tremor amplitude from baseline to days 15 and 28. The iMotor test will evaluate five simple motor functions including digital spirography from baseline to days 15 and 28.
ET Performance Scales and Biometric Monitoring Devices
Several performance scales and an objective biometric monitoring device will be utilized in the main T-CALM study. These tools will generate relevant and convergent efficacy data that may more accurately define the response of patients to ET therapies. All site investigators will receive training on use of performance scales and biometric monitoring devices to generate high quality and replicable ET scoring data.
The Essential Tremor Rating Assessment Scale (TETRAS) was recently published (53) and is composed of a 9-item performance subscale and a 12-item activities of daily living (ADL) subscale. These subscales provide a rapid clinical evaluation (<10 min) of ET using pen-and-paper methodology. The performance subscale measures tremor amplitude (severity) in the head, face, voice, limbs, and trunk, as well as functional tests including handwriting, drawing a spiral, and holding a pen over a dot, on a 5-point rating scale where 0 represents no tremor and 4 indicates severe tremor. The sum of the individual scores generates an overall performance subscale score from 0 to 64. The TETRAS-PS will be scored both by investigators at the clinical site and by 5 independent video raters. The scores will be statistically analyzed as the change from baseline to day 28 and serve as the primary efficacy endpoint. The optimal rating methodology (investigator vs. independent video-rated TETRAS-PS) will be selected for late-stage development. The performance subscale data will also be used for exploratory analysis of efficacy on day 15. The ADL subscale evaluates activities of daily living such as speaking, eating, drinking, dressing, personal hygiene, writing, and carrying items. The patient will score each item from 0 (normal activity) to 4 (severe abnormality). The overall score ranges from 0 to 48 and will be analyzed as the change from baseline to day 28 as a secondary efficacy endpoint. The total TETRAS score (sum of the performance subscale and ADL), as the change from baseline to days 15 and 28, will be evaluated as an exploratory endpoint.
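The subscale arithmetic implies fixed score ranges that are worth sanity-checking. The ADL range follows directly (12 items × 4 = 48); for the performance subscale the text only states a 0 to 64 range, so the breakdown of the 9 items into 16 individually rated elements (several items are rated per side or per task) is an assumption for illustration:

```python
ITEM_MAX = 4  # every TETRAS item is rated 0 (no tremor) to 4 (severe)

# ADL subscale: 12 items, each 0-4, so the total runs 0 to 48.
adl_max = 12 * ITEM_MAX

# Performance subscale: 9 items, several rated per side or per task, giving
# 16 scored elements in total (assumed breakdown), so the sum runs 0 to 64.
N_PS_RATINGS = 16
ps_max = N_PS_RATINGS * ITEM_MAX

def tetras_total(ps_scores, adl_scores):
    """Total TETRAS = performance subscale + ADL subscale (exploratory endpoint)."""
    assert all(0 <= s <= ITEM_MAX for s in ps_scores + adl_scores)
    return sum(ps_scores) + sum(adl_scores)

print(ps_max, adl_max)  # 64 48
```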
The Kinesia One platform will be deployed in T-CALM as a digital marker of tremor severity. Kinesia One is FDA cleared for monitoring Parkinson's motor symptom severity, with only limited data on assessment of tremor in ET patients. The algorithmically derived score was developed primarily for Parkinson's disease and has limited validation in ET (54). The Kinesia One platform integrates accelerometers and gyroscopes to capture kinetic movement disorders (55,56). The Kinesia One device will be placed on the index finger of each ET patient and worn in the clinic after completion of the TETRAS-PS. Four tasks will be performed by the patient on the left and right sides to assess resting, postural, kinetic, and lateral wing-beating tremor. The change in Kinesia One score from baseline to day 28 will be evaluated as a secondary efficacy endpoint. The Kinesia One data will also be used for exploratory analyses of the change from baseline to day 15 in accelerometry score and the change from baseline in amplitude measures on days 15 and 28. Consistent finger-sensor placement and consistent task execution are critical for valid Kinesia One scores.
Due to the deleterious effects of ET on daily activities and well-being, several quality of life assessments will be conducted to more accurately assess the patient's perception of the disorder and the effects of pharmacotherapy from baseline to the conclusion of treatment (day 28). Each of these questionnaires and scales requires substantial input from the ET patient and the data will be used to address exploratory efficacy endpoints. QUEST (57) will be used to evaluate the consequences of ET on daily life of ET patients from baseline to day 28. The questionnaire contains 30 items that involve 5 subscales (physical, psychosocial, communication, hobbies/leisure, and work/finance) and a total score. There are also three additional items that pertain to sexual capability, satisfaction with tremor control, and side effects of pharmacotherapy. If the treatment program for ET is beneficial whether symptomatic or curative, patients will likely respond in a positive manner to QUEST. The QUEST includes questions that are not expected to change within a 28-day timeframe. However, some items such as the satisfaction with tremor control may generate insightful data.
CGI will generate a clinician's perception of a patient's functioning prior to and after study medication (58). The CGI overall score considers patient history, psychosocial situations, symptoms and behavior with respect to the ability to function. The CGI-Improvement (CGI-I) will involve a single 7-point rating of total improvement or change from baseline CGI-Severity (CGI-S). The clinician rater will select one response (from 1 = very much improved to 7 = very much worse) based upon the question "Compared to your patient's condition at the onset of treatment, how much has your patient changed?" PGIC will quantify a patient's impression of improvement or decline over time with respect to ET treatment (58). The patient will use the PGIC scale to assess current health status vs. initiation of treatment and calculate the difference. The 7-point scale will ask the question "With respect to your ET, how would you describe yourself now compared to when you started taking the study drug?" The patient will reply with one of seven answers from very much worse to very much improved.
GAS (59) will require interaction between the physician and ET patient to develop a written set of individual patient-desired goals to track progress of treatment. At baseline, each patient establishes 3 individual health goals and rates each goal as fairly important = 1, very important = 2, or extremely important = 3. The clinician will rate the degree of difficulty for each goal as probable = 1, possible = 2, or doubtful = 3. During the study, progress will be scored on a 5-point scale from worse than baseline = −2 to best anticipated outcome = +2.
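The protocol text does not state how the per-goal ratings will be aggregated. A common choice in the GAS literature is the Kiresuk-Sherman T-score, sketched below with each goal weighted by importance × difficulty; the aggregation formula is an assumption for illustration, not the protocol's specified analysis:

```python
import math

def gas_t_score(goals, rho=0.3):
    """Kiresuk-Sherman GAS T-score (mean 50, SD 10 under standard assumptions).

    goals: list of (attainment, importance, difficulty) tuples, where
    attainment is the 5-point rating (-2 worse than baseline ... +2 best
    anticipated outcome), importance is 1-3, and difficulty is 1-3.
    Each goal's weight is importance * difficulty; rho is the assumed
    inter-goal correlation (0.3 is the conventional default).
    """
    w = [imp * diff for _, imp, diff in goals]
    wx = sum(wi * x for wi, (x, _, _) in zip(w, goals))
    denom = math.sqrt((1 - rho) * sum(wi ** 2 for wi in w)
                      + rho * sum(w) ** 2)
    return 50.0 + 10.0 * wx / denom

# Three goals all achieved exactly as expected (attainment 0) -> T = 50.
print(gas_t_score([(0, 2, 1), (0, 3, 2), (0, 1, 3)]))  # 50.0
```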
Two additional biometric monitoring tools will be employed in the T-CALM digital substudy to explore their ability to measure changes in motor function of ET patients. The data will be used for exploratory efficacy endpoints.
The Kinesia 360 (Great Lakes NeuroTechnologies, Cleveland, OH, USA) (55, 56, 60) is a home monitoring system that utilizes wrist and ankle sensors to objectively and continuously tabulate motion data. The Kinesia 360 kit contains a smartphone with the installed Kinesia 360 application, two wearable sensors, and charging equipment. The sensors capture 3-dimensional linear acceleration and angular velocity from the wrist and ankle of each ET patient throughout each day with the use of integrated accelerometers and gyroscopes. At the end of each day, the motion data are uploaded from the smartphone to a central server. The data are processed to detail the occurrence and severity of tremor as well as the patient's level of daily activity.
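As a toy illustration of the kind of signal processing such wearables perform, the root-mean-square of an angular-velocity (or acceleration) trace over a window is one simple proxy for tremor amplitude. The vendor's actual algorithms are proprietary; the sketch below is not the Kinesia 360 method:

```python
import math

def rms(samples):
    """Root-mean-square of a signal window -- a crude tremor-amplitude proxy."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A pure 6 Hz sinusoid (ET tremor typically falls in roughly the 4-12 Hz
# band) sampled at 100 Hz over an integer number of cycles has
# RMS = amplitude / sqrt(2).
fs, f, amp, n = 100, 6.0, 1.0, 300  # 3-second window, 18 full cycles
signal = [amp * math.sin(2 * math.pi * f * k / fs) for k in range(n)]
print(round(rms(signal), 4))  # 0.7071
```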
The iMotor (Apptomics, Inc., Wellesley Hills, MA, USA) (61) is a tablet-based application that objectively measures motor function in patients with abnormal movement. The iMotor test will only be conducted during scheduled visits. Each ET patient will be required to conduct 5 simple tasks (finger tapping, hand tapping, hand pronation and supination, reaction to a mild stimulus, and a spiral drawing with a digital stylus) on a tablet. Each task will have a time limit of 30 s and will be done twice (once with each hand).
Adverse Events
All treatment-emergent adverse events (TEAEs) will be coded into the Medical Dictionary for Regulatory Activities (MedDRA) version 20 with system organ classes and preferred terms and displayed in frequency tables by treatment group. Adverse events will be characterized by maximum severity, drug-related adverse events, serious adverse events, and adverse events leading to discontinuation of the study.
Statistical Analysis Plan
All statistical analyses will be performed with the SAS system, version 9.4 or higher. A sample size of 43 patients per treatment group has at least 90% power to detect at least a 5.5-point difference between CX-8998 and placebo for the primary endpoint of the change from baseline to Day 28 on the TETRAS-PS score, with a standard deviation of 7.5 and alpha = 0.05. This calculation is based on the Wilcoxon-Mann-Whitney test for 2 independent means and assumed normal distributions for each treatment group with a common, but unconfirmed, standard deviation. Approximately 106 patients are planned for enrollment to ensure 86 patients are available for inclusion in the efficacy analyses. Five analysis sets will be employed. The Intent-to-Treat (ITT) analysis set contains all randomized patients and will be used for patient disposition and demographics. The Safety Analysis Set (SAS) has all randomized patients who receive at least 1 dose of study drug. The Full Analysis Set (FAS) includes all patients who receive at least 1 dose of study drug and have both baseline and at least 1 postbaseline efficacy assessment. The FAS will be utilized for all efficacy assessments. The Per Protocol Analysis Set (PPS) includes all patients in the FAS with no major protocol deviations. The PPS will be used as a backup analysis for primary and secondary efficacy endpoints. The Day 28 Completers Analysis Set will be composed of all FAS patients who complete the treatment period. This data set will be used as a backup analysis for primary and secondary efficacy endpoints.
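The stated sample size can be roughly reproduced with the standard normal-approximation formula for two independent means, inflated by the asymptotic relative efficiency (3/π) of the Wilcoxon-Mann-Whitney test relative to the t-test. This is a back-of-envelope check, not the protocol's actual calculation:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    """Two-sample normal-approximation sample size per group."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # 1.28 for 90% power
    return 2 * ((z_a + z_b) * sd / delta) ** 2

n_t = math.ceil(n_per_group(5.5, 7.5))                   # t-test approximation
n_w = math.ceil(n_per_group(5.5, 7.5) / (3 / math.pi))   # Wilcoxon ARE = 3/pi
print(n_t, n_w)  # 40 41
```

These figures sit slightly below the protocol's 43 per group, consistent with the "at least 90% power" wording and some added conservatism; the planned enrollment of 106 then covers expected attrition down to 86 evaluable patients.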
The primary efficacy endpoint will be analyzed with the FAS and analysis of covariance (ANCOVA) model, with fixed effects for treatment, anti-tremor medication use, site type and baseline TETRAS-PS score. Testing will be performed with least square (LS) means from the ANCOVA model and a 2-sided test at the alpha = 0.05 level of significance. If the data indicate a departure from the normal distribution, a corresponding rank test will be performed. Multiple imputation will be used to estimate missing data for patients who are missing a TETRAS-PS score on Day 28. Secondary and exploratory efficacy endpoints will be similarly analyzed.
All TEAEs will be coded into MedDRA system organ classes as described above. Descriptive statistics (number, mean, standard deviation, median, minimum and maximum) will be used to summarize observed and change from baseline laboratory, vital sign and ECG data.
Since the T-CALM digital substudy is exploratory, there will not be a formal sample size determination. It is proposed that at least 30 patients will be randomized to CX-8998 or placebo.
Data Management
Data quality management and monitoring of the trial will be conducted by the sponsor and its designated Contract Research Organization. Substantial protocol amendments will be submitted by the sponsor to regulatory authorities and the IRB for approval. Protocol deviations will be documented by the investigator and reported to regulatory authorities and the IRB. The sponsor or its designee may conduct audits to ensure the study is being conducted in compliance with the protocol, standard operating procedures, GCP, and regulatory requirements. The sponsor's study safety representative and a separate independent, medically qualified, clinical-trials-experienced safety physician will monitor aggregate study-level safety and tolerability on a recurring basis. An extensive 8-point safety monitoring and risk mitigation plan for adverse events will be used, with specific measures to minimize risks to enrolled patients. After completion of Visit 4 by about 75% of study participants, the sponsor may convene an independent external data monitoring committee to review unblinded efficacy data in collaboration with the unblinded study statistician and to provide a recommendation about completion, resizing, or termination of the study. The data monitoring committee may request a meeting with the independent safety monitor to discuss safety/tolerability findings in support of its recommendation to the sponsor.
Informed Consent
The principal investigator (or an appropriate designee) will be responsible for ensuring that each potential study subject is given full and adequate oral and written explanations of the aims, methods, anticipated benefits and potential risks of the study. Signed, written informed consent will be required of each study participant prior to initiation of any procedure.
DISCUSSION OF ANTICIPATED RESULTS AND LIMITATIONS
The T-CALM main study and substudy are designed to demonstrate the efficacy of CX-8998, a selective TTCC modulator, for treatment of moderate to severe ET inadequately treated with available standard-of-care approaches. This phase 2, proof-of-concept, well-powered, multicenter, prospective, randomized, double-blind, placebo-controlled, parallel-group study utilizes physician rating scales, patient-focused questionnaires, functional scales, and digital motor function measurements to generate clinically meaningful and congruent efficacy data. Patient perception of the debilitating aspects of ET and the potential benefits of CX-8998 for daily activities and quality of life will be key findings of the study.
It is important to point out that T-CALM is designed as a rigorous, parallel-group study to generate robust efficacy data, reduce dropouts, minimize time of patient participation and maintain double-blind. A crossover design was considered to minimize sample size but rejected due to possible carryover effects between treatments that may compromise the efficacy data (1,31). Due to limited understanding of the causes and pathophysiology of ET and uncertain diagnosis of the disorder (1), eligibility criteria for the T-CALM study were carefully established and will be enforced to ensure that patients with moderate to severe ET enter the trial. Critical inclusion criteria were diagnosis of definite or probable bilateral ET as defined by the Tremor Investigational Group, tremor severity score of at least 2 in at least one upper extremity on at least one of the three maneuvers on the TETRAS scale and total TETRAS performance scale score of at least 15. Key exclusion criteria were direct or indirect trauma to the nervous system within 3 months preceding the onset of tremor, history or clinical evidence of psychogenic tremor origin and known history of other medical or neurological conditions that may cause or explain patient's tremor, including but not limited to Parkinson's disease, dystonia, cerebellar disease (other than ET), traumatic brain injury, etc. Prior surgical intervention for ET was also excluded.
Since there have been a limited number of well-powered, randomized, controlled, late-stage clinical trials to evaluate ET pharmacotherapies, some of the methodologies for efficacy endpoints are established and validated whereas others are under development. The TETRAS-PS was selected as the semiquantitative method to generate data for the primary endpoint (reduction of tremor severity by CX-8998) of the T-CALM study. TETRAS has several advantages and known limitations as an efficacy measurement tool for ET. This scale is scientifically validated and clinically grounded, can be conducted expediently (10 min), accurately and comprehensively measures severe upper limb tremor amplitude as well as tremor of the head, face, voice, and lower limbs, shows strong reliability among raters, lacks the ceiling effect of prior scales, and correlates with ADL and motor function measurements (10,11). The use of independent video rating is hypothesized to reduce investigator bias, placebo effect, and variability. The main disadvantages of TETRAS are its limited exposure as the primary endpoint in trials of investigational drugs and its inability to rate rest tremor (a rare occurrence in ET patients), provide a comprehensive neurological assessment, evaluate small changes, generate interval data, or evaluate additional motor and nonmotor symptoms of ET such as ataxia, gait abnormalities, anxiety, and depression (11,65). Since TETRAS will be scored by independent video raters on a videotape of each patient and by individual investigators on each patient in live three-dimensional observation, it will be interesting to determine the level of correlation and select the appropriate methodology for future clinical trials. The degree of correlation will also depend on inter-rater reliability and the quality of the video, especially for the head, trunk, and lower limbs (known issues in the original scale publication).
The other frequently used scale for evaluation of tremor is the Fahn-Tolosa-Marin (FTM) rating scale (10). Although the tremor data of this scale generally correlate with TETRAS, FTM will not be used in the T-CALM study due to its lengthy and complicated administration process and known ceiling effect (10). The TETRAS-ADL subscale will be used as the semiquantitative tool for a secondary endpoint to assess the effects of CX-8998 on the daily activities of ET patients (11). This validated patient-reported scale is an indicator of everyday life and should support the data of the TETRAS performance subscale. A disadvantage of the ADL and other scales of patient reported outcomes is the challenge to achieve significance in a short duration (28 day) study. The TETRAS-PS and -ADL scores will be combined to allow assessment of the total TETRAS score on efficacy of CX-8998.
The Kinesia One Accelerometer is a validated device for Parkinson's Disease but has limited data for ET (54). This device will be used to perform biometric assessment of reduction of tremor amplitude by CX-8998 in support of secondary and exploratory endpoints. The advantages of this device are objective and precise transducer quantitation of tremor amplitude through combined accelerometers and gyroscopes, generation of interval data and possible correlation with the TETRAS-PS. Until additional data are available, clinical relevance for ET appears to be the main limitation. In addition, placement of the device and exact performance of the tasks may contribute to additional variability of this measure.
Four semiquantitative scales will be assessed through exploratory endpoints to determine treatment success of CX-8998 as perceived by the ET patient through the ability to perform daily functions, achievement of specific goals, and quality of life. Data generated by PGIC, CGI-I, GAS, and QUEST will be used to strengthen support for the beneficial effects of CX-8998 on the physical and functional disabilities inflicted by ET. Although the main limitation will be the subjective nature of the data, this may be mitigated by consistency of the data across the four scales and the relevance of these measures to patients.
Two newly developed digital platforms (Kinesia 360 and iMotor) that are designed to objectively quantify motor function in patients with movement disorders will be evaluated through exploratory endpoints in the T-CALM substudy. The obvious advantages of Kinesia 360 are the continuous capture of quantified motor function during daily activity and the times at which ET motor dysfunction and its level of severity are detected. The Kinesia 360 data will provide objective quantification of the ameliorative effects of CX-8998 on tremor amplitude in ET patients based on integrated accelerometers and gyroscopes. The iMotor is a digital platform developed to monitor and objectively measure functional motor tasks (finger tapping, hand tapping, drawing an Archimedes spiral with a digital stylus) in patients with impaired movement. The main attributes of the iMotor technology are the short time interval (30 s) for each task and the precision of the data. The iMotor will generate accurate, quantitative data to substantiate improvement in motor function on individual tasks by CX-8998 in ET patients. Although they may lack the objective nature of quantifying tremor via amplitude-based tasks rated by clinicians or through accelerometry-based devices, the main advantage of Archimedes spirals is that they help to document kinetic tremor during a task reflective of activities of daily living. Drawing of spirals has been an integral part of the routine examination of tremor patients and has been integrated into clinical rating scales (66). While graphic evidence of tremor activity is also evaluated clinically by examining writing or drawn spirals as part of the TETRAS performance subscale, these are still interpreted subjectively and are not easily standardized across subjects. Thus, the objective and quantifiable data analysis afforded by digital assessment of tremor can be an important tool in research and certain clinical settings (67).
Overall, the goal of T-CALM is to generate robust safety and efficacy data to support a go, no-go decision for further development of the selective TTCC modulator CX-8998 as a treatment for ET. It is also anticipated that the design of T-CALM and use of clinically relevant and convergent efficacy endpoints will guide development of future clinical studies of novel ET pharmacotherapies.
CONTRIBUTION TO THE FIELD
T-CALM was designed to be adequately powered and to enable decision making for further development of CX-8998 for inadequately treated ET. Eligibility criteria were carefully defined to ensure selection of ET patients with specific disease requirements. Clinically meaningful and convergent clinician-measured and patient-reported outcomes with validated performance scales, objective biometric tools, and quality-of-life questionnaires are key endpoints to demonstrate changes in tremor severity. It is anticipated that the T-CALM trial will demonstrate that CX-8998 reduces tremor severity on the basis of several clinically relevant and confluent efficacy endpoints, with a favorable safety and tolerability profile. If beneficial efficacy and safety data for CX-8998 are evident in the T-CALM trial, further clinical development of this drug as a novel, promising treatment for moderate to severe ET will be warranted. The unique, comprehensive design of T-CALM will likely provide meaningful guidelines for future clinical trials of novel ET pharmacotherapies.
DATA AVAILABILITY
No datasets were generated or analyzed for this study.
ETHICS STATEMENT
The T-CALM trial will be conducted in accordance with the general ethical principles as stated in the Declaration of Helsinki and in conformance with the International Conference on Harmonization (ICH), Good Clinical Practice (GCP) guidance and applicable US Food and Drug Administration (FDA) requirements regarding IRBs, informed consent, data protection and confidentiality and other statutes or regulations related to the rights and welfare of human subjects participating in biomedical research. The protocol and informed consent document for this study will be reviewed and approved by the institutional review board (IRB) at each participating Investigative site or by a central IRB before the study is initiated at the respective site.
The responsibilities of the sponsor, monitor and investigator are defined in the ICH GCP consolidated guideline, and applicable U.S. regulatory requirements. The investigator is responsible for adhering to the GCP requirements of investigators, for dispensing study drug in accordance with the approved protocol or a signed agreement, and for its secure storage and safe handling throughout the study.
AUTHOR CONTRIBUTIONS
SP, ML, SB, and EN have contributed significantly to the concept, strategy and design of the T-CALM protocol. All authors have read, critically revised and approved the final manuscript.
Major food sources of calories, added sugars, and saturated fat and their contribution to essential nutrient intakes in the U.S. diet: data from the national health and nutrition examination survey (2003–2006)
Background: The risk of chronic disease cannot be predicted simply by the content of a single nutrient in a food or food group in the diet. The contribution of food sources of calories, added sugars and saturated fat (SFA) to intakes of dietary fiber and micronutrients of public health importance is also relevant to understanding the overall dietary impact of these foods.

Objective: Identify the top food sources of calories, added sugars and SFA in the U.S. diet and quantify their contribution to fiber and micronutrient intakes.

Methods: Single 24-hour dietary recalls (Day 1) collected from participants ≥2 years (n = 16,822) of the What We Eat in America, National Health and Nutrition Examination Survey (WWEIA/NHANES 2003–2006) were analyzed. All analyses included sample weights to account for the survey design. Calorie and nutrient intakes from foods included contributions from disaggregated food mixtures and were tabulated by rank order.

Results: No one food category contributes more than 7.2% of calories to the overall U.S. diet, but half of the top 10 contribute 10% or more of total dietary fiber and micronutrients. Three of the top 10 sources of calories and SFA (beef, milk and cheese) contribute 46.3% of the calcium, 49.5% of the vitamin D, and 42.3% of the vitamin B12, as well as other essential nutrients, to the American diet. On the other hand, foods categorized as desserts, snacks, or beverages contribute 13.6% of total calories and 83% of added sugar intake, and provide little or no nutritional value. Including food components of disaggregated recipes more accurately estimated the contribution of foods like beef, milk or cheese to overall nutrient intake compared to "as consumed" food categorizations.

Conclusions: Some food sources of calories, added sugars and SFA make major contributions to American dietary fiber and micronutrient intakes.
Dietary modifications targeting reductions in calories, added sugar, or SFA need to take these key micronutrient sources into account so as not to have the unintended consequence of lowering overall dietary quality.
Background
The health promoting quality of the overall diet is associated with total daily calorie and nutrient intakes. The Dietary Guidelines for Americans (DGA) provides advice for choosing healthy eating patterns for all Americans, including those who are overweight or obese and at increased risk of chronic diseases [1]. The DGA calls for individuals to maintain a healthy weight by controlling calorie intake, increasing physical activity and to consume nutrient-dense foods and beverages to ensure adequate nutrient intake within calorie needs. In addition, the population is advised to reduce calories from added sugars and limit calories from saturated fats (SFA). However, it is recognized that small amounts of fats and added sugar can be useful for increasing the palatability of nutrient-dense foods. Americans are encouraged also to meet their nutrient needs primarily through foods, which are complex and variable in their content of calories, dietary essential nutrients as well as added sugars or SFA.
While staying within calorie needs and as part of a healthy eating pattern, increasing intakes of potassium, dietary fiber, calcium, and vitamin D from nutrient-dense foods like vegetables, fruits, whole grains, and milk and milk products is recommended [1]. Consumption of these four nutrients is so low across the US population in general that they were identified as "nutrients of concern" by the DGA and worthy of special public health interest. Iron and folate for pregnant women and vitamin B12 for older adults are also recognized as nutrients of concern specifically for these populations. Because the vast majority of Americans do not consume recommended intakes of the nutrient-dense foods from the basic food groups [2], it is important that dietary advice to reduce intake of certain foods and food components not have the unintended consequence of leading to reduced intake of dietary essential nutrients, including nutrients of concern.
National dietary surveillance through the What We Eat in America (WWEIA), National Health and Nutrition Examination Survey (NHANES) provides a way to examine eating patterns and their impact on calorie and nutrient intakes across the U.S. population. Various approaches to food group classifications [3][4][5][6][7][8] and diet modeling [8,9] have been published. Food groupings based on foods as they are consumed (e.g., chili; Mexican mixed dishes; pizza; soups) compared to those that include components (e.g., cheese; chicken; vegetables; grains; fats and oils; etc.) of disaggregated mixtures produce different pictures of the population's food and nutrient intakes. The DGA used the National Cancer Institute's (NCI) categorization system [10], which creates 96 "specific food" categories using a type of "as consumed" methodology. This approach estimates nutrient contributions from entrées and foods eaten alone; however, it does not assess the total nutrient contributions from foods like cheese that can be consumed both alone and as ingredients in an entrée (e.g., burrito; lasagna; omelet; cheeseburger; pizza; etc.) or other mixed dish. Under the NCI system, cheese is categorized as regular cheese or reduced-fat cheese [10]. The cheese group, however, does not include cheese also eaten as a component of 14 other categories of culturally diverse mixed dishes [10]. Other systems [3,4,7,11] of food classification disaggregate foods using recipes for mixtures to provide an estimate of the total nutrient contributions from food sources. The disaggregation (also called "as ingredient") approach was selected by the European Food Safety Authority to harmonize food classification systems across the European Union [12].
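The "as ingredient" disaggregation idea can be sketched in a few lines: a consumed amount of a mixed dish is split into its component foods by recipe proportions before nutrients are attributed to food categories. All recipe fractions and nutrient densities below are made-up illustrations, not WWEIA/NHANES or USDA data:

```python
# Hypothetical recipe: fraction of each ingredient by weight in a mixed dish.
RECIPES = {
    "cheese pizza": {"grains": 0.45, "cheese": 0.30, "vegetables": 0.25},
}
# Hypothetical calcium density (mg per g) of each ingredient category.
CALCIUM_MG_PER_G = {"grains": 0.2, "cheese": 7.0, "vegetables": 0.5}

def disaggregate(dish, grams):
    """Attribute a consumed amount of a mixed dish to ingredient categories."""
    return {ing: grams * frac for ing, frac in RECIPES[dish].items()}

def calcium_by_category(dish, grams):
    """Credit the dish's calcium to each ingredient category (mg)."""
    return {ing: g * CALCIUM_MG_PER_G[ing]
            for ing, g in disaggregate(dish, grams).items()}

portion = disaggregate("cheese pizza", 100)   # 100 g slice -> grams per category
calcium = calcium_by_category("cheese pizza", 100)
print(calcium)  # cheese dominates the dish's calcium
```

Under an "as consumed" categorization, all of this calcium would be credited to the "pizza" category; disaggregation credits most of it to cheese, which is the distinction the paper's methods turn on.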
This paper builds on previously published food and nutrient intake data by examining the contributions of the top ten food sources of calories, added sugar, and SFA to the population's intakes of the nutrients of concern as well as other essential nutrients using the disaggregated approach to classify food sources. The present study expands on the study by Reedy and Krebs-Smith (2010) that examined the top five food sources of energy, solid fats and added sugars, referred to as major food sources, from foods consumed by children and adolescents using the as consumed food classification approach [6]. The hypothesis for the present study was that top food sources of calories, added sugars and SFA in current eating patterns make major contributions to intakes of the nutrients of concern (i.e. potassium, dietary fiber, calcium, and vitamin D) as well as other dietary essential nutrients.
Methods
The analytical and statistical methods used for grouping foods, disaggregation of recipes and ranking top food sources have been published previously [7,13] and will be described here in brief.
Data source
Data from WWEIA, the dietary component of the 2003-2004 and 2005-2006 NHANES were used in this study [14,15]. NHANES is a nationally representative, ongoing data collection initiative conducted by the Centers for Disease Control and Prevention, National Center for Health Statistics (NCHS), and the dietary component is conducted by the United States Department of Agriculture (USDA). The purpose of NHANES is to collect data on the health and diet of the non-institutionalized civilian population in the United States. The study design is a stratified, multistage probability sample based on selection of counties, blocks, households, and the number of people within households.
Samples and dietary intake
Data from adults, adolescents and children ≥2 years of age (n = 16,822) participating in the WWEIA/NHANES conducted in 2003-2004 and 2005-2006 were combined for these analyses. Food intake data were obtained from in-person 24-hour dietary recall interviews administered using an automated multiple-pass method [16]. Survey participants 12 years and older completed the dietary interview on their own; children 6 to 11 years old were assisted by an adult; parents/guardians reported for children younger than 5 years of age. Food and nutrient intake data from the first dietary recall were used for these analyses. Data judged incomplete or unreliable by USDA Food Surveys Research Group staff were excluded from analyses, as were those from pregnant and/or lactating females (n = 711). Detailed descriptions of the dietary interview methods are provided in the NHANES Dietary Interviews Procedure Manual [17].
Food groupings and composition
The United States Department of Agriculture (USDA) Dietary Sources of Nutrients (DSN) database was used to define food groups [18]. The more than 130 DSN food groups were collapsed into 51 categories that were more consistent with food groups defined by the USDA Food Surveys Research Group [19,20], similar to those listed by Cotton et al. [3] and the 2005 Dietary Guidelines Advisory Committee report [21]. These are the same food groups previously published by our group [7].
Disaggregated ingredients of recipes for mixtures were assigned to DSN food groups using the USDA Nutrient Database for Standard Reference (SR) food codes [18]. Ingredients were linked to the appropriate food composition databases using the SR-Link file of the Food and Nutrient Database for Dietary Studies (FNDDS 2.0 and 3.0 linked to SR18 and SR20, respectively) [19,20], and recipe calculations were performed to determine proportions of the mixture's nutrient composition contributed from the disaggregated DSN food groups. Added sugars composition was derived from the USDA MyPyramid Equivalents Database (version 2.0) [22], and any foods not listed were hand-matched to similar foods.
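The recipe-disaggregation step can be illustrated with a minimal sketch. The ingredient names, gram weights, and nutrient values below are hypothetical; the real pipeline uses FNDDS recipe files linked to SR nutrient codes.

```python
# Minimal sketch of recipe disaggregation: apportion a mixed dish's
# nutrient content to ingredient food groups. All values are illustrative.
def disaggregate(recipe, food_group_of):
    """Return {food_group: nutrient_amount} for one mixed dish.

    recipe: list of (ingredient, grams, nutrient_per_100g) tuples.
    food_group_of: mapping from ingredient name to DSN-style food group.
    """
    contributions = {}
    for ingredient, grams, nutrient_per_100g in recipe:
        group = food_group_of[ingredient]
        contributions[group] = contributions.get(group, 0.0) + grams * nutrient_per_100g / 100.0
    return contributions

# Hypothetical "cheeseburger" recipe: (ingredient, grams, calcium mg per 100 g)
recipe = [("beef patty", 90, 10.0), ("cheese slice", 20, 700.0), ("bun", 50, 60.0)]
groups = {"beef patty": "beef", "cheese slice": "cheese", "bun": "grain products"}

print(disaggregate(recipe, groups))
# the cheese group is credited with calcium from the mixed dish even
# though no cheese was eaten alone
```

Summing such contributions over all dishes a respondent reports yields the "as ingredient" totals used in this study, in contrast to the "as consumed" approach that credits the whole dish to a single category.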
Statistical analyses
The population mean and standard error (SE) of calorie and nutrient intake for those ≥2 years of age from the total diet and from each food group were determined using PROC DESCRIPT of SUDAAN (release 9.0, Research Triangle Institute, Research Triangle Park, NC) using appropriate NHANES weighting factors which adjust for oversampling of selected groups, survey nonresponse of some individuals, and day of the week when the interview was conducted. Percentages of total calorie and nutrient intakes from each food group were calculated from population average consumption of each food group and tabulated by ranked order similar to our previous publication [7].
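The weighted-mean step described above can be sketched as follows. The intakes and weights are toy values; the actual analysis uses the NHANES sampling weights, and SUDAAN's PROC DESCRIPT additionally produces design-based standard errors.

```python
# Sketch of a survey-weighted population mean: each respondent's intake
# is weighted by a sampling weight. Values below are illustrative only.
def weighted_mean(intakes, weights):
    total_weight = sum(weights)
    return sum(x * w for x, w in zip(intakes, weights)) / total_weight

intakes = [1800, 2200, 2600]   # kcal/day for three hypothetical respondents
weights = [1.0, 2.0, 1.0]      # hypothetical sampling weights

print(weighted_mean(intakes, weights))  # 2200.0
```

Percent contributions by food group follow the same logic: the weighted mean intake from a food group divided by the weighted mean total intake, times 100.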
Results

Disaggregated food mixtures - Among the top 10 sources of calories, added sugars and SFA, some foods had a higher ranking once recipes were disaggregated compared to how they were ranked in the DGA using the "as consumed" method. These foods included beef, poultry, cheese and milk. This difference in ranking reflects how often the food is consumed alone versus how often it is consumed in mixed dishes. A comparison of calorie contributions when analyzed as "disaggregated" versus "as consumed" found that of the total daily amount consumed, about 55% of beef, about 85% of poultry and about 46% of cheese were consumed alone and about 79% of milk was consumed as a beverage or with cereal. The remainder of the time (to a total of 100%), these foods were consumed as a food component in mixed dishes.
Discussion
This analysis of NHANES 2003-2006 data using the food disaggregation approach shows that some of the major sources of calories, added sugars, and SFA in the US diet are also major sources of dietary essential nutrients including nutrients that are underconsumed. That said, three of the top 10 sources of calories, including 'soft drinks, soda,' 'candy, sugars, and sugary foods,' and 'alcoholic beverages' contribute calories but have virtually no nutritional value, while the other calorie sources, including beef, poultry, milk, cheese, and baked goods are major sources of nutrients of concern and other essential nutrients. The top five sources of added sugars account for 83% of the population's added sugar intake but, with few exceptions, they provide little or no nutritional value. In contrast, the top three sources of SFA (cheese, beef, and milk) contribute more than 40% of the vitamin B 12 , almost half of the vitamin D and calcium, and are major sources of other essential nutrients to the American diet.
The DGA's "as consumed" listings of top sources of calories, added sugar, and SFA tell us what foods Americans are putting on their plates that are contributing to high intake of these food components [1]. This information is useful to help consumers identify healthier forms of these foods or to avoid foods with little or no nutritional value. But, in the case of foods that can be eaten by themselves or as a part of mixed dishes, information from a disaggregated approach gives insight into an individual food's relative contribution to intakes of added sugars and/or SFA as well as essential nutrients to the American diet. For example, compared to DGA rankings, the contribution of beef to SFA intake is actually emphasized by the disaggregated approach, as is its importance to the population's zinc (20.1%) and vitamin B 12 intake (18.6%). This additional insight can help enable informed choices; e.g., choosing leaner beef rather than eliminating beef from the diet with associated reductions in intakes of certain essential nutrients. Reduction of total calorie intake for weight loss requires a broad and balanced approach because no one food category makes a large impact on total calories. The food categories with the largest contribution to calorie intake as listed in the DGA are grain-based desserts (6.4% of the total caloric intake) and in the present analyses are 'cakes, cookies, quick bread, pastry, pie' (7.2%). But the present analysis also reveals that three categories ('soft drinks, soda,' 'candy, sugars and sugary foods,' and 'alcoholic beverages') contribute 13.6% of total calorie intake (296 kcal/day) and provide little to no other nutritional value. Reducing intake of these foods could greatly reduce population caloric intake without compromising the overall nutritional quality of the diet.
The predominance of foods providing empty calories is readily apparent in the added sugars analysis. Given the disaggregated food approach in the present study, slightly higher estimates of empty calories are provided by the top five sources of added sugar (83.3%) when compared to the foods listed in the DGA, which are based on the foods as consumed approach (71.7%). The most notable nutrient-dense food in this list, ready-to-eat cereals, contributes only 3.9% of the total added sugar intake while providing 6-22% of 11 different vitamins and minerals to the diet of Americans. Recommending healthier ready-to-eat cereals may be an effective means of increasing intakes of nutrients of concern like fiber, but may lead to only modest reductions to the overall intake of added sugars.
In sharp contrast to the added sugars results, while the top three sources of SFA (cheese, beef, and milk) provide a third of dietary SFA, they also contribute 49.5% of vitamin D, 46.3% of calcium, 42.3% of vitamin B 12 and 11.6% of the potassium as well as a host of other nutrients to the diet of Americans. The DGA recommends consuming less than 10% of calories from SFA, which is about a 15% reduction from the current 11.4% of calories. This recommendation is based primarily on the role of SFA in increasing LDL cholesterol, which is linked to increased risk for cardiovascular disease [23,24]. However, not all food sources of SFA are the same. Different fatty acid chain lengths have different biological effects, and other non-fatty acid nutrients contained within specific foods also play a role in modifying disease risk (3). Replacing SFA with PUFA, for example, significantly reduces cardiovascular disease risk, whereas the evidence for replacing SFA with carbohydrate or MUFA is less consistent and robust, suggesting that lowering risk may be more strongly related to increased intakes of PUFA rather than decreased SFA [25][26][27]. Evidence that substituting the omega-6 PUFA, linoleic acid, for SFA may not be beneficial points to the potential for differential effects of specific PUFA [28]. Furthermore, reliance on the level of a single lipid nutrient (SFA) in a food and a single plasma biomarker (LDL-C) may not adequately characterize the cardiovascular impact of complex foods that contain, in addition to SFA, multiple nutrients and other bioactive components that reduce CVD risk. For example, intake of milk and milk products is associated with a reduced risk for CVD despite being a major contributor to SFA intake [1]. Thus, other components in milk and milk products, such as calcium, potassium, magnesium, protein (whey, casein), and vitamins D and B 12 may confer favorable cardiovascular effects [29][30][31][32][33].
The 2010 Dietary Guidelines Advisory Committee (DGAC) report through its evidence based review concluded that not all SFA have the same effect on disease risk, noting that fat from dairy products is an area that requires further study [34]. The report indicated that consumption of milk products may not have predictable effects on blood lipids and future research should examine the role of dairy products in modulating lipid profiles, noting that bioactive components that alter serum lipid levels may be contained in milk fat. The report also states that evidence to date does not suggest that high-fat dairy products are more likely than low-fat dairy products to induce metabolic syndrome.
More frequent consumption of dairy products, vegetables, fruits, and whole grains is recommended to increase intakes of potassium, dietary fiber, calcium, and vitamin D [1]. The DGA recommends preferentially choosing lean meat and poultry and low-fat and fat-free dairy products, including milk, cheese and yogurt, over higher fat forms to help balance calorie intakes. The widespread availability of low-fat and fat-free milks, however, has not offset the overall decline in milk consumption since 1980 (−21%) and the even larger decline in whole milk consumption alone (−65%) [35]. An Australian study of the dietary consequences of recommending lower-fat dairy foods to overweight adults found men decreased their overall intake of dairy foods significantly, rather than switch to lower fat versions [36]. It is not well understood what role the amount of milk fat plays in maintaining or increasing milk consumption among those with a preference for higher fat milk and encouraging milk consumption among those who infrequently consume milk products.
While the DGA recommends mainly choosing lower-fat cheeses, achieving flavor, texture, color, and other attributes comparable to full-fat versions is challenging for cheese manufacturers, particularly at the greater than 50% reductions in fat [37][38][39][40][41] needed to label cheese as low-fat or fat-free. Low-fat cheddar, for example, must contain 80% less fat than its full-fat counterpart to meet federal labeling requirements. Consumers are discerning and acceptance of lower-fat cheeses can be poor, even when differences are small. Consumer acceptance of reduced-fat cheese, which requires a 25% reduction in fat, has seen greater success than low-fat and fat-free forms of cheese [42,43].
Conclusions
Overall, the present study determined the contributions of the top food sources of calories, added sugars and SFA to intakes of nutrients of concern and other micronutrients in the U.S. diet using a disaggregated food categorization approach to analyze the NHANES 2003-2006 dietary data. While foods like desserts, snacks and some beverages are major contributors to American consumption of calories and added sugar, these foods provide virtually no other nutrients. On the other hand, the foods contributing the most to SFA intake are also major contributors to calcium, vitamin D, and vitamin B 12 intake. Thus, the totality of nutrients, and not solely a single component such as added sugars or SFA, should be balanced when making food choices to build a healthy diet. Reducing or eliminating food sources of saturated fat or added sugars that are also major sources of nutrients in the American diet, without substituting lower-fat versions or suitable alternatives, could have serious unintended nutritional consequences.
Competing interests

VLF as Senior Vice President of Nutrition Impact, LLC performs consulting and database analyses for various food and beverage companies and related entities. PJH of PHJ Nutritional Science is a nutrition science consultant for various food companies and related entities. DRK of Food & Nutrition Database Research, Inc. performs statistical analyses for Nutrition Impact, LLC, various food and beverage companies and related entities. KP and NA were/are employees of the Dairy Research Institute.
Authors' contributions
The authors' responsibilities were as follows-NA/VLF: designed research, project conception, development of overall research plan, and study oversight; VLF/DK: analyzed data or performed statistical analysis; PJH/NA/KP determined the overall content for the paper; PJH/NA/KP/VLF collaborated on the writing of the manuscript; PJH: had primary responsibility for final content. All authors read and approved the final manuscript.
Immunomodulatory properties of ethanol extract of Canarium ovatum (Burseraceae) pulp
Purpose: To evaluate the immunomodulatory properties of ethanol extract of the pulp of Canarium ovatum (COPE). Methods: The immunomodulatory activity of ethanolic extract of the pulp of C. ovatum was investigated in vivo using Balb/C mice. Extract doses of 300 and 600 mg/kg were orally administered to study its effect on delayed type hypersensitivity and humoral antibody response using sheep red blood cells (SRBC). Acute oral toxicity profile and phytochemical analysis were also determined. Results: Orally administered COPE did not exhibit any mortality or signs of toxicity at doses of 300 to 2000 mg/kg. Phytochemical analysis revealed the presence of biologically-active compounds such as sterols, triterpenes, flavonoids, alkaloids, saponins, glycosides and tannins. Treatment with COPE for 7 days stimulated the early phase of DTH response through a significant increase in foot pad thickness (111.87 ± 9.97 % at 300 mg/kg, and 91.27 ± 7.81 % at 600 mg/kg), when compared to distilled water and cyclophosphamide (CP) groups. Similarly, COPE significantly enhanced antibody titer, with the highest titer at the dose of 300 mg/kg. Histological observations of the spleen showed follicles with active germinal centers and proliferating lymphocytes, which are consistent with the immunostimulatory effects of COPE. Conclusion: These results show that COPE has stimulatory effects on cellular and humoral responses in mice, indicating its potential as an immunostimulatory agent.
INTRODUCTION
Immunomodulation is a process in which the immune system is either stimulated or suppressed in response to certain conditions [1]. The search for agents with immunomodulatory activities has continued to attract the attention of researchers. This is due to their potential for alleviating immunological dysfunctions such as acquired immunodeficiency syndrome, and for normalizing the immune system after organ transplantation. Plants are very important sources of compounds that are thought to have significant immunomodulatory activities [2].
Canarium ovatum (Burseraceae) is a native plant in the Philippines. It is considered one of the most important trees in the Bicol region, where its nuts and fruit pulp are used in food and as an oil source [3]. Recent investigations on the C. ovatum fruit pulp showed that it has good antioxidant activity and anticancer potential [4]. The present study was aimed at assessing the immunomodulatory potential of C. ovatum fruit pulp.
EXPERIMENTAL

Plant sample collection and extraction
Canarium ovatum (pili) fruits were obtained from the Pili Research and Technology Development Center (PRTDC), Albay, Philippines. In the laboratory, the dried and powdered fruit pulp was macerated in 95 % ethanol (1:3 w/v ratio) for 48 h. Following filtration, the extract was dried under reduced pressure in a rotary evaporator. The resultant dry powder was weighed and dissolved in distilled water to obtain the various concentrations of the extract (COPE) used in subsequent experiments.
Phytochemical screening
Qualitative phytochemical analysis of the crude extract was carried out using standard tests for alkaloids, flavonoids, glycosides, saponins, sterols, tannins and triterpenes [5].
Animals
Male Balb/C mice weighing 25-30 g were used in this study. They were allowed a 7-day acclimatization to laboratory settings before commencement of the study. The mice were maintained on pellet feed and distilled water ad libitum under standard environmental conditions of 24-27 °C and equal light/dark periods. The animals were handled according to the guidelines of the Animal Care and Use Committee of the University of the Philippines, Manila (Protocol/Approval No. 2015-022).
Acute toxicity study
Oral acute toxicity study was carried out in line with the OECD method (guideline no. 423). Three female ICR mice were used per step. After an overnight fast, the extract was administered by oral gavage at a dose of 300 mg/kg, and observations were made at different time intervals from 30 min to 24 h. The animals were observed for weight change, changes in skin, tremor, convulsion, salivation, diarrhea, lethargy, coma and death, for 14 days. Body weights before and after the treatment were also recorded. Since COPE did not produce any mortality at the dose of 300 mg/kg, the experiment was repeated using a dose of 2000 mg/kg.
Treatment protocol
The results of the acute toxicity study showed no mortality and no sign of toxicity at doses of 300 to 2000 mg/kg of C. ovatum pulp extract in mice. Therefore, doses of 300 and 600 mg/kg were chosen for use in subsequent experiments. Four mice groups (5 mice/group) were used. Group I served as vehicle control and was given distilled water, while group II and group III received COPE at doses of 300 mg/kg and 600 mg/kg, respectively. Mice in group IV received cyclophosphamide (20 mg/kg) for induction of immunosuppression.
Delayed-type hypersensitivity (DTH) reaction with sheep red blood cells (SRBC) as antigen
Delayed type hypersensitivity reaction was performed using a modified form of the method of Gokhale et al [6]. Mice in all the groups were subcutaneously sensitized with 1 × 10^8 sheep red blood cells (SRBC) (Fitzgerald Inc., USA). Following the sensitization, group IV mice were intraperitoneally injected with cyclophosphamide 2 h prior to immunization. From day 1 to day 7, mice in groups II and III received COPE at the same doses as earlier indicated, while group I mice were given distilled water. On day 7, the left hind paw thickness was determined in all mice, after which they were subcutaneously challenged with 1 × 10^8 SRBC, which was administered in the same footpad. The immune response was determined by the increase in foot pad thickness as measured using a caliper at 0, 2, 24, and 48 h after the SRBC challenge. The values obtained were used to calculate the edema index (EI) as in Eq 1.
EI (%) = [(LT - RT)/RT] × 100 .......... (1)

where LT and RT are the left and right hind paw thickness, respectively.
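As a minimal sketch, assuming the edema index is the percentage increase of the left (challenged) over the right (control) hind paw thickness; this formula is inferred from the variable definitions and the percentage results reported, not quoted from the paper:

```python
# Hedged sketch of the edema-index calculation. The formula
# EI (%) = (LT - RT) / RT * 100 is an ASSUMPTION based on the
# definitions of LT (left) and RT (right) hind paw thickness.
def edema_index(lt_mm, rt_mm):
    return (lt_mm - rt_mm) / rt_mm * 100.0

print(edema_index(4.0, 2.0))  # 100.0, i.e. the challenged paw doubled in thickness
```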
Hemagglutination antibody titer
In the test for the in vivo antibody production, all mice were sensitized with SRBC (1 × 10^8), and the plant extract was administered daily through oral gavage from the time of sensitization for 14 days. On day 14, the animals were again challenged with SRBC and blood was collected from each mouse through retro-orbital puncture. Each blood sample was serially diluted in 20 µL of normal saline mixed with 20 µL of SRBC in microtiter plates. The plates were shaken and kept for 1 h to settle at room temperature. They were then examined for hemagglutination. The antibody titer was taken as the highest serial dilution showing visible hemagglutination [6]. The titers were further converted to mean log values for analytical purposes.
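The log conversion of titers can be sketched as below. Base 2 is an assumption, chosen because the titers come from two-fold serial dilutions; the paper does not state the log base.

```python
import math

# Hedged sketch: convert hemagglutination titers (reciprocal of the highest
# dilution showing agglutination) to log2 values and average them.
# The choice of log base 2 is an assumption matching doubling dilutions.
def mean_log2_titer(titers):
    return sum(math.log2(t) for t in titers) / len(titers)

print(mean_log2_titer([64, 128, 64, 32]))  # (6 + 7 + 6 + 5) / 4 = 6.0
```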
Spleen weight index and histology
The spleen of each mouse was weighed to calculate the spleen index. It was then preserved in 10 % formalin and processed for light microscopy at High Precision Laboratory (Quezon City, Philippines). The microscopic section of the spleen from each mouse was examined and scored on a 4-point scale: 0 = normal, 1 = mild, 2 = minimal, 3 = moderate, and 4 = marked [7].
Statistical analysis
Data are presented as mean ± standard error of the mean (SEM). Groups were compared for statistically significant differences using one-way analysis of variance (ANOVA), followed by Tukey's honestly significant difference (HSD) test.
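The ANOVA step can be sketched in plain Python with toy data; in practice a statistics package (e.g., SciPy or statsmodels) would compute both the F test and Tukey's HSD post hoc comparisons.

```python
# Minimal one-way ANOVA F statistic for k independent groups
# (illustrative data; Tukey's HSD would follow in a statistics package).
def one_way_anova_f(groups):
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

groups = [[1, 2, 3], [2, 3, 4], [6, 7, 8]]  # toy data for three treatment groups
print(one_way_anova_f(groups))  # 21.0
```

A large F relative to the F distribution's critical value indicates at least one group mean differs; the Tukey HSD test then identifies which pairs differ.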
RESULTS
Phytochemical screening of ethanol extract of the pulp of Canarium ovatum showed that it contained sterols, triterpenes, flavonoids, alkaloids, saponins, glycosides and tannins (Table 1).
Tannins (++). Legend: (+) traces, (++) moderate, (+++) abundant, (-) absent.
The extract did not show any sign of toxicity or mortality when given to the test animals at low (300 mg/kg) or high (2000 mg/kg) dose. In addition, the extract did not produce any significant differences in body weight and spleen weight of the treated mice at the two doses (300 and 600 mg/kg), when compared to the vehicle control and cyclophosphamide-immunosuppressed group (Table 2). The administration of the extract for 7 days produced an increase in percentage edema index 2 h after SRBC challenge, and significantly enhanced the titer of circulating antibody, with 300 mg/kg eliciting a higher titer (Table 3). However, cyclophosphamide administration led to decreases in edema index and antibody titer in response to SRBC challenge. After 24 and 48 h of SRBC challenge, the increase in percentage EI was still seen, although the effect was comparable to that seen in the group treated with distilled water. The percent edema after 2 h was higher than that seen after 24 and 48 h.
Figure 1 (A to C, inset) shows spleen white pulp with prominent and active germinal centers, and aggregates of darkly-stained lymphocytes within the follicles in mice treated with COPE. A significant increase was observed in the number of proliferating lymphocytes in COPE-treated mice, when compared to the control and cyclophosphamide-immunosuppressed groups (Table 4).
DISCUSSION
DTH, which is usually used to determine the reaction to the antigen SRBC, indicates a potentiating effect on T-lymphocytes and accessory cell types [6].
The early phase of the DTH reaction involves clonal expansion of lymphocytes, increased vascular permeability, induction of local inflammation, and influx of neutrophils [8]. The results of the present study showed that COPE exerted pronounced effects on the early phase of the DTH reaction, as shown by increases in edema at the early stage after SRBC challenge, thereby confirming its stimulatory activity on T-lymphocytes.
Humoral immunity or response involves the interaction of antigens with B-cells, which then differentiate into plasma cells [9]. The plasma cells then produce antibodies which mediate the humoral response. In this study, administration of COPE produced an increase in the antibody titer in mice. This enhancement of antibody responsiveness to SRBC indicates increases in the population of cells such as macrophages, and T and B lymphocytes, which are involved in antibody production [8]. Thus, COPE exerts a potentiating effect on the humoral response.
The spleen plays a vital role in immune response. It filters the blood and it is also the site of antibody synthesis, which makes it an important organ for evaluation of changes in the immune system [7,10]. Microscopic observations of the spleen showed that treatment with COPE led to pronounced germinal centers and significant increases in the number of lymphocytes. These results suggest that the humoral immune response was enhanced by COPE. It has been suggested that prominent germinal centers are indicative of increases in the population of proliferating B-lymphocytes and stimulation of humoral response [11,12].
The results from phytochemical screening of COPE revealed the presence of sterols, flavonoids, triterpenes, alkaloids, saponins, glycosides and tannins. These compounds are known to modulate the immune system [12]. Thus, their presence further validates the immunomodulatory potential of COPE.
CONCLUSION
The results obtained in the present study indicate that ethanol extract of the pulp of Canarium ovatum possesses stimulatory effects on cell-mediated and humoral immune functions in mice.
and analyzing the data. All authors participated in writing the manuscript and approved its publication.
Table 1: Phytochemical constituents of C. ovatum ethanol pulp extract
Table 2: Effects of Canarium ovatum extract on body and spleen weights
Table 3: Effect of Canarium ovatum extract on circulating antibody titer and on footpad thickness
|
Sources of stress and coping strategies among Chinese medical graduate students: a qualitative study
Background The incidence of mental health problems among medical graduate students is much higher than among students of other disciplines. This can have adverse consequences for the medical students themselves as well as their future patients. This study aims to understand the pressures faced by Chinese medical students and the current status of mental health education. It also proposes recommendations for the current situation and prospects for the future. Method The authors conducted in-depth semi-structured interviews with 22 master's students from five medical schools during November 2023. All interview sessions were recorded and transcribed verbatim. The transcriptions were analyzed using Colaizzi's seven-step method. Result Three main themes were extracted from the students' statements: sources of psychological stress, ways to cope with stress, and perspectives on mental health education. The study showed that current mental health education in China is mostly in the form of printed mental health education manuals and mental health lectures, and there is no active tiered intervention for students at different levels. It is suggested that reforms should be made to shift to a model where the school proactively identifies problems and intervenes based on feedback. Conclusion This study reveals the widespread psychological stress and shortcomings in current education methods. To address these challenges, institutions should develop tailored interventions, including tiered support systems, open dialogue promotion, and resilience training. Future research should focus on evaluating the effectiveness of innovative interventions, ultimately fostering a supportive environment that enhances students' success and contributes to a healthier healthcare workforce.
Introduction
Stress is viewed as a state of real or perceived threat to homeostasis [1]. Chinese medical graduate students face challenges such as longer academic years and high clinical pressures [2]. Research has shown that the overall prevalence of depression among medical students globally is 28.0% [3], with Asian students having a depression rate of approximately 38.0% [4]. In studies conducted in the Chinese National Knowledge Infrastructure (CNKI) database, the proportion of students engaging in health-risk behaviors due to stress was as high as 42.33% [5]. The incidence of suicidal ideation among medical students in mainland China is 11.73%, surpassing that of medical students in the United States (11.2%) [6,7]. However, only 12.9% of depressed medical students who experienced stress and exhibited health-risk behaviors sought treatment [3]. Over the past three years, amidst the global COVID-19 pandemic, the mental health challenges faced by Chinese medical students have become more pronounced [8,9]. Existing research on medical student stress primarily focuses on cross-sectional surveys of student anxiety, depression, and related psychological abnormalities, with relatively little research on how medical students cope with stress and alleviate the resulting health issues. As future clinical practitioners, medical postgraduates under stress may engage in substance abuse and face an increased risk of causing irreversible harm, such as medical errors [10,11]. Urgent attention from educational institutions, society, and the government is needed to address the mental health of Chinese medical students.

Moreover, enhancing the psychological resilience and strength of medical students is particularly crucial. Some researchers have improved students' stress resistance through methods such as autonomous training, progressive muscle relaxation, and Mindfulness-Based Stress Reduction (MBSR) [12][13][14][15][16][17]. These experiences may shed light on the direction of mental health education for Chinese medical students. Therefore, this study aims to propose an initiative to increase awareness of mental health education for medical graduate students by understanding the various pressures they face, mechanisms for reducing stress, and the acceptance of mental health education. It also suggests providing personalized psychological adjustment methods for this population and provides a reference for society, government, and universities in designing interventions to improve graduate students' mental health.
Method
The present study developed its interview guidelines based on positive psychology theory. Educational psychology is the scientific study of the fundamental principles governing teaching and learning within educational environments [18]. Positive psychology primarily aims to stimulate and strengthen individuals' actual and latent capabilities, leading to the development of positive personality traits [19]. These traits, in turn, help individuals adopt more effective coping strategies. We conducted semi-structured interviews with 22 medical postgraduate students of Chinese nationality who were enrolled in medical or related programs at six universities across three countries. Data collection, organization, and analysis were completed between November and December 2023. All study materials were reviewed by the Ethics Committee of Guangdong Provincial Hospital of Chinese Medicine, and all participants provided informed consent.
The researchers and participants
We formed a qualitative research team consisting of six members. The team included a university professor specializing in medical education, two doctors engaged in clinical psychology work, and three research postgraduate students (two in nursing and one in psychology). Initially, we extensively reviewed relevant literature on positive psychology theories to develop a draft of the semi-structured interview guidelines. The two clinical doctors conducted preliminary interviews using the draft guidelines with four eligible research postgraduate students and revised the guidelines accordingly. The revised guidelines were then handed over to the university professor for further modification, leading to the final version. The three research postgraduate students were responsible for data collection and organization, while the analysis stage was a collaborative effort involving the entire research team.
We employed convenience sampling to recruit participants for this study, ensuring a good rapport between interviewers and interviewees and enabling open and honest expression of their thoughts and feelings. The inclusion criteria for participants were as follows: (1) enrollment in a medical graduate program at a university or a comprehensive university with a medical major; (2) good communication skills, clear verbal expression, and absence of mental disorders; (3) informed consent and voluntary participation in the study. Exclusion criteria included individuals who had already graduated, taken a leave of absence, dropped out, or failed mid-term assessments. Of the 22 interviewees, 20 were enrolled in Chinese universities, while two were studying at foreign universities. Their ages ranged from 22 to 27 years; nine were male and 13 were female. All participants were pursuing master's degrees, and 15 (68.2%) came from middle-income families. Please refer to Tables 1 and 2 for specific details.
Data collection
Interviews were conducted with eligible participants using a pre-established semi-structured interview guide. Prior to the interviews, we explained to the participants the purpose and methodology of the study, assuring them that their privacy would be respected and their personal information would not be disclosed. Participant numbers instead of real names were used during the interviews. We also followed the principle of convenience for the participants, agreeing on suitable interview times and locations to ensure a confidential, quiet, and undisturbed environment throughout the interviews.
During the interviews, we obtained informed consent from the participants, and if any doubts or concerns were raised, we immediately halted the interview. We fully respected the participants' willingness to express themselves and refrained from evaluating the viewpoints they presented. Instead, we enriched the overall research process by using timely probes and follow-up questions. In order to capture accurate information, we recorded the conversations using two recording devices, while carefully observing the participants' facial expressions and body language.
Data analysis
Each interview was transcribed into a computer file within 24 hours of its conclusion. The interviewee and one researcher on the team then manually verified and reorganized the transcribed data to ensure that the interviewee's statements were not misunderstood or distorted. Two graduate students then used the Colaizzi seven-step analysis method [20,21] to analyze the reorganized interview data: (1) read the data repeatedly to fully understand the interviewee's statements; (2) identify meaningful statements word by word; (3) code recurring viewpoints; (4) collect codes to form common concepts. After completing these four steps, the two graduate students exchanged opinions and accepted or deleted the resulting codes and common concepts. In cases of significant disagreement, a third graduate student joined to make a joint decision, minimizing bias arising from the analysts' subjective intentions.
(5) Elaborate on the common concepts and incorporate typical descriptions provided by the interviewees. After the fifth step, the elaborated concepts and the interviewees' typical descriptions were reviewed by professors from relevant medical schools within the research team to counter the narrow analysis that could result from the analysts being graduate students themselves. (6) Construct themes. (7) Return the resulting themes to the interviewees to ensure the authenticity and accuracy of the results.
Results
According to the interviewees' statements, three themes emerged:
1) Sources of stress for medical graduate students: the interviewees highlighted various factors that contribute to their stress levels during their studies.
2) Ways to alleviate stress for medical graduate students: the interviewees shared helpful approaches they employ to manage and reduce stress in their lives.
3) The importance and necessity of mental health education: the interviewees emphasized the importance of mental health education in medical graduate programs to support students' well-being.
To maintain anonymity, each interviewee was assigned a unique number instead of disclosing their identities.
Sources of stress for medical graduate students
Medical graduate students face pressure from multiple sources. According to the interviews, they reported increased pressure compared to their previous stages, such as undergraduate studies or work. Additionally, different groups of students, based on academic year and admission method, experienced varying levels of stress, suggesting different layers of pressure.
Stress from economic concerns
The pressures faced by medical graduate students stem primarily from economic concerns, which can be attributed to two key factors. First, medical students experience a prolonged duration of study compared with their peers, resulting in delayed financial independence. While their same-aged counterparts may have already established themselves economically, medical students remain dependent on financial support, and this discrepancy creates significant stress. Second, the economic pressure experienced by medical students is further influenced by their family's financial situation. For those from financially constrained backgrounds, the burden of financial responsibilities and expectations can be particularly overwhelming, and balancing academic demands with financial obligations adds to the already demanding nature of medical education.
N2: Compared to my peers, they have already achieved basic financial independence and no longer rely on their parents for living expenses… As my parents get older, I feel more pressure than before.
N11: There will be financial pressure, and I really want to be able to earn my own money, because many of my friends have started working and earning salaries, which makes me anxious.
N22: I haven't graduated yet, and I estimate that it may take me another year to graduate, which means I will be close to 30 by then. The (financial) pressure is quite high; most of my peers have already bought cars and houses and gotten married, while my graduation seems far off.
Stress from academic studies
Academic pressures also significantly affect medical students, with sources of stress varying between academic years. In the lower academic years, students may experience stress due to a lack of clarity and direction regarding their research projects; this ambiguity can lead to uncertainty and anxiety about their future career path.
In contrast, students in the middle and upper academic years face the pressure of producing and publishing research papers. The successful completion of these papers is vital for their academic progress and can significantly affect their career prospects.
Stress from interpersonal relationships
In addition to the factors mentioned earlier, interpersonal relationships play a significant role in putting pressure on medical students, especially in their interactions with supervisors, peers, and clinical preceptors. The dynamics of these relationships can contribute to higher stress levels. One group that faces intensified interpersonal pressure is students transitioning from clinical practice to an academic setting. Compared with counterparts who have no prior clinical experience, these students often encounter more challenges in navigating interpersonal relationships. The increased pressure can stem from various sources, with the loss of personal privacy being a prominent factor: in a learning environment that requires close collaboration, feedback, and evaluation, personal boundaries are breached, leading to feelings of vulnerability and heightened stress.
The career development of medical graduate students is also a source of great concern and pressure. They often face the decision of choosing between working in clinical settings, pursuing further academic studies, or taking on non-clinical positions related to medicine. Our research indicates that nursing graduate students in particular show some aversion towards clinical work, which may stem from concerns about emotional burnout and job-related stress. These concerns are not unique to nursing students, however, and are likely shared by medical students in other disciplines. The findings highlight the need to support medical students in making informed career decisions while also prioritizing their personal well-being and self-care. Moreover, promoting the exploration of non-clinical opportunities and expanding the scope of medical training can offer valuable alternatives for medical students.
The COVID-19 pandemic has significantly impacted medical students, with school and hospital lockdowns and restrictions generating substantial pressure. Measures such as social distancing, personal protective equipment protocols, and limitations on clinical placements can cause significant disruption to medical education and clinical training. It is noteworthy, however, that the pandemic has also increased public recognition and appreciation of healthcare workers, including medical students. This recognition has the potential to strengthen medical students' sense of professional identity and purpose; the outpouring of support reinforces their connection to their career choice, motivating them to persevere despite the associated challenges.
Strategies for medical graduate students to cope with stress
Medical students often turn to exercise to relieve stress and express their emotions, while seeking few alternative methods. The stress-relief options currently available to them are limited, focusing predominantly on extroverted emotional release. Notably, during the interviews, none of the respondents mentioned using introverted strategies to enhance their psychological resilience and inner strength.
N1: (Ways to relieve stress) Talking to my advisor and exchanging ideas with senior colleagues in research. Additionally, relaxing activities like going back to the playground for a run, or listening to music and watching dramas.
N3: Facing the problem directly, even if not following the planned schedule, and gradually completing tasks. I also accept feedback from friends, reflect on myself, and might adopt their perspectives.
N6: Venting out emotions. There must be an outlet, not keeping everything inside.
N21: Currently, there may not be many ways to alleviate the situation, so it's essential to face the stress head-on. Dealing with graduation-related issues can indeed be urgent.
Mental health education of medical students
Medical students recognize the significance of mental health education for themselves as well as for diverse social groups and communities. Currently, psychological education for medical graduate students is delivered primarily through online campaigns, offline lectures, and the distribution of mental health handbooks. However, in both format and substance, it has not met expectations.
Sources of stress and the ways students cope with them
The essence of this research, conducted through semi-structured interviews, reveals that medical graduate students face various common sources of stress. Initially, transitioning from undergraduate to graduate status induces a sense of discomfort. Heinen's study identified that stress among medical students in their first year correlates with personal resources and emotions [22].
Furthermore, delayed financial independence compared with peers is a stressor, as most students rely solely on family support due to the demanding clinical and research workload [23]; this mirrors the situation in many Western countries. Su's study found that a good supervisor-graduate student relationship can enhance the positive impact of psychological capital on graduate students' professional commitment [24].
Meanwhile, this study explored the methods currently employed by medical graduate students to alleviate stress. Exercise is one common approach; it allows medical students to release inner emotions and improves both physical and mental well-being, consequently enhancing their daily academic and professional performance. Research indicates a dose-response relationship between lack of exercise and adverse mental health outcomes, including self-harm and suicide attempts, highlighting the necessity of promoting physical activity among university students [25]. Additionally, music serves as a common relaxation method for students; Linnemann's research found that music can alleviate daily stress for students, aligning with the positive direction advocated by positive psychology theories [26]. However, due to long working hours, these coping mechanisms cannot always be guaranteed, and the study suggests that only a minority of students can strengthen their inner psychological resilience through these strategies.
Psychological education is one of the crucial measures for promoting the mental health of medical graduate students [27], and it is achieved primarily through: (1) curriculum education; (2) regular lectures and workshops by experts or experienced physicians; (3) distribution of mental health materials both online and offline; (4) resources such as online videos and applications; and (5) personalized guidance tailored to individual personalities and needs [28]. Currently, psychological education for medical graduate students in China consists mainly of online activities, offline lectures, and the distribution of mental health handbooks. Most students hold a negative attitude towards the methods employed in school psychological education, perceiving it as primarily formalistic during the graduate stage, with limited practical significance.
Because funding for Chinese universities and research institutions comes mainly from national financial subsidies, attention and resources for students' mental health tend to be overshadowed by support for research and clinical work, making personalized guidance even more challenging. In fact, in the interaction between mentors and students, the relationship tends to be hierarchical rather than educational and guiding, neglecting the role and influence teachers should have. Mentors are primarily responsible for students' academic and personal development; their neglect, and the resulting self-isolation and reluctance of students to communicate, lead to a sense of disconnection between both parties, depriving students of a means to build a healthy psychological environment. Research indicates that whether mentors provide appropriate support can significantly affect students' academic output [29], which may be related to the incentives and pressures mentors face in their positions. Due to the heavy academic workload and limited free time, graduate students receive fewer mental health resources, contributing to the persistent and severe mental health issues among medical graduate students in China.
Current approaches to supporting students' mental health education
The psychological state is a constantly changing process, and relying solely on a single psychological test at the beginning of the school year to measure students' psychological state throughout the entire learning period is insufficient. Schools must increase their focus on students' mental health and treat it as being as important as academic performance. Extensive research should be conducted to identify the differences in the issues faced by graduate students at different stages, and this information should be used to determine timely psychological interventions at different time points.
The results of this study indicate that academic pressure, financial stress, and interpersonal relationship pressure are common stressors among medical graduate students. For academic pressure, pre-entry education is necessary to help students transition from undergraduate to graduate life and avoid confusion about their studies. Recommendations for introductory professional books and career development literature can therefore be made during graduate admissions, along with introductions to upcoming course plans and schedules. Additionally, in the "Internet Plus" era, early online meetings and collaborations with senior students can also help newcomers integrate into the new environment [30].
The study found that most graduate students rely on family financial support. Although China has implemented a standardized training system for physicians, graduate students fall into a blind spot of the system and currently cannot receive standardized training stipends, even though such stipends can only ensure basic subsistence for doctors. It is therefore necessary to improve the working conditions of medical graduate students [31]. In this study, interpersonal relationship stress mainly arises from the relationship between graduate students and their mentors. Mentors should pay more attention to students' mental health rather than only their academic performance and work. The graduate stage should be a process in which teachers and students jointly investigate scientific problems [32], rather than one focused solely on outcomes, which may lead teachers to care only about students' research results while neglecting the cultivation process.
There are clear differences in mental health issues among graduate students. Tailored psychological education for different groups, based on their characteristics, is the key to improving the current psychological health education for medical graduate students, rather than mere formalism. For special groups, such as those returning to campus from clinical work, assigning them to the same dormitory to unify their schedules as much as possible could be beneficial; although assigning separate dormitories could address privacy concerns, it may not be fair to other graduate students, and the premise of solving a problem is to minimize the emergence of new ones. For students who delay graduation, assigning a new graduation advisor could effectively reduce their psychological pressure.
Future measures to enhance students' mental health education
Research has found that psychological issues among graduate students, particularly in the field of medicine, are significant social concerns. Despite the challenges posed by academic, financial, and social pressures, these pressures are regarded as essential aspects of personal development, shaped by societal realities. However, external measures can only rectify issues after they arise, making it imperative for graduate students to cultivate strong resilience and coping mechanisms to mitigate negative emotions. Thus, enhancing proactive self-adaptation abilities among graduate students is crucial for stress alleviation [33].
Moreover, prioritizing prevention over treatment before graduate students' psychological issues evolve into mental illness underscores the importance of establishing robust mechanisms for early detection and intervention within educational institutions, supported by governmental policies and funding. Educational institutions should provide scientifically effective intervention models, including positive tiered interventions based on continuous feedback loops, fostering a culture of open dialogue and support for mental well-being. Integrating psychological education courses and practical resilience-building activities into existing curricula is essential. In addition, increasing parents' understanding of graduate students' mental health issues is important [34], enabling guardians to recognize the pressures faced by graduate students and, especially, to identify abnormal psychological and behavioral changes in their children; this is also an important force for the early detection and prevention of graduate students' mental health issues.
Building upon these recommendations, society should actively assume its corresponding responsibilities. Under government guidance, strengthening collaboration between universities and social institutions to establish a societal mental health counseling service system is crucial [35]. Establishing such a diverse collaborative network is a complex and winding process: building a comprehensive psychological education system for medical graduate students within universities requires understanding and tolerance from families, concrete actions from schools, and strong support from government and society.
Limitations
Although this study focuses on current medical master's students, their perspectives, as learners, may not fully account for practical feasibility in the real-world context. In subsequent research, we look forward to incorporating insights from educators involved in medical master's students' mental health education, to ensure that students' mental well-being is safeguarded as far as existing conditions allow, without compromising the well-being of other groups.
Several strategies for addressing mental health challenges among medical master's students are proposed in this study. Unfortunately, apart from enhancing students' psychological resilience, the other recommendations tend towards macro-level interventions. These findings require extensive empirical validation and cooperation from all stakeholders. In subsequent research, we aspire to adopt more specific measures, as only a wealth of testable ideas can gradually enrich the discourse surrounding mental health education for medical master's students.
Conclusion
This study underscores the urgent need for comprehensive mental health support for medical graduate students on Chinese campuses. The results reveal the prevalence of psychological stress within this group and the inadequacies of current mental health education methods. The themes identified, covering sources of stress, students' coping strategies, and perspectives on mental health education, provide valuable insights for addressing these challenges.
Looking forward, it is recommended that educational institutions develop proactive mental health interventions tailored to the diverse needs of medical students. This includes establishing positive tiered interventions based on continuous feedback loops, fostering open dialogue, and promoting a culture supportive of mental health. Additionally, integrating practical coping skills training and resilience-building activities into the curriculum can enable students to manage internal stressors more effectively.
Furthermore, future research should focus on evaluating the effectiveness of innovative mental health interventions, such as peer support programs and online mental health resources. Longitudinal studies tracking the mental health outcomes of medical students can offer valuable insights into the sustained impact of intervention strategies. By prioritizing mental health education and implementing evidence-based interventions, we can create a more supportive and resilient environment for medical graduate students, ultimately enhancing their academic success and professional development. This benefits not only individual students but also contributes to cultivating a healthier and more effective healthcare workforce.
Table 1
Overview of participants
Table 2
Overview of participants
ASSESSMENT OF THE APPLICATION OF THE NURSING PROCESS IN SURGICAL WARDS
The nursing process is a problem-solving framework that enables nursing staff to plan care for patients and clients on an individual or group basis. It requires nurses to be accountable for the care they prescribe and deliver, and to keep clear and accurate records of their discussions. Today, one's ability to use the nursing process is governed by the standards outlined for pre-registration nursing education. The present study aimed to assess the utilization of the nursing process in Mosul teaching hospitals. The sample consisted of (42) nurses selected randomly from surgical and emergency units at Mosul teaching hospitals. Data collection was carried out from 20-1-2019 until 20-2-2019 in the Ibn-Sina and AL-Jamhuori teaching hospitals. The results show statistically significant findings for all assessed elements of the application of the nursing process at the (0.05) level. In conclusion, the nursing process is very important for enhancing the quality of care, especially in the surgical field, and all elements of the nursing process were highly significant; furthermore, nursing staff cannot carry out patient management without the nursing process. The researcher therefore offers several recommendations: the nursing process case sheet should be placed in the patient's case folder so that the patient and his health status can be evaluated systematically; the nursing staff and health care team must be educated in how to use and update the nursing process; surgical wards should formulate a timetable for all clients to guide them about their scheduled care; and the head of the nursing staff, as an adviser, should be enrolled in conferences and education centers to discuss the implementation of the nursing process.
Introduction:
The nursing process is a problem-solving framework that enables nursing staff to plan their care for clients on an individual or group basis. This requires nurses to be accountable for the care they prescribe and deliver, and to keep clear and accurate records of their discussions (NMC, 2008).
Today, one's ability to use the nursing process is governed by the standards outlined for pre-registration nursing education (NMC, 2010). The nursing care plan should cover all aspects of the patient's needs, including physical, emotional, social, spiritual, and cultural needs; addressing these demands improves the nurse's ability to work within a systematic framework (DoH, 2000; NMC, 2010).
In addition, nursing records have been found to be of poor quality and to show little understanding of the nursing process (Darzi, 2008). Failure to keep a record of nursing care, or to use the nursing process, can lead to a breakdown in the quality of care provided.
Delivering standard-quality care to patients is based on patients' needs; hence the importance of applying a systematic plan, such as the nursing process, to the provision of nursing care, which can then be estimated within the time of care (Miller and Sanderson, 2000).
Flexibility
There has been some debate within the profession over the number of stages needed in the nursing process, some suggesting four and others five. With a four-stage approach, the nurse does not have time to reflect on the assessment data that have been collected and instead moves directly from assessment to planning (Adoma et al., 2005).
The nursing process makes the nursing staff responsible for formulating the care plan depending on the patient's needs and the nursing diagnosis (NMC, 2008).
This is important to remember in an effort to counteract any criticism surrounding who is ultimately responsible under a system of collective responsibility (Borgelt, 2003).
The critical thinking embedded in any care plan gives the nursing staff the flexibility to modify and update the patient's management, care, and interventions (Groen, 1995).
Experienced nurses can link patient problems and interventions together so that problems can be resolved more efficiently (Grobe, Drew, and Fonteyn, 1991).
The nursing process in intensive care units (ICU) is increasingly characterized by a heavy reliance on medical equipment. The variety of equipment is large, and innovations appear on the market continuously.
Due to these technological developments, the profession of intensive care nursing has changed. Nurses are increasingly required to conduct complex therapeutic and diagnostic procedures using this equipment (Effken, 1997).
Despite this increased functionality, the task of selecting and integrating the vast amount of data into diagnostic information remains the responsibility of the nurse, and deciding which actions should be taken is often done under time-critical circumstances. There is a high work pace, and the cumulative work pressure combined with shift work results in fatigue (Groen, 1995).
On top of this, there is an increasing demand for higher efficiency from medical and nursing staff (Kohn et al., 2000).
To minimize inexperience, training is crucial.
However, there is a lack of general training for nursing staff in the use of technology, as well as of adequate, task-specific training (Bogner, 1994).
Because the core system, the patient, follows its own logic, clinicians have to play two roles: collaborator with processes tending towards homeostasis, and saboteur of processes tending away from homeostasis (Miller, 2000).
Another important factor in the use of the nursing process in relation to the intended tasks is that nurses are responsible not only for device operation but also for the larger performance goals of the overall system. As a result, the system of people and artifacts evolves over time to produce generally successful performance, even if the usability of that system is poor (Moorhead et al., 2004).
Utilization of the nursing process shall be evident in review of the completed Community Health and Safety assessment for all admissions, transfers, long-term care stays, and significant changes in condition (Obradovich and Woods, 1996).
The shorter nursing assessment form shall be utilized by the nurse to document body checks, minor injuries, or returns from short hospitalizations with minor changes in condition. The short nursing assessment form has an area to document focused nursing note data (Miller et al., 2003).
Aim of the Study:
The present study aimed to assess the utilization of the nursing process in Mosul teaching hospitals.
Methodology
Ethical considerations
Official permission was first obtained from the clinical science department director (research counselor); based on that permission, the research team obtained a second official permission from the Nineveh health office to conduct the present study in the Ibn-Sina and Al-Jamhuori teaching hospitals.
Design of the study:
A descriptive study design was applied for the period from 20-1-2019 until 20-2-2019.
Setting of Study:
The current study was carried out in the Ibn-Sina and Al-Jamhuori teaching hospitals and the emergency care unit in Mosul. Patients coming to these hospitals suffer from general diseases and surgical problems, and the number of clients for both hospitals is approximately more than (150) patients per day.
Sample of Study:
The sample comprised (42) nurses, selected randomly for the first time from all nurses working in the surgical wards and emergency departments. The proportions of males and females differed according to the level of nursing education (bachelor's degree and diploma of nursing). The researchers set criteria for selecting the sample according to how the nursing process is applied in surgical wards.
Study Tool:
In the present study, the following steps were applied in accordance with quantitative nursing research. Table (2) indicates highly significant results for the assessment statements (P ≤ 0.05).
Table (3) shows the diagnosis statements.
Table (3) indicates the same highly significant results for the diagnosis statements (P ≤ 0.05).
Discussion
The nursing process as a model is very important for any healthcare program. The application of any care plan or management should be formulated based on a systematic nursing process. In this study, the researchers tried to investigate the ability of nursing staff to implement the nursing process and their concept of this approach (Dochterman and Bulechek, 2004).
Table (1) simply shows the main points of the demographic data. Regarding age, 40% belonged to the 18-27 and 28-37 year groups, indicating that the majority of the sample were of middle age and lacked experience. The number of males was higher than that of females, at about 36 nurses. With regard to work setting, the highest percentage was in intensive care units (14 nurses, 42%), while 23% worked in medical wards and 42% in surgical units.
With regard to working time, about 28 participants (66%) worked night shifts, while 42% worked morning duty.
When discussing the nursing process, one must start with nursing assessment as the first step and end with nursing evaluation, or the expected outcome.
Nursing Assessment:
Table (2): the nurses answered 55% of the questions related to taking the history of the disease, which shows that the process of assessment and diagnosis of the patient is entirely based on the doctor; 40% of nurses also confirmed that they only collect general data from the patient (Lin, 2002). The main target of any health care plan is to arrange the steps and processes of a care or management timetable; quality should interact with this process, reflecting positively on the health care system.
Nursing diagnosis:
Conclusions:
The conclusions of this study include the following:
1. The variation in the percentages of the sample with regard to their variables did not influence their knowledge.
2. The nursing process is very important for enhancing the quality of care, especially in the surgical field.
3. Highly significant results were found for all elements of the nursing process.
4. The nursing staff cannot carry out any management without the nursing process.
Recommendations:
Depending on the aims and objectives of this study, the researchers propose the following recommendations:
1. The nursing process case sheet should be placed in the patient's case folder, to evaluate the patient and his health status systematically.
2. The nursing staff and health care team must be educated on how to use the nursing process.
3. The surgical wards should formulate a timetable for all clients to guide them about their care as a schedule.
4. The head of the nursing staff, as an adviser, must be enrolled at every health center, in particular to discuss the implementation of the nursing process.
5. The positive and negative factors influencing the expected outcome should be assessed from time to time during evaluation of care.
The following steps were taken:
1. The literature review was searched using the following key words: emergency ward, surgical ward, nursing process, assessment, diagnosis, planning, intervention, and evaluation (expected outcome).
2. The questionnaire survey was formulated based on the application of the nursing process.
3. The initial draft was explored with experts to establish the validity of the tool.
4. The reliability of the instrument was obtained by carrying out assessment, diagnosis, planning, implementation, and evaluation of the nursing process with a sample of about (10) nurses (pre-post test).
Testing the reliability and validity of the tool:
Validity refers to the degree to which an instrument measures what it is supposed to measure; the researchers established face validity through experts. Reliability refers to the degree of consistency or accuracy with which an instrument measures an attribute: the higher the reliability of an instrument, the lower the amount of error present in the obtained scores. Several empirical methods assess various aspects of an instrument's reliability (Polit and Hungler, 1999).
Statistical analysis
Percentages were used for the description of the sample, using SPSS version 17.
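The frequency-and-percentage description of the sample (run in SPSS version 17 in the study) can be sketched in plain Python; the survey item and the response counts below are hypothetical, for illustration only:

```python
from collections import Counter

def item_percentages(responses):
    """Frequency and percentage of each answer for one survey item."""
    n = len(responses)
    return {answer: (count, round(100.0 * count / n, 1))
            for answer, count in Counter(responses).items()}

# Hypothetical answers from the 42 nurses to a single assessment item
answers = ["agree"] * 23 + ["neutral"] * 10 + ["disagree"] * 9
stats = item_percentages(answers)  # e.g. "agree" -> (23, 54.8)
```

The same count/percentage pairs are what the demographic tables report for age groups, work setting, and shift patterns.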
Males comprised 85% of the sample. With regard to training programs for the nursing process, half of the sample (20 nurses, 50%) had not enrolled in any educational or training program. The same result held for duration of service: 50% had less than 5 years of service, while just 6 nurses had more than 10 years. Regarding educational level, about 42% came from university education (nursing college), 1 nurse was a postgraduate, and 30% came from an institute.
Table (1) shows the demographic characteristics of the sample.
Table (2) reveals the assessment statements.
Table (4) demonstrates the planning statements.
Table (4) shows the same highly significant results for the planning statements (P ≤ 0.05).
Table (6) demonstrates the evaluation statements.
Table (5) presents the intervention statements.
Deletion of ENTPD3 does not impair nucleotide hydrolysis in primary somatosensory neurons or spinal cord [version 1; peer review: 2 approved]
Ectonucleotidases are membrane-bound or secreted proteins that hydrolyze extracellular nucleotides. Recently, we identified three ectonucleotidases that hydrolyze extracellular adenosine 5′-monophosphate (AMP) to adenosine in primary somatosensory neurons. Currently, it is unclear which ectonucleotidases hydrolyze ATP and ADP in these neurons. Ectonucleoside triphosphate diphosphohydrolases (ENTPDs) comprise a class of enzymes that dephosphorylate extracellular ATP and ADP. Here, we found that ENTPD3 (also known as NTPDase3 or CD39L3) was located in nociceptive and non-nociceptive neurons of the dorsal root ganglion (DRG), in the dorsal horn of the spinal cord, and in free nerve endings in the skin. To determine if ENTPD3 contributes directly to ATP and ADP hydrolysis in these tissues, we generated and characterized an Entpd3 knockout mouse. This mouse lacks ENTPD3 protein in all tissues examined, including the DRG, spinal cord, skin, and bladder. However, DRG and spinal cord tissues from Entpd3-/- mice showed no reduction in histochemical staining when ATP, ADP, AMP, or UTP were used as substrates. Additionally, using fast-scan cyclic voltammetry (FSCV), adenosine production was not impaired in the dorsal spinal cord of Entpd3-/- mice when the substrate ADP was applied. Further, Entpd3-/- mice did not differ in nociceptive behaviors when compared to wild-type mice, although Entpd3-/- mice showed a modest reduction in β-alanine-mediated itch. Taken together, our data indicate that deletion of Entpd3 does not impair ATP or ADP hydrolysis in primary somatosensory neurons or in dorsal spinal cord. Moreover, our data suggest there could be multiple ectonucleotidases that act redundantly to hydrolyze nucleotides in these regions of the nervous system.
In the ENTPD family, four members (ENTPD1, -2, -3, and -8) are membrane-bound enzymes that hydrolyze extracellular ATP and ADP (Robson et al., 2006). ENTPD1, -2, and -3 are expressed throughout the central nervous system and display different preferences and kinetics for each nucleotide substrate (Kukulski et al., 2005; Langer et al., 2007). The hydrolysis of ATP by ENTPD1 results in an increase in AMP levels, suggesting ENTPD1 rapidly hydrolyzes ATP and ADP substrates, whereas ENTPD2 preferentially dephosphorylates ATP, resulting in a buildup of extracellular ADP (Figure 1). In contrast, ENTPD3 displays an intermediate activity between ENTPD1 and -2, showing rapid hydrolysis of ATP and transient increases in ADP before conversion into AMP (Kukulski et al., 2005). ENTPD1, -2, and -3 are expressed at similar levels in different cell types of the DRG and spinal cord (Rozisky et al., 2010; Vongtau et al., 2011). Specifically, ENTPD1 is primarily expressed in blood vessels, ENTPD2 is primarily expressed in glial cells, including satellite cells and non-myelinating Schwann cells, and ENTPD3 is preferentially expressed in DRG neurons and their central and peripheral projections (Braun et al., 2004; Vongtau et al., 2011). Further, ENTPD3 co-localizes with markers of nociceptive neurons, such as TRPV1, NT5E, and IB4-binding (Vongtau et al., 2011). These findings suggested that ENTPD3 might contribute to ATP and ADP hydrolysis in nociceptive neurons (Vongtau et al., 2011).
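The ATP → ADP → AMP cascade and the ENTPD1/ENTPD2 kinetic contrast described above can be illustrated with a toy first-order model; the rate constants below are arbitrary illustrations, not measured enzyme kinetics:

```python
def hydrolysis(k_atp, k_adp, atp0=1.0, dt=0.01, steps=1000):
    """Euler integration of sequential first-order hydrolysis
    ATP -> ADP -> AMP; returns the peak transient ADP level."""
    atp, adp = atp0, 0.0
    peak_adp = 0.0
    for _ in range(steps):
        d_atp = -k_atp * atp * dt                  # ATP consumed
        d_adp = (k_atp * atp - k_adp * adp) * dt   # ADP produced, then consumed
        atp += d_atp
        adp += d_adp
        peak_adp = max(peak_adp, adp)
    return peak_adp

# ENTPD2-like (fast ATP->ADP, slow ADP->AMP): large transient ADP buildup.
# ENTPD1-like (both steps fast): ADP is consumed almost as fast as it forms.
entpd2_like = hydrolysis(k_atp=5.0, k_adp=0.5)
entpd1_like = hydrolysis(k_atp=5.0, k_adp=5.0)
```

With a slow second step, ADP transiently accumulates to most of the starting ATP pool; with equal fast steps, the peak ADP level is much lower, matching the qualitative contrast in Figure 1.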
To study the contribution of ENTPD3 to ATP and ADP hydrolysis in nociceptive and non-nociceptive neurons in the DRG, we generated a knockout mouse that globally lacked ENTPD3 protein. As part of these studies, we performed immunohistochemical experiments to determine which subsets of DRG neurons expressed ENTPD3 and how loss of ENTPD3 altered nucleotide hydrolysis and nociceptive behaviors. Fast-scan cyclic voltammetry (FSCV) was used to examine adenosine generation in wild-type (WT) and Entpd3-/- mice. We found no significant differences between WT and Entpd3-/- mice in assays of ectonucleotidase function or in nociceptive behavioral assays, suggesting that additional enzymes are involved in the hydrolysis of ATP and ADP in nociceptive and non-nociceptive neurons.
Methods
Animal care and use
All vertebrate animals and procedures used in this study were approved by the Institutional Animal Care and Use Committee at the University of North Carolina at Chapel Hill. Mice were maintained on a 12 h:12 h light:dark cycle, were given food (Harlan 2920X) and water ad libitum, and were tested during the light phase. Mice were acclimated to the testing room, equipment and experimenter 1-3 days prior to testing.
Molecular biology and knockout mouse generation
Recombineering was used to generate the Entpd3 targeting arms from a 129S7/SvEv-derived bacterial artificial chromosome (BAC; bMQ-111o06; CHORI). The start codon, located in exon 2 (Lavoie et al., 2004), was replaced with an AscI site to facilitate cloning of AscI-LoxP-EGFPf-3xpA-LoxP-DTR-pA-Frt-PGK-NeoR-Frt-AscI. EGFPf = farnesylated enhanced GFP (Zylka et al., 2005); DTR = human diphtheria toxin receptor (Saito et al., 2001). Use of this construct for axonal tracing and cell ablation of calcitonin gene-related peptide (CGRP)-expressing DRG neurons was previously described (McCoy et al., 2013; McCoy et al., 2012). Correct targeting was confirmed in 5.2% of all embryonic stem cell clones by Southern blotting using flanking 5' and 3' probes and a NeoR internal probe. High percentage chimeras were crossed to C57BL/6 females to establish germline transmission and then crossed to PGK1-FLPo mice [B6(C3)-Tg(Pgk1-FLPo)10Sykr/J, Jackson Laboratory] to remove the Frt-flanked selection cassette (confirmed by PCR). Mice were backcrossed to C57BL/6 mice for eight generations to remove the PGK1-FLPo allele (confirmed by PCR) and establish the Entpd3-/- line. Note, the knocked-in GFP was undetectable in DRG and spinal cord neurons of the Entpd3-/- line.
Figure 1. Ectonucleotidases, their substrates, and products. Several ectonucleotidases, depicted here, have been shown to hydrolyze adenosine-containing extracellular nucleotides such as ATP in a stepwise process into adenosine.
Tissue collection and preparation for histology
Hindpaw skin (glabrous and hairy), lumbar DRGs, and spinal cords were removed from male mice (n=3; ~10 weeks old) following decapitation, and immersion-fixed in cold 4% paraformaldehyde in 0.1 M phosphate buffer, pH 7.4, for 3 h, 4 h, and 8 h, respectively, and then cryoprotected in 30% sucrose in 0.1 M phosphate buffer at 4°C. DRGs were sectioned at 20 μm and collected on SuperFrost Plus slides; spinal cords and hindpaw skin were sectioned at 30 μm and 60 μm, respectively, and collected in PBS or a cryoprotectant solution containing PBS, ethylene glycol, and glycerol for long-term storage at -20°C.
Histochemistry
Enzyme histochemistry was performed as described previously (Zylka et al., 2008) with a few modifications. Sections of DRG and spinal cord from 3 WT and 3 Entpd3-/- mice were incubated with a given concentration of a nucleotide (AMP, 6 mM for DRG, 3 mM for spinal cord; ADP, 1 mM for DRG and spinal cord; ATP, 0.2 mM for DRG and spinal cord; UTP, 0.2 mM for spinal cord) in Trizma-maleate buffer containing 20 mM MgCl2, pH 7.0, and 2.4 mM lead nitrate for 3 h at room temperature. For some experiments, we included (in the rinse and substrate incubation steps) 10 mM levamisole to block alkaline phosphatase activity, 5 mM ouabain to block Na+/K+-ATPases, a combination of levamisole and ouabain, or 0.1-1.0 mM ARL67156 (N-diethyl-D-β,γ-dibromomethylene ATP) to nonselectively block ENTPD enzymes. All reagents were purchased from Sigma.
FSCV monitoring of adenosine was performed as previously reported (Street et al., 2011), with the major difference being 100 μM ADP was used as the nucleotide substrate. Briefly, a disk-shaped carbon fiber microelectrode (Amoco) was inserted (Cahill et al., 1996), with the disk facing downwards, into the superficial dorsal horn. The potential of the microelectrode was scanned linearly at 400 V/s from -0.4 V to 1.5 V and back again once every 100 ms and was held at -0.4 V otherwise (all potentials versus Ag/AgCl). A micropipette inserted approximately 100 μm from the microelectrode was used to pressure-eject a bolus of 100 μM ADP using a Picospritzer ® III (Parker Instrumentation, Pinebrook, NJ) (ejection parameters: 1 s, 20 PSI). The current was recorded for 5 ejections, 5 minutes apart, at the same location in each sample to obtain a mean response. The current was processed, as previously described (Street et al., 2011), using the background subtracted current at the voltammetric peak at ~1.0 V potential, which has been shown to be sensitive to adenosine and not to nucleotides, such as ATP, ADP, and AMP (Swamy & Venton, 2007).
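The scan waveform described above (a triangular ramp from -0.4 V to 1.5 V and back at 400 V/s, repeated every 100 ms) can be sketched as follows; the function and parameter names are our own, not from the study's software:

```python
def fscv_waveform(v_hold=-0.4, v_peak=1.5, scan_rate=400.0, dt=1e-5):
    """Triangular FSCV voltage ramp (volts): v_hold -> v_peak -> v_hold,
    sampled every dt seconds at the given scan rate (V/s)."""
    n = round((v_peak - v_hold) / scan_rate / dt)  # samples per ramp leg
    up = [v_hold + scan_rate * dt * i for i in range(n + 1)]
    return up + up[-2::-1]  # retrace back down without repeating the apex

wave = fscv_waveform()
# One full scan covers 2 * (1.9 V / 400 V/s) = 9.5 ms; repeated every 100 ms,
# the electrode rests at the -0.4 V holding potential for ~90 ms in between.
```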
Behavioral assays
For all behavioral assays, ~3 month-old male WT (n=10) and Entpd3-/- (n=10; all mice weighing ~26 g) mice were tested in each assay. Mice were acclimated to handling, testing rooms and facilities prior to testing, and the experimenter was blinded to the genotype of each animal. Heat sensitivity was measured by heating each hindpaw once per day using the Plantar Test apparatus (IITC) with a cut-off time of 20 s. For the tail immersion assay, each mouse was gently restrained in a towel, and the distal one-third of the tail was immersed into a water bath heated to 46.5°C or 49°C or into 75% ethanol cooled to -10°C (Wang et al., 1995). The latency to flick or withdraw the tail was measured once per mouse. The cutoff was set at 40 s, 30 s, and 60 s, respectively. For the hot plate test, the latency to jump, shake, or lick a hindpaw was measured within a 30 s cut-off time. To determine mechanical sensitivity, we used an electronic von Frey apparatus (IITC) with semi-flexible tips. Two measurements from each hindpaw were taken and averaged to determine the paw withdrawal threshold in grams. The tail clip assay (noxious mechanical) and cotton swab assay (innocuous mechanical) were performed as described (Garrison et al., 2012; Lariviere et al., 2002). For the acetone test (Bautista et al., 2007), each mouse was placed into a Plexiglas chamber with a wire mesh floor, 50 μL of acetone was placed onto the left hindpaw, and the time spent licking was measured for 1 minute. The cold plantar assay was performed with mice resting on the glass surface of the Plantar Test apparatus (IITC) (Brenner et al., 2012). For the two-temperature discrimination assay, each mouse was placed into a Plexiglas chamber covering two metal surfaces that could be set at different temperatures (Bautista et al., 2007; Dhaka et al., 2007). The amount of time mice spent on each side over a 10 minute period was recorded.
Hot and cold sensitivity was assessed on a metal plate heated/cooled to a range of temperatures (5-55°C), with a cut-off time of 30 s, as described (Gentry et al., 2010). For measuring itch responses, histamine (10 μg/μL), chloroquine (CQ; 4 μg/μL) or β-alanine (20 μg/μL) dissolved in 0.9% saline was injected subcutaneously into the nape of the neck (50 μL injection volume). The number of scratching bouts was measured for 30 minutes in 5 minute blocks. One bout consisted of a set of scratches at the injection site until the hindpaw was either licked or placed onto the floor. For the water repulsion assay (Westerberg et al., 2004), the mouse was immersed in a 37°C water bath for 2 min. The mouse was removed from the water and placed onto a paper towel for 5 s, then weight and rectal temperature (deep body temperature, Tb, measured using a digital thermometer, Acorn Temp TC Thermocouple) were measured every 5 min for 60 min. The Complete Freund's adjuvant (CFA) model of inflammatory pain and the lysophosphatidic acid (LPA) model of neuropathic pain were performed as described (Sowa et al., 2010a;Zylka et al., 2008). Twenty microliters of CFA was injected into the left hindpaw centrally beneath the glabrous skin, and 5 nmol of LPA was administered intrathecally.
Data analysis
Data analysis was performed in Excel (version 2010) using t-tests for all behavioral studies and cell counts, with all graphs created in GraphPad Prism. The FSCV data were analyzed using the analysis portion of the freely available software HDCV (Version 4). The software is available for download from: http://www.chem.unc.edu/facilities/index.html?display=electronics&content=software. Average peak currents from the FSCV data were compared using a paired t-test. Significance was determined as p ≤ 0.05.
ENTPD3 colocalizes with nociceptive and non-nociceptive neuronal markers in DRG
ENTPD3 is expressed throughout the nervous system, including nociceptive neurons (Belcher et al., 2006; Langer et al., 2007; Vongtau et al., 2011). To determine which subsets of lumbar DRG neurons expressed ENTPD3, we immunostained for ENTPD3 and markers of nociceptive and non-nociceptive neurons. As previously reported (Vongtau et al., 2011), most DRG neurons, including small-, medium-, and large-diameter neurons, showed some level of staining for ENTPD3 (Figure 2). For colocalization studies, we assessed only those neurons that were stained moderately to strongly for ENTPD3. All cells that expressed ENTPD3 also expressed NeuN, recapitulating previous results showing that ENTPD3 was primarily associated with neuronal cell types (Figure 2A-C, Table 1) (Belcher et al., 2006; Langer et al., 2007; Vongtau et al., 2011). Conversely, 56.8% of all DRG neurons (identified by NeuN expression) labeled for ENTPD3 (Figure 2A-C, Table 1). PAP, a marker of nonpeptidergic and some peptidergic nociceptive neurons, was extensively colocalized with ENTPD3: the majority (72.7%) of DRG neurons expressing PAP also expressed ENTPD3, while almost half (43.5%) of all ENTPD3+ neurons expressed PAP (Figure 2D-F, Table 1). These results were similar to those found by Vongtau and co-workers, who reported that 97% of IB4-binding nonpeptidergic DRG neurons expressed ENTPD3 (Vongtau et al., 2011).
NF200, a marker for large-diameter, non-nociceptive neurons and smaller, thinly myelinated (Aδ) nociceptive neurons, colocalized with ENTPD3 (Figure 2G-I, Table 1), suggesting that ENTPD3 was expressed by some non-nociceptive neurons. Finally, an antibody to CGRP was used to identify peptidergic neurons (Figure 2J-L). Of CGRP-expressing neurons, 48.7% were also positive for ENTPD3 (Table 1). Thus, our results indicate that ENTPD3 is expressed in nociceptive and non-nociceptive neurons of the DRG.
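The colocalization percentages in Table 1 are simple ratios of double-labeled cell counts; the sketch below uses illustrative counts chosen only to reproduce the reported PAP/ENTPD3 overlap, not the study's raw data:

```python
def coloc_percentages(n_marker, n_entpd3, n_double):
    """Percent of marker+ neurons that are ENTPD3+, and percent of
    ENTPD3+ neurons that are marker+, from raw double-label counts."""
    return (round(100.0 * n_double / n_marker, 1),
            round(100.0 * n_double / n_entpd3, 1))

# Hypothetical counts that reproduce the PAP/ENTPD3 figures quoted above
# (72.7% of PAP+ neurons are ENTPD3+; 43.5% of ENTPD3+ neurons are PAP+)
pap_overlap = coloc_percentages(n_marker=300, n_entpd3=501, n_double=218)
```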
ENTPD3 expression in spinal dorsal horn
We also immunostained lumbar spinal cord sections to ascertain where ENTPD3 was located in the dorsal horn, the spinal region where axons of nociceptive and non-nociceptive sensory neurons terminate. ENTPD3+ nerve terminals were located primarily in lamina II, where IB4 terminals are located (Figure 3A-D,I), consistent with a previous report (Vongtau et al., 2011). ENTPD3+ terminals also extended dorsally into lamina I, an area occupied by CGRP+ terminals (Figure 3E,G,I), and ventrally into lamina III, an area with Protein Kinase Cγ (PKCγ)-expressing spinal neurons (Figure 3F,H,J). We also observed small ENTPD3+ spinal neurons in laminae I, II, and III (Figure 3A,B,G,H), as was reported by Vongtau and co-workers (Vongtau et al., 2011). This localization pattern in spinal laminae and spinal neurons suggests that ENTPD3 might hydrolyze extracellular nucleotides in spinal pathways devoted to nociception and somatosensation.
Generation and characterization of an Entpd3-/- mouse
To assess the extent to which ENTPD3 was necessary for extracellular nucleotide hydrolysis, we disrupted the Entpd3 gene by knocking a LoxP-flanked GFP construct into the start codon of ENTPD3 (Figure 4A). Expression of GFP was not detectable in DRG or spinal cord even when amplified with antibodies against GFP (image not shown). We were thus unable to use GFP to mark cells that expressed Entpd3. Using immunoblotting, we detected ENTPD3 protein in DRG and bladder (tissues known to express high levels of ENTPD3 (Vongtau et al., 2011; Yu et al., 2011)) from WT mice, but no ENTPD3 protein was detectable in tissues from Entpd3-/- mice (Figure 4B). These results confirmed that ENTPD3 protein was eliminated in our knockout line and that the antibody we used was specific for ENTPD3. We also immunohistochemically stained DRG, spinal cord, and hindpaw skin of WT and Entpd3-/- mice. We found that lumbar DRG sections from WT mice showed neuronal staining characteristic of ENTPD3, whereas sections from Entpd3-/- mice showed no staining (Figure 4C,F). Similarly, sections of lumbar spinal cord and hindpaw skin from Entpd3-/- mice showed none of the ENTPD3+ neural profiles observed in WT spinal cord and hindpaw skin (Figure 4D-E,G-H). Mice lacking ENTPD3 produced normal-sized litters (5-9 pups/litter) and had normal weights relative to WT mice (at 3 months, ~26 g WT; ~27 g Entpd3-/-).
Next, we used immunohistochemistry to determine if primary somatosensory neurons or axon terminals were affected by deletion of Entpd3. In DRG, the number of neurons that expressed nociceptive and non-nociceptive markers was not changed, with the exception of a small but statistically significant decrease in the number of neurons expressing NT5E (Table 2). In WT mice, 35% of DRG neurons expressed NT5E, but in Entpd3-/- animals this percentage was reduced to 30.5% (Table 2). We also used immunohistochemistry to assess whether the spinal dorsal horn of Entpd3-/- mice exhibited altered organization in comparison with that of WT animals. The laminar organization in the dorsal spinal cord of Entpd3-/- mice, as revealed by staining for CGRP and PKCγ and binding of IB4, was indistinguishable from that of WT mice (Figure 5), suggesting that there was no alteration in the organization of primary afferents or spinal neurons in the dorsal horns of mice that lack ENTPD3.
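Group comparisons like the NT5E cell-count difference above were made with t-tests (see Data analysis; the study used Excel). A minimal pure-Python sketch of one common variant, Welch's t statistic, with entirely hypothetical latencies:

```python
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Hypothetical hot-plate latencies (s) for n=10 WT and n=10 knockout mice
wt = [8.1, 7.9, 8.4, 8.0, 7.7, 8.2, 8.3, 7.8, 8.0, 8.1]
ko = [8.0, 8.2, 7.9, 8.1, 7.8, 8.3, 8.0, 7.9, 8.2, 8.1]
t_stat = welch_t(wt, ko)  # near zero: no evidence of a genotype difference
```

In practice one would use a library routine (e.g. `scipy.stats.ttest_ind` with `equal_var=False`) to also obtain the p-value compared against the p ≤ 0.05 threshold.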
Finally, to determine if cutaneous innervation was altered in Entpd3-/- mice, we co-stained sections of glabrous and hairy skin of WT and Entpd3-/- mice with antibodies to ENTPD3 and PGP9.5, a pan-neuronal marker (Figure 6). ENTPD3 marked most PGP9.5+ epidermal free nerve endings in hairy and glabrous skin as well as Meissner corpuscles and Merkel cells in volar pads (Figure 6A-F). These findings were similar to the previously reported staining pattern of ENTPD3 in skin sections (Vongtau et al., 2011). Sections of skin from Entpd3-/- mice lacked all ENTPD3 staining. Expression of PGP9.5 was retained, revealing no differences in the density or structure of free nerve endings, Meissner corpuscles, and Merkel cells in Entpd3-/- mice compared to those observed in skin from WT mice (Figure 6G-L). Thus, cutaneous innervation was not altered by the loss of ENTPD3. Further, nerve fibers co-expressing ENTPD3 and PGP9.5 were found on blood vessels in the dermis and deep dermis of the hindpaw (image not shown). There was no difference in the density of innervation of blood vessels (as revealed by PGP9.5 immunostaining) between WT and Entpd3-/- mice (image not shown). Taken together, these results suggest that, with the exception of a small decrease in NT5E staining in DRG neurons, deletion of Entpd3 did not affect afferents in the skin, DRG neurons, or primary somatosensory afferents in the dorsal spinal cord.
Entpd3-/- mice do not exhibit deficits in nucleotide hydrolysis or adenosine generation
We previously reported that AMP hydrolysis in the DRG and dorsal spinal cord was redundantly carried out by three ectonucleotidases, PAP, NT5E, and TNAP (Street et al., 2013). However, the enzymes that contribute to ATP and ADP hydrolysis in DRG and spinal cord have not yet been fully characterized. To determine if ENTPD3 contributed to nucleotide hydrolysis in DRG, we performed histochemistry at a neutral pH (7.0) on DRG sections from WT and Entpd3-/- mice using the indicated nucleotides (Figure 7). AMP histochemical staining was found in cell bodies of small- and medium-diameter neurons (Figure 7A,D; where PAP and NT5E are located); ADP histochemical staining was strongest in blood vessels (where ENTPD1 is located) and on the membrane of most neurons (Figure 7B,E); and ATP histochemical staining was present on blood vessels and the cell membrane of most neurons (Figure 7C,F). These staining patterns matched what was previously seen in DRG sections from WT mice (Sowa et al., 2010b; Street et al., 2011; Vongtau et al., 2011; Zylka et al., 2008).
When comparing staining between WT and Entpd3 -/-DRGs, we saw no difference in AMP histochemical staining ( Figure 7A,D), consistent with the fact that AMP is not a substrate for ENTPD3 (Ciancaglini et al., 2010). Surprisingly however, there were also no differences in histochemical staining between WT and Entpd3 -/-DRGs when ADP or ATP was used as substrates ( Figure 7B-C,E-F). These data suggest either that ENTPD3 does not hydrolyze these nucleotides in DRG or that other ADP-and ATP-hydrolyzing ectonucleotidases are present and function redundantly with ENTPD3. To determine if ENTPD3 hydrolyzed ADP and ATP redundantly with alkaline phosphatases at pH 7.0, we inhibited alkaline phosphatase activity in histochemical experiments with levamisole (10 mM). However, we observed no difference in staining between WT and Entpd3 -/-DRGs in the presence of levamisole (image not shown). These data suggest DRG neurons contain additional ectonucleotidases besides TNAP and ENTPD3 that hydrolyze ATP and ADP at neutral pH. production of adenosine ( Figure 9C). We saw no significant differences in adenosine generation from ADP between spinal cord slices of WT and Entpd3 -/mice.
These FSCV results, when combined with enzyme histochemistry results, suggest that there are multiple ectonucleotidases that function redundantly to dephosphorylate ATP and ADP in DRG and superficial dorsal horn. Determining the molecular identities of these enzymes will require future studies with additional ectonucleotidase knockout mice and pharmacological inhibitors. Intriguingly, a redundant group of enzymes mediates AMP hydrolysis in the spinal cord, as PAP, NT5E, and TNAP must all be inhibited to completely block the generation of adenosine from AMP (Street et al., 2013). Likewise, TNAP can fully compensate for the loss of NT5E and generate adenosine from nucleotides in the hippocampus (Zhang et al., 2012).
Nociceptive behaviors are not impaired in Entpd3 -/mice Given the high expression of ENTPD3 in nociceptive neurons, we examined whether loss of ENTPD3 affected nociceptive-related behaviors by testing heat, cold, mechanical, and itch sensation (Table 3). In tests of heat sensitivity, there was no difference between WT and Entpd3 -/mice in the tail immersion assay (46.5°C or 49°C; Table 3). Similarly, there was no difference in withdrawal latency in the hot plate test (Table 3). There was also no difference in responses between WT and Entpd3 -/mice in any of the cold assays (acetone evaporative cooling, cold tail immersion at -10°C, or cold plantar; Table 3). To further validate our thermal data, we used a hindpaw withdrawal assay (Gentry et al., 2010) that measures sensitivity to temperatures ranging from noxious cold to noxious hot ( Figure 10A). No difference was found between WT and Entpd3 -/mice at any temperature. We also examined responses to mechanical stimuli and observed no difference between WT and Entpd3 -/mice in noxious mechanical (tail clip) and innocuous mechanical (cotton swab) assays (Table 3).
To determine if loss of ENTPD3 affected itch, we injected pruritogens (histamine, chloroquine, β-alanine) into the nape of the neck and quantified scratching responses. Histamine- and chloroquine-mediated itch were not altered in Entpd3-/- mice compared to WT mice, but there was a statistically significant reduction (a decrease of 34%) in β-alanine-mediated itch (Table 3). β-alanine activates the Mas-related G-protein-coupled receptor D (MRGPRD) in nonpeptidergic nociceptive neurons (Liu et al., 2012; Rau et al., 2009; Shinohara et al., 2004). Therefore, it is possible that loss of ENTPD3 affects nonpeptidergic DRG neurons. When taken together, these data suggest that ENTPD3 does not play a widespread role in regulating sensitivity to noxious or innocuous somatosensory stimuli.

We also found that enzyme histochemical staining was equivalent in the superficial dorsal spinal cord of WT and Entpd3-/- mice when the indicated nucleotides were used as substrates (Figure 8).

To determine if other enzymes contributed to histochemical staining in the dorsal spinal cord when ATP and UTP (0.2 mM) were used as substrates, we used levamisole to block activity of alkaline phosphatases (10 mM), ouabain to block activity of Na+/K+-ATPase (5 mM), and ARL67156 (0.1 and 1 mM), an inhibitor of ENTPD1 and ENTPD3 (Levesque et al., 2007). The addition of these inhibitors did not result in any change in the staining intensity or pattern in the superficial dorsal horn of WT mice relative to Entpd3-/- mice, but adding ARL67156 caused a near-complete loss of histochemical staining in microglia in the spinal gray matter in both genotypes, presumably because of blockade of ENTPD1 activity (Braun et al., 2000) (image not shown). Vongtau et al. also tested various inhibitors (ouabain, levamisole, and sodium azide) to block Na+/K+-ATPase, alkaline phosphatase, and ENTPD1 activity, respectively (Vongtau et al., 2011). They found that none of these inhibitors affected ATP or UTP hydrolysis in the spinal cord and concluded that ENTPD3 might be responsible for the remaining staining. Our study demonstrates that the level of nucleotide histochemical staining was the same in Entpd3-/- mice in the presence of ouabain and levamisole plus an ENTPD1/3 inhibitor (ARL67156), suggesting that one or more enzymes other than ENTPD3 are present that hydrolyze nucleotides in the spinal cord.

Enzyme histochemistry detects phosphate that is produced following nucleotide hydrolysis. As an alternative, we used FSCV to quantify adenosine production upon nucleotide hydrolysis in spinal cord slices of WT and Entpd3-/- mice. As previously reported, FSCV can be used to detect adenosine based on characteristic oxidation voltages at 1.0 and 1.5 V (Swamy & Venton, 2007). We applied 100 mM ADP to lamina II and then measured adenosine production at the tip of a carbon-fiber microelectrode (Street et al., 2011). Application of ADP led to the generation of adenosine in WT and Entpd3-/- mice, detected as an increase in measured current at oxidation voltages of 1.0 and 1.5 V (Figure 9A-B). Currents at 1.0 V were then converted to adenosine concentration. We then compared the peak adenosine concentration in WT and Entpd3-/- mice (n=5 slices/genotype) to determine if mice lacking ENTPD3 had any deficit in the generation of adenosine from ADP; no deficit was observed.
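The conversion step described above (current at the adenosine oxidation potential divided by a calibration slope) can be illustrated with a short sketch. The calibration slope and the current trace below are hypothetical values for illustration, not data from this study; in practice the slope would come from in vitro calibration with adenosine standards of known concentration.

```python
# Sketch: convert FSCV oxidation currents (nA) measured at 1.0 V into adenosine
# concentrations (uM) via a linear calibration. Slope values here are hypothetical.

def current_to_concentration(current_na, slope_na_per_um):
    """Linear FSCV calibration: concentration (uM) = current (nA) / slope (nA/uM)."""
    if slope_na_per_um <= 0:
        raise ValueError("calibration slope must be positive")
    return current_na / slope_na_per_um

def peak_concentration(currents_na, slope_na_per_um):
    """Peak adenosine concentration over a recorded current trace."""
    return max(current_to_concentration(i, slope_na_per_um) for i in currents_na)

# Illustrative trace (nA): baseline, transient rise after ADP application, decay.
trace_na = [0.0, 1.2, 4.8, 9.6, 7.1, 3.0, 0.5]
peak_um = peak_concentration(trace_na, slope_na_per_um=2.4)  # hypothetical slope
```

Peak concentrations computed this way for each slice could then be compared across genotypes, as done in the study.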
Temperature discrimination and thermoregulation are not impaired in Entpd3-/- mice

We next tested WT and Entpd3-/- mice in a two-temperature discrimination assay. In this assay, the amount of time spent in chambers with equal or different floor temperatures is quantified. Four temperature pairs were evaluated (25°C versus 25°C, 25°C versus 30°C, 20°C versus 30°C, and 30°C versus 40°C). There were no significant differences between WT and Entpd3-/- mice at any of the tested temperature pairs (Figure 10B). These data indicate that temperature discrimination is not impaired in Entpd3-/- mice.
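The readout of this two-plate assay is simply the time spent on each floor; a preference index makes the comparison explicit. A minimal sketch with invented numbers (not data from this study):

```python
def preference_index(time_on_test_s, time_on_reference_s):
    """Fraction of total time spent on the test-temperature floor.
    ~0.5 indicates no discrimination between the two floors;
    values below 0.5 indicate avoidance of the test temperature."""
    total_s = time_on_test_s + time_on_reference_s
    if total_s <= 0:
        raise ValueError("no time recorded")
    return time_on_test_s / total_s

# Illustrative: a mouse spends 120 s on a 40 C floor vs 480 s on a 30 C floor.
avoidance_example = preference_index(120, 480)      # 0.2 -> avoidance of 40 C
no_preference_example = preference_index(300, 300)  # 0.5 -> no discrimination
```

A genotype effect would appear as a shift in this index for temperature pairs that WT mice discriminate.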
We next examined the extent to which Entpd3-/- mice regulate body temperature in the water repulsion assay. Mice were placed in a 37°C water bath for 2 minutes and their core body (rectal) temperatures and body weights were measured every 5 minutes for 60 minutes after removal from the water bath (Figure 10C,D). Following removal from the water bath, WT and Entpd3-/- mice showed no differences in the initial body temperature increase or in the subsequent rate to recover their body temperature following hypothermia (Figure 10C). These data demonstrate that Entpd3-/- mice have no deficits in body temperature regulation due to evaporative cooling.

The water repulsion assay also tests fur barrier function (Westerberg et al., 2004). Once the mouse is removed from the water bath, the initial increase in body weight is indicative of the amount of water absorbed by the fur. We found no significant difference between WT and Entpd3-/- mice in this assay (Figure 10D), including in the rate at which water is removed/evaporates from the mice.

Hyperalgesia and allodynia in Entpd3-/- mice are not impaired in models of chronic pain

Lastly, we sought to determine if deletion of ENTPD3 affected the magnitude of allodynia and hyperalgesia in models of inflammatory pain and neuropathic pain. Lysophosphatidic acid (LPA) is a pronociceptive ligand that sensitizes nociceptors and produces a chemically-induced form of neuropathic pain when injected intrathecally (i.t.) (Inoue et al., 2004). Administration of CFA into the hindpaw causes thermal hyperalgesia and mechanical allodynia and serves as a model of inflammatory pain.

All authors read and approved the final manuscript.
Competing interests
No competing interests were disclosed.
Grant information
This work was supported by grants to MJZ from NINDS (R01NS067688) and a grant to RMW from the NIH (R01NS038879). The Molecular Neuroscience Core and the Confocal and Multiphoton Imaging Core, where imaging work was performed, are funded by grants from NINDS (P30NS045892) and NICHD (P30HD03110).
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Acknowledgments
We would like to thank JrGang Cheng and the Molecular Neuroscience Core at the UNC Neuroscience Center for generating the BAC targeting clone and Gabriela Salazar for technical assistance and help with managing the mouse colony.
We monitored thermal and mechanical sensitivity before and after administration of either LPA (i.t.) or CFA (into the hindpaw) and observed no differences between WT and Entpd3-/- mice in either chronic pain model (Figure 10E,F).
Conclusions
We generated a mouse that globally lacks ENTPD3 to evaluate the extent to which ENTPD3 was necessary for normal extracellular nucleotide hydrolysis in primary somatosensory neurons and dorsal spinal cord. Although ENTPD3 is expressed at high levels in many nociceptive and non-nociceptive somatosensory neurons, deletion of ENTPD3 did not affect extracellular nucleotide hydrolysis. Further, there were no changes in nociceptive behaviors in Entpd3-/- mice, though we did observe a small reduction in the β-alanine-mediated itch response in knockout animals. These findings suggest that other enzymes are present that dephosphorylate extracellular nucleoside di- and triphosphates in primary somatosensory neurons. While ENTPD3 may function redundantly with other ectonucleotidases in these neurons, our Entpd3 knockout line could prove useful in determining the physiological role of ENTPD3 in other organ systems where this ectonucleotidase is expressed, including in neurons that control wakefulness and feeding behavior (Appelbaum et al., 2007; Belcher et al., 2006; Kiss et al., 2009), in the cochlea (Vlajkovic et al., 2006), in cells that regulate insulin secretion (Lavoie et al., 2010; Syed et al., 2013), and in the gastrointestinal system (Lavoie et al., 2011).
The key findings were generated through functional analysis of the knockout mice, including nucleotidase histochemistry, extensive behavioral analysis of nociceptive and non-nociceptive thresholds in naïve and inflamed mice, and fast-scan cyclic voltammetry (FSCV), an innovative method for measuring adenosine levels in situ in spinal cord slices. These studies found that ENTPD3 was dispensable for nucleotide hydrolysis, and all measured behavioral variables were unaltered in knockout mice, with the exception of a reduction in β-alanine-mediated itch behavior. The results suggest the possibility that additional ectonucleotidase(s) are co-expressed with ENTPD3 that are sufficient for normal nucleotide triphosphate/diphosphate hydrolysis, similar to the situation described by these authors for the ectonucleotidases that generate adenosine from AMP. The histological results support the conclusion that ENTPD3 is in a position to impact both noxious and non-noxious somatosensory transduction and transmission, but underscore that the regulation of somatosensory purinergic signaling is complex and likely to be regulated by multiple enzymes acting in tandem. This is a significant finding for the pain field, because ENTPD3 is the only ectonucleotidase identified in primary sensory neurons that regulates the availability of extracellular ATP and UTP, which have been extensively implicated in nociceptive signaling as agonists for P2X and P2Y receptors.
A key point not fully addressed here is whether knockout of ENTPD3 results in upregulation of other NTPDases in DRG neurons and/or dorsal horn, which could provide an explanation for the mild knockout phenotype. In particular, analysis of neuronal mRNA/protein levels and distribution for ENTPD1, 2 and 8 in ENTPD3 knockout mice would increase the impact of the findings reported here. An intriguing possibility is that the alternate enzymes responsible are not members of the ENTPD family. The authors do demonstrate that the ENTPD1 inhibitor ARL67156 did not alter the distribution of enzyme histochemical staining in knockout tissue compared to WT, but did eliminate microglial labeling in both genotypes.
One question that the authors might want to address in the discussion is how to evaluate whether FSCV is capable of resolving neuronal ENTPD3 activity in the dorsal horn when the neurons are surrounded by microglia expressing ENTPD1 (the active site of these enzymes is extracellular). As the authors suggest, further analysis in mice with multiple ENTPD gene deletions may be informative. However, the substantial behavioral evidence indicates that loss of ENTPD3 is not critical for normal sensory processing.
Thus, in lieu of examining a candidate list of ectonucleotidases, we felt a compromise would be to address this comment as follows (by revising the Conclusions section): "Our use of inhibitors ruled out the possibility that some ENTPDs, alkaline phosphatases and Na/K-ATPase compensated for the loss of ENTPD3. However, we cannot exclude the possibility that additional known or unknown enzymes with ectonucleotidase activity might be upregulated in Entpd3-/- mice and compensate for the loss of ENTPD3. Determining which enzymes act redundantly with ENTPD3 will require use of additional inhibitors and additional ectonucleotidase knockout lines."

Dr Molliver: "One question that the authors might want to address in the discussion is how to evaluate whether FSCV is capable of resolving neuronal ENTPD3 activity in the dorsal horn when the neurons are surrounded by microglia expressing ENTPD1 (the active site of these enzymes is extracellular). As the authors suggest, further analysis in mice with multiple ENTPD gene deletions may be informative. However, the substantial behavioral evidence indicates that loss of ENTPD3 is not critical for normal sensory processing."

To address this comment, we added the following sentence to the Results & Discussion section: "Note that FSCV cannot resolve neuronal ENTPD3 activity in the dorsal horn from spinal microglial ENTPD1 activity, so the adenosine detected by FSCV after applying ADP could originate from microglial ENTPD1 or other ectonucleotidases in the tissue. For example, this adenosine could originate from PAP and/or TNAP, as these enzymes are located in the same region and can also hydrolyze ADP to adenosine (Figure 1)."

…assays were performed. These include tail immersion assays, a hot plate test, tail clip assay, cotton swab assay, acetone test, cold plantar assay, and 2-temperature discrimination assays. In addition, several assays for itch were performed, including histamine, chloroquine, and beta-alanine induced itch.
Also, several other behavioral assays were performed, including the water repulsion assay, the Complete Freund's Adjuvant (CFA) inflammatory pain assay, and the lysophosphatidic acid (LPA) neuropathic pain assay.
In general, the work is well done and detailed, and includes the appropriate controls. However, the main problem with this work is that there is no indication of the physiological function of NTPDase3/ENTPD3 revealed by any of the experimental results. Of course, negative data is sometimes important, and this is so in this case. However, what is really missing from this work is analysis for other nucleotidases that are likely to compensate for the loss of NTPDase3 in these mice. These include the nucleotide pyrophosphatase/phosphodiesterase enzymes (NPPs), and more importantly, the other members of the cell-surface NTPDase class of nucleotidases, especially NTPDase1/ENTPD1, and NTPDase2/ENTPD2. Since the authors claim in their abstract that "there could be multiple ectonucleotidases that act redundantly to hydrolyze nucleotides in these regions of the nervous system" (which seems logical and likely), it is somewhat curious that no analyses for these other nucleotidases were performed. If such experiments were done, and if upregulation of one or more of these enzymes was observed, this study would be more interesting, and the paper more important, since this might suggest putative physiological role(s) for NTPDase3/ENTPD3.
There are a couple of interesting and possibly problematic experimental details reported in the paper. First, why was 20 mM magnesium chloride used in the enzyme histochemistry experiments? This seems to be an unreasonably high, non-physiologic, concentration of magnesium. In addition, many of these nucleotidases, including NTPDase3 and other NTPDases, are, in fact, more active using calcium as a divalent cation as opposed to magnesium. So the choice of 20 mM magnesium chloride seems odd.

Also, one change in these knockout mice that is noted in terms of possible effects on nucleotide hydrolysis is the decrease in 5'-nucleotidase protein seen in the DRG neurons of the knockout mouse, which is reported in Table 2. However, as reported in Figure 7, there is no apparent decrease in hydrolysis of AMP in the same dorsal root ganglia, which is apparently not consistent with Table 2, although other enzymes could come into play (but don't seem to change in the KO). However, enzyme histochemistry is difficult to accurately quantitate and is usually regarded as a semi-quantitative technique. Thus, a relatively small change in hydrolysis rates may not be evident from enzyme histochemical data. This is a potential problem with Figure 7, and begs the question as to why tissue homogenates were not evaluated by solution-based, quantitative nucleotidase enzyme assays. The same limitations are applicable to the data reported in Figure 8 on spinal cord sections from wild type and knockout mice.

As reported in Table 3, the authors did find a significant difference in itch response to beta-alanine in the KO mice. However, again, the other itch data, and the rest of the data in Table 3, show no difference between WT and KO mice for responses to itch, heat, or cold behavioral stimuli.
In their conclusion section, the authors do mention other roles that have been suggested for NTPDase3/ENTPD3, including possible roles in the hypothalamus for controlling wakefulness and feeding behavior, for hearing in the cochlea, in the beta cells of the pancreas for regulation of ATP-controlled insulin secretion, and in the G.I. tract. It would be interesting to report any experiments designed to monitor for changes in any of these putative physiological functions of NTPDase3. These could include measurements designed to detect abnormal sleep times or cycles, abnormal eating habits, and abnormal plasma glucose and insulin levels and/or abnormal responses to glucose tolerance tests.
In conclusion, this study is well done and thorough with respect to the attributes that were evaluated in the DRG and spinal cord. Unfortunately, the results do not suggest likely physiological function(s) for NTPDase3/ENTPD3. Also, there is no data reported for other related cell-surface nucleotidases, such as NTPDases 1 and 2, which may be up-regulated in a compensatory response to the knockout of NTPDase3. In addition, there is no mention of experiments designed to address other putative physiological functions of NTPDase3, which are not related to the DRG or spinal cord. Hopefully, these points will be addressed in future work on these knockout animals.
analysis would be more comprehensive but would not provide insights as to which upregulated genes are biologically relevant-knowing what genes change will not allow us to prove they compensate for the loss of Entpd3.
Our study was not focused on examining potential compensatory mechanisms in the Entpd3-/- mice. Such a study would require substantial effort to do properly: we would first need to identify all the enzymes that are upregulated and then demonstrate, in a systematic manner, that each one does or does not act redundantly using inhibitors and/or double/triple knockout mice. For example, it took us several years and multiple knockout lines to rigorously demonstrate that PAP, NT5E and TNAP act redundantly to generate adenosine from AMP.
Thus, in lieu of examining a candidate list of ectonucleotidases, we felt a compromise would be to address this comment as follows (by revising the Conclusions section): "Our use of inhibitors ruled out the possibility that some ENTPDs, alkaline phosphatases and Na/K-ATPase compensated for the loss of ENTPD3. However, we cannot exclude the possibility that additional known or unknown enzymes with ectonucleotidase activity might be upregulated in Entpd3-/- mice and compensate for the loss of ENTPD3. Determining which enzymes act redundantly with ENTPD3 will require use of additional inhibitors and additional ectonucleotidase knockout lines."

Dr Kirley: "There are a couple of interesting and possibly problematic experimental details reported in the paper. First, why was 20 mM magnesium chloride used in the enzyme histochemistry experiments? This seems to be an unreasonably high, non-physiologic, concentration of magnesium. In addition, many of these nucleotidases, including NTPDase3 and other NTPDases, are, in fact, more active using calcium as a divalent cation as opposed to magnesium. So the choice of 20 mM magnesium chloride seems odd."

To address this comment, we performed new experiments. We performed histochemistry experiments with 2 mM CaCl2, 20 mM CaCl2 and 20 mM MgCl2, in WT and Entpd3-/- mice. These data are shown in new Figure 9.
We also updated the Results to include this new information: "…(Figure 9; with deletion of ENTPD3 confirmed in these sections using immunostaining, Figure 9H). Thus Mg2+ and Ca2+ appear to be interchangeable in this histochemical assay."

Also please note, we used 20 mM MgCl2 in our histochemical experiments because a previous study, which we based our histochemistry method on, found that ATP ectonucleotidase activity in skin Langerhans cells was divalent cation dependent, with complete interchangeability between Ca2+ and Mg2+, and with optimal staining at a 20 mM concentration of either divalent cation (see Chaker et al., 1984). 20 mM MgCl2 or 20 mM CaCl2 thus appears to be optimal for ATP histochemical staining. And in biochemical assays with ENTPDs, Mg2+ and Ca2+ were interchangeable when ATP and ADP were used as substrates (Rucker et al., 2008).
Dr Kirley: "Also, one change in these knockout mice that is noted in terms of possible effects on nucleotide hydrolysis is the decrease in 5'-nucleotidase protein seen in the DRG neurons of the knockout mouse, which is reported in Table 2. However, as reported in Figure 7, there is no apparent decrease in hydrolysis of AMP in the same dorsal root ganglia, which is apparently not consistent with Table 2, although other enzymes could come into play (but don't seem to change in the KO). However, enzyme histochemistry is difficult to accurately quantitate and is usually regarded as a semi-quantitative technique. Thus, a relatively small change in hydrolysis rates may not be evident from enzyme histochemical data. This is a potential problem with Figure 7, and begs the question as to why tissue homogenates were not evaluated by solution-based, quantitative nucleotidase enzyme assays. The same limitations are applicable to the data reported in Figure 8 on spinal cord sections from wild type and knockout mice."

We agree that histochemical staining provides a semi-quantitative readout of enzyme activity. This is why we turned to FSCV in spinal cord slices. FSCV provides a quantitative electrochemical method for measuring hydrolysis of ADP to adenosine, in the precise anatomical region where ENTPD3 is located. Since we found no differences between WT and Entpd3-/- mice using this quantitative electrochemical technique, we feel these data are sufficient to show that loss of ENTPD3 alone has no measurable effect on nucleotide hydrolysis.
And, as can be seen from our micrographs, Entpd3 is restricted to the dorsal spinal cord while ATP and ADP histochemical activity is broadly distributed. Use of a solution-based assay would entail creating homogenates from spinal cord or DRG, thus disrupting the integrity of the tissue and introducing more ectonucleotidases into the assay (which would reduce signal-to-noise).
The small 4.5% reduction in NT5E in DRG was statistically significant, although it appears to have no effect on AMP hydrolysis, as assessed histochemically. This likely reflects that AMP can be hydrolyzed by NT5E, PAP and TNAP, as we previously found. Since this did not constitute a major finding, we did not focus on or discuss it in the text.

Dr Kirley: "As reported in Table 3, the authors did find a significant difference in itch response to beta-alanine in the KO mice. However, again, the other itch data, and the rest of the data in Table 3, show no difference between wt and KO mice for responses to itch, heat, or cold behavioral stimuli."
We felt it would be difficult to experimentally pursue the mechanistic basis for this itch phenotype because it was extremely small in magnitude. Such small behavioral effects are not easy to pursue. Moreover, it was the only sensory phenotype out of a large number of sensory functions we probed, suggesting it is a very mild sensory phenotype.
Dr Kirley: "In their conclusion section, the authors do mention other roles that have been suggested for NTPDase3/ENTPD3, including possible roles in the hypothalamus for controlling wakefulness and feeding behavior, for hearing in the cochlea, in the beta cells of the pancreas for regulation of ATP-controlled insulin secretion, and in the G.I. tract. It would be interesting to report any experiments designed to monitor for changes in any of these putative physiological functions of NTPDase3. These could include measurements designed to detect abnormal sleep times or cycles, abnormal eating habits, and abnormal plasma glucose and insulin levels and/or abnormal responses to glucose tolerance tests."

These are indeed interesting topics for future study. However, we feel they are beyond the scope of our present study, which is focused on examining the function of ENTPD3 in primary somatosensory neurons and dorsal spinal cord.
Dr Kirley: "In conclusion, this study is well done and thorough with respect to the attributes that were evaluated in the DRG and spinal cord. Unfortunately, the results do not suggest likely physiological function(s) for NTPDase3/ENTPD3. Also, there is no data reported for other related cell-surface nucleotidases, such as NTPDases 1 and 2, which may be upregulated in a compensatory response to the knockout of NTPDase3. In addition, there is no mention of experiments designed to address other putative physiological functions of NTPDase3, which are not related to the DRG or spinal cord. Hopefully, these points will be addressed in future work on these knockout animals." We agree that future studies are warranted. By reporting our findings with these first ever Entpd3 knockout mice, it will now be possible for us and others to study Entpd3 in other physiological contexts and to explore possible compensatory mechanisms.
Diagnostic role of inflammatory markers in pediatric Brucella arthritis
Background: Brucellosis is a multisystem infectious disease in which inflammation causes an increase in acute-phase reactants. The mean platelet volume (MPV), platelet distribution width (PDW), red cell distribution width (RDW), neutrophil to lymphocyte ratio (NLR) and platelet to lymphocyte ratio (PLR) have been identified as markers of inflammation. The present study aimed to evaluate the diagnostic value of these biomarkers in Brucella arthritis (BA). Methods: The study included 64 children with BA and 66 healthy control subjects. Demographic features, joint involvement, erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) and hematological variables were retrospectively recorded. In addition, results of synovial fluid and serum tube agglutination tests for Brucella, together with treatment regimens, were recorded. Results: The mean age of the patients (53.1 % male) was 92.3 ± 41.2 months. The most commonly affected joint was the ankle (53.1 %). The synovial fluid Brucella agglutination test was positive in 22 (34.3 %) patients. Synovial fluid culture was positive in 9 patients. Most of the patients (57.8 %) were treated with a combination of rifampicin plus sulfamethoxazole/trimethoprim and gentamicin. Significantly higher mean PDW, RDW, MPV, NLR and PLR values were found in children with BA compared to control subjects (p < 0.05). A positive correlation was found between MPV and NLR values (R2 = 0.192, p < 0.001). Conclusion: Our findings indicate that NLR and PLR are indirect markers of inflammation that may be abnormally increased in children with Brucella arthritis. Further longitudinal studies are needed to establish clearer associations.
Background
Brucellosis, the most common bacterial zoonosis in the world, is still endemic in many developing countries. The clinical presentation of brucellosis is non-specific, and the course and severity of infection are variable; in humans, it presents as a multisystem disease involving many organs and tissues [1]. Fever and arthritis are the most common signs. Osteoarticular involvement is one of the most frequent complications of brucellosis. Although sacroiliitis and spondylitis were more common than peripheral arthritis in adults with osteoarticular brucellosis due to Brucella abortus in Northwestern Spain [2], monoarthritis is now considered the predominant musculoskeletal manifestation of brucellosis [3,4]. The most commonly affected joints are the hip and the knee. Unlike in adults, the sacroiliac joint and the axial skeleton are rarely affected. Monoarthritis is more common than polyarthritis. This may lead to confusion with pyogenic arthritis in children; therefore, in communities where Brucella is common, awareness of this entity should prompt investigation for the disease, and physicians should have a high index of suspicion for Brucella arthritis (BA) [5].
Laboratory findings may be normal in some pediatric cases of Brucella arthritis; however, it is not possible to obtain synovial fluid from all of these patients. Therefore, new inflammatory markers are needed for the diagnosis of pediatric BA. Few previous studies have examined these newer inflammatory markers in pediatric Brucella arthritis. The present study aimed to investigate the levels of MPV, PDW, RDW, NLR and PLR as possible indirect inflammatory markers in children with Brucella arthritis.
Methods
This retrospective study was performed at the pediatric clinics of two university medical faculties. The medical records of all patients with BA between November 2011 and January 2014 were obtained from the hospital records. A total of 64 children with BA and 66 age- and gender-matched healthy controls were enrolled in the study.
Healthy subjects were children who presented to the hospital for a routine check-up or for preoperative evaluation before minor elective surgery such as circumcision or hernia repair. Control group subjects were recruited from the hospital records of these children. Children with any sign of infection or systemic illness were excluded from the control group.
Arthritis occurred for the first time in all patients within the week before admission to hospital. The diagnosis of arthritis was made if the subjects had joint pain, restriction of movement, and swelling. Swelling was not essential for the diagnosis of hip, spine, or sacroiliac arthritis. Although encountered in many cases, additional signs such as effusion, redness and increased temperature over the joint were not considered essential for the diagnosis of arthritis.
The diagnosis of brucellosis with joint involvement was established according to the presence of all of the following criteria: a clinical picture compatible with arthritis, isolation of Brucella from blood or synovial fluid, and a positive Brucella serology titer (≥1:160) using the Standard Agglutination Test (SAT) in patients presenting with symptoms suggestive of brucellosis. For screening, and in the absence of clinical indicators of active brucellosis, a titer of 1:320 or higher is more specific for the presence of the disease. Pediatric patients with a synovial fluid culture positive for Brucella and available results of the cytological examination of the synovial fluid aspirate were identified. Relevant demographic, clinical and laboratory data, treatment modalities and outcomes were obtained from patients' follow-up cards and hospital records.
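The serological cut-offs described above can be expressed as a simple decision rule. The following sketch is only an illustration of the stated thresholds (with the titer encoded as its reciprocal, e.g. 1:320 → 320), not a complete diagnostic algorithm:

```python
def sat_positive(reciprocal_titer, symptomatic):
    """Apply the Standard Agglutination Test cut-offs described in the Methods:
    a titer >= 1:160 is considered positive in patients with symptoms suggestive
    of brucellosis, while >= 1:320 is required when screening in the absence of
    clinical indicators of active disease."""
    threshold = 160 if symptomatic else 320
    return reciprocal_titer >= threshold
```

Note that serology is only one of the criteria; a compatible clinical picture and/or culture results are combined with it for the diagnosis.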
NLR and PLR were calculated as the ratio of neutrophils to lymphocytes and platelets to lymphocytes, respectively. These hematological variables were measured and recorded in the healthy control subjects as well. Comparison between the study and the control subjects was performed with regards to WBC, neutrophil count, lymphocyte count, PDW, RDW, PLT, MPV, NLR and PLR. Blood samples were obtained using a vacutainer and collected in tubes containing standard EDTA. All blood samples were tested for hematological parameters using the same regularly calibrated analyzer (Abbott CELL-DYN 3700, United States).
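As defined above, the two ratios are computed directly from the differential counts; for example (illustrative values, not patient data):

```python
def nlr(neutrophil_count, lymphocyte_count):
    """Neutrophil-to-lymphocyte ratio."""
    if lymphocyte_count <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophil_count / lymphocyte_count

def plr(platelet_count, lymphocyte_count):
    """Platelet-to-lymphocyte ratio."""
    if lymphocyte_count <= 0:
        raise ValueError("lymphocyte count must be positive")
    return platelet_count / lymphocyte_count

# Illustrative counts (10^3 cells/uL):
example_nlr = nlr(6.0, 2.0)    # 3.0
example_plr = plr(300.0, 2.0)  # 150.0
```

Because both ratios share the lymphocyte count in the denominator, the two markers are partly correlated by construction.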
Joint fluid was aspirated from the affected joint following a strict sterile technique. Since usually only small amounts of fluid were obtained, the synovial fluid specimens were only sent for cytological and bacteriological examination, and tested for antibrucella antibodies using microagglutination test.
WBC, Hb, neutrophil count, lymphocyte count, PLT, MPV, NLR and PLR values were compared between the study and the control groups.
Patients with a clear-cut underlying pathology such as various bone and joint diseases, connective tissue or rheumatic disorders, chronic disorders, anemia or other hematological diseases, or acute bacterial infection, as well as those with fever of other etiologies, those over 18 years of age, and those whose file records were inaccessible, were excluded from the study.
The Non-Interventional Clinical Ethics Committee of Dicle University Medical Faculty approved the study protocol.
Statistical analysis
The normality of data distribution was determined using the Kolmogorov-Smirnov test. Normally distributed numerical variables were expressed as mean ± standard deviation and compared using Student's t-test or one-way ANOVA. Data with a non-normal distribution were compared using the non-parametric Mann-Whitney U test or Kruskal-Wallis test. The Chi-square test was used to compare categorical variables between the groups. Correlations between numerical variables were evaluated using Pearson's or Spearman's correlation analysis. P-values of less than 0.05 were considered statistically significant. The data were analyzed using the Statistical Package for Social Sciences (SPSS) version 18.0 for Windows.
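The decision logic above (normality test first, then parametric vs. non-parametric comparison) can be sketched with SciPy. This is a sketch on simulated values, not the study's data; the group sizes, distributions and helper name `compare` are our assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(3.0, 1.0, 60)  # e.g. NLR in patients (simulated)
group_b = rng.normal(2.0, 1.0, 60)  # e.g. NLR in controls (simulated)

def compare(a, b, alpha=0.05):
    """Pick Student's t-test or Mann-Whitney U based on a normality check."""
    # Kolmogorov-Smirnov test of each group against a fitted normal
    normal = all(
        stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha
        for x in (a, b)
    )
    if normal:
        stat, p = stats.ttest_ind(a, b)
        name = "Student t"
    else:
        stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        name = "Mann-Whitney U"
    return name, p

test_name, p = compare(group_a, group_b)
print(test_name, p < 0.05)
```

Note that a KS test with parameters estimated from the sample is anti-conservative; SPSS applies the Lilliefors correction in this situation, which plain `kstest` does not.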
Results
The mean age of the patients was 92.3 ± 41.2 months and 53.1 % (n = 34) of the patients were male. The mean age of the control group was 98.5 ± 44.0 months and 53 % (n = 35) were male. There were no significant differences in the mean age and gender distribution between the study and the control groups (p > 0.05).
Discussion
Although hematological changes are common in brucella arthritis, they are not diagnostic and usually do not require treatment. In childhood brucella arthritis, hematological disorders may occur as leukocytosis, anemia, relative lymphocytosis along with leukopenia, thrombocytopenia and pancytopenia [6]. The study by El-Koumi et al. found anemia in 43 %, leukopenia in 38 %, leukocytosis in 20 % and pancytopenia in 18 % of the cases [7]. Similar to the previous studies, the present study found anemia in 45.3 %, thrombocytopenia in 21.8 %, leukopenia in 10.9 %, leukocytosis in 9.3 % and pancytopenia in 7.8 % of the patients. Significantly higher leukocyte and neutrophil counts were found in brucellosis patients compared to the control group, whereas the lymphocyte and thrombocyte counts were lower.
The present study aimed to investigate the predictive contribution value of NLR, PLR and MPV in the diagnosis of BA. Our findings showed that NLR, PLR and MPV were higher in patients with BA compared to the control group.
NLR can be determined from routine blood differentials at no additional cost. Changes in the relative abundance of leukocyte subgroups occur in parallel with the increase in the overall leukocyte count: the lymphocyte count decreases as the neutrophil count increases. NLR increases in inflammatory conditions, and this increase is considered an indicator of systemic inflammation [8]. Studies have shown that platelets also play an active role in inflammation, while having regulatory effects on the immune system [9]. The study by Günes et al. [10] demonstrated that NLR was higher in patients with juvenile idiopathic arthritis compared with the control group. In another study, significantly higher NLR values were found in patients with ankylosing spondylitis [11]. In the study by Türkmen et al., PLR showed better performance than NLR in the prediction of inflammation in patients with end-stage renal disease [12]. As a result of the changes caused by inflammation in neutrophils, platelets and lymphocytes, NLR and PLR have become inflammatory markers. Based on the results of the present study and other similar studies, we suggest that NLR and PLR may be inflammatory markers that can be used in the diagnosis and follow-up of the disease in children with brucella arthritis.
Fig. 1 The relationship between mean platelet volume and neutrophil to lymphocyte ratio
Hematologic abnormalities are observed in brucellosis. One of these abnormalities is thrombocytopenia. Excessive release of proinflammatory cytokines and acute-phase reactants can suppress platelet size [8,9]. The study by Okan et al.
found that MPV was statistically significantly lower in brucellosis cases compared to the control group [9]. Küçükbayrak et al. and Bozkurt et al. conducted studies on adult patients and established that MPV increased significantly after treatment in brucellosis cases [13,14]. The literature contains many studies on MPV, PDW and RDW in various diseases; some of these studies demonstrated increased MPV in acute coronary syndrome, diabetes mellitus, cerebrovascular conditions, preeclampsia, renal artery stenosis, hypercholesterolemia, smoking and sepsis [15][16][17]. However, there are few studies on brucellosis. The present study found higher MPV, PDW and RDW in brucella arthritis patients than in the control group.
Increased CRP and ESR reflect active inflammation and are often considered useful criteria for the diagnosis and for monitoring treatment effectiveness in brucellosis and other inflammatory conditions [1,17,18]. Studies evaluating the correlation of MPV with CRP, ESR and SAT have reported different results. Kader et al. [19] found a significant negative correlation between MPV and SAT. Öztürk et al. [20] found a negative correlation between MPV and CRP, while two other studies found a positive correlation between MPV and CRP [21,22]. The present study found a significant positive correlation between NLR and MPV, whereas there was no significant correlation between MPV and SAT, ESR or CRP.
Our study has several limitations. First, it is a retrospective study with a relatively small sample size, and synovial fluid was not obtained from all patients. It would have been informative to compare these markers between patients with brucella arthritis, septic arthritis and reactive arthritis. Studies with larger numbers of patients and more comprehensive analyses can provide further data on these variables.
Conclusion
MPV, PDW, RDW, NLR and PLR values can be useful complementary indirect markers for the diagnosis of BA in children. We believe that these variables can be considered quick, cheap and easily measurable new inflammatory markers in patients with BA. Further prospective studies are required to externally cross-validate our findings in larger cohorts of BA patients.
Molecular Modeling for Structural Insights Concerning the Activation Mechanisms of F1174L and R1275Q Mutations on Anaplastic Lymphoma Kinase
Anaplastic lymphoma kinase (ALK) is a receptor tyrosine kinase involved in various cancers. In its basal state, the structure of ALK is in an autoinhibitory form stabilized by its A-loop, which runs from the N-lobe to the C-lobe of the kinase. Specifically, the A-loop adopts an inhibitory pose with its proximal A-loop helix (αAL-helix) to anchor the αC-helix orientation in an inactive form in the N-lobe; the distal portion of the A-loop is packed against the C-lobe to block the peptide substrate from binding. Upon phosphorylation of the first A-loop tyrosine (Y1278), the αAL-helix unfolds; the distal A-loop detaches from the C-lobe and reveals the P+1 pocket, which accommodates the residue immediately following the phosphorylation site, and ALK is activated accordingly. Recently, two neuroblastoma mutants, F1174L and R1275Q, have been determined to cause ALK activation without phosphorylation on Y1278. Notably, F1174 is located on the C-terminus of the αC-helix and away from the A-loop, whereas R1275 sits on the αAL-helix. In this molecular modeling study, we investigated the structural impacts of F1174L and R1275Q that lead to the gain-of-function event. Wild-type ALK and ALK with phosphorylated Y1278 were also modeled for comparison. Our modeling suggests that the replacement of F1174 with a smaller residue, namely leucine, moves the αC-helix and αAL-helix into closer contact and further distorts the distal portion of the A-loop. In wild-type ALK, R1275 assumes the dual role of maintaining the αAL-helix–αC-helix interaction in an inactive form and securing αAL-helix conformation through the D1276–R1275 interaction. Accordingly, mutating R1275 to a glutamine reorients the αC-helix to an active form and deforms the entire A-loop. In both F1174L and R1275Q mutants, the A-loop rearranges itself to expose the P+1 pocket, and kinase activity resumes.
Introduction
Anaplastic lymphoma kinase (ALK) is a member of the superfamily of the insulin receptor protein tyrosine kinases; ALK participates in nervous system development during embryogenesis, with decreased expression after birth [1]. Accumulating evidence indicates that dysregulation of ALK is associated with numerous diseases such as anaplastic large cell lymphomas [2], lung cancer [3] and neuroblastomas [4]. Full-length ALK consists of an extracellular portion responsible for ligand binding; a transmembrane segment; and an intracellular portion with a juxtamembrane (JM) segment, protein kinase domain and carboxy-terminal tail. In its basal condition, the kinase domain of ALK is inactive, but can be activated through binding with an activating ligand such as midkine [5] or pleiotrophin [6] at the extracellular portion. Ligand binding induces ALK dimerization, resulting in the trans-phosphorylation of Y1278 on the activation loop (A-loop) by the partner ALK protein kinase domain [7]. Figure 1 presents an overview of the structure of the apo and inactive ALK protein kinase domain retrieved from the Protein Data Bank (PDB code: 3L9P) [8]. ALK is a typical protein kinase whose kinase domain consists of two lobes: the N-terminal small lobe (N-lobe) and the C-terminal large lobe (C-lobe). The N-lobe includes one α-helix (the αC-helix, residues 1158-1173) and five β-strands that form a relatively rigid antiparallel β-sheet. The C-lobe is mainly composed of helices with flexible loops. Between the two lobes is a cleft that accommodates adenosine triphosphate (ATP). In its active form, the N-lobe can move toward the C-lobe, whereas in its inactive form, the N-lobe is dynamically rigid and unable to take in ATP [9].
The ALK domain is autoinhibited because the JM segment (residues 1096-1103) folds in a β-turn motif and the amino hydrogen atom of C1097 forms a hydrogen bond with the Y1278 hydroxy group, thereby prohibiting phosphorylation on Y1278. The A-loop structure also has an inhibitory arrangement; a short fragment of the proximal A-loop (the so-called αAL-helix, residues 1272-1280) is packed beneath the αC-helix and accordingly prevents ALK from relaxing to its active conformation. Moreover, the distal portion of the A-loop (residues 1281-1291) is packed against the C-lobe, which blocks the P+1 pocket formed by the P+1 loop (residues 1292-1300, right after the A-loop's C-terminus); thus, the P+1 pocket cannot accommodate the residues next to the residue to be phosphorylated by ALK [10][11][12][13]. Figure 1. ALK structure retrieved from PDB 3L9P. Structural elements including the αC-helix, αAL-helix, A-loop, P+1 loop and components of the R-spine are highlighted. Residues to be altered, including Y1278, F1174 and R1275, are also highlighted. Color code: orange for N-lobe, green for C-lobe, yellow for αAL-helix, blue for αC-helix, and red for R-spine.
Many fingerprint features distinguish the active and inactive structures of a kinase. For example, in an active state, the conserved DFG motif adopts a DFG-in conformation with the phenylalanine pointing inward, which makes the ATP site available; by contrast, in an inactive state, the conserved DFG motif adopts a DFG-out position where the phenylalanine points outward and obstructs the ATP site [14][15][16]. In an active state, a salt bridge forms between a conserved glutamate on the αC-helix and a conserved lysine on the β3-strand and secures the αC-helix orientation so that the relative motion between the N-lobe and C-lobe can favor the active state [17][18][19]. The assembled regulatory spine (R-spine) is another hallmark of the active state [15,18,20,21]. Although the ALK domain in its basal condition is in an inactive state, it possesses several structural features of the active state. For example, the DFG motif adopts a DFG-in conformation, where F1271 points inward to make space for ATP binding; the salt bridge between E1161 on the αC-helix and R1192 on the β4-strand is formed so that the αC-helix orients perpendicularly to the αAL-helix. Moreover, the hydrophobic R-spine composed of C1182 (on the β4-strand), I1171 (on the αC-helix), F1271 (on the DFG motif, but only under the circumstance of a DFG-in conformation), H1247 (on the HRD motif) and D1311 (on the αF-helix) is present in this inactive-state structure. Because of these structural features, the ALK domain adopts a highly unique intermediate conformation between the active state (especially in the N-lobe region) and the inactive state [15,18,20,21].
Mutations within the ALK domain that promote constitutive, ligand-independent activation are frequently involved in many diseases. Three "hot spot" residues are reportedly involved in 85% of all mutations, namely R1275 (43%), F1174 (30%) and F1245 (12%), where glutamine or leucine is typically substituted for R1275; leucine, isoleucine, valine, cysteine or serine replaces F1174; and F1245 is replaced with leucine, isoleucine, valine or cysteine [22,23]. Experimental data have indicated that both F1174L and R1275Q transform ALK into a ligand-binding-independent active form, as evidenced by kcat values (in min−1) of 9.32 ± 0.85 for the wild type (WT), 119 ± 13 for R1275Q, 365 ± 61 for F1174L and 425 ± 63 for pY1278 [24]. In this study, we were interested in F1174L and R1275Q, which are commonly present in patients with neuroblastoma, a childhood cancer [1,25,26]. Figure 1 illustrates that F1174 is at the C-terminal end of the αC-helix and R1275 is situated at the middle of the αAL-helix. Molecular dynamics (MD) simulation has served as a useful tool to correlate protein structure and function at the atomic level [27][28][29][30][31]. In the present study, MD simulations were conducted to evaluate four ALK systems, namely WT, the F1174L variant, the R1275Q variant and phosphorylated WT (the pY1278 variant). Of these, WT is an inactive form, and the other three are active forms; we studied pY1278 to identify the common structural characteristics of the active form in the F1174L and R1275Q variants that correspond to the pY1278 variant. With this study, we wish to link the point mutations to the varied kinase function.
System Setup
Three available crystal structures of the ALK, namely apo WT (PDB code: 3L9P) [8], the F1174L variant (PDB code: 4FNW) [25] and the R1275Q variant (PDB code: 4FNX) [25], were retrieved to serve as the initial structures for MD simulation to determine a possible structural discrepancy responsible for varied kinase activity. To construct the initial structure for the pY1278 variant, we used the aforementioned inactive apo WT structure (PDB code: 3L9P) for the structural base and replaced the A-loop fragment through homology modeling [32], which disclosed an A-loop from an active form structure of the insulin receptor tyrosine kinase (IRK, PDB code: 1IR3) [10]. Because both ALK and IRK belong to the insulin receptor kinase superfamily with the YXXXYY motif for autophosphorylation in their A-loops, we used the IRK structure, sharing a sequence identity of 41.3% with the ALK sequence, for A-loop homology. The sequence alignment of the IRK sequence against the target ALK sequence is shown in Figure 2. We then phosphorylated Y1278 on the replaced A-loop. The MD simulations were performed using the AMBER 12.0 software package [33,34] with ff03.r1 [35] and ff99SB [36,37] force fields. All hydrogen atoms of the four ALK systems were assigned using the LEaP module, which considered ionizable residues set at their default protonation states at a neutral pH value. Each studied ALK system was immersed in a cubic box of the TIP3P water model (10 Å minimum solute-wall distance), and five, six, five and six Na + ions were added to neutralize WT, the pY1278 variant, the F1174L variant and the R1275Q variant, respectively. Each solvated ALK system underwent three stages of energy minimization; each stage consisted of 5000 steps of the steepest descent algorithm and 5000 steps of the conjugate gradient algorithm with a non-bonded cutoff of 8.0 Å. In Stage 1, all atoms in the ALK domain were restrained, thereby enabling the added TIP3P water molecules to reorient. 
In Stage 2, atoms in the protein backbone were restrained, and thus, the atoms in the amino acid side chains were rendered to interact with the added water molecules. In Stage 3, the whole solvated system was minimized without restraint to minimize conformational conflict. The MD simulations in this study were performed in accordance with the standard protocol, which specifies gradual heating, density equilibration, equilibration and production procedures in isothermal isobaric ensemble (NPT, P = 1 atm and T = 300 K) MD. A minimized solvated system was used as the starting structure in subsequent MD simulations. In the 100-ps heating procedure, the system was gradually heated from 0-300 K within 40 ps; this was followed by density equilibration at 300 K for 100 ps and then constant equilibration at 300 K for 1000 ps. Following the equilibration procedure, each complex system underwent two independent 100-ns production runs at a 2-fs time step. We recorded a snapshot every 10 ps throughout the production runs.
An 8 Å cutoff was applied to treat nonbonding interactions such as short-range electrostatics and van der Waals interactions; the particle-mesh-Ewald method [38] was applied to treat long-range electrostatic interactions; and the SHAKE algorithm [39,40] was used to constrain all bonds containing hydrogen atoms to their equilibrium lengths. For structural and energetic analysis, we used the trajectory in the last 50 ns of each 100 ns MD run, which covered 200 × 2 = 400 conformation snapshots for each complex system.
MD Stability
The Cα root-mean-square deviation (RMSD) values for the four studied ALK systems, each system undergoing two independent 100 ns simulation runs, in the production duration given as a function of time are plotted in Figure 3; these values were used to monitor simulation trajectory quality and convergence. The curve of pY1278 Simulation 1 fluctuated at nearly a 2 Å magnitude (with a minimum of 1.8 Å at 71 ns and a maximum of 3.8 Å at 98 ns), whereas the other seven curves fluctuated at a minor magnitude of 1 Å variance. For each studied ALK system, we collected 400 conformations from the two independent MD simulation runs (200 conformations from one MD trajectory within 50-100 ns) and conducted structural and dynamic analysis. Using the 400 collected conformations, the root-mean-square fluctuation (RMSF) per amino acid residue was gauged and plotted over the structure shown in Figure 4. As evident in Figure 4A, concerning WT, the most flexible regions were the β2-β3 loop (scaled in red) and the P-loop (between β1 and β2, scaled in greenish yellow). Notably, the β2-β3 loop was too flexible to be determined in the crystallography solved structure; therefore, the high mobility in the modeled β2-β3 loop was expected. Furthermore, the low mobility on the αC-helix, αAL-helix and distal A-loop was anticipated to maintain ALK inactivity. As depicted in Figure 4B, the pY1278 variant demonstrated notable mobility on the β2-β3 loop (scaled in red), P-loop (scaled in yellow), N-terminal end of the αC-helix (scaled in orange and yellow) and P+1 loop (scaled in green). Figure 4C illustrates that the αAL-helix remained structured despite the F1174L mutation; however, the distal A-loop exhibited high mobility all the way to the P+1 loop, as indicated by the red fragment in the figure. Figure 4D illustrates how the A-loop in the R1275Q variant was entirely destructured and mobile. 
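The two quantities monitored above, per-frame RMSD and per-residue RMSF, are root-mean-square displacements after structural superposition. The NumPy sketch below illustrates the arithmetic on synthetic Cα coordinates (array sizes echo the 400-snapshot analysis; the alignment step a real trajectory analysis requires is omitted here):

```python
import numpy as np

def rmsd(frame, ref):
    """Calpha RMSD of one (n_atoms, 3) frame vs. a reference (pre-aligned)."""
    return np.sqrt(np.mean(np.sum((frame - ref) ** 2, axis=1)))

def rmsf(traj):
    """Per-residue RMSF over an (n_frames, n_atoms, 3) trajectory."""
    mean = traj.mean(axis=0)  # average structure
    return np.sqrt(np.mean(np.sum((traj - mean) ** 2, axis=2), axis=0))

# Synthetic data: 300 Calpha atoms, 400 snapshots jittered around a reference.
rng = np.random.default_rng(1)
ref = rng.normal(size=(300, 3))
traj = ref + rng.normal(scale=0.5, size=(400, 300, 3))

print(round(float(rmsd(traj[0], ref)), 2))   # per-frame deviation (Angstrom-like units)
print(rmsf(traj).shape)                      # one RMSF value per residue
```

RMSD summarizes a whole frame in one number (used for convergence checks, as in Figure 3), whereas RMSF resolves mobility residue by residue (as mapped onto the structure in Figure 4).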
Moreover, the four residues encompassed by the continuous red surfaces in Figure 4A-D indicate that all four studied ALK systems had assembled R-spines, as mentioned in the Introduction section. Counted from the N-lobe to the C-lobe, the third component of the R-spine was F1271 in the DFG motif only under the condition that the DFG motif adopted a DFG-in conformation, and the pointing-inward F1271 participated in R-spine formation and enforced R-spine assembly. inactivity. As depicted in Figure 4B, the pY1278 variant demonstrated notable mobility on the β2-β3 loop (scaled in red), P-loop (scaled in yellow), N-terminal end of the αC-helix (scaled in orange and yellow) and P+1 loop (scaled in green). Figure 4C illustrates that the αAL-helix remained structured despite the F1174L mutation; however, the distal A-loop exhibited high mobility all the way to the P+1 loop, as indicated by the red fragment in the figure. Figure 4D illustrates how the A-loop in the R1275Q variant was entirely destructured and mobile. Moreover, the four residues encompassed by the continuous red surfaces in Figure 4A-D indicate that all four studied ALK systems had assembled R-spines, as mentioned in the Introduction section. Counted from the N-lobe to the C-lobe, the third component of the R-spine was F1271 in the DFG motif only under the condition that the DFG motif adopted a DFG-in conformation, and the pointing-inward F1271 participated in R-spine formation and enforced R-spine assembly. Figure 5 displays the electrostatic interactions centered at the A-loop and its nearby regions, namely the αC-helix and P+1 loop. Regarding WT ALK, Figure 5A presents dense electrostatic interactions between the αCand αAL-helices to anchor the αAL-helix. D1160, D1163 and E1167 on the αC-helix interacted with R1275, R1279 and R1284 on the αAL-helix. 
The hydrogen bond between Y1278 and C1097 (on the N-terminal β-turn) and π-π interaction between Y1278 and Y1096 (also on the N-terminal β-turn) also stabilized the αAL-helix. Additionally, the salt bridge formed by the adjacent R1275 and D1276 stabilized the αAL-helix structure. R1284 on the distal A-loop was anchored by D1163 on the αC-helix and D1276 on the αAL-helix, and thus, the distal A-loop packed against the C-lobe and sat above the P+1 loop, thereby blocking the P+1 pocket for peptide substrate binding. Figure 5 displays the electrostatic interactions centered at the A-loop and its nearby regions, namely the αC-helix and P+1 loop. Regarding WT ALK, Figure 5A presents dense electrostatic interactions between the αC-and αAL-helices to anchor the αAL-helix. D1160, D1163 and E1167 on the αC-helix interacted with R1275, R1279 and R1284 on the αAL-helix. The hydrogen bond between Y1278 and C1097 (on the N-terminal β-turn) and π-π interaction between Y1278 and Y1096 (also on the N-terminal β-turn) also stabilized the αAL-helix. Additionally, the salt bridge formed by the adjacent R1275 and D1276 stabilized the αAL-helix structure. R1284 on the distal A-loop was anchored by D1163 on the αC-helix and D1276 on the αAL-helix, and thus, the distal A-loop packed against the C-lobe and sat above the P+1 loop, thereby blocking the P+1 pocket for peptide substrate binding. As shown in Figure 5B, which illustrates the pY1278 variant, the electronegative phosphate group on Y1278 released the linkage between the A-loop and N-terminal β turn and generated a new linkage toward the adjacent R1279 that was used to form salt bridges with the αC-helix in WT. The rearrangement of Y1278 and R1279 unfolded the αAL-helix and consequently interrupted the aforementioned electrostatic contact between the αC-and αAL-helices, thereby causing the A-loop to move backward rather than to sit above the P+1 loop. 
Furthermore, a newly-formed salt bridge between R1284 (on the A-loop) and E1303 (on the αEF-helix) played an auxiliary role in fixing the C-terminal portion of the A-loop and significantly exposed the P+1 pocket. Figure 5C for the F1174L variant shows that the αAL-helix structure still held, as did the interaction between the αAL-helix and αC-helix. Lee et al. found that in WT, a hydrophobic F-core was formed by F1174 (on the C-terminal αC-helix), F1098 (on the N-terminal β-turn), F1271 (on the DFG motif) and F1245 (on the C-loop); when the smaller residue, namely leucine, replaced F1174, the F-core was maintained [41]. Our structural analysis suggested that the F-core became relatively compact because of the smaller L1174; this structural compactness drew the distal A-loop upward, as evidenced by the enhanced salt bridges between R1284 and K1285 (both on the distal A-loop) toward D1160 (on the αC-helix) and D1163 (on the αC-helix), and consequently moved the A-loop upward, as well. Together, these conformational changes rendered the P+1 pocket accessible.
Structural Variance on the A-loop
As discussed, R1275 on the αAL-helix plays a dual role, namely securing the electrostatic interactions between the αC-and αAL-helices and preserving the helical conformation of the αAL-helix through the R1275-D1276 interaction. In the R1275Q variant displayed in Figure 5D, the substituted Q1275 no longer linked to D1276, and the detached D1276 moved backward and formed a salt bridge with R1248 on the C-loop. The αAL-helix was distorted and lifted the electropositive R1284 and R1285 to interact with the electronegative D1160 and D1163 on the αC-helix. This upward movement also elevated the A-loop and revealed the P+1 pocket. The conformations that can show the abovementioned interactions were chosen for Figure 5. As shown in Figure 5B, which illustrates the pY1278 variant, the electronegative phosphate group on Y1278 released the linkage between the A-loop and N-terminal β turn and generated a new linkage toward the adjacent R1279 that was used to form salt bridges with the αC-helix in WT. The rearrangement of Y1278 and R1279 unfolded the αAL-helix and consequently interrupted the aforementioned electrostatic contact between the αCand αAL-helices, thereby causing the A-loop to move backward rather than to sit above the P+1 loop. Furthermore, a newly-formed salt bridge between R1284 (on the A-loop) and E1303 (on the αEF-helix) played an auxiliary role in fixing the C-terminal portion of the A-loop and significantly exposed the P+1 pocket. Figure 5C for the F1174L variant shows that the αAL-helix structure still held, as did the interaction between the αAL-helix and αC-helix. Lee et al. found that in WT, a hydrophobic F-core was formed by F1174 (on the C-terminal αC-helix), F1098 (on the N-terminal β-turn), F1271 (on the DFG motif) and F1245 (on the C-loop); when the smaller residue, namely leucine, replaced F1174, the F-core was maintained [41]. 
Our structural analysis suggested that the F-core became relatively compact because of the smaller L1174; this structural compactness drew the distal A-loop upward, as evidenced by the enhanced salt bridges between R1284 and K1285 (both on the distal A-loop) toward D1160 (on the αC-helix) and D1163 (on the αC-helix), and consequently moved the A-loop upward, as well. Together, these conformational changes rendered the P+1 pocket accessible.
As discussed, R1275 on the αAL-helix plays a dual role, namely securing the electrostatic interactions between the αC- and αAL-helices and preserving the helical conformation of the αAL-helix through the R1275-D1276 interaction. In the R1275Q variant displayed in Figure 5D, the substituted Q1275 no longer linked to D1276, and the detached D1276 moved backward and formed a salt bridge with R1248 on the C-loop. The αAL-helix was distorted and lifted the electropositive R1284 and R1285 to interact with the electronegative D1160 and D1163 on the αC-helix. This upward movement also elevated the A-loop and revealed the P+1 pocket. The conformations that best display the abovementioned interactions were chosen for Figure 5.
Conclusions
In this MD simulation study, we used the X-ray crystal structures of ALK in its WT, F1174L and R1275Q forms to elucidate the activation mechanism of the F1174L and R1275Q mutations. We also generated a structure of WT carrying phosphorylated Y1278 to obtain structural features of active ALK. The DFG motif in each of the four studied ALK systems adopted the DFG-in conformation, and thus the DFG phenylalanine was able to participate in R-spine assembly regardless of the inactive state of WT ALK. Both the DFG-in conformation and the presence of the R-spine are characteristic of the active state of kinases. The WT ALK structure was known to be an intermediate between the active and inactive states. Because of the DFG-in conformation, the DFG phenylalanine pointed inward and rendered the ATP binding site available, as in active kinase structures. However, the N-terminal portion of the A-loop, folded as the αAL-helix, remained in close contact with the αC-helix and gradually influenced the spatial arrangement of the C-terminal portion of the A-loop to seal the P+1 pocket generated by the P+1 loop. With respect to the pY1278, F1174L and R1275Q variants, the A-loop moved either upward or to the right and accordingly exposed the P+1 pocket for the substrate. In summary, we conducted protein cavity detection using fpocket [42] and plotted the results in Figure 6, where the regions in mesh represent cavities in the protein. In all four studied ALK systems, the ATP binding sites were attainable; however, in contrast to the three variants, WT's P+1 pocket for the peptide substrate was occluded by the C-terminal portion of the A-loop. The ATP site was thus ready for processing phosphorylation in WT ALK, but the peptide substrate binding site was not yet available. Our results provide the key insight that regulation of the P+1 pocket plays a role in the kinase activation mechanism as decisive as that of the ATP pocket.
The purple regions in mesh are predicted grooves for substrate binding. The cavity search is performed by screening a probe, in the form of an alpha sphere, along the protein surface; an alpha sphere is a contact sphere that touches four atoms in 3D space without containing any atom. For the fpocket parameter setting, the minimum and maximum alpha sphere radii were set to 3 and 7 Å, respectively, and a cavity had to contain at least 25 spheres. Color code: blue for the αC-helix, yellow for the intact or disrupted αAL-helix, and pink for the A-loop.
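The alpha-sphere test described above (a sphere in contact with four atoms, empty of other atoms, radius between 3 and 7 Å) can be sketched numerically. The following is a minimal illustration of the geometric criterion only, not fpocket's actual implementation, and the function names are ours:

```python
import numpy as np

def circumsphere(p):
    """Center and radius of the unique sphere through four 3D points."""
    p = np.asarray(p, dtype=float)
    # |c - p_i|^2 = |c - p_0|^2  =>  2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    center = np.linalg.solve(A, b)
    return center, float(np.linalg.norm(center - p[0]))

def is_alpha_sphere(four_atoms, all_atoms, r_min=3.0, r_max=7.0, tol=1e-6):
    """Keep the sphere if its radius is within [r_min, r_max] and no atom
    lies strictly inside it (the alpha-sphere condition)."""
    center, radius = circumsphere(four_atoms)
    if not r_min <= radius <= r_max:
        return False
    d = np.linalg.norm(np.asarray(all_atoms, dtype=float) - center, axis=1)
    return bool(np.all(d >= radius - tol))
```

For example, four atoms at the vertices of a regular tetrahedron of circumradius ~5.2 Å pass the test, while adding a fifth atom at the sphere's center rejects it.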
Funding:
The authors gratefully acknowledge the financial support provided for this study by the Ministry of Science and Technology of Taiwan (104-2815-C-390-005-B).
Conflicts of Interest:
The authors declare no conflict of interest.
Xanthurenic Acid in the Shell Purple Patterns of Crassostrea gigas: First Evidence of an Ommochrome Metabolite in a Mollusk Shell
Ommochromes are one of the least studied groups of natural pigments, frequently confused with melanin and, so far, exclusively found in invertebrates such as cephalopods and butterflies. In this study focused on the purple color of the shells of a mollusk, Crassostrea gigas, the first evidence of a metabolite of ommochromes, xanthurenic acid (XA), was obtained by liquid chromatography combined with mass spectrometry (UPLC-MS). In addition to XA and various porphyrins previously identified, a second group of high molecular weight acid-soluble pigments (HMASP) has been identified with physicochemical and structural characteristics similar to those of ommochromes. In addition, fragmentation of HMASP by tandem mass spectrometry (MS/MS) has revealed a substructure common to XA and ommochromes of the ommatin type. Furthermore, the presence of melanins was excluded by the absence of characteristic by-products among the oxidation residues of HMASP. Altogether, these results show that the purple color of the shells of Crassostrea gigas is a complex association of porphyrins and ommochromes of potentially ommatin or ommin type.
Introduction
Molluscan shell pigments are generally assigned to carotenoids, melanins and tetrapyrroles [1]. While the presence of carotenoids and a few tetrapyrroles such as uroporphyrin and biliverdin is well established [1][2][3][4], the occurrence of melanins in shells of bivalves is apparently less common than generally expected, as illustrated by the recent work of S. Affenzeller et al. [5]. For instance, the black color of the adductor muscle scar of shells of the edible oyster Crassostrea gigas, initially hypothesized to be a contribution of melanins by S. Hao et al. [6], was subsequently ruled out, but without resolving the nature of this color. Recently, uroporphyrin and derivatives were identified in the mantle of C. gigas and in the purple and dark patterns of its shell [7], constituting evidence of the heme-based cellular respiration of C. gigas [8]. These represent only a small proportion of the overall acid-soluble pigments, among which the occurrence of ommochromes would corroborate the recent identification of genes associated with their biosynthetic pathways [9]. However, since the precise chemical structure of ommochromes is generally unknown at present, their occurrence in a natural sample is usually postulated from the identification of specific biosynthetic metabolites such as 3-hydroxykynurenine (3-HK), 3-hydroxyanthranilic acid (3-HA) and XA [10][11][12][13].
Our previous study having established the presence of porphyrins [7], the present study focuses on the composition of the porphyrin-free acid-soluble pigments of the purple patterns of shells of C. gigas, in order to establish the absence or presence of melanins and ommochromes; carotenoids were not considered, being the only group of acid-insoluble molluscan shell pigments [1,[14][15][16]. Among known biosynthetic metabolites, only XA, a precursor and/or a degradation product of ommochromes [12], was identified from the resulting group of acid-soluble pigments. These also displayed physicochemical properties similar to those reported in the literature for ommochromes (insoluble in most aqueous and organic solvents without acidifier, absorption bands from 400 to 600 nm). In addition, they contained a sub-molecular unit common to that observed from the fragmentation of XA by tandem mass spectrometry. Finally, the absence of melanins was established by the method described by S. Affenzeller et al. [5]. Altogether, this leads us to consider the purple color of C. gigas as an association of acid-soluble porphyrins and a type of ommochromes.
Identification of Xanthurenic Acid
After collection and decontamination, colorful purple fragments of shells of adult C. gigas were dissolved in aqueous hydrochloric acid (1M HCl(aq)) and filtered. A fraction of acid-soluble pigments, free of porphyrins, was obtained by preparative chromatography in an open system. The resulting fraction, named the purple fraction (PF, 0.37 wt.%), was analyzed by UPLC-MS. The molecular formula of the compound eluted at 4.89 min, corresponding to C10H7NO4 ([M + H]+ obs at m/z 206.0454, Figure 1a,b), is consistent with XA ([M + H]+ calc at m/z 206.0453). The different retention times of XA in PF and of the XA standard (4.89 and 5.03 min, respectively) are most likely due to a matrix effect. The identification of XA was further confirmed by comparative UPLC-MS/MS analysis of a commercial standard (MS/MS spectra of XA in PF in Figure 1c,d, of the pure XA standard in Figure 1e-h, and mass spectra in ESI− and co-injected with the shell sample in Figure S1, adapted from [17]). The identification of XA is of particular interest since it has been described as a precursor as well as a side or degradation product of ommochromes, exclusively related to their biosynthesis in invertebrates [12]. Consequently, the [M + H]+ signals of other known metabolite precursors and side products of possible acid-soluble molluscan pigments other than porphyrins (Figure S2), i.e., ommochromes and melanins, were searched for in the UPLC-MS data of PF. Among these compounds, only two signals potentially corresponding to anthranilic acid and kynurenic acid were detected in PF, at 3.71 and 4.50 min, respectively (Table S1), questioning the presence of melanins among the set of acid-soluble pigments in PF.
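The agreement between the observed (m/z 206.0454) and calculated (m/z 206.0453) values can be reproduced from standard monoisotopic atomic masses. In this minimal sketch (function names are ours), the charge carrier of [M + H]+ is treated as a whole hydrogen atom, i.e., the electron mass is neglected, an assumption made here to match the quoted calculated value:

```python
# Standard monoisotopic atomic masses (u); XA (xanthurenic acid) is C10H7NO4.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def monoisotopic_mass(composition):
    """Monoisotopic mass of a composition given as {element: count}."""
    return sum(MONO[element] * n for element, n in composition.items())

xa = {"C": 10, "H": 7, "N": 1, "O": 4}
neutral = monoisotopic_mass(xa)   # neutral XA, ~205.0375 u
mz_mh = neutral + MONO["H"]       # [M + H]+, electron mass neglected
```

Here `round(mz_mh, 4)` gives 206.0453, the calculated value quoted above.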
Characterization of the Purple Fraction of Acid-Soluble Pigments
The physicochemical properties of PF were investigated to define more precisely the chemical nature of its acid-soluble pigments. Substantial solubility was observed in alkaline solution and in acidified solvents, especially in methanol containing HCl(aq) (Table 1). Clearly, the solubility of PF is similar to that of ommochromes [16], which are described in the literature as insoluble in almost all aqueous and organic solvents and slightly soluble in pure methanol, turning fully soluble when acidified with HCl [16,18].
The absorption spectrum of PF in the UV-visible region (Figure 2a) was characterized by an absorption band at 360 nm and a large band from 400 to 600 nm, with λmax at 464, 496 and 552 nm. This absorption profile is comparable to those of ommochromes (a large band from 400 to 600 nm and a smaller one around 310 or 380 nm depending on the pH and chemical structure) [16,[18][19][20], but strongly differs from those of synthetic and natural melanins of different sources, all characterized by a continuously decreasing absorption towards the visible region without a specific absorption band from 400 to 800 nm [21] (Figure 2b).
The infrared (IR) spectrum of PF (Figure 2c) is also comparable to those of ommochromes of either the ommatin or ommin type [20,[22][23][24]. Among the characteristic bands, the broad band at 3000-3500 cm−1 may be representative of a carboxylic acid function. The band at 1634 cm−1 potentially corresponds to the C-O stretching vibration of the carboxylic acid function, and that at 1410 cm−1 to N-H bending vibrations. In addition, the band at 1725 cm−1 is in agreement with that described as specific to ommochromes of the ommatin type [20,24]. In contrast, the IR spectrum of Sepia officinalis eumelanin is characterized by broad bands at 3262 cm−1, 1557 cm−1 and 1361 cm−1 (Figure 2d), as already mentioned in the literature [25].
Given the structural diversity of melanins, additional bands can be observed, some of which are common to ommochromes due to the presence of carboxylic acid, amide, amine, aromatic amine and phenolic functions [25].
The analysis of PF by UPLC-MS in negative ionization mode (more sensitive for acidic compounds than the positive ionization mode) was characterized by ions with a charge state of 2 in the m/z 700-770 range. The major ion at m/z 722, investigated by MS/MS, was characterized by multiple neutral losses of CO2 (≥9), representative of carboxylic acid groups (Figure 2e). In addition, the intense product ion at m/z 160, corresponding to C9H6NO2−, was also observed in the fragmentation spectrum of XA (Figure 1d), suggesting a common sub-structural unit. These results were systematically observed for the other ions with a charge state of 2 in the m/z 700-770 range (Figure S3).
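The deconvolution behind these charge-state-2 observations is simple mass bookkeeping: assuming the ions are doubly deprotonated species [M − 2H]2− (an assumption for negative mode), the neutral mass follows from M = z·(m/z) + z·m(H+). A minimal sketch, using the nominal m/z 722 ion above:

```python
PROTON = 1.007276   # mass of a proton (u)
CO2 = 43.989830     # monoisotopic mass of a CO2 neutral loss (u)

def neutral_mass(mz, z, mode="negative"):
    """Neutral mass from an observed m/z and charge state z, assuming
    [M - zH]z- in negative mode or [M + zH]z+ in positive mode."""
    return z * mz + z * PROTON if mode == "negative" else z * mz - z * PROTON

# The doubly charged ion at m/z 722 corresponds to a neutral of ~1446 u,
# and nine successive CO2 losses account for ~396 u of carboxyl groups.
m_neutral = neutral_mass(722.0, 2)
co2_budget = 9 * CO2
```

This places the PF pigments well above the mass range of simple ommatins, consistent with the polymeric/oligomeric character discussed later.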
Comparative Analysis with Natural Melanin
The discrimination of melanins from ommochromes in a natural sample is challenging and often leads to confusion, from the macroscopic (granular morphology) to the molecular point of view (polycarboxylic and polyaromatic structure) [19,[26][27][28]. In order to establish whether or not melanins are present in PF, an oxidation method [29], recently applied by S. Affenzeller et al. [5], was used, since it had allowed investigation of the presence of melanins in the black adductor muscle scar of shells of C. gigas. Briefly, after alkaline oxidation of PF with a 30% H2O2 aqueous solution (pH > 10), the resulting products were analyzed by UPLC-MS in negative ionization mode and compared to eumelanin of Sepia officinalis treated in the same conditions. From the latter, the molecular ions of pyrrole-2,3-dicarboxylic acid (
Discussion
In mollusks, as in other living organisms, "similar shell colors can arise from different pigments" [3]. Conversely, a given group of pigments can produce different shell colors, especially pigments with a complex polymeric structure varying according to the organism and its environment. In this study, we found that at least two groups of acid-soluble pigments are involved in the purple color of shells of the oyster C. gigas. Among the possible pigments supported by the genes associated with their biosynthesis (carotenoids, melanins, ommochromes and porphyrins) [9], acid-soluble porphyrins (uroporphyrin I or III and derivatives) were recently established [7,8]. Besides, after separation of the porphyrins from PF, the absence of animal melanins and of the corresponding known metabolites is established here, in line with the recent study of S. Affenzeller et al. [5]. If animal melanins are deposited in the shell purple patterns of C. gigas, they are not among the acid-soluble pigments.
Pioneering studies on ommochromes proposed a classification according to their dialysis profile: ommatins (rather dialyzable), ommins (almost non-dialyzable) and ommidins (intermediate) [18]. To date, the structures of approximately fourteen natural ommatins have been established, but the structures of ommins and ommidins are poorly described. For example, the well-accepted structure of ommin A [18,30] (Figure 4) is solely based on chemical properties and elemental determination [19,31]. Besides, ommidins have completely disappeared from experimental investigations subsequent to the work of B. Linzen in 1974 [19]; only a recent review points out their possible occurrence in invertebrates [18]. In the ommochrome literature, the identification of XA is a decisive parameter. For example, the red, red-brown and yellow pigments of the wings of Junonia coenia (common buckeye) were attributed to dihydroxanthommatin, ommatin D and xanthommatin (ommochromes of the ommatin type), but none of these were detected by liquid chromatography MS/MS, in contrast to XA [10,32]. Indeed, in invertebrates, XA is described as a key metabolite of the biosynthesis of ommochromes [12,13,18,19], exclusively related to this biological route [12]. To date, two main biosynthetic pathways of ommochrome pigments have been proposed, both involving XA. The first involves the condensation of XA with 3-hydroxyanthranilic acid and/or 3-hydroxykynurenine [11]. The second involves only 3-hydroxykynurenine (3-HK) as an intermediate, by condensation of two units [12]. In this case, XA is described as a side product of the intramolecular cyclisation of 3-HK or as a degradation product of higher ommochromes of the ommatin type. Since XA was identified among known ommochrome metabolite precursors and intermediates, a similar process could occur in the case of the shell purple patterns of C. gigas.
Starting from tryptophan as the initial precursor, the genes responsible for XA biosynthesis in the ommochrome pathway, i.e., tryptophan-2,3-dioxygenase, kynurenine formamidase and kynurenine-3-monooxygenase [18], have been identified in the genome of C. gigas [33]. Only 3-HK transaminase (3-HK → XA) has not been identified yet. Therefore, in the case of this study, XA may either be a metabolite produced in excess or a degradation product of multiple possible origins (during dissolution in aqueous acid conditions with light exposure, during biosynthesis, or during the development of the shell). From a structural point of view, an XA sub-molecular unit can be distinguished from the phenoxazine unit of ommatins (red in Figure 4), a characteristic also observed for the acid-soluble pigments of PF investigated by tandem mass spectrometry, where none of the known ommochromes were identified (Figure S5). The molecular weight of the acid-soluble pigments of PF, higher than those of ommatins, and the numerous carboxylic acid groups of their structure may be consistent with the polymeric/oligomeric nature of ommins described in the literature. However, there is no commercial standard available to confirm this assumption. Since XA was identified in PF, it may originate from a degradation of the acid-soluble pigments of PF. Experiments on the mantle edge epithelium by a non- or soft-destructive process could also give reliable information on the structure of the acid-soluble pigments of PF and could allow the correlation with a potential function in living conditions. Experiments using matrix-assisted laser desorption ionization combined with mass spectrometry, conducted on solid-state samples (PF or shell purple fragments), could give reliable information on species not detected by MS liquid chromatography, as was the case for allomelanin from black oat [34].
To date, the strong absorption of the acid-soluble pigments of PF in the visible region suggests a potential protection against light, but other properties could emerge, possibly related to an oxidation process as observed in the production of uroporphyrin and derivatives [7,8].
Whatever the definitive structure of these compounds, the high number of carboxylic groups is reminiscent of that of uroporphyrin and derivatives. In relation to the mineralization process of the shell, their occurrence is consistent with the binding of pigments to the calcite part of the shell via an ionic carboxylate-Ca2+ bond, which is also suitable for the transport and fixation of calcium in the shell. It remains to be elucidated whether this was selected by nature, in order to ensure the binding of pigments designed for a specific function, or whether the pigments are a carboxylic-rich by-product of the physiology of the animal resulting in their coincidental accumulation on the shell surface. A potential perspective of this study would lie in the selective extraction of XA from purple shells of C. gigas as a natural source substituting synthetic XA, and its use to study neurological and tryptophan metabolism disorders [35][36][37].
Shell Fragments
Approximately 1 kg of shell fragments was collected by hand from living adult oysters in August 2017 (Thau lagoon, Marseillan, France, GPS coordinates: 43.382127, 3.555193). Shell fragments were rinsed with tap water at the farm and transported to the laboratory. They were then extensively rinsed with tap water and suspended in 0.0155M NaOCl(aq) with sonication and regular manual stirring (1:10 wt/v, 120 min), rinsed several times and suspended in demineralized water with sonication and regular manual stirring (1:10 wt/v, 120 min), then rinsed several times with demineralized water and dried in an oven (overnight, 40 °C). Shell fragments were sorted into three classes according to their color; only fully purple fragments were used in this study. Samples were stored in the dark at 25 °C before use.
Identification of XA in PF
Approximately 10 g of decontaminated purple fragments of shells of C. gigas were dissolved in 1M HCl(aq) under magnetic stirring (1:20 wt/v, 30 min, 700 RPM). The solution of acid-soluble pigments was obtained after filtration on a glass sintered filter (POR 4) filled with Fontainebleau sand. The solution of acid-soluble pigments (40 mL) was deposited on C18-grafted silica gel (approximately 40 g) previously equilibrated with 1M HCl(aq). After deposition, decalcification was performed with 80 mL of 1M HCl(aq) followed by 80 mL of 0.1% TFA. Separative elution was monitored by fluorescence at λex 400 nm and conducted with 420 mL of ultrapure water/acetonitrile (80:20 v/v + 0.1% TFA). The resulting non-photoluminescent PF was freeze-dried and weighed (0.37 wt.%). Separation was continued with 140 mL of acetonitrile + 0.1% TFA. The resulting photoluminescent fraction (porphyrins) was freeze-dried and weighed (<0.1 wt.%). The purple fraction (1 mg) was solubilized in 200 µL of 1M HCl(aq), filtered on a polyethersulfone syringe filter (0.22 µm) and analyzed on a UPLC Synapt G2-S system (Waters Corporation, Milford, MA, USA) equipped with an electrospray ionization source (UPLC-DAD-Q-ToF-HRMS). UV-vis spectra were recorded with a UPLC LG 500 nm DAD detector from 200 to 500 nm with a resolution of 1.2 nm and a sampling rate of 20 points/s. Separation was carried out using a 150 × 2.1 mm Kinetex 2.6 µm EVO C18 100 Å reverse stationary phase operating at 30 °C with a constant flow rate of 0.5 mL/min, using ultrapure water (0.055 µS/cm) and HPLC-grade acetonitrile as eluents, both containing 0.1% formic acid. Mass spectra were recorded in the m/z range of 50 to 3000 with a Micromass Q-Tof spectrometer operating at a capillary voltage of 3 kV and a cone voltage of 30 V, using phosphoric acid as an internal standard.
Masslynx software (version V4.1, Waters Corporation, Milford, MA, USA) was used for instrument control and data processing. Samples were kept at 10 °C in the autosampler. An appropriate blank analysis was performed before each sample (Vinj: 10 µL). The blank TIC chromatogram was systematically subtracted from the corresponding sample TIC chromatogram before data processing. Separation was performed with a gradient system: 0 to 50% acetonitrile in 20 min, followed by 50 to 100% acetonitrile in 5 min, followed by 100% acetonitrile in 1 min, followed by 100% ultrapure water in 0.1 min and finally 4.9 min with 100% ultrapure water. A solution of 10 mg/mL of XA was prepared in 1 mL of 1M HCl(aq) under magnetic stirring (60 min, 700 RPM), followed by filtration on a polyethersulfone syringe filter (0.22 µm), XA being only slightly soluble in water. The resulting solution was analyzed by UPLC-DAD-Q-ToF-HRMS according to the method previously employed. MS/MS experiments were performed in collision-induced dissociation mode with a trap collision energy ramp from 15 to 40 eV and an auto transfer collision energy of 2 eV. Argon was used as the collision gas.
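The gradient program above can be restated as a piecewise-linear profile of acetonitrile fraction versus time. The helper below is only a restatement of the stated time points, not part of any instrument software, and its names are ours:

```python
# Gradient from the text: 0 -> 50% MeCN over 20 min, 50 -> 100% over the next
# 5 min, hold 100% for 1 min, return to 100% water in 0.1 min, then hold
# water for the final 4.9 min (31 min total). (time_min, percent_MeCN) pairs.
GRADIENT = [(0.0, 0.0), (20.0, 50.0), (25.0, 100.0), (26.0, 100.0),
            (26.1, 0.0), (31.0, 0.0)]

def percent_acetonitrile(t):
    """%MeCN at time t (min), by linear interpolation within the program."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, p0), (t1, p1) in zip(GRADIENT, GRADIENT[1:]):
        if t <= t1:
            return p0 + (p1 - p0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]
```

For example, the XA retention time of 4.89 min falls in the shallow first ramp, at roughly 12% acetonitrile under this program.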
Characterization of PF
The qualitative estimation of PF solubility was conducted at 1 mg/mL (60 min, 500 RPM, 20 °C), followed by centrifugation (20 min, 4400 RPM). Absorption spectra were recorded from 200 to 800 nm using a UV-1800 spectrophotometer with a 10 mm optical path length (Shimadzu Corporation, Kyoto, Japan). An appropriate auto-zero on a solvent blank was performed before each measurement. The absorption spectrum of PF was obtained with 1 mg in 1 mL of 1 M HCl (aq), diluted by factors of 10 and 100. The absorption spectrum of S. officinalis eumelanin was obtained with 1 mg in 1 mL of 1 M HCl (aq) filtered on a polyethersulfone syringe filter (0.22 µm), the sample being very slightly soluble. IR spectra were recorded using a Spectrum Two FTIR spectrometer (ATR mode, PerkinElmer, Waltham, MA, USA). UPLC-MS/MS was conducted according to the previously described method. Automatic MS/MS experiments were conducted in collision-induced dissociation mode with an auto transfer collision energy of 2 eV; argon was used as the collision gas. The MS/MS range was set from 50 Da to 1500 Da. The number of fragmented compounds was set at 3 × 4. MS/MS fragmentation was set to switch after 2 s with a scan time of 0.1 s. Peak detection was used in intensity-based mode with a peak detection window and a charge-state tolerance of m/z 0.2. The trap MS/MS collision energy was set according to a ramp from 30 to 50 eV. The cone voltage was set at 40 V. The collision energy was set according to a ramp from low mass (50 Da, 10-20 eV) to high mass (1500 Da, 80-140 eV).
Comparative Analysis with Natural Eumelanin
The oxidation was conducted with 10 mg of PF, ultrapure water (1 mL), 1 M K2CO3 (aq) (3.75 mL) and 30% H2O2 (aq) (250 µL), under magnetic stirring (20 h, 500 RPM, 20 °C). A volume of 500 µL of 10% Na2SO3 (aq) was added. A volume of 550 µL of the solution was mixed with 140 µL of 6 M HCl (aq). After centrifugation (20 min, 4400 RPM), the supernatant was collected and purified by solid phase extraction (Strata-X 200 mg, Phenomenex Inc., Torrance, CA, USA). Conditioning was conducted with methanol (6 mL) followed by ultrapure water (6 mL). After sample loading, washing was conducted with 0.3% aqueous formic acid (3 mL). Elution was conducted with methanol (3 mL) and ethyl acetate (3 mL). The collected fraction was evaporated under a constant flux of argon for approximately 5 h. After evaporation, the solid residue was solubilized in ultrapure water (200 µL) and analyzed by UPLC-Q-ToF-HRMS with the previously described UPLC Synapt G2-S system in electrospray negative ionization mode (Waters Corporation, Milford, MA, USA). Separation was carried out on a 100 × 2.1 mm Kinetex 1.7 µm EVO C18 100 Å reversed-phase column operating at 45 °C with a constant flow rate of 0.2 mL/min, using ultrapure water (0.055 µS/cm) and HPLC-grade acetonitrile as eluents, both containing 0.1% formic acid. Mass spectra were recorded in the m/z range of 50 to 1500 with the Q-ToF spectrometer (Waters Corporation, Milford, MA, USA) operating at a capillary voltage of 2.4 kV and a cone voltage of 30 V, using phosphoric acid as an internal standard. Masslynx software (version V4.1, Waters Corporation, Milford, MA, USA) was used for instrument control and data processing. Samples were kept at 10 °C in the autosampler (Vinj: 10 µL).
Separation was performed with a gradient system: 0 to 20% acetonitrile in 20 min, followed by 20 to 100% acetonitrile in 1 min, 100% acetonitrile held for 2 min, a return to 100% ultrapure water in 0.1 min, and finally 4.9 min at 100% ultrapure water. The entire process was repeated with 10 mg of Sepia officinalis eumelanin.
Supplementary Materials:
The following are available online, Figure S1: Identification of XA in PF, Figure S2: Known metabolite precursors and side products of ommochromes and melanins searched in PF, Figure S3: UPLC-MS/MS analysis of PF, Figure S4: Absence of melanin oxidation products in both S. officinalis eumelanin and PF samples, Figure S5: Absence of ommin A and some known ommatins in PF; Table S1: Identification of ommochrome metabolites in PF.
|
v3-fos-license
|
2018-12-21T04:58:39.931Z
|
2014-03-31T00:00:00.000
|
59393890
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://jssidoi.org/jesi/article/download/18",
"pdf_hash": "08aedb0bfc42128a001a640a59fe26d03310ca1a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2650",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "08aedb0bfc42128a001a640a59fe26d03310ca1a",
"year": 2014
}
|
pes2o/s2orc
|
PROCESSES OF ECONOMIC DEVELOPMENT: CASE OF LITHUANIAN REAL ESTATE SECTOR
The enlargement of the EU has affected the development of the housing market in Lithuania, as in other Central and Eastern European countries. The country was significantly influenced by favorable lending conditions and the expansion of private sector credit. Hence, Lithuania experienced a period of financial and asset price boom, which was followed by an economic downturn and, consequently, the burst of the price bubble. This paper aims to reveal the relationships between demand- and supply-side determinants and housing prices. Hence, the question is raised whether fundamental determinants affect housing prices. The growing dependence of Lithuania on energy resources leads us to another research question: we test whether housing prices are linked to energy prices. Regression analysis, we believe, allows us to reveal whether these fundamental determinants are equally important.
Introduction
The enlargement of the EU has affected the development of housing markets in all Central and Eastern European countries. The market was significantly influenced by the expansion of private sector credit and favorable lending in the region. Hence, growth of housing prices was observed in 2004-2007 in almost all countries. Lithuania, like other countries, experienced a period of financial and asset price boom that was followed by an economic downturn and, consequently, the burst of the price bubble. Analysis of the determinants of housing prices requires careful examination. This paper aims to reveal the relationships between demand- and supply-side determinants and housing prices. In order to reveal if and how demand- and supply-side determinants affect housing prices in Lithuania, we raise and test a set of hypotheses. The first group of hypotheses focuses on relationships between house prices and fundamental supply- and demand-side factors. The second group focuses on relationships between house prices and energy prices. The remainder of the paper is organized as follows. Section 2 reviews the relevant literature. Section 3 discusses the determinants affecting housing prices in Lithuania. Section 4 presents the methodology and results, and the final section concludes.
Overview of the literature
Research on the determinants of housing prices is a vast and growing strand of the scientific literature. The prevailing literature suggests that in industrialized economies house prices are related to a set of macroeconomic variables, market-specific conditions and financing characteristics (Glindro et al. 2011); consistent patterns of economic development also have to be taken into account (Dudzevičiūtė 2013; Laužikas, Krasauskas 2013; Vosylius et al. 2013; Mačiulis, Tvaronavičienė 2013; Tvaronavičienė 2014). Notably, demand and supply factors with longer-term and shorter-term influence are distinguished (Tsatsaronis, Zhu 2004). The main demand-side factors include growth in household disposable income, the average level of interest rates, gradual shifts in demographics and permanent shifts in the tax system.
According to scholars, disposable income and interest rates are seen as the key factors determining housing prices (Hilbers et al. 2008). The rise of income has led to the increase of housing prices in different countries. Hence, scholars argue that demand for housing is affected by real household income and wealth (Sutton 2002). On the other hand, the role of interest rates is dual: the mortgage rate determines financing costs, while the risk-free interest rate serves as an indicator of opportunity costs. Notably, many attempts have been made to investigate the causal relationship between macroeconomic variables, financing characteristics and house prices. One stream of scholars has investigated the link in one direction. The explorations carried out by Borio et al. (1994) conclude that there is a relatively close link between the ratios of private credit to GDP and asset price movements. Some scholars argue that the causality is not that straightforward (Dubauskas 2011; Šimelytė, Antanavičienė 2013). Some authors, e.g. Goodhart and Hofmann (2008), claim that "the effect of property prices on credit appears to be stronger than the effect of credit on property prices".
Discussions in the prevailing literature emphasize the obvious importance of demographics for housing demand. The main underlying premise adopted by scholars is that high rates of net migration and increases in population shares drive housing demand (Cerny et al. 2005; Balkytė, Tvaronavičienė 2011; Radović Marković 2011). Koetter and Poghosyan (2010) confirm that "increasing demand due to population and income growth increases equilibrium real estate prices". Notably, population in the 25-44 years age range is seen as the measure most explicitly reflecting the migration effect (Stevenson 2008; Radović Marković 2011; Šileika, Bekerytė 2013). Meanwhile, Maennig and Dust (2008) state that "growth in population numbers has no significant price effects, whereas declining population numbers lead to significant negative effect". Observations reveal that in some countries, like Japan and Germany, house prices decline due to a low share of households of individuals in their thirties (Girouard et al. 2006). Glaeser et al. (2005) note that too often scholars attempt to understand housing prices by focusing only on demand-side factors while ignoring supply-side factors. Hence, supply-side factors have to be taken into consideration. The main supply-side factors include the availability and cost of land, the cost of construction and investments in the improvement of the existing housing stock. Accessibility of financial capital and indebtedness of business companies have their own implications (Baikovs, Zariņš 2013). Besides that, tough rules and building regulations as well as slow administrative procedures are seen as constraints on supply (Girouard et al. 2006). Discussions in the prevailing literature emphasize that house prices are seen as a local phenomenon (Himmelberg et al. 2005). The study carried out by Egert and Mihaljek (2007) identified factors specific to Central and Eastern Europe (CEE). According to the scholars, the development of housing market institutions, in particular the banking sector, has led to the development of housing markets and housing environments. The main underlying premise adopted by the authors is that the EU accession process boosted demand, which led to the growth of housing prices. Hence, house prices in CEE are determined by fundamental factors such as GDP per capita, real interest rates, housing credit and demographic factors, as well as transition-specific factors.
The determinants of housing prices in Lithuania
To see how the development of the economy affected the real estate market, we overview the key trends shaping different patterns. Lithuania became an independent state in 1990, which led to radical political, social and economic changes. On the other hand, Lithuania's accession to the EU in 2004 brought liberalization of trade due to a number of unilateral decisions and treaties. Notably, in 2004-2008 the unemployment rate decreased significantly (Figure 2). In 2007 the unemployment rate was the lowest and reached 3.8% (Šileika, Bekerytė 2013). Hence, in the period of economic growth, wage growth and income tax reduction boosted household disposable income. The Baltic countries responded to the economic crisis through internal adjustment of prices and wages. Consequently, unemployment rose sharply in Lithuania and in 2010 reached its highest rate, 17.9%. It is noticeable that the unemployment rate grew significantly in all Baltic States and was higher than the EU average (Figure 2). Taking recent surveys into consideration, unemployment in Lithuania is still only approaching the natural unemployment rate (Bank of Lithuania 2013). On the other hand, the unemployment rate of the young population and increasing outward migration are seen as the major issues.
Different studies conclude that low real interest rates and favorable lending standards drove the growth of demand for housing in all Baltic States (Bukeviciute, Kosicki 2012; Purfield, Rosenberg 2010; Kuodis, Ramanauskas 2009; Dubauskas 2011; Tvaronavičiene et al. 2013). Notably, bank lending and a corresponding acceleration of domestic demand were identified as the key drivers of growth (Purfield, Rosenberg 2010; Dubauskas 2011). To generalize, we can conclude that investment and employment increased in non-tradable sectors, in particular real estate, construction, retail and financial services. On the other hand, the economic crisis triggered a decline in wages and diminished private consumption.
Notably, the growth and decline of the country's economy has been shaping the real estate market. According to Ivanauskas et al. (2008), the development of the real estate market in Lithuania can be described by different patterns. For instance, the first stage of development (1992-2002) is described as the rise of the commercial real estate market. Notably, the acceleration of the real estate market was driven by privatization processes, which led to the development of service sectors. The growth of demand for residential real estate is seen as a common feature of the second stage of development (2002-2005). The scholars conclude that by the third stage of development (2005-2006) the housing market had reached its summit. As indicated above, the development was triggered by a variety of factors, including low interest rates, favorable lending and globalization patterns (Dubauskas 2011; Šimelytė, Antanavičienė 2013; Tvaronavičienė et al. 2013; Vosylius et al. 2013).
Methodology and results
The above discussions lead to the conclusion that different demand- and supply-side factors determine housing prices. In our research we focus on the following: interest rates, disposable income, unemployment, inflation, GDP per capita, population and the construction cost index. Additionally, our research focuses on energy prices. Before verifying the hypotheses formulated below, let us explain why energy prices are taken into consideration. Notably, all Baltic States have a high level of import dependency on energy resources such as gas and oil, which are imported exclusively from Russia (Karnitis 2011). Taking recent trends into consideration, we can conclude that Lithuania's energy dependence has increased significantly: it was 59.82% in 2000 and 81.92% in 2010 (Eurostat). In comparison to the other Baltic States, the increase of energy dependence in Lithuania was the highest (Karnitis 2011; Miškinis et al. 2013). Since 2010 Lithuania has imported a significant amount of electricity due to the decommissioning of the Ignalina nuclear power plant and fluctuations in domestic supply and prices. We need to point out that recent scientific surveys confirm that energy security issues affect the development of the key economic sectors of any country (Janeliunas 2008; Karnitis 2011; Tvaronavičienė 2012; Lankauskienė, Tvaronavičienė 2012; Vosylius et al. 2013; Dudzevičiūtė 2013; Miškinis et al. 2013; Tvaronavičienė 2014). A close look at Figure 4 and Figure 5 confirms that the growth of gas and electricity prices for household consumers was higher in Lithuania in 2003-2008. Hence, the increase of energy prices significantly affects the disposable income of households. In this context, the question arises about the interrelationship of energy prices and real estate prices. In our research the house price is calculated as the price of one square meter of an average 55 m2 flat in the Old Town of Vilnius, as provided by www.ntspekuliantai.lt. Notably, real estate prices in Vilnius have attracted considerable attention from various researchers. For instance, Burinskienė et al. (2011) investigated the effects of quality of life on the price of real estate. In particular, the scholars aimed to reveal why differences in the quality of life exist. The research took into consideration such factors as home, work, leisure, safety and health, center and aesthetics. Accordingly, different research methods, i.e. a survey of residents and an examination of socioeconomic factors, were applied. The obtained results and insights allowed the scholars to conclude that the price of real estate was mostly affected by the prestige of the Vilnius district (Burinskienė et al. 2011). Hence, prices in the Old Town remain 2.5 times higher than in other districts. The survey carried out by Raslanas et al. (2006) aimed to compare housing prices in South East London and Vilnius. The scholars took into consideration a set of factors affecting prices: flat size, flat condition and construction type. Meanwhile, Ambrasas and Stankevicius (2007) investigated peculiarities and various factors affecting the housing market in Vilnius. On the other hand, without going deep into elaborate discussions regarding non-fundamental determinants influencing housing prices, the authors of this paper focus on demand- and supply-side fundamental determinants. Firstly, the analysis of the scientific literature allows us to formulate the following hypotheses regarding fundamental determinants:
Hypothesis 1: The decrease of interest rates will be positively associated with the growth of real estate prices.
Hypothesis 2: The growth of disposable income will be positively associated with the growth of real estate prices.
Hypothesis 3: The decrease of unemployment rate will be positively associated with the growth of real estate prices.
Hypothesis 4: The growth of inflation rate will be positively associated with the growth of real estate prices.
Hypothesis 5: The growth of GDP per capita will be positively associated with the growth of real estate prices.
Hypothesis 6: The growth of population will be positively associated with the growth of real estate prices.
Hypothesis 7: The growth of construction costs will be positively associated with the growth of real estate prices.
The hypotheses were tested using country-level data for the period 2003-2011, provided by the European Commission and the Lithuanian Department of Statistics. Table 1 provides the obtained data for associations between house prices and the selected variables. The first hypothesis, about the interrelation of interest rates and real estate prices, was tested using the central bank's annual interest rate data and house prices. The obtained results allow the following interpretations. First, the magnitude of the correlation coefficient indicates that the association between house prices and interest rates is slightly above average. On the other hand, the direction of the correlation coefficient implies that house prices increase while interest rates decrease. Notably, the obtained P value (0.0464) is lower than 0.05 and allows us to conclude that the relationship between house prices and interest rates is statistically significant. Meanwhile, the coefficient of determination (0.340435) implies that thirty-four percent of price changes can be explained by changes in interest rates. The linear regression model:

Housing price = 2388.08 - 146.943 × interest rate

It implies that a decrease of interest rates by 1 percentage point will increase the price of one square meter by 146.943 EUR. Hence, we can conclude that the first hypothesis was verified.
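The simple linear regressions reported throughout this section can be reproduced with a few lines of code. The sketch below fits price = a + b·x by ordinary least squares and computes Pearson's r, whose square is the coefficient of determination quoted in the text. The data here are hypothetical illustrative values, not the authors' dataset, and the function name is our own.

```python
def ols(x, y):
    """Fit y = a + b*x by ordinary least squares; also return Pearson's r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx                  # slope
    a = my - b * mx                # intercept
    r = sxy / (sxx * syy) ** 0.5   # correlation coefficient; r**2 is R^2
    return a, b, r

# Hypothetical yearly observations: interest rate (%) vs. price (EUR per m2)
rates = [6.0, 5.5, 5.0, 4.0, 3.0, 2.5]
prices = [1500, 1580, 1650, 1810, 1940, 2030]
a, b, r = ols(rates, prices)
# The negative slope b mirrors the direction reported for Hypothesis 1;
# r**2 gives the share of price variation explained by the regressor.
```

The significance test quoted in the text (the P value of the slope) additionally requires the t-distribution, which is omitted here for brevity.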
The second hypothesis, about the interrelation of average disposable income and real estate prices, was verified. A closer look at the correlation coefficient allows us to conclude that the association between house prices and average disposable income is strong. Hence, large values of house prices tend to be associated with large values of average disposable income, implying that house prices increase while average disposable income increases. The obtained P value (0.0073) is lower than 0.05 and allows us to confirm that the relationship between house prices and average disposable income is statistically significant. Meanwhile, the coefficient of determination (0.529582) implies that fifty-two percent of price changes can be explained by changes in average disposable income. The linear regression model:

Housing price = 166.87 + 4.51 × disposable income

It implies that an increase of disposable income by 1 EUR will increase the price of one square meter by 4.51 EUR.
The third hypothesis, about the interrelation of the unemployment rate and real estate prices, allows the following interpretations. First, the magnitude of the correlation coefficient allows us to conclude that the association between house prices and the unemployment rate is strong. Second, the direction of the correlation coefficient implies that house prices increase while the unemployment rate decreases. The obtained P value (0.0051) is lower than 0.05 and allows us to confirm that the relationship between house prices and the unemployment rate is statistically significant. Meanwhile, the coefficient of determination (0.559833) implies that fifty-five percent of price changes can be explained by changes in the unemployment rate. The linear regression model:

Housing price = 3092.41 - 115.232 × unemployment rate

It implies that a decrease of the unemployment rate by 1 percentage point will increase the price of one square meter by 115.232 EUR. Hence, we can conclude that the third hypothesis was verified.
The fourth hypothesis, about the interrelation of the inflation rate and real estate prices, was verified. Taking into consideration the magnitude and direction of the correlation coefficient, we can conclude that the association between house prices and the inflation rate is strong. Hence, large values of house prices tend to be associated with large values of the inflation rate, implying that house prices increase while inflation increases. Meanwhile, the obtained P value (0.0027) is lower than 0.05 and allows us to conclude that the relationship between house prices and the inflation rate is statistically significant. On the other hand, the coefficient of determination (0.586007) implies that fifty-eight percent of price changes can be explained by changes in the inflation rate. The linear regression model:

Housing price = 1194.99 + 176.67 × inflation rate

It implies that an increase of the inflation rate by 1 percentage point will increase the price of one square meter by 176.67 EUR.
The fifth hypothesis tested the interrelation of GDP per capita and real estate prices. Notably, the magnitude and direction of the correlation coefficient allow us to conclude that the association between house prices and GDP per capita is strong. Hence, large values of house prices tend to be associated with large values of GDP per capita, implying that house prices increase while GDP per capita increases. The obtained P value (0.0005) is lower than 0.05 and allows us to conclude that the relationship between house prices and GDP per capita is statistically significant. Meanwhile, the coefficient of determination (0.722021) implies that seventy-two percent of price changes can be explained by changes in GDP per capita. The linear regression model:

Housing price = -158.226 + 1.14031 × GDP per capita

It implies that an increase of GDP per capita by 1 EUR will increase the price of one square meter by 1.14031 EUR. Hence, we can conclude that the fifth hypothesis was verified.
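To make the marginal-effect reading of the fitted equation explicit, it can be evaluated directly as a function. The snippet below is an illustrative sketch; the coefficients are copied verbatim from the regression equation for Hypothesis 5, while the function name is our own.

```python
# The fitted model for Hypothesis 5: price per m2 as a function of GDP per capita.
def price_from_gdp(gdp_per_capita_eur: float) -> float:
    return -158.226 + 1.14031 * gdp_per_capita_eur

# Because the model is linear, a 1 EUR rise in GDP per capita shifts the
# fitted price per square meter by the slope, 1.14031 EUR, regardless of
# the starting level:
delta = price_from_gdp(10001.0) - price_from_gdp(10000.0)
```

The same reading applies to every regression equation in this section: the slope is the fitted price change per unit change of the regressor.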
The sixth hypothesis aimed to verify the interrelation between population growth and real estate prices. The magnitude and direction of the correlation coefficient allow us to conclude that the association between house prices and population is weak and negative. The obtained P value (0.1401) is higher than 0.05 and allows us to conclude that the relationship between house prices and population is statistically non-significant. Meanwhile, the coefficient of determination (0.499129) implies that forty-nine percent of price changes can be explained by changes in population. The linear regression model:

Housing price = 12964.96 - 3.41 × population

It implies that a decrease of population by 1 person will increase the price of one square meter by 3.41 EUR. Hence, we can conclude that the hypothesis was not verified. To generalize, the increase of house prices in 2000-2011 was not necessarily driven by the decrease of population in Lithuania due to the high emigration rate.
The seventh hypothesis aimed to test the interrelation of a supply-side variable, construction costs, and housing prices. In our research we used the construction cost index provided by the Lithuanian Department of Statistics. The magnitude and direction of the correlation coefficient allow us to conclude that the association between house prices and construction costs is weak and positive. The obtained P value (0.2941) is higher than 0.05 and allows us to conclude that the relationship between house prices and construction costs is statistically non-significant. Meanwhile, the coefficient of determination (0.140676) implies that fourteen percent of price changes can be explained by changes in construction costs. The linear regression model:

Housing price = -2287.68 + 38.81 × construction costs

It implies that an increase of construction costs by one index point will increase the price of one square meter by 38.81 EUR. Hence, we can conclude that the hypothesis was not verified. To generalize, the increase of house prices in 2000-2011 was not necessarily driven by the increase of construction costs.
Energy prices are seen as important determinants of production, affecting its volume, growth rates and quality. To generalize, the growth of energy prices leads to the growth of prices of other goods and services. On the other hand, as indicated above, the growth of electricity and gas prices negatively affects the disposable income of households. Hence, the assumptions about the interrelationships of real estate prices and energy prices allow us to formulate the following hypotheses:
Hypothesis 8: The growth of oil prices will be associated with the decrease of real estate prices.
Hypothesis 9: The growth of electricity prices will be associated with the decrease of real estate prices.
Hypothesis 10: The growth of gas prices will be associated with the decrease of real estate prices.
The eighth hypothesis was tested using statistical data from 2000 to 2011 provided by the European Central Bank. Table 2 provides the obtained data for associations between oil prices and real estate prices. Taking the presented data into consideration, we can interpret the following. The magnitude and direction of the correlation coefficient confirm that the association between house prices and oil prices is strong. The obtained P value (0.0051) is lower than 0.05 and allows us to conclude that the relationship between house prices and oil prices is statistically significant. Meanwhile, the coefficient of determination (0.5610871) implies that fifty-six percent of price changes can be explained by changes in the oil price. The linear regression model:

Housing price = 310.22 + 31.32 × oil price

It implies that an increase of the oil price by 1 EUR will increase the price of one square meter by 31.32 EUR. Hence, we can conclude that the hypothesis was not verified. To generalize, the increase of house prices in 2000-2011 was driven by the increase of oil prices, which led to the increase of prices of other goods and services.
The ninth hypothesis was tested using statistical data from 2004 to 2011 provided by the European Commission. Table 2 provides correlation coefficients for associations between electricity prices and real estate prices. The obtained results (Table 2) about the associations between electricity prices for household consumers and real estate prices allow the following interpretations. Notably, the magnitude of the correlation coefficient allows us to conclude that the association between house prices and electricity prices is weak. The obtained P value (0.07059) is higher than 0.05 and allows us to conclude that the relationship between house prices and electricity prices is statistically non-significant. Meanwhile, the coefficient of determination (0.025461) implies that two percent of price changes can be explained by changes in the electricity price. The linear regression model:

Housing price = 2470.55 - 4178.32 × electricity price

It implies that an increase of the electricity price by 1 EUR will decrease the price of one square meter by 4178.32 EUR. Hence, we can conclude that the hypothesis was not verified.
The tenth hypothesis was tested using statistical data from 2004 to 2011 provided by the European Commission. Table 2 provides correlation coefficients for associations between gas prices and real estate prices. The obtained results about the associations between gas prices for household consumers and real estate prices allow the following interpretations. First, the magnitude of the correlation coefficient allows us to conclude that the association between house prices and gas prices is weak. The obtained P value (0.7260) is higher than 0.05 and allows us to conclude that the relationship between house prices and gas prices is statistically non-significant. Meanwhile, the coefficient of determination (0.022039) implies that two percent of price changes can be explained by changes in the gas price. The linear regression model:

Housing price = 2368.22 - 28.27 × gas price

It implies that an increase of the gas price by 1 EUR will decrease the price of one square meter by 28.27 EUR. Hence, we can conclude that the hypothesis was not verified.
Conclusions
The research was based on the prevailing scientific literature and analyzed the relationships between supply- and demand-side determinants and house prices using data from Lithuania for the period 2000-2011. We tested if and how supply- and demand-side determinants affect house prices. Our study established strong and positive relationships between house prices and GDP per capita, disposable income and the inflation rate. On the other hand, we found that the relationships between house prices and such determinants as construction costs and population are weak. Taking into consideration the growth of Lithuania's dependence on energy resources, we analyzed the relationships between house prices and energy prices, namely gas, electricity and oil prices. We established a strong relationship between house prices and oil prices, the growth of which led to the increase of prices of other goods and services. On the other hand, we found that the relationships between house prices and such energy prices as gas and electricity prices for household consumers are weak and negative. Hence, we can conclude that the growth of house prices is not driven by electricity and gas prices for household consumers. The limitations of the presented research relate to its scope: the situation of only one country was observed. Nevertheless, we could shed some light on the question of if and how fundamental determinants and energy prices are linked to house prices.
Lithuania, as well as the other Baltic States, enjoyed very strong economic growth. A close look at Figure 1 shows that from 2003 to 2007 GDP grew on average by almost 7% and was higher than the EU average. Statistical data on GDP allow us to conclude that the growth of Lithuania's economy in 2003-2007 was interrupted by the global financial crisis, which led to a sharp cumulative output decline in all Baltic States.
Fig. 1. GDP growth rate (%). Source: Eurostat.

Lithuania's economic growth in 2003-2007 drove changes in the labor market. For instance, the unemployment rate decreased significantly (Figure 2). In 2007 the unemployment rate was at its lowest, reaching 3.8% (Šileika, Bekerytė 2013). Hence, in the period of economic growth, wage growth and income tax reductions boosted household disposable income.
pes2o/s2orc · id 247093904 · created 2022-02-14 · CC-BY (gold OA) · https://www.ijbs.com/v18p1829.pdf
Ferroptosis in Cancer Progression: Role of Noncoding RNAs
Ferroptosis is a novel form of programmed cell death characterized by iron-dependent oxidative damage, lipid peroxidation and reactive oxygen species accumulation. Notable studies have revealed that ferroptosis plays vital roles in tumor occurrence and that abundant ferroptosis in cells can inhibit tumor progression. Recently, some noncoding RNAs (ncRNAs), particularly microRNAs, long noncoding RNAs, and circular RNAs, have been shown to be involved in the biological processes of ferroptosis, thus affecting cancer growth. However, the definite regulatory mechanism of this phenomenon is still unclear. To clarify this issue, increasing numbers of studies have focused on the regulatory roles of ncRNAs in the initiation and development of ferroptosis and on the role of ferroptosis in the progression of various cancers, such as lung, liver, and breast cancers. In this review, we systematically summarize the relationship between ferroptosis-associated ncRNAs and cancer progression; additional evidence is still needed to define the role of ferroptosis-related ncRNAs in cancer progression. This review will help us understand the roles of ncRNAs in ferroptosis and cancer progression and may provide new ideas for exploring novel diagnostic and therapeutic biomarkers for cancer in the future.
Mechanism of ferroptosis
Programmed cell death (PCD) is important for the balance between disease progression and human health [1]. Ferroptosis, a newly coined form of PCD discovered in 2012 [2], is distinct from apoptosis, necroptosis, pyroptosis and autophagy [3]. Many studies have revealed that ferroptosis is a specific oxidative and iron-dependent form of PCD caused by abnormal iron metabolism and lethal lipid peroxidation [4,5]. Moreover, some studies have demonstrated that autophagy plays a crucial role in ferroptosis, especially the autophagic degradation of ferroptosis-related proteins, as in ferritinophagy, lipophagy, clockophagy and chaperone-mediated autophagy [6,7]. Recently, an increasing number of studies have focused on the role of ferroptosis in various diseases [8,9], especially liver, lung, and gastrointestinal cancers [10]. By exploring the molecular mechanisms regulating ferroptosis more deeply, the relationship between ferroptosis and cancer progression will be better understood. Ferroptosis proceeds through iron accumulation, lipid peroxidation and cell membrane destruction, is regulated by specific signal transduction pathways, and can be modulated by drugs or genetic interventions [11] (Fig. 1). The main mechanism of ferroptosis involves the homeostasis between oxidative and antioxidant systems [12].
Iron in ferroptosis
Iron accumulation plays a critical role in producing ROS via the Fenton reaction and via enzymatic activity during lipid peroxidation. Although iron is essential in physiological processes, excessive iron is pernicious and can trigger ferroptosis. Ferroptosis is strictly regulated by modulators of iron metabolism, including iron intake, storage, usage, and release [13]. Serotransferrin- or lactotransferrin-associated iron intake promotes ferroptosis through the transferrin receptor (TFRC) [14,15]. Furthermore, oncogenic MYCN can induce iron accumulation by increasing the expression of TFRC [16]. The cargo receptor NCOA4 can activate autophagy to degrade ferritin, a process called ferritinophagy, leading to the promotion of ferroptosis [17]. In contrast, solute carrier family 40 member 1 (SLC40A1)-associated iron release inhibits ferroptosis [18]. With the degradation of ferritin, the level of intracellular iron rises, leading to ferroptosis [19], whereas ferritin-mediated iron sequestration and efflux inhibit ferroptosis. Evidence has revealed that the transcription factor BACH1 promotes iron accumulation and ferroptosis by repressing the transcription of the ferritin genes (Fth1 and Ftl1) and the ferroportin gene (Slc40a1) [20]. Some mitochondrial proteins associated with the usage of iron negatively regulate ferroptosis, such as NFS1 [21], ISCU [22], CISD1 [23] and CISD2 [24]. Moreover, iron accumulation can be regulated by signal transduction pathways. Nuclear protein 1 (NUPR1), a transcriptional regulator, blocks ferroptosis and decreases iron accumulation by increasing the production of the iron transporter LCN2 [25]. Recent studies have revealed that iron chelators and antioxidants can inhibit ferroptosis [26][27][28].
Lipid peroxidation in ferroptosis
Lipid peroxidation leading to cell membrane destruction is the central intermediate link of ferroptosis. The molecular mechanism of lipid peroxidation is that inhibition of system Xc− or glutathione peroxidase 4 (GPX4) leads to decreased production of reduced glutathione (GSH) [14]. The glutamate (Glu)/cystine (Cys2) antiporter of system Xc− is composed of solute carrier family 3 member 2 (SLC3A2) and solute carrier family 7 member 11 (SLC7A11) and imports extracellular Cys2 into cells in exchange for intracellular Glu. Moreover, some evidence has revealed that the cargo receptor SQSTM1/p62 can induce the autophagic degradation of ARNTL, known as clockophagy, and promote lipid peroxidation and ferroptosis via the EGLN2/HIF1A pathway [29,30].
Inhibiting system Xc−

System Xc−, which supplies the precursor for GSH synthesis, is one of the main antioxidant defenses in the body. When system Xc− is inhibited by erastin, Cys2 cannot enter cells, so intracellular Cys2 decreases. Because Cys2 is essential for GSH biosynthesis, Cys2 deficiency reduces the level of GSH, and the exhaustion of GSH in turn reduces the expression and activity of GPX4 [31,32]. Meanwhile, GPX4 oxidizes GSH to GSSG as it reduces toxic peroxides to nontoxic hydroxyl compounds, protecting the structure and function of the cell membrane from interference and destruction by peroxides [33]. Once the cellular redox balance is destroyed, the accumulation of ROS can induce cell membrane breakage and cell death [34]. Imbalance of system Xc− is one of the main biochemical features of ferroptosis, and regulation of this system is a primary way to control its occurrence.
Several studies have uncovered that ferroptosis can be induced by inhibiting system Xc− with certain compounds, such as erastin, sulfasalazine [4], sorafenib, and lanperisone [35]. Moreover, studies have identified genes that regulate the catalytic subunit of system Xc−. These genes regulate the transcription or translation of system Xc− subunits, such as SLC7A11 or SLC3A2, thereby affecting the biological processes of ferroptosis. The tumor suppressor BRCA1-associated protein 1 (BAP1) can inhibit SLC7A11 expression, leading to elevated lipid peroxidation and ferroptosis [36]. In some cancers, Kelch-like ECH-associated protein 1 (KEAP1) depresses the translation of SLC7A11 and reduces the Glu/Cys2 exchange, whereas NF-E2-related factor 2 (NRF2) plays the opposite role on SLC7A11 [37]. Hence, the Nrf2/Keap1 pathway promotes the translation of SLC7A11, leading to diminished ferroptosis [38]. In lung cancer, the RNA-binding protein RBMS1 promotes the level of SLC7A11 and increases the production of GSH, resulting in the inhibition of ferroptosis in cancer cells [39]. In terms of the relationship between autophagy and lipid peroxidation, progesterone receptor membrane component 1 (PGRMC1) suppresses SLC7A11 via autophagic degradation of lipids, known as lipophagy, and induces ferroptosis in paclitaxel-tolerant persister cancer cells [40]. P53, a well-known tumor suppressor, can inhibit the expression of SLC7A11, leading to lipid peroxidation and ferroptosis [41]. YTHDC2, an m6A reader identified in 2017, suppresses the expression of SLC3A2 by inhibiting HOXA13, a transcription factor driving SLC3A2 expression, to trigger system Xc−-dependent ferroptosis [42].
Inhibiting GPX4
GPX4, another main antioxidant defense, can directly reduce phospholipid hydroperoxides in the cell membrane to hydroxyphospholipids, using GSH as a substrate, thereby suppressing ferroptosis in cancer cells [43]. Inhibition of GPX4 by RSL3 induces ferroptosis [44]. Recently, many studies have revealed factors regulating the generation of GPX4. Boyi Gan and his team discovered that mechanistic target of rapamycin complex 1 (mTORC1), a master regulator of cell growth and metabolism, increased the production of GPX4 protein and reduced lipid peroxidation of the cell membrane [45][46][47]. Moreover, Fin56, a ferroptosis inducer, synergized with Torin 2 to promote GPX4 degradation and trigger ferroptosis in bladder cancer cells [48]. In clear cell renal cell carcinoma, KLF2 transcriptionally represses GPX4, promoting lipid peroxidation and ferroptosis [49]. Interestingly, erastin-induced ferroptosis increased the production of lysosome-associated membrane protein 2a and induced chaperone-mediated autophagy, which in turn increased the degradation of GPX4 and further promoted ferroptosis [50].
Genetic depletion of GPX4 causes lipid peroxidation and then induces ferroptosis in cancer cells or tissues [51]. In the process of lipid metabolism, arachidonic acid (AA) is converted to AA-CoA by acyl-CoA synthetase long-chain family member 4 (ACSL4), and AA-CoA is esterified by lysophosphatidylcholine acyltransferase 3 (LPCAT3) to produce phosphatidylethanolamine (PE)-AA [52]. PE-AA is oxidized to PE-AA-OOH by lipoxygenases (LOXs), leading to degradation of the cell membrane [53]. Cytotoxic PE-AA-OOH is usually reduced by GPX4 to noncytotoxic PE-AA-OH, protecting cells from oxidative damage. However, when GPX4 is deficient or inactivated, PE-AA-OOH cannot be reduced and thus induces ferroptosis [53]. Overall, the GPX4 system is also crucial for the occurrence of ferroptosis.
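The route just described, and the detoxification step it feeds into, can be written schematically (abbreviations as in the text; the GPX4 step follows the standard glutathione peroxidase stoichiometry):

```latex
\mathrm{AA}
  \xrightarrow{\ \text{ACSL4}\ } \mathrm{AA\text{-}CoA}
  \xrightarrow{\ \text{LPCAT3}\ } \mathrm{PE\text{-}AA}
  \xrightarrow{\ \text{LOXs}\ } \mathrm{PE\text{-}AA\text{-}OOH}

\mathrm{PE\text{-}AA\text{-}OOH} + 2\,\mathrm{GSH}
  \xrightarrow{\ \text{GPX4}\ } \mathrm{PE\text{-}AA\text{-}OH} + \mathrm{GSSG} + \mathrm{H_2O}
```

When GPX4 (or the GSH supply feeding it) fails, the hydroperoxide accumulates and propagates membrane damage.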
ROS in ferroptosis
In the process of lipid peroxidation, the lethal accumulation of lipid ROS can destroy the cell membrane, leading to ferroptosis [54]. ROS are produced in two main ways: the Fenton reaction with Fe2+ and lipid peroxidation. When the two antioxidant defenses, the GSH and GPX4 systems, are impaired, toxic ROS accumulate and induce cell death. Some studies have revealed that erastin causes the production of ROS in some cell lines [4,55]. Cells treated with RSL3 showed elevated lipid ROS during ferroptosis in the absence of GSH depletion. With prolonged erastin and RSL3 treatment, ROS begin to accumulate and induce cancer cell ferroptosis [4]. Furthermore, ROS are also generated from the TCA cycle of mitochondrial metabolism.
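The Fenton step referred to above is the classical reaction by which labile Fe2+ converts hydrogen peroxide into the highly reactive hydroxyl radical, which then initiates the lipid peroxidation chain (LH denotes a polyunsaturated lipid):

```latex
\mathrm{Fe^{2+} + H_2O_2 \longrightarrow Fe^{3+} + OH^{-} + {}^{\bullet}OH}

\mathrm{{}^{\bullet}OH + LH \longrightarrow H_2O + L^{\bullet}},
\qquad
\mathrm{L^{\bullet} + O_2 \longrightarrow LOO^{\bullet}}
```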
Inhibition of cancer progression by ferroptosis
Many studies have indicated that ferroptosis plays a crucial role in regulating the pathological process of cancer [56,57], and some have revealed that abundant ferroptosis of cancer cells can inhibit tumor progression. Several anticancer drugs can target ferroptosis-related molecules and channels, such as GPX4 and system Xc−, to induce ferroptosis in cancer cells and thereby inhibit cancer growth [5,58]. It was found that the ferroptosis inducer erastin could increase the chemotherapeutic effect of drugs such as cisplatin [59], cytosine arabinoside and doxorubicin [60] by inducing ferroptosis. Similarly, inactivation of dihydroorotate dehydrogenase led to extensive mitochondrial lipid peroxidation and induced ferroptosis in cancer cells [61]. Moreover, radiotherapy can cause cancer cells to produce lipid ROS, and the resulting lethal accumulation of lipid peroxides induces ferroptosis [62]. Hence, the induction of ferroptosis may become a promising strategy to treat cancer. Next, we discuss the relationship between ncRNAs and ferroptosis.
Function of ncRNAs in ferroptosis
Ferroptosis is related to the prognosis of many types of cancer, but little is known about its mechanism in cancer, especially the role of ferroptosis-related ncRNAs. Ferroptosis is tightly linked to noncoding RNAs (ncRNAs) and cancer [63]. NcRNAs, including microRNAs (miRNAs), long noncoding RNAs (lncRNAs) (Fig. 2), and circular RNAs (circRNAs) (Fig. 3), are involved in the underlying regulatory mechanisms of ferroptosis, including mitochondria-related proteins, iron metabolism, glutathione metabolism, and lipid peroxidation [64,65].
In terms of regulating ferroptosis-related genes, many ncRNAs play vital roles in controlling their expression [66]. Some ncRNAs regulate ferroptosis in cancer cells by affecting the protein levels of ferroptosis-associated genes, such as FSP1 [67], EIF4A1 [68], GABPB1 [69], GDPD5 [70], and CCL5 [71]. Other ncRNAs affect both the mRNA and protein levels of ferroptosis-associated genes, such as NRF2 [72], STAT3 [73], ATF4 [74], AURKA [75], and ITGB8 [76]. Furthermore, several miRNAs can act at the mRNA or protein level via m6A modification or epigenetic regulation of these genes, such as FSP1 [67] and AURKA [75]. Some circRNAs regulate these genes at the mRNA and protein levels by sponging miRNAs. Moreover, lncRNAs can regulate the expression of ferroptosis-related proteins by affecting p53 at the transcriptional level. For example, lncRNA P53RRA promotes the recruitment of p53 by interacting with G3BP1 and then regulates the function of ferroptosis-associated metabolic genes [77]. In addition, some ncRNAs can induce ferroptosis by regulating iron metabolism: many ncRNAs increase the cellular iron content [72,77-79], whereas several others decrease iron accumulation [80][81][82]. Moreover, miR-7-5p plays a vital role in downregulating mitoferrin, reducing Fe2+ [83] and inhibiting ferroptosis. In addition, many ncRNAs can affect ferroptosis via ROS metabolism through the Fenton reaction [69,73,79,80]. However, future studies should pay more attention to the molecular mechanisms linking ferroptosis-related ncRNAs to iron or ROS metabolism.
Regarding lipid peroxidation, ncRNAs regulate the subunits of system Xc− and GPX4, and several ncRNAs play important roles in lipid metabolism. Many ncRNAs regulate the protein level of SLC7A11 [78,84-87], and some circRNAs regulate SLC7A11 by sponging miRNAs. Furthermore, several studies have revealed that ncRNAs can increase SLC7A11 levels by promoting the recruitment of LSH to the SLC7A11 promoter [79]. With respect to GPX4, many studies have demonstrated that ncRNAs regulate GPX4 protein production [88][89][90][91][92], and a few have revealed that circRNAs can upregulate GPX4 mRNA levels [91]. Several studies have demonstrated that lncRNAs and circRNAs regulate GPX4 by interacting with miRNAs. For instance, lncRNA PVT1 induced the expression of GPX4 by inhibiting miR-214-3p [90], and hsa_circ_0048179 increased the level of GPX4 by sponging miR-188-3p [93]. Regarding the uptake of Gln, miRNAs can regulate the expression of Glu metabolism-related proteins and thereby affect ferroptosis. For example, miR-103a-3p inhibits the expression of glutaminase 2 (GLS2) and suppresses the conversion of glutamine to glutamate [94]. miR-9 can inhibit the mRNA and protein levels of glutamic-oxaloacetic transaminase 1 (GOT1) and thus Glu metabolism [95]. In addition, miR-137 inhibits glutaminolysis by suppressing the glutamine transporter solute carrier family 1 member 5 (SLC1A5) [81]. Moreover, several studies have revealed that ncRNAs can induce lipid peroxidation by regulating the translation of the fundamental enzymes for the biosynthesis of unsaturated phospholipids, such as ACSL4 [96,97], ALOX15 [80], and ALOXE3 [98]. In summary, ncRNAs play crucial roles in many biological processes of ferroptosis occurrence and development.
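The sponge logic invoked repeatedly above (a lncRNA or circRNA binds a miRNA, lowering the free-miRNA pool and de-repressing the miRNA's target, e.g. SLC7A11 or GPX4) can be illustrated with a toy equilibrium-binding model. This sketch is purely illustrative: the quantities, Kd, and repression constant are hypothetical and not taken from any cited study.

```python
# Toy titration model of the ceRNA ("miRNA sponge") mechanism.
# All concentrations are in arbitrary units; constants are hypothetical.

def free_mirna(total_mirna: float, sponge: float, kd: float = 1.0) -> float:
    """Free miRNA at equilibrium for M + S <-> MS with dissociation constant kd.

    From mass conservation, free m solves m^2 + (S - M + kd)*m - kd*M = 0.
    """
    b = sponge - total_mirna + kd
    disc = (b * b + 4.0 * kd * total_mirna) ** 0.5
    return (-b + disc) / 2.0

def target_output(mirna_free: float, repression: float = 0.5) -> float:
    """Relative target expression under simple hyperbolic miRNA repression."""
    return 1.0 / (1.0 + repression * mirna_free)

# No sponge: all miRNA is free, so the target is strongly repressed.
low_sponge = target_output(free_mirna(10.0, sponge=0.0))
# Abundant sponge: most miRNA is sequestered, so the target is de-repressed.
high_sponge = target_output(free_mirna(10.0, sponge=20.0))

print(f"target without sponge: {low_sponge:.3f}")
print(f"target with sponge:    {high_sponge:.3f}")
```

Raising the sponge level monotonically lowers free miRNA and raises target output, which is the qualitative behavior the ceRNA studies above invoke.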
Recently, increasing evidence has demonstrated that ncRNAs play an important regulatory role in cancer progression via the ferroptosis pathway [3,64] and might become new diagnostic markers or therapeutic targets in cancer. Hence, this review focuses on summarizing the regulatory roles of ncRNAs in cancer progression via ferroptosis of cancer cells (Table 1). There may still be obstacles hindering the exploration of ferroptosis-related ncRNAs in cancer therapy or diagnosis; we believe that a deep understanding of the interactions between ncRNAs and ferroptosis may help overcome these obstacles and improve strategies for cancer therapy and diagnosis.
Lung cancer
Lung cancer is the leading cause of cancer-associated deaths, with approximately 1.8 million deaths (18%), and it is the most common cancer in men worldwide, with an estimated 1.44 million new cases each year (14.3%) [99]. Lung cancer is divided into small cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC). NSCLC, including lung adenocarcinoma (ADC) and squamous cell carcinoma (SCC), is the most frequent type [100]. Although there are many therapeutic strategies for lung cancer, such as surgical resection, chemotherapy, and radiotherapy, the pathogenesis of lung cancer is still not definitively understood. Thus, identifying novel diagnostic markers and therapeutic strategies to inhibit the progression of lung cancer is a great challenge. Second-generation sequencing technology has been widely used in recent years and provides a very effective way to study disease-related genes and ncRNAs [101,102]; to a certain extent, it will help predict ferroptosis-related ncRNAs. Recently, miRNAs have attracted substantial attention in lung cancer, and several play important roles in chemotherapeutic resistance via ferroptosis. For example, exosomal miR-4443 could increase cisplatin resistance and reduce the therapeutic effect in NSCLC via METTL3/FSP1-regulated ferroptosis [67]. Moreover, Shi-hua Deng [88] demonstrated that miR-324-3p could reduce cisplatin resistance by inducing GPX4-mediated ferroptosis in ADC cells. Research has also revealed that miR-27a-3p could induce ferroptosis by suppressing the expression of SLC7A11 in NSCLC [86]. The regulatory roles of miRNAs in ferroptosis might contribute to an in-depth understanding of the mechanism of chemoresistance in NSCLC and uncover potential therapeutic methods for overcoming it.
Furthermore, lncRNAs also play relevant roles in lung cancer progression. In 2018, Chao Mao [77] demonstrated that the lncRNA P53RRA could directly interact with the functional domain of G3BP1, resulting in abnormal accumulation of p53 in the nucleus, inducing cell cycle arrest and ferroptosis and thereby inhibiting lung cancer progression. In contrast, the novel lncRNA LINC00336, which sponges miR-6825, was reported to serve as a competing endogenous RNA (ceRNA) and promote lung cancer proliferation by inhibiting ferroptosis [82]. In NSCLC, several ferroptosis-associated lncRNAs can inhibit tumor deterioration via ferroptosis. For instance, lncRNA miR-503HG may inhibit NSCLC progression via ferroptosis by sponging miR-1273c [103]. Similarly, Hong-xia Wu [96] showed that lncRNA NEAT1 upregulated the expression of the ferroptosis-associated gene ACSL4 and inhibited the worsening of NSCLC. Another study uncovered that lncRNA MT1DP, delivered in folate-modified liposome nanoparticles, could improve sensitivity to ferroptosis via the miR-365a-3p/NRF2 axis and might become a new therapeutic approach for NSCLC [72]; this work shows that nanoparticles interacting with ncRNAs can be used to treat cancers. Regarding ADC, lncRNA ASMTL-AS1 could inhibit cancer progression and promote ferroptosis by stabilizing SAT1 via recruitment of U2AF2 [104]. In addition, several ferroptosis- and iron metabolism-related lncRNAs have been identified as prognostic biomarkers of ADC [105,106]. Several studies have also focused on the role of circRNAs in NSCLC [107,108]; for example, circDTL reduced ferroptosis of cancer cells via the miR-1287-5p/GPX4 pathway [108].
However, more evidence is needed to explore the regulatory mechanism of ncRNAs in lung cancer via the ferroptosis pathway, and the regulatory roles of other ncRNAs in ferroptosis in lung cancers remain to be discovered.
Gastrointestinal cancer
Gastrointestinal cancer is also a leading threat to human health worldwide, with an estimated 1.69 million deaths (17.1%) [99]. It is divided into upper gastrointestinal cancers (UGCs), including gastric cancer (GC), and lower gastrointestinal cancers, including colorectal cancer (CRC). Because early symptoms of gastrointestinal cancer are poorly recognized, most patients miss the optimal therapeutic window in the early stage, and the disease has a huge adverse impact on families and society. Therefore, it is essential to explore the pathological mechanism of gastrointestinal cancer in depth and to uncover novel diagnostic markers or therapeutic methods for early diagnosis and treatment.
To explore the molecular mechanism of UGC, Ahmed Gomma [75] revealed that overexpression of miR-4715-3p could reduce Aurora kinase A levels, leading to G2/M delay, and inhibit GPX4, resulting in ferroptosis of UGC cells. Moreover, miR-139 could induce ferroptosis by targeting SLC7A11 via the PI3K/Akt signaling pathway and suppress the proliferation of pancreatic carcinoma [109]. miR-375 could reduce the regeneration ability of GC cells by triggering ferroptosis through targeting SLC7A11 [110]. In terms of chemotherapeutic resistance in GC, evidence has revealed that miRNAs play an important role [80,111]. For example, exo-miR-522 secreted by cancer-associated fibroblasts targeted ALOX15 to promote acquired chemotherapeutic resistance in GC by inhibiting ferroptosis in GC cells [80]. Moreover, Ying Niu [94] revealed that physcion 8-O-β-glucopyranoside, a chemical component of Rumex japonicus Houtt., could induce ferroptosis via the miR-103a-3p/GLS2 pathway and thereby suppress the proliferation and metastasis of GC cells.
Not only do miRNAs play regulatory roles in the proliferation and metastasis of gastrointestinal cancer, but lncRNAs also have important effects on its progression [112,113]. For example, Hua-jun Cai [114] constructed a signature of seven ferroptosis-related lncRNAs using a Cox regression model to predict the survival of colon adenocarcinoma patients. A growing number of ferroptosis-related lncRNAs may provide new insight into the mechanisms of ferroptosis in GC cells and the prognosis of GC patients [115][116][117]. Furthermore, circRNAs are involved in GC progression and may become novel therapeutic targets for the prevention and treatment of GC [118]. Chang Li [68] identified that circ_0008035 could promote GC cell proliferation and decrease iron accumulation and lipid peroxidation, inhibiting cell ferroptosis via the miR-599/EIF4A1 axis in GC cells; this finding may contribute to the discovery of novel therapeutic targets for GC. Moreover, circRNAs act as regulators of CRC progression via ferroptosis [119]. For instance, downregulation of circABCB10 promoted cancer cell ferroptosis via the miR-326/CCL5 axis and inhibited the progression of rectal cancer [71]. Similarly, another study [70] showed that circ_0007142 was upregulated in CRC and that its inhibition could promote apoptosis and ferroptosis of CRC cells, reducing cancer cell proliferation.
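A Cox-model lncRNA signature of the kind cited above is typically applied as a linear risk score (the model's linear predictor) followed by median-split stratification into high- and low-risk groups. A minimal sketch of that scoring step, in which the lncRNA names, coefficients, and expression values are hypothetical placeholders rather than those of the cited seven-lncRNA signature:

```python
# Scoring samples with a (hypothetical) ferroptosis-related lncRNA signature.

def risk_score(expression, coefficients):
    """Linear predictor of a Cox model: sum of coef_i * expression_i."""
    return sum(coefficients[g] * expression[g] for g in coefficients)

def stratify(scores):
    """Split samples into high/low risk groups at the median score."""
    ordered = sorted(scores.values())
    n = len(ordered)
    median = (ordered[n // 2] + ordered[(n - 1) // 2]) / 2
    return {s: ("high" if v > median else "low") for s, v in scores.items()}

# Hypothetical signature: three lncRNAs with Cox coefficients (log hazard ratios).
coefs = {"lncRNA_A": 0.8, "lncRNA_B": -0.5, "lncRNA_C": 0.3}

# Hypothetical expression values for two samples.
samples = {
    "patient1": {"lncRNA_A": 2.1, "lncRNA_B": 0.4, "lncRNA_C": 1.0},
    "patient2": {"lncRNA_A": 0.3, "lncRNA_B": 1.8, "lncRNA_C": 0.2},
}

scores = {s: risk_score(e, coefs) for s, e in samples.items()}
groups = stratify(scores)
print(groups)
```

In published signatures the coefficients come from fitting the Cox model to a training cohort, and the high/low groups are then compared by Kaplan-Meier analysis.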
Liver cancer
In addition to gastrointestinal cancer, liver cancer is the third leading cause of cancer death, accounting for approximately 8.3% of cancer deaths in both sexes [99]. The pathological types of liver cancer include hepatocellular carcinoma (HCC), cholangiocellular carcinoma, and mixed type, with HCC the most common. According to the Barcelona Clinic Liver Cancer staging system, the great majority of HCC patients are diagnosed after symptoms develop [120]; if the early therapeutic window is missed, the disease becomes very difficult to treat. Hence, it is necessary to study the pathogenesis of liver cancer more deeply, especially for moderate or advanced disease, and to explore novel therapeutic methods and accessible diagnostic biomarkers.
Several studies have focused on ferroptosis in liver cancer and explored its molecular mechanism in depth in recent years, including the role of ncRNAs in the regulation of ferroptosis in HCC [69,74]. For example, Tao Bai [74] revealed that miR-214-3p (miR-214) could enhance erastin-induced ferroptosis by targeting ATF4 in HCC: overexpression of pre-miR-214 increased the levels of malondialdehyde (MDA), ROS, and Fe2+ and reduced GSH levels in erastin-treated HepG2 and Hep3B cells, whereas anti-miR-214 had the opposite effect. Some studies have also begun to focus on the role of lncRNAs in the regulation of ferroptosis in HCC. In 2019, Wenchuan Qi et al. [69] found that lncRNA GABPB1-AS1 could reduce GABPB1 protein levels by blocking GABPB1 translation during erastin-induced ferroptosis in HCC cells, resulting in downregulation of PRDX5 protein. When PRDX5, which localizes to mitochondria to reduce peroxides and hydroperoxides [121], is inhibited, cell membrane integrity and cell viability are compromised. Moreover, lncRNA PVT1 was shown to accelerate the expression of GPX4 and inhibit ferroptosis via the miR-214-3p/GPX4 pathway [90]. To identify prognostic biomarkers in HCC, many ferroptosis-related lncRNAs have been validated and might become signatures for predicting the overall survival of HCC patients as well as novel therapeutic targets affecting HCC cell proliferation and invasion [122][123][124][125].
Similar to miRNAs and lncRNAs, circRNAs also play crucial regulatory roles in liver cancer cell ferroptosis. One study revealed that circIL4R, which is greatly upregulated in HCC tissues and cells, could inhibit ferroptosis by sponging miR-541-3p through the GPX4 pathway, promoting HCC tumorigenesis [91]. Similarly, a report from Zhiqian Liu [126] demonstrated that circ-cIARS was abnormally overexpressed after sorafenib treatment and could promote sorafenib-induced ferroptosis in HCC cells by suppressing the autophagy inhibition mediated by the RNA-binding protein ALKBH5. Moreover, circRNAs can act as ceRNAs to regulate liver cancer cell ferroptosis: Ning Lyu [84] identified circ0097009 as a ceRNA that sponges miR-1261 and upregulates the expression of SLC7A11, inhibiting HCC cell ferroptosis and promoting the invasion and metastasis of HCC cells.
Overall, ncRNAs may become potential therapeutic targets to treat HCC via the ferroptosis pathway.
Breast cancer
Breast cancer has surpassed lung cancer as the most frequently diagnosed cancer worldwide and causes the most cancer-related deaths among women, with an estimated 2.3 million new cases (11.7%) in both sexes and 0.68 million deaths (15.5%) in females [99]. Many therapeutic methods, including surgery, chemotherapy, and radiotherapy, are available to treat breast cancer, and several studies have examined ferroptosis-related genes as biomarkers for diagnosis, treatment, and prognosis prediction in breast cancer [66]. However, reducing the number of newly diagnosed cases and deaths from breast cancer still presents huge challenges.
To improve the therapeutic effects of drugs, several studies have revealed that some drugs exert anticancer effects by regulating ncRNA expression to affect cancer cell ferroptosis [89]. In this line of research, Yifeng Hou [89] discovered that metformin, a widely used antidiabetic drug, could induce ferroptosis by upregulating miR-324-3p and downregulating GPX4 in breast cancer, suggesting that metformin could become a potential anticancer drug. Another study discovered that miR-5096 could increase ROS, iron accumulation and lipid peroxidation by inhibiting SLC7A11, inducing ferroptosis [127]. In addition to miRNAs, some lncRNAs play an important role in regulating ferroptosis and cancer. For instance, Chao Mao [77] found that lncRNA P53RRA not only suppressed the progression of lung cancer but also inhibited breast cancer growth by promoting cell ferroptosis. The exploration of breast cancer biomarkers for predicting prognosis through bioinformatic analysis might help improve therapeutic strategies; many ferroptosis-related lncRNAs have been discovered and may become prognostic signatures or potential therapeutic targets for breast cancer [128,129]. In addition, Huiming Zhang and his colleagues [73] discovered that circRHOT1 could attenuate cancer cell ferroptosis through the miR-106a-5p/STAT3 pathway and promote the invasion and migration of breast cancer cells, worsening the progression of breast cancer. In HER-2-positive breast cancer, circGFRA1 could inhibit ferroptosis and promote cancer progression via the miR-1228/AIFM2 axis [130]. These findings may provide new insight into the regulatory mechanisms, but more studies of the roles of ncRNAs in breast cancer via ferroptosis are still needed.
Urogenital cancer
Urogenital cancer is also one of the most common cancers, although its numbers of newly diagnosed cases and deaths are lower than those of female breast cancer. It consists of bladder cancer, renal cancer, ureteropelvic cancer and urinary tract cancer; among these, bladder and renal cancer accounted for an estimated 1 million new cases (5.5%) and 0.37 million deaths (3.9%) worldwide [99]. Although some therapeutic methods exist, exploring novel diagnostic and therapeutic approaches is still essential.
To explore novel diagnostic and therapeutic targets, accumulating evidence has focused on the role of ferroptosis-related ncRNAs in the genesis, progression, and treatment of urogenital cancer [131]. In bladder cancer, lncRNA RP11-89 could induce tumorigenesis and reduce cellular iron accumulation via the miR-129-5p/PROM2 axis, leading to ferroptosis inhibition [132]. Moreover, several studies have revealed that some ferroptosis-associated lncRNAs could become prognostic signatures in renal clear cell carcinoma [133]. More studies are needed to uncover additional ferroptosis-related ncRNAs and explore their regulatory roles in urogenital cancer.
Prostate cancer
The number of new cases of prostate cancer is very high in males, second only to lung cancer at an estimated 14.1%, but the cancer-related death rate (approximately 4.5%) is lower than that of lung cancer (approximately 21.5%) [99]. The occurrence and progression of prostate cancer involve both genetic and environmental factors [134]. In a recent study, Yangyi Zhang [135] demonstrated that chronic cadmium exposure could promote cancer cell growth and inhibit ferroptosis by upregulating lncRNA OIP5-AS1 expression, with OIP5-AS1 acting as a ceRNA that sponged miR-128-3p to increase the level of SLC7A11. However, future studies are needed to reveal the regulatory roles of ferroptosis-related ncRNAs in inhibiting the progression of prostate cancer.
Cervical cancer
Cervical cancer is the fourth most common cancer in women, accounting for an estimated 6.5% of new cases, and ranks as the fourth leading cause of cancer-associated death in women, at approximately 7.7% [85,99]. Although many therapeutic methods are available, such as chemotherapy and surgery, cervical cancer still lacks adequate and effective treatments to improve the low survival rate and poor prognosis. Recent studies have focused on the potential regulatory role of ncRNAs in improving the prognosis of cervical cancer via ferroptosis. For example, Peng Wu [85] showed that circEPSTI1 promoted the proliferation of cervical cancer by acting as a ceRNA for miR-375, miR-409-3p and miR-515-5p, thereby upregulating their target SLC7A11, attenuating lipid peroxidation and the GSH/GSSG balance, and inhibiting ferroptosis of cervical cancer cells.
Ovarian cancer
Ovarian cancer is also a common gynecological cancer, with deaths ranking eighth among cancers in women [99]. The majority of ovarian cancers cannot be diagnosed at an early stage, and the 5-year survival rate is low; more studies should therefore be conducted to uncover the pathogenic mechanisms of ovarian cancer. Currently, some evidence has revealed that ncRNAs play an important role in suppressing ovarian cancer via the ferroptosis pathway [136,137]. One study revealed that miR-424-5p negatively regulated ferroptosis by directly targeting ACSL4, an overexpressed ferroptosis-related protein, in ovarian cancer cells, and that downregulation of miR-424-5p increased erastin- and RSL3-induced ferroptosis, resulting in inhibition of the progression of ovarian cancer [97].
Acute myeloid leukemia
Acute leukemia (AL), including acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL), is a serious malignant disease [138]. AL is a type of malignant clonal disease stemming from hematopoietic stem cells, and AML is very common in adults [138]. The five-year survival rate of patients treated with chemotherapeutics was 27.4% [139], lower than that of patients treated with stem cell transplantation. Moreover, drug resistance is one of the major problems in chemotherapy for AML. Therefore, it is necessary to explore the mechanisms of drug resistance to increase the sensitivity of AML to chemotherapy, and the abnormal expression of ncRNAs may be a key factor in drug resistance [140]. For example, Zuili Wang and his colleagues [79] revealed that lncRNA LINC00618 was downregulated in human leukemia and strongly increased by vincristine (VCR) treatment, and that it induced AML cell ferroptosis by increasing the production of ROS and iron and decreasing the expression of SLC7A11.
However, ncRNAs, including miRNAs, lncRNAs and circRNAs, require further investigation regarding the chemotherapeutic resistance of AML.
Glioma
Although less common than the cancers discussed above, glioma is the most common primary tumor of the central nervous system and accounts for approximately half of all primary intracranial tumors. The five-year survival rate of adult high-grade glioma is very low. According to the WHO classification, low-grade glioma is grade I or II, and high-grade glioma is grade III or IV among the four grades of glioma [141]; grade IV glioma is also named glioblastoma (GBM). Hence, it is essential to explore the regulatory mechanisms of glioma and novel therapeutic methods. Recently, some studies have reported that certain ncRNAs play important roles in inhibiting glioma via ferroptosis [76,98]. Xinzhi Yang [98] revealed that miR-18a accelerated glioblastoma progression by directly inhibiting ALOXE3-mediated ferroptotic and antimigration activities; deficiency of ALOXE3 in GBM cells resulted in resistance to p53-SLC7A11-associated ferroptosis and improved the survival of GBM cells. Moreover, it was reported that circRNA TTBK2 was upregulated in glioma tissues and cells and inhibited ferroptosis via the miR-761/ITGB8 axis to promote glioma proliferation and invasion [76].
Head and neck squamous cell carcinoma
Head and neck squamous cell carcinoma (HNSCC) is a series of malignant tumors involving many tissues in the head and neck region, including the oral cavity, nasopharynx and throat [142]. Although surgery, radiotherapy and chemotherapy are available, the morbidity of HNSCC has increased markedly in recent years, especially in women [99]. It is therefore particularly important to explore the regulatory mechanisms and uncover novel biomarkers and therapeutic targets to improve the outcomes of HNSCC. Bin Zhang revealed that miR-125b-5p could inhibit the expression of SLC7A11 and that enhancer of zeste homolog 2 (EZH2) inhibited ferroptosis via the miR-125b-5p/SLC7A11 pathway in tongue squamous cell carcinoma [87]. Moreover, Yun Tang [143] identified several ferroptosis-related lncRNAs that may become diagnostic biomarkers and potential therapeutic targets in HNSCC. Furthermore, some research has focused on the role of ferroptosis-related circRNAs in HNSCC; for example, circFNDC3B could inhibit cancer cell ferroptosis via the miR-520d-5p/SLC7A11 pathway in oral squamous cell carcinoma [78]. Another study revealed that circKIF4A could upregulate GPX4 and reduce the ferroptosis of thyroid cancer cells by sponging miR-1231, thereby promoting cancer progression [92]. Moreover, circ_0067934 reduced lipid peroxidation and ferroptosis of thyroid cancer cells via the miR-545-3p/SLC7A11 pathway [144]. However, more evidence is needed to establish the regulatory roles of these ncRNAs in HNSCC via ferroptosis.
Melanoma
In contrast to many other cancers, melanoma, which stems from melanocytes, is not common; nevertheless, it is the third most frequent malignant tumor of the skin. Melanoma lacks specific treatment options beyond surgical resection at an early stage. Hence, ferroptosis has become a focus of researchers, as this newly described form of cell death may contribute to inhibiting melanoma progression. Meiying Luo [81] revealed that miR-137 could inhibit the glutamine transporter SLC1A5 in melanoma cells and that suppression of SLC1A5 decreased glutamine uptake and MDA accumulation, so that miR-137 acted as a negative regulator of ferroptosis; accordingly, knockdown of miR-137 promoted ferroptosis and inhibited the progression of melanoma. In another study, Kexin Zhang [95] identified that overexpression of miR-9 inhibited GOT1, leading to reduced erastin- and RSL3-mediated ferroptosis, whereas suppression of miR-9 increased the levels of lipid ROS in melanoma cells, promoting ferroptosis and inhibiting melanoma growth [95]. However, other ncRNAs, including lncRNAs and circRNAs, should also be investigated to define the regulatory role of ferroptosis in melanoma.
Clinically relevant radioresistance
Like chemotherapy, radiation therapy (RT) is one of the most common cancer treatments. However, radioresistance decreases the therapeutic effect of RT, and the mechanism of radioresistance is not well understood. Recently, Kazuo Tomita [83] revealed that miR-7-5p plays a crucial role in radioresistance by controlling the intracellular Fe²⁺ content of clinically relevant radioresistant (CRR) cells, in which oxidative stress and ferroptosis are suppressed. In the future, more investigations will be required to uncover the mechanisms by which miRNAs modulate radioresistance in cancer cells via the ferroptosis pathway.
Conclusions and future prospects
Ferroptosis, a newly discovered form of programmed cell death, is related to a number of pathophysiological processes, especially many types of cancers. Numerous studies have focused on exploring the regulatory mechanisms of cancers via the ferroptosis pathway, and great progress has been made in defining the regulatory roles of ncRNAs in cancers through ferroptosis. Taken together, these findings contribute to a further understanding of the pathogenesis of cancers and demonstrate that ferroptosis-associated ncRNAs may act as tumor inhibitors to suppress cancer growth. In addition, ncRNAs, including miRNAs, lncRNAs and circRNAs, have the potential to serve as novel anticancer therapeutic targets and diagnostic biomarkers by regulating the ferroptosis of cancer cells (Table 2). Many studies have revealed that these ncRNAs play important roles in the progression of cancers via ferroptosis and that they may regulate the ferroptosis of cancer cells to induce or inhibit tumorigenesis. It has been demonstrated that lncRNAs and circRNAs frequently sponge miRNAs to regulate the expression of GPX4 and the lipid peroxidation of the cellular membrane; excessive lipid peroxidation destroys cancer cell membranes, resulting in the ferroptosis of cancer cells. In other cases, these ncRNAs directly target GPX4, SLC3A2 and SLC7A11 to induce ferroptosis in cancer cells, or regulate ferroptosis via the lethal accumulation of ROS and abnormal iron metabolism.
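The ceRNA "sponge" logic that recurs throughout these studies can be made concrete with a minimal equilibrium-binding sketch. This toy model is illustrative only (the 1:1 binding scheme, the hyperbolic repression term and all parameter values are assumptions, not taken from the cited studies): a circRNA or lncRNA that titrates a miRNA away from its target raises the free level of a ferroptosis regulator such as GPX4 or SLC7A11.

```python
import math

def free_mirna(m_total, sponge_total, kd):
    """Free miRNA at equilibrium for 1:1 miRNA-sponge binding.
    Mass action gives m^2 + m*(kd + S - M) - kd*M = 0; return the
    positive root of this quadratic."""
    b = kd + sponge_total - m_total
    return (-b + math.sqrt(b * b + 4.0 * kd * m_total)) / 2.0

def relative_target_level(m_total, sponge_total, kd, k_rep=1.0):
    """Relative abundance of a miRNA-repressed target (e.g. GPX4),
    using simple hyperbolic repression by the free miRNA pool."""
    m = free_mirna(m_total, sponge_total, kd)
    return 1.0 / (1.0 + m / k_rep)

# More sponge (circRNA/lncRNA) -> less free miRNA -> more free target;
# in this toy model a higher GPX4 level corresponds to ferroptosis inhibition.
for sponge in (0.0, 5.0, 50.0):
    level = relative_target_level(m_total=10.0, sponge_total=sponge, kd=1.0)
    print(f"sponge={sponge:5.1f}  relative target level={level:.3f}")
```

With no sponge the target stays strongly repressed; as the sponge concentration grows, the free miRNA pool collapses and the target recovers, mirroring the circRHOT1/circGFRA1-type behaviour described above.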
Due to the individual heterogeneity of ncRNA expression in different types of cancer, ncRNA-associated therapies and biomarkers may be applied to support personalized cancer treatment. Although an increasing number of studies have revealed regulatory links between cancer and ferroptosis, a deeper understanding of the mechanisms by which ferroptosis-related ncRNAs regulate the progression and growth of cancer is still needed, and future studies should pay more attention to the role of ncRNAs in the linkage between cancer and ferroptosis. Moreover, some biomaterials, such as nanomaterials, may overcome the shortcomings of conventional therapeutic schedules for tumor-targeted ferroptosis therapy by preloading antitumor drugs [145,146]. However, the regulatory relationship between these promising materials and ferroptosis-related ncRNAs is still under-researched and should be given more attention.
This review has summarized the regulatory roles of several types of ncRNAs in cancer progression and ferroptosis. These studies are beneficial for understanding the pathogenesis of cancer. Ferroptosis-related ncRNAs have great potential to act as anticancer therapeutic targets by regulating ferroptosis. Targeting these key ncRNAs may reveal novel therapeutic methods or diagnostic biomarkers to inhibit the growth and progression of malignant tumors.
|
v3-fos-license
|
2021-05-08T06:17:04.435Z
|
2021-05-07T00:00:00.000
|
233985445
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/fd/d1fd00004g",
"pdf_hash": "b44597eecddf4364c52722519097dc57eb81455c",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2654",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "71546de85c37a593e9c081a61920e0f742b3edf7",
"year": 2021
}
|
pes2o/s2orc
|
How bulk and surface properties of Ti4SiC3, V4SiC3, Nb4SiC3 and Zr4SiC3 tune reactivity: a computational study †
We present several in silico insights into the MAX-phase of early transition metal silicon carbides and explore how these affect carbon dioxide hydrogenation. Periodic density functional methodology is applied to models of Ti4SiC3, V4SiC3, Nb4SiC3 and Zr4SiC3. We find that silicon and carbon terminations are unstable, with sintering occurring in vacuum and significant reconstruction taking place under an oxidising environment. In contrast, the metal terminated surfaces are highly stable and very active towards CO2 reduction. However, we predict that under reaction conditions these surfaces are likely to be oxidised. These results are compared to studies on comparable materials and we predict optimal values for hydrogen evolution and CO2 reduction.
Introduction
The H- (Hägg) phase of over 100 ternary carbides/nitrides was first identified in the 1960s [1,2]. Owing to this set of novel properties [8-10], these materials are also called "metallic ceramics" [11]. Interestingly, this same mixture of properties is also present in the closely related monocarbide materials, which in recent years have received a lot of interest from the catalytic community.
(a) School of Chemistry, Cardiff University, Main Building, Park Place, Cardiff CF10 3AT, UK. E-mail: [email protected]
Methodology
Benchmark studies, using formation energies for these materials with both RPBE [45] and PBEsol [46], have produced consistent values for all methods, which validates our choice of methodology. Such a protocol has been shown to be sufficient for replicating experimental trends for both surface properties and reactivities of closely related carbide [26] and MXene materials [25]. Plane-wave basis sets are applied to valence electrons, with core potentials produced using the projector augmented wave (PAW) methodology [31] to describe the core electrons of each element. Long-range dispersion interactions were introduced via the D3 method [32,33]. Bulk structures for a-Ti4SiC3 [7] and a-Nb4SiC3 [34] were taken from the Inorganic Crystal Structure Database (ICSD) [35], whilst the MAX-phases of a-V4SiC3 and a-Zr4SiC3 were created by in silico modification of the former structures. The lattice parameters and the internal coordinates of each material were then fully optimized for all degrees of freedom and the (0001) basal plane was cut between the metal-carbon and metal-silicon layers using the METADISE code [36].
Since the materials consist of alternating layers of metal and either carbon or silicon, only two possible slabs can be constructed in this plane: both expose one metal layer, with one slab also terminated by carbon and the other terminated by the corresponding silicon surface. This sequence is due to the stoichiometry of these materials, with a 1:1 ratio of metal to either carbon or silicon. The primitive cell of each silicon carbide was replicated into a 2 × 2 × 2 simulation cell to give slabs with nine atoms in each layer and 16 atomic layers along the z-axis, for a total of 144 atoms per slab. Above each termination was placed 15 Å of vacuum. A fine Monkhorst-Pack grid with a 5 × 5 × 5 k-point mesh was applied to all bulk calculations, with surface properties being determined using a 5 × 5 × 1 mesh. In all cases, a plane-wave cutoff energy of 520 eV was used and a threshold of 0.01 eV Å⁻¹ was applied for geometry optimisations. Self-consistency (SCF) cycles were converged to within 10⁻⁵ eV and the Blöchl smearing method was applied for higher accuracy [37]. In all cases, half of each slab (the top 8 layers) was allowed to relax fully, whilst the bottom 8 layers were fixed to maintain their bulk-optimised positions. Spin polarization was also enabled to account for the magnetic moments in the materials and electron transfer during catalysis.
Adsorption energies for H2O, H2, O2, OH and CO2 are defined as:

E_ads = E(slab + adsorbates) − [E(slab) + Σ E(adsorbate, gas phase)]   (1)

where the sum of the energies of the pristine slab and gas-phase adsorbate(s) is subtracted from the total energy of the minimum-energy structure of the slab with adsorbates. Vibrational frequencies, calculated with the finite difference method, confirmed the global minima for all adsorbed species.
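The verbal definition above reduces to a one-line helper. This is a sketch; the energy values below are invented placeholders, not results from this work:

```python
def adsorption_energy(e_slab_plus_ads, e_slab, e_adsorbates):
    """E_ads = E(slab + adsorbates) - E(slab) - sum of gas-phase
    adsorbate energies. All energies in eV; a negative E_ads means
    exothermic adsorption."""
    return e_slab_plus_ads - e_slab - sum(e_adsorbates)

# Placeholder numbers for a single CO2 molecule on a slab.
e_ads = adsorption_energy(e_slab_plus_ads=-525.4, e_slab=-500.0,
                          e_adsorbates=[-22.9])
print(f"E_ads = {e_ads:.2f} eV")
```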
Bulk properties
MAX-phase silicon carbide crystal structures for a-Ti4SiC3 [7] and a-Nb4SiC3 [34] were taken from the Inorganic Crystal Structure Database (ICSD) [35]. Since no structures were available for a-V4SiC3 and a-Zr4SiC3, they were created in silico by exchanging the transition metal component and running full optimisations of both the lattice parameters and atomic coordinates. For a meaningful comparison, the same bulk references for the individual elements were used as previously reported for the carbides [26], with the addition of an elemental silicon reference [38]. Our results show that the addition of interstitial silicon layers into the carbide lattice has a relatively small effect on formation energies (see Table 1), with no consistent trend observed, although silicon addition does appear to have a very large effect on the relative lattice constants (Fig. 1), which is unsurprising since, at 210 pm, silicon has the largest atomic radius of any of the elements in the materials under investigation. This effect is most apparent when examining the relative surface areas of the carbide (111) facets.
Density of states
Total density of states plots for each of the materials are shown in Fig. 2. Importantly, all four materials are clearly metallic in nature, with some bands crossing the Fermi level. Decomposition of these states indicates that there is a high level of hybridization between the 3d or 4d orbitals of the metal and both the 2p states of the carbon and the 3s orbitals of the silicon. All these materials, therefore, show considerable amounts of M-C and M-Si covalency. These same general properties were also reported previously for the related monocarbide materials [26], and our results replicate previously reported electronic structure analysis for a-Nb4SiC3 [34]. There is also the same apparent increase in peak intensity in the group 5 over the group 6 metals that in carbides has been associated with a decrease in the strength of the M-C bond and an increase in ionic character [40]. This shift is primarily due to an increase in the number of valence d-electrons and also causes a negative shift in the position of the d-band centre relative to the Fermi level. Finally, the density of states in the conduction band is much higher in the silicon-containing carbides with d2 metals (i.e. the group 4 metals with d2s2 valence electrons) than for the d3 metals, which could indicate greater redox potential in the former. To a large extent, catalytic activity, as well as other surface properties such as the stability of different facets, can be linked to surface energies (σ) and work functions (Φ) [41]. We have calculated these properties for each silicon carbide by cleaving along the (0001) plane and allowing the uppermost half of the slab to relax; depictions of the morphology of each relaxed surface can be found in Fig. S1 (see ESI†). Interestingly, major reconstruction (leading to surface-mediated formation of elemental silicon) is observed when the silicon-terminated surfaces of the 3d silicon carbides are relaxed, which indicates that the M-Si bond broken is far weaker than the new Si-Si bonds formed. Relaxed surface energies (σ) were calculated using eqn (2) and (3):

σ_unrelaxed = [E_slab(unrelaxed) − n·E_bulk] / 2A   (2)

where A denotes the area of each surface, E_slab signifies the total energy of the slab and n denotes the number of bulk unit cells used for each slab. When calculating the energy to create two unrelaxed surfaces (eqn (2)), a factor of 2 appears in the denominator.
When only half the slab is relaxed, the relaxation component of the surface energy must be determined, as shown in eqn (3):

σ_relaxed = [E_slab(half-relaxed) − n·E_bulk] / A − σ_unrelaxed   (3)

here the area of only one surface is included in the denominator and the unrelaxed surface energy is subtracted. The computed σ for various facets of both the silicon-containing carbides reported here and their corresponding monocarbide equivalents are shown in Table 2. Previous work by our group showed that relaxing only the surface layers of the (111) facets of the corresponding monocarbides was critically important both in explaining the energetic differences between the two very different facets and in maintaining the constraints provided by the bulk properties [26]. From the values in Table 2, it is clear that surface energies for the carbon-terminated slabs in both the monocarbide and silicon carbide materials are very high.
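The two surface-energy expressions referenced as eqn (2) and (3) can be sketched numerically as follows. The eV Å⁻² to J m⁻² conversion constant is standard, but all slab energies below are invented placeholders:

```python
EV_PER_A2_IN_J_PER_M2 = 16.0218  # 1 eV/Angstrom^2 expressed in J/m^2

def sigma_unrelaxed(e_slab_unrel, n_bulk, e_bulk, area):
    """Eqn (2): cost of creating two unrelaxed surfaces, per unit area."""
    return (e_slab_unrel - n_bulk * e_bulk) / (2.0 * area)

def sigma_relaxed(e_slab_half_rel, e_slab_unrel, n_bulk, e_bulk, area):
    """Eqn (3): half-relaxed slab; only the one relaxed surface area
    appears in the denominator, with the unrelaxed surface energy
    subtracted so that only the relaxed face is counted."""
    s_u = sigma_unrelaxed(e_slab_unrel, n_bulk, e_bulk, area)
    return (e_slab_half_rel - n_bulk * e_bulk) / area - s_u

# Placeholder slab: 8 bulk units of -10 eV each, 10 A^2 surface area.
s_u = sigma_unrelaxed(e_slab_unrel=-78.0, n_bulk=8, e_bulk=-10.0, area=10.0)
s_r = sigma_relaxed(e_slab_half_rel=-78.5, e_slab_unrel=-78.0,
                    n_bulk=8, e_bulk=-10.0, area=10.0)
print(f"sigma_unrelaxed = {s_u * EV_PER_A2_IN_J_PER_M2:.2f} J/m^2")
print(f"sigma_relaxed   = {s_r * EV_PER_A2_IN_J_PER_M2:.2f} J/m^2")
```

Splitting the unrelaxed cost over two surfaces and charging the relaxation gain to the single relaxed face reproduces the half-relaxed slab bookkeeping described in the text.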
The silicon carbide (0001) slabs with both silicon and metal terminations have much lower surface energies, closer to the dominant (001) surface of the monocarbides than to the (111) facets. This result is perhaps unsurprising since, in the presence of adsorbates, the carbon-terminated (111) facet of early transition metal carbides has been shown to be extremely unstable and would therefore only be expected as a meta-stable phase under vacuum conditions [22,42]. Fig. S1† shows that there is a major reconstruction of both the silicon- and carbon-terminated surfaces upon relaxation. To test whether this major surface reconstruction also occurs in an oxygen-rich environment, we added a monolayer of eight oxygen atoms to each pristine termination; however, upon optimisation, various partially oxidised silicon and carbon species were formed in all cases (see Fig. S2, ESI†). Owing to their potential instability and relatively high surface energies, it is clear that the carbon- and silicon-terminated (0001) facets of these silicon carbides will be unstable under reaction conditions and that the surface morphology of the catalysts will be dominated either by pristine metal surfaces or by sintered coke/elemental silicon on top of a metal surface support. Therefore, unless otherwise stated, the remainder of this study will focus on the activity of the metal terminated (0001) facets.

Table 2: Relaxed surface energies (σ) and surface areas (a²) for each possible termination of the four silicon-containing carbides, compared to values for the (001) and metal terminated (111) surfaces of the corresponding monocarbides [26]. All surface energies are given in J m⁻², with the corresponding areas shown in Å².
Reactivity on the pristine surfaces
Hydrogen adsorption. We attempted to identify barriers for hydrogen adsorption by performing geometry scans that sequentially lower physically adsorbed hydrogen from the vacuum to the silicon carbide surfaces. To this end, each hydrogen molecule was fixed in the z-direction and sequentially lowered towards the surface, while allowing full relaxation along all other degrees of freedom. The resulting energies of these scans are shown in Table S3 (see ESI†). Whilst no local minima were found in such stepwise pathways to adsorption, the subsequent chemical adsorption step was shown to be a barrierless and extremely exothermic process. Since the same barrierless adsorption mechanism has been reported previously for the metal terminated (111) surfaces of the corresponding monocarbides [42], we will continue our study by examining hydrogen loading effects on these facets. Fig. 3 shows the adsorption energies for the most exothermic adsorption modes of hydrogen under various coverages on the pristine metal terminated surfaces of the four silicon carbides considered in this study. The trends reported here are very similar to those reported previously for the metal terminated (111) facets of the corresponding monocarbides [42], where it was demonstrated that the decrease in adsorption energies with higher loading of hydrogen is primarily due to electronic effects, whereby electron donation into the conduction band of the carbides increases the surface work function. The same phenomenon is observed here, with a steeper slope obtained for the silicon carbides with group 5 transition metals owing to their increased number of d-band electrons exacerbating this effect. Unfortunately, whilst smaller than those observed for the metal terminated (111) surfaces of the monocarbides (almost certainly due in part to the higher surface energies of those facets), the hydrogen adsorption energies here may still be too large, even at the highest loadings, for the adsorbed hydrogen to be useful for efficient CO2 reduction. This prediction is informed by our previous work, where very similar hydrogen adsorption energies on the (111) surfaces of the corresponding monocarbides make those facets less than ideal for hydrogenation processes under reaction conditions [42].

Carbon dioxide adsorption. The adsorption energies for the lowest-energy chemical adsorption modes of CO2 on top of the metal terminated (0001) facets of the four MAX-phase catalysts are shown by the orange bars in Fig. 4 and are considerably more exothermic than the energies previously reported for the monocarbide surfaces [22] for all materials except V4SiC3, which is calculated to have a very similar adsorption energy to that of the metal terminated (111) surface of VC. The values that we present here correlate more closely with values obtained for MXenes with M2C stoichiometry, for which adsorption energies of −3.69 eV for Ti2C and −3.16 eV for Zr2C were calculated [43]. Important geometric and electronic information about the lowest-energy adsorption modes for carbon dioxide on top of each silicon carbide surface is given in Fig. S5, ESI†. These data show that, whilst the average metal-oxygen bond distance between the surface and adsorbate is modulated almost completely by the ideal M-C bond distance, the amount of charge transfer (and corresponding elongation of the C-O bond length) varies depending on the periodic position of the parent transition metal. Importantly, the period 4 metals (Ti/V) see an increase of ~0.6 in the number of electrons transferred to the CO2 upon adsorption relative to the values observed for their period 5 equivalents (Zr/Nb), which corresponds to both greater activation of the adsorbate and a more exothermic adsorption energy. Additionally, modelling of MXenes with an M3C2 stoichiometry and d3 transition metals predicted much smaller adsorption energies of −2.19 eV for V3C2 and −2.35 eV for Nb3C2 [23]. Whilst the high carbon dioxide capture on the pristine facets is more reminiscent of the MXene phase than of the metal terminated (111) facets of the fcc-monocarbides, the adsorption energies are still slightly smaller than those on the MXenes. Such high energies in the MXenes are explained by the absence of a free (pristine) metal surface under either synthesis or reaction conditions. Indeed, all these materials are capped by a terminating layer of OH, O, F or H monolayers [44]. Therefore, the remainder of this paper will focus on the effects of monolayer formation on the metal facet of MAX-phase silicon carbides.
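The per-atom hydrogen adsorption energy plotted in Fig. 3 can be written out explicitly. This is a sketch with invented placeholder totals (in the study these would come from DFT slab calculations):

```python
def h_adsorption_per_atom(e_slab_nh, e_slab, e_h2, n_h):
    """Average H adsorption energy per atom at a coverage of n_h atoms,
    referenced to half the gas-phase H2 total energy (as in Fig. 3)."""
    return (e_slab_nh - e_slab - 0.5 * n_h * e_h2) / n_h

# Placeholder scan over increasing coverage: weakening (less negative)
# adsorption with loading mimics the electronic trend described above.
e_slab, e_h2 = -500.0, -6.8
for n_h, e_total in [(2, -509.0), (4, -517.2), (6, -524.8)]:
    print(n_h, round(h_adsorption_per_atom(e_total, e_slab, e_h2, n_h), 3))
```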
Atomic monolayer formation and reactivity
First, we consider solvation of the surfaces in an aqueous environment. The energy per molecule of eight water molecules adsorbed on each surface is shown in Fig. 5, and there is little variation between the values for each material, which range from −1.03 eV for V4SiC3 to −0.91 eV for Zr4SiC3. To create each layer, water molecules were initially placed in the most exothermic binding motifs (corresponding to the highly coordinated hollow sites), after which other molecules were sequentially added to fill each surface, whilst every attempt was made to maximise hydrogen-bonding networks between molecules. The extent to which this was possible was largely determined by surface structure, as is also shown in Fig. 5, where the bottom panels show the hydrogen-bonding energy per molecule, obtained after the surface was removed and a single point energy calculation was performed. These results show that for each silicon carbide surface, except for Zr4SiC3 (where the network is much weaker), the surface-mediated hydrogen-bonding networks maintain approximately half the energy of each hydrogen bond formed by water under atmospheric conditions. The anomalous value for Zr4SiC3 is possibly due to the larger distances between binding sites caused by the larger surface area.
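The two quantities reported in Fig. 5 (adsorption energy per molecule, and the hydrogen-bonding contribution obtained from a single point energy on the water layer with the surface removed) can be sketched with invented placeholder energies:

```python
def water_ads_per_molecule(e_slab_layer, e_slab, e_h2o, n=8):
    """Adsorption energy per water molecule for an n-molecule layer."""
    return (e_slab_layer - e_slab - n * e_h2o) / n

def hbond_per_molecule(e_layer_only, e_h2o, n=8):
    """Hydrogen-bond network energy per molecule: single point energy of
    the water layer (surface removed) minus n isolated molecules."""
    return (e_layer_only - n * e_h2o) / n

# Placeholder numbers for one surface with an eight-molecule layer.
e_h2o = -14.0
print(water_ads_per_molecule(e_slab_layer=-620.0, e_slab=-500.0, e_h2o=e_h2o))
print(hbond_per_molecule(e_layer_only=-113.6, e_h2o=e_h2o))
```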
Finally, we examined surface modifications under oxidising environments. Fig. 6 shows the morphology and energetics of a fully oxidised or hydroxylated metal terminated (0001) surface for each catalyst. The references for the surface formation energies are triplet oxygen and singlet hydrogen molecules in the gas phase. Importantly, the oxide and hydroxide layers on this surface are formed without major modification to the rest of the slab, which contrasts strongly with the results obtained using the carbon and silicon terminations (see ESI Fig. S2†).
The addition of other adlayers has an extremely exothermic, stabilising effect that again is in excellent agreement with previous experimental results from the related MXene phases [44]. More interestingly, by subtracting the energies for the formation of the oxide layers from those of the hydroxyl layers, we observe a large reduction in the hydrogen adsorption values. These latter energies are interesting because they are close to the values determined, via modelling of chemical potentials, to be optimal for the CO2 hydrogenation reaction over the monocarbide materials [22]. However, it is still unknown whether these hydroxylated surfaces will also maintain the other catalytic properties that make silicon carbide MAX-phases such exciting candidates for CCC and CCU processes.
Conclusions
The MAX-phases of early transition metal silicon carbides represent an interesting class of materials, owing to both their novel mix of metallic, covalent and ionic properties, as well as their structural similarities to related redox catalysts for carbon utilisation reactions.Our results demonstrate that the silicon and carbon terminations of these materials are unstable, with strong silicon-silicon and carbon-carbon bonds causing sintering and coking respectively, when undercoordinated atomic layers are exposed to the vacuum.In the presence of oxygen these terminations lead to the formation of partially oxidised silicon-and carbon-containing species as well as massive surface reconstruction.However, these materials can also terminate at an extremely stable metal (0001) facet that is very active towards CO 2 adsorption and activation.Unfortunately, these facets adsorb hydrogen far too strongly for them to be useful for the hydrogenation of carbon dioxide, although further calculations suggest that oxidised surfaces may have the ideal properties for this process.Finally, whilst these materials appear to show many promising characteristics as potential CCU catalysts, much more experimental and theoretical work is required to study their activity and selectivity towards the production of industrially useful chemicals.
Fig. 1
Fig. 1 Comparison of lattice constants (a0) for both the monocarbides and their comparable silicon carbides. All results are calculated using the PBE functional.
Fig. 3
Fig. 3 Adsorption energies for different hydrogen loadings on top of pristine metal terminated surfaces of four MAX-phase silicon carbides. Values are shown for: Ti4SiC3, blue diamonds (solid trend line); V4SiC3, brown circles (dotted and dashed trend line); Zr4SiC3, green squares (dotted trend line); and Nb4SiC3, purple triangles (dashed trend line). Energies are given in eV per atom with reference to gas phase molecular hydrogen. The number of hydrogen atoms added is given along the x-axis in multiples of two (i.e. indicating increasing molecules of H2 added).
Fig. 4
Fig. 4 Adsorption energies for chemically adsorbed carbon dioxide on the metal terminated (0001) basal plane of the four silicon carbides, shown by the orange bars. These values are compared to adsorption energies for the comparable (001) (blue bar) and metal terminated (111) (green bar) surfaces of the corresponding monocarbide catalysts [ref. 22].
Fig. 5
Fig. 5 Water monolayers on top of the exposed metal-terminated (0001) basal plane of the silicon carbides. (top) Adsorption energy in eV per H2O for eight water molecules adsorbed on each surface. (bottom) Hydrogen-bonding energy in eV per H2O for the water layer without the surface interactions.
Fig. 6
Fig. 6 Models of oxidised surfaces of the four metal-terminated silicon carbides. (top) Oxygen monolayer of nine oxygen atoms, with energies given in eV per O atom. (bottom) Hydroxylated surfaces with nine -OH groups per surface, with values given in eV per OH molecule.
Table 1
Energies of formation (Δ_fE^0) for each silicon carbide compared to the corresponding monocarbide material (with the same metal component). All values are normalised to eV per atom and compared with the experimental heat of formation. a PBE. b PBEsol. c RPBE. d Experimental values.
|
v3-fos-license
|
2021-07-24T06:16:49.935Z
|
2021-07-22T00:00:00.000
|
236200623
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0252871&type=printable",
"pdf_hash": "d6199c21e17119b2efbed0d1601bcc38aeed5936",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2656",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"sha1": "11cbb3cb43ce8cc3263019bcd16ed32a1a33f85c",
"year": 2021
}
|
pes2o/s2orc
|
Research on efficiency measurement and spatiotemporal disparity of rural public health services in China
Objective Based on the rural public health services of 29 regions in China, a service efficiency evaluation index system consisting of input and output dimensions was constructed, and the coupling, coordination and disparity of the efficiency of rural public health services in China were studied, providing information to redress the imbalance in interregional coordinated development. Methods The efficiency, spatiotemporal disparity pattern, spatial correlation and evolutionary trend of the coordinated development of rural public health services in 29 regions of China from 2004 to 2018 were analyzed using efficiency and spatial analysis methods such as Data Envelopment Analysis (DEA), kernel density estimation and Moran's I analysis. Results At present, the development of rural public health services in China faces problems of unbalanced and insufficient development between fields and regions. The development level of rural public health services in the various regions shows a distribution pattern in which the service efficiency is "high in the middle", "middle in the east" and "low in the west", indicating a spatial cluster effect.
Introduction and literature review
The increasing health needs and the impact of various public health events create a great demand for attention to the efficiency and level of health services. In recent years, China's health investment has continued to increase, and the level of basic health services has gradually improved. However, in general, health services in the vast rural areas still face severe challenges, and disparities in health services still exist between regions. In particular, the outbreak of the Covid-19 epidemic caused a great shortage of health supplies in rural areas of China, and some provinces and cities with severe epidemic conditions even faced shortages of health personnel and hospital beds, which posed a huge challenge to the national health system. Therefore, evaluating the allocation efficiency of rural health service resources in China's provinces is a prerequisite for an effective response to the epidemic. The ability of China's provinces to efficiently allocate the available health service resources is the key to resisting this epidemic. Hence, the evaluation of the efficiency and level of rural health services in the various provinces will help to further improve the ability to support and allocate health service resources. In summary, research on the efficiency measurement and spatiotemporal disparity of rural public health services in China is necessary. The spatial disparities in the level of health services have been widely studied. Some studies have found that the provision of health services shows varying degrees of spatial imbalance in both developed and less developed countries such as the United States, Spain, Germany and India [1][2][3][4][5]. A number of studies have been conducted in recent years on the coordinated development of public health services in China. Lu Zuxun, Xu Hongbin et al.
[6] pointed out that although the construction of China's health service system has made many achievements, the capacity of health services at the grassroots level is still insufficient. According to relevant studies, He Wenju and Liu Huiling [7] proposed that it is not uncommon for provinces, cities, and even regions to have unequal health resources; however, in recent years, improvements in the system, the economic system and the transportation environment have bridged the gap in health resources between provinces, cities and regions. In terms of index system construction, a series of variables has been selected to construct regional health service evaluation index systems. Ma Zhifei et al. [8] selected the number of skilled health personnel per capita and the number of beds in health facilities per capita as variables; Han Zenglin et al. [9] selected the number of daily visits per doctor, daily inpatients per doctor and daily inpatients per health personnel as variables; Huang Jingnan et al. [10] selected the number of doctors (assistant doctors) per 1000 rural population and the number of registered nurses per 1000 rural population as variables. On this basis, evaluation index systems were constructed. Hu Yujie [11] used the DEA model and the Malmquist index to analyze the level of health service provision in various regions, and concluded that it is steadily increasing across China. Some authors have also conducted comprehensive evaluations by constructing health evaluation index systems. In general, multidimensional output indexes, such as the service capacity of grassroots health institutions and quality medical staff, are first selected to construct an evaluation index system [12]. The Gini coefficient, Theil index, entropy-weighted TOPSIS, concentration index method and other research methods are then used to measure the efficiency of public services [13][14][15].
It is generally agreed that regional disparities obviously exist between different regions and provinces in China, and the spatial pattern generally shows the distribution characteristics of "high in the east and low in the west" [16].
A review of the existing references reveals that research on health service provision has made some progress in recent years. However, few references focus specifically on the regional disparities in health service provision. Research on the regional disparities in health service provision in China suffers from the following deficiencies: 1) The evaluation index systems are not well designed or comprehensive, and most of them lack results-based indexes, which may reduce the accuracy of the evaluation results; 2) Few studies have comprehensively and systematically discussed the distribution dynamics and regional disparities. More importantly, the lack of reasonable use of spatiotemporal analysis methods usually leads to incomplete analyses and inaccurate estimation results [17]. Therefore, based on previous studies and using China's provincial panel data, this study analyzed the level of public health services in rural China from the perspectives of time and space. Firstly, Data Envelopment Analysis (DEA) was used to quantitatively evaluate the level of health service provision in various regions of China from the perspective of time. Secondly, kernel density estimation was used to compare different years and obtain the temporal changes in the efficiency of health services in rural China. Then, Moran's I analysis was used to reveal the regional and spatial evolution of the level of health service provision. Hence, in this work, the efficiency of public health services in rural China was measured and studied in terms of time and space.
Selection of Data Envelopment Analysis (DEA) model
First of all, the purpose of the evaluation is to accurately understand the current status of regional public health services. Therefore, indexes that can reflect the effectiveness of rural public health services were selected. Based on the study of the index system and the summary of past experience, the cross-sectional data for one year were selected as input and output factors. The DEAP 2.1 software was used to process these data and obtain the evaluation value θ [18].
If there are n decision making units (DMUs), each unit has m types of inputs and s types of outputs, corresponding to "consumed resources" and "performance of work" respectively. X_ij (X_ij > 0; i = 1, 2, ..., m) denotes the amount of the i-th input of the j-th decision unit, and Y_rj (Y_rj > 0; r = 1, 2, ..., s) denotes the amount of the r-th output of the j-th decision unit, so that (X_j, Y_j) denotes the inputs and outputs of decision unit DMU_j. After introducing the non-Archimedean infinitesimal ε, the input slack variable S^- and the output slack variable S^+, the input-oriented BCC model for the k-th decision unit is:

min θ - ε(Σ_i S_i^- + Σ_r S_r^+)
s.t. Σ_j λ_j X_ij + S_i^- = θ X_ik,  i = 1, ..., m
     Σ_j λ_j Y_rj - S_r^+ = Y_rk,  r = 1, ..., s
     Σ_j λ_j = 1,  λ_j ≥ 0,  S_i^- ≥ 0,  S_r^+ ≥ 0

In the C²R model, θ is often referred to as the efficiency coefficient. If θ < 1 and S^- and S^+ are not all 0, the decision unit under evaluation is considered DEA inefficient, i.e., the existing outputs could be achieved with fewer inputs. If θ = 1 and at least one of S^- and S^+ is not 0, the decision unit is considered weakly DEA efficient. If θ = 1 and S^- and S^+ are both 0, the decision unit is considered DEA efficient, which means that with the existing outputs it is not possible to further reduce the inputs. Compared with the C²R model, the BCC model has the additional convexity constraint Σ_{j=1}^{n} λ_j = 1. If the optimal solution of this model is (λ*, S*^-, S*^+, θ*), DEA efficiency is determined theoretically as follows: 1. When θ* = 1, the j_0-th DMU is weakly DEA efficient; 2. When θ* = 1 and S*^- = 0, S*^+ = 0, the j_0-th DMU is DEA efficient.
Solving the above model, if θ = 1, then the decision unit DMU_j being evaluated is DEA efficient. This result means that DMU_j achieves the maximum output with the given inputs, or minimizes the inputs while achieving the given output target, so that resources are allocated efficiently. If θ < 1, then the decision unit DMU_j being evaluated is not DEA efficient. This result means that there is insufficient output or excessive input within the evaluation system composed of the n decision units, indicating that the optimal allocation of resources has not been achieved.
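The input-oriented BCC score θ described above can be computed as a small linear program. The sketch below is a simplified illustration (ignoring the non-Archimedean ε and the slack-based efficiency classification), not the DEAP 2.1 implementation used in the paper; the function name and the use of SciPy's `linprog` are our own choices.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, k):
    """Input-oriented BCC efficiency theta for DMU k.

    X: (m, n) input matrix, Y: (s, n) output matrix; columns are DMUs.
    Decision variables are [theta, lambda_1, ..., lambda_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.hstack([-X[:, [k]], X])           # sum_j lam_j x_ij <= theta * x_ik
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j lam_j y_rj >= y_rk
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # convexity: sum_j lam_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]
```

With one input and one output for three DMUs, a DMU lying inside the variable-returns frontier receives θ < 1, while frontier DMUs receive θ = 1.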
Kernel density estimation
The kernel density estimation method is a non-parametric method that estimates an unknown density function directly from the data sample, in essence by counting the (kernel-weighted) number of sample points around each evaluation point. For data X_1, X_2, ..., X_N, the kernel density estimate is expressed as [19]:

f(x) = (1 / (N h)) Σ_{i=1}^{N} K((x - X_i) / h)  (3)

where K(·) denotes a kernel function, X_i denotes the i-th sample value of the random variable, N is the sample size, and h is the bandwidth. The choice of bandwidth directly affects the estimation results; for instance, the smaller the bandwidth, the higher the accuracy of the estimation. In this work, the Gaussian kernel was chosen among the commonly used kernel functions, as shown in equation (4):

K(u) = (1 / √(2π)) exp(-u² / 2)  (4)
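As an illustration, the Gaussian kernel density estimate described above can be sketched in a few lines of Python (the paper itself used EViews for this step; the function names here are our own):

```python
import numpy as np

def gaussian_kernel(u):
    # Gaussian kernel: K(u) = exp(-u^2 / 2) / sqrt(2*pi)
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def kde(grid, samples, h):
    # Kernel density estimate: f(x) = (1 / (N h)) * sum_i K((x - X_i) / h)
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    u = (np.asarray(grid, dtype=float)[:, None] - samples[None, :]) / h
    return gaussian_kernel(u).sum(axis=1) / (n * h)
```

Because each kernel integrates to one, the estimated density integrates to one over a sufficiently wide grid, regardless of the bandwidth chosen.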
Spatial global correlation analysis
Global correlation means the correlation of spatial data over the entire space and is measured by the global Moran's I, which is used to test whether neighboring areas in the entire space are similar, dissimilar or independent. Moran's I is defined as follows [20]:

I = Σ_i Σ_j w_ij (x_i - x̄)(x_j - x̄) / (S² Σ_i Σ_j w_ij)

where n is the total number of regions, x_i denotes the observation value of the i-th region, x̄ denotes the average value over all regions, w_ij is the value of the spatial weight matrix defined above, and S² = (1/n) Σ_i (x_i - x̄)² is the sample variance. A spatial weight matrix for the 29 regions was first constructed, and the Moran's I was then used to study the global spatial correlation. In this work, a 0-1 contiguity weight matrix was used: the value is 1 if two regions are adjacent and 0 if they are not. Because Hainan is geographically an island, Guangdong and Hainan were treated as each other's neighboring regions when constructing the spatial weight matrix. Anselin (1995) proposed the local Moran's I to test whether there is clustering of similar or dissimilar observation values in local regions. The local Moran's I of region i measures the relationship between region i and its neighbors, and is defined as follows:

I_i = ((x_i - x̄) / S²) Σ_j w_ij (x_j - x̄)
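The global statistic above reduces to a few array operations given a value vector and a 0-1 contiguity matrix. The sketch below is our own minimal illustration (the multiplicative form with n in the numerator is algebraically identical to the S²-based form in the text):

```python
import numpy as np

def morans_i(x, w):
    # Global Moran's I:
    # I = n * sum_ij w_ij (x_i - xbar)(x_j - xbar)
    #       / (sum_ij w_ij * sum_i (x_i - xbar)^2)
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()                       # deviations from the mean
    num = n * (w * np.outer(z, z)).sum()   # cross-products of neighbours
    den = w.sum() * (z ** 2).sum()
    return num / den
```

On a four-region chain, clustered values (low-low-high-high) give a positive I, while alternating values give a negative I, matching the interpretation in the text.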
Analysis for the spatial cluster pattern of health service development
The local Moran's I indicates whether a specific region is correlated with its neighboring regions, and it also takes values between -1 and 1. When the local Moran's I of region i is greater than 0, region i and its neighboring regions have similar characteristics, i.e., they all show high-level or all show low-level development. On the contrary, when the local Moran's I is less than 0, region i and its neighboring regions have opposite characteristics, forming a high-low or low-high cluster. The spatial clusters of health service development in different regions were analyzed by calculating the local Moran's I defined above and drawing the local Moran's I scatter plot. The scatter plot shows the correlation between a region and its neighboring regions by plotting the position of each region on the two-dimensional plane (Y, WY), where Y is the actual development level of health services and WY is its spatially lagged level obtained from the spatial weight matrix.
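The quadrant classification on the (Y, WY) plane can be sketched as below. This is our own illustrative helper (not from the paper), using a row-standardised weight matrix so that the spatial lag WY is a neighbour average:

```python
import numpy as np

def local_moran_quadrants(x, w):
    """Local Moran's I and scatter-plot quadrant labels (H-H, L-H, L-L, H-L)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = (x - x.mean()) / x.std()           # standardised values (the Y axis)
    row = w.sum(axis=1, keepdims=True)
    wz = (w / np.where(row == 0, 1, row)) @ z  # spatial lag WY (neighbour average)
    local_i = z * wz                       # local Moran's I per region
    labels = np.where(z >= 0,
                      np.where(wz >= 0, "H-H", "H-L"),
                      np.where(wz >= 0, "L-H", "L-L"))
    return local_i, labels
```

On a four-region chain with values (low, low, high, high), the low end is classified L-L, the boundary low region L-H, and the high end H-H, mirroring the quadrant definitions given below.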
System construction and index selection
Measuring the effectiveness of health services in a region is part of the performance evaluation of the regional government, and this measurement can also help the government to better understand the actual effectiveness of public health services in rural regions and improve its own service capability to ensure the well-being of local residents. It is therefore particularly important to measure the effectiveness of public health services. A scientific, comprehensive and reasonable measure of the efficiency of public health services can reflect the service effectiveness of a region. A scientific and objective evaluation system for the effectiveness of rural public health services should consist of a number of interrelated indexes that objectively reflect the dynamics of the development of rural public health services. It is important to understand who is being evaluated, what is being evaluated, how it is being evaluated, and who is evaluating. On top of that, scientifically defining the range of evaluation indexes for the effectiveness of rural public health services is a prerequisite for scientifically evaluating health services. The evaluation content must take into account China's current national and regional realities. The scope of the evaluation should not be too extensive; otherwise the content becomes complicated, the workload heavy, the evaluation difficult to quantify, and the key elements difficult to identify. However, the scope of evaluation should not be too small either: an evaluation covering only one or two aspects of public health services is too narrow to support a comprehensive and scientific conclusion. Therefore, the indexes for evaluation should cover an appropriate range. The indexes should be quantifiable, with easily accessible data, and should be selected from representative, informative and relatively stable parts of the public health services.
This work investigated the situation of rural public health services from two aspects. On the basis of scientifically defining the relevant indexes and setting the score distribution, a relatively objective index reflecting the level of rural public health services in a region is constructed using input and output indexes.
Therefore, the measurement of the development of high-quality public medical and health services is a systematic project that needs to comprehensively consider the economy, society, ecological civilization, people's livelihood and health. The development of high-quality public medical and health services pursues not only a high-quality development process but also high-quality development results. One must grasp not only the current momentum of high-quality public medical and health services, but also the prospects and potential of service development as a whole and in its various parts. The performance of a local government in rural public health services is mainly reflected in input indexes such as the construction of infrastructure and clinics and the number of skilled health personnel and beds in rural health facilities per 1000 rural population. These indexes are directly related to people's perception of the health services. Therefore, it is important to have indexes that show the capacity of public health services. In order to directly show the capacity of the government in public health services, the following output indexes were chosen for this work: 1) the hospital discharge rate; 2) the bed utilization ratio; 3) the average days in hospital; 4) daily visits per doctor; 5) daily inpatients per doctor. This work thus constructed an evaluation index system for public health services in rural China with 10 input and output indexes, which are shown in Table 1.
Based on the above basic assumptions about the inputs and outputs of rural public health services, a comprehensive service evaluation system for rural public health services was constructed. The comprehensive service performance of the government was evaluated with the local input and output indexes. In summary, this work follows the principle that the relevant indexes should be as quantifiable, comparable, stable, influential and representative as possible.
Results on the measurement of the level of rural public health services
Based on the China's provincial panel data of 29 regions from 2004 to 2018, the DEA model was used to obtain the efficiency of rural public health services, and the details are shown in Tables 2 and 3.
The efficiency value calculated by the model lies within the interval [0, 1]; the closer it is to 1, the higher the efficiency, and a value equal to 1 means the DMU is DEA efficient. The above tables demonstrate that the great majority of regions in China show a low efficiency in rural public health services from 2004 to 2018. In addition, the data reveal that only Tianjin, Chongqing, Tibet and Ningxia show an average efficiency value equal to 1, reflecting efficient rural public health services; these account for 13% of the total. In other words, the service efficiency values of all the other regions are less than 1. The average efficiency value of rural public health services in our collected data is 0.928. Eighteen regions exceed the average value, accounting for 62% of the total, reflecting weak efficiency in rural public health services. The remaining 11 regions show inefficiency and a low level of rural public health services. These results indicate a huge disparity in service efficiency between regions; China's rural public health services still have great potential for improvement. In terms of geographic orientation, the average efficiency values are 0.918, 0.898 and 0.956 in the eastern, central and western regions, respectively. The service efficiency value of the western region is significantly higher than those of the eastern and central regions and the national average, showing the spatial characteristic that service efficiency is high in the eastern and western regions but low in the central region. This result is attributed to the fact that the eastern coastal regions have more complete health infrastructure and better qualified human resources, while the western regions have received attention in recent years and have continued to vigorously develop their economies and infrastructure to improve their service capacity. Therefore, both the eastern and western regions show a higher efficiency value in health services.
In terms of individual regions, the regions whose service efficiency value is higher than the national average account for 62% of the total, indicating that the efficiency of health services has been further improved in most regions. The four regions with the highest average efficiency are, in order, Tianjin, Chongqing, Tibet and Ningxia, three of which are in the western region. The five regions with the lowest average efficiency are, in order, Shandong, Jilin, Inner Mongolia, Liaoning and Shanxi, most of which are in the central region.
Temporal distribution of rural public health service efficiency
In this work, the kernel density estimation method was used to study the peak and distribution changes of China's rural public health service efficiency and to analyze its dynamic evolution characteristics. EViews 8 software was applied to the kernel estimation of China's rural public health service efficiency, and a two-dimensional graph of the kernel density was obtained, as shown in Fig 1. The typical years 2004, 2006, 2008, 2010, 2012, 2014, 2016 and 2018 were selected to draw the kernel density curves. The service efficiency is based on the results obtained from the DEA method, and the kernel density, which is in essence a probability density, mainly serves for comparison and reference. It is worth noting that the kernel density and service efficiency, which label the vertical and horizontal axes respectively, have no units owing to non-dimensionalization. Comparing the curves of different years, the temporal dynamic evolution characteristics of China's rural public health service efficiency can be obtained. In the curves, the change in the central position of the density function over the 8 years is not obvious. As the years progress, the kernel density distribution curve tends to gradually shift to the right. After 2010, the extent of the shift increases slightly, indicating that health service efficiency increased more strongly after 2010. In short, China's regional health services have been slowly improving. From the shapes of the curves, the 4 years on the left side of Fig 1 show smaller changes, while the years on the right side show the opposite trend. In 2004, 2006, 2008 and 2012, there are great differences among the curves, indicating a large gap in development between regions. The shapes of the whole distribution curves show a first steep and then gentle trend, and also reveal a low concentration.
From the view of the density value, the height of the main peak first rises and then declines, and the width of the main peak gradually becomes wider, which indicates that the polarization trend of the efficiency of rural health services in China has been gradually weakening. During the observation period, the main peak value of the kernel density curve increases significantly, but the small peak value tends to decrease. This means that the gap among the regions with low service efficiency in China is gradually increasing, while the gap among the regions with high service efficiency is decreasing. The interregional uncoordinated development is obvious. From 2004 to 2018, the center of the kernel density function did not change significantly. The highest peak value decreases year by year, and the peaks are wide, indicating that the distribution of health service efficiency is scattered. The wave of the kernel density curve shifts to the right, the vertical height of the peak decreases, and the horizontal width increases, which indicates that the index is increasing and the regional disparity becomes greater. In terms of the number of peaks, the kernel density distribution of China's service efficiency has always shown the characteristics of a "scattered double peaks" model. The double peaks shown in all the curves indicate an uncoordinated development of China's service efficiency, and the polarization of the interregional coordination index has always existed. The results of the global spatial correlation test are listed in Table 4. Clearly, the P values of the Moran's I statistic for health service development from 2004 to 2018 are all less than 0.1, indicating that the development of China's rural public health services has a significant spatial correlation across the entire set of regions. In terms of geographical location, there is a strong spatial dependency effect in the development of health services in different regions.
The development of health services of a region is not only related to its own development but is also positively affected by the neighboring regions, which is attributed to the mutual resource flows between regions, indicating a spatial spillover effect between regions in public health services.
Moran's scatter plot has four quadrants, where the first and third quadrants refer to positive spatial correlation and the second and fourth quadrants refer to negative spatial correlation. The specific meaning of each quadrant is as follows: 1) The first quadrant indicates that a region and its neighboring regions both have a high development level in health services, i.e., regions with well-developed public health services are adjacent to regions with well-developed public health services, showing a high-high cluster. This quadrant is denoted as H-H.
2) The second quadrant indicates that a region has a low development level in health services while its neighboring regions have a high development level, showing a low-high cluster. This quadrant is denoted as L-H. 3) The third quadrant indicates that a region and its neighboring regions both have a low development level in health services, showing a low-low cluster. This quadrant is denoted as L-L. 4) The fourth quadrant indicates that a region has a high development level in health services while its neighboring regions have a low development level, showing a high-low cluster. This quadrant is denoted as H-L, as shown in Figs 2-9.
Based on the above description, the corresponding results for the Moran's scatter plot are listed in detail in Table 5. The distributions of the regions in the four quadrants have remained almost the same over the nearly 15 years of health service development. Specifically, the regions in H-H are mainly Hebei, Shanxi, Jiangxi, Shandong, Henan, Hubei, Hunan, Guangdong and Shaanxi. These regions generally have a high level of development in public health services and are mostly relatively economically developed regions in the center of China; therefore, they show a high-high cluster. The regions in L-H are mainly Inner Mongolia, Liaoning, Anhui, Fujian, Chongqing, Guizhou, Yunnan and Qinghai. These regions have a low level of development in health services, but their neighboring regions have a high level of development. The only region in H-L is essentially Sichuan, which indicates that Sichuan itself has a high level of development in public health services, but the neighboring regions such as Tibet, Xinjiang, Qinghai
Conclusion and discussion
In this work, the development of China's rural public health services was studied. An evaluation index system for China's rural public health services was constructed. The DEA method was used to measure the level of rural public health services in 29 regions of China from 2004 to 2018, and the efficiency of China's public health services was then analyzed from the perspectives of spatiotemporal dynamic distribution and evolutionary trend. The conclusions are summarized as follows: 1. The health services in the areas with low supply levels have developed relatively faster. This is mainly due to the fact that the policies and measures adopted by the government to promote the equalization of basic public health services, such as strengthening the construction of public health institutions and guaranteeing the funding of public health services, can usefully supplement the areas with relatively poor health resources, while the effect is not obvious for the areas with relatively rich health resources. 2. The efficiency of rural health services in China is growing slowly. There is a large and increasing gap between regions in the efficiency, which indicates an uncoordinated development between regions, and the trend towards multi-polarity has long existed. 3. There are some characteristics in the evolution of service efficiency. Level transitions mostly occur between adjacent levels; the service efficiency level generally shifts to the next lower level, and the probability of cross-level transition is extremely low. In addition, the probability of upward transition is generally lower than the probability of downward transition. Level transitions at higher levels are more volatile, while level transitions at lower levels are smoother and more persistent.
4. There is a large gap in the level of health expenditure between regions, health resources cannot cross the gap in social welfare, and the mobility of resource elements is poor. Therefore, this paper suggests that the government should change its functions, break down administrative system barriers, and create conditions for the free flow of health resources, so as to realize the coordinated development of health services between regions and between urban and rural areas. Based on the findings of the study, the following discussion points are proposed: 1. Although the level of public health services in China is not uniform, after years of development the differences in the level of public health services among the three regions have been reduced, especially in the central and western regions. This result indicates that China's strategy of equalizing the level of public health services across regions has achieved initial success.
2. To achieve the goal of narrowing the regional disparities in public health expenditure, fiscal policy should focus on two aspects. On the one hand, China should further accelerate the reform of the fiscal system, promote economic development in backward areas through incentive fiscal policies, and expand the scale of regional economies, which would enhance the hard power for providing public health services. On the other hand, the central government's investment in public health care needs to further support the underdeveloped regions; public health investment in the central and western regions should continue to increase through transfer payments. To a certain extent, this would help to overcome the regional differences in the level of public health services caused by economic differences between regions. 3.
In addition to what has been studied in this paper, there are further directions worthy of research. This paper focuses on the efficiency measurement and spatiotemporal disparity of rural public health services in China. Beyond that, future research could start from different aspects, such as the differences in public health services between China and other countries and the reasons for these differences, which would help to find ways to promote public health services both in China and abroad. Moreover, this paper focuses on quantitative research in terms of research methods. In the future, more complex simulation analysis could be used to conduct more comprehensive and in-depth research on how multiple subjects interact with each other in public health services.
|
Cash Flows Discounted Using a Model-Free SDF Extracted under a Yield Curve Prior
We developed a model-free Bayesian extraction procedure for the stochastic discount factor under a yield curve prior. Previous methods in the literature directly or indirectly use some particular parametric asset-pricing model, such as one with long-run risks or habits, as the prior. Here, in contrast, we used no such model; rather, we adopted a prior that enforces external information about the historically very low levels of U.S. short- and long-term interest rates. For clarity and simplicity, our data were annual time series. We used the extracted stochastic discount factor to determine the stripped cash flow risk premiums on a panel of industrial profits and consumption. Interestingly, the results align very closely with recent limited information (bounded rationality) models of the term structure of equity risk premiums, although nowhere did we use any theory on the discount factor other than its implied moment restrictions.
Introduction
Under very mild conditions, there exists a scalar stochastic discount factor (SDF) process that generates moment restrictions on the returns (or cash flows) of traded securities. Knowledge of the SDF process allows one to check whether a particular security is priced consistently with other traded assets, and it allows the valuation of uncertain future cash flows on nontraded assets insofar as one is confident that the pricing implications extend appropriately. In the literature, there is a plethora of methods either to nonparametrically extract the SDF process from historical data or to evaluate particular theories on the SDF process, such as long-run risk models or habit models.
One issue associated with such approaches concerns the ex post implied level of real interest rates. Extant procedures use moment conditions based on asset returns, and the return horizons typically range from monthly to annual. The return series are like first-differenced asset prices and thus nearly white-noise processes; information on the levels of asset prices, and bond prices in particular, is negligible, leaving interest rates ill determined. Left on its own, an extracted SDF can give rise to somewhat implausible levels of real interest rates. As an example, for a recent long-run risk application, Christensen (2017) reports the long-term interest rate as rather high, at 7 percent per year; in additional computations using this author's code, we found that the entire yield curve from one year on out is essentially flat at just over 7 percent. We also encountered poorly determined real yield curves using the extraction procedure of this paper: absent prior information, the extracted yield curves shifted and bent in implausible configurations. As just noted, the moment conditions contain little, if any, level information, and external information from other sources needs to be imposed to discipline the SDF extraction.
An agreed upon fact is that U.S. real interest rates are very low. According to Campbell (2003, p. 812), the average short-term U.S. real rate was 0.896 percent over the sample period considered there, and few would have argued for higher real short-term rates since then. As for longer-term real rates, Figure 2 of Tesar and Obstfeld (2015, p. 8) indicates that the 10-year real rate of interest over the period 1930-2014 was often negative, generally fluctuated between 0 and 2.5 percent per year, and only briefly bumped 5 percent during the interwar era and again during the disinflation period of the early 1980s. Additional information from Treasury Inflation Protected Securities (TIPS) real yields, which are remarkably low, is seen in Table 1. In what follows, we implemented a Bayesian SDF extraction procedure subject to a prior that enforces these known low values for U.S. real interest rates. The method's mathematical foundation requires a prior to ensure all the random variables are actually defined on a proper probability space, and here, we elected to use the yield curve prior in place of a specific model of the SDF. Specifically, the prior centers the one-year yield at 0.896 percent with a standard deviation of 1.00 percent, and it centers the 30-year yield at 2.00 percent with a standard deviation of 1.00 percent. The prior generally accommodates both the levels and fluctuations in real rates suggested in the above discussion and by Table 1. This prior was maintained throughout the entire sample period, although it would be relatively easy to impose a time-varying prior with possibly higher yields in the earlier parts of the sample if reliable real rates were available to inform the development of such a prior.
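The prior just described is a product of independent Gaussian densities over the per-year model-implied yields. As a minimal sketch (the function and argument names below are ours, not the paper's), its log density can be written as:

```python
import math

def log_yield_prior(yields_1y, yields_30y, m1=0.896, s1=1.0, m30=2.0, s30=1.0):
    """Gaussian yield-curve prior: for every year t, independently center the
    model-implied one-year yield at 0.896% and the 30-year yield at 2.00%,
    each with a standard deviation of 1.00%."""
    def log_phi(x, m, s):
        # log of the normal density with mean m and standard deviation s
        return -0.5 * math.log(2.0 * math.pi * s * s) - 0.5 * ((x - m) / s) ** 2
    return sum(log_phi(y1, m1, s1) + log_phi(y30, m30, s30)
               for y1, y30 in zip(yields_1y, yields_30y))
```

A yield path sitting exactly at the prior centers attains the maximal log prior; paths that wander away are penalized quadratically, which is how level information enters the otherwise level-free moment conditions.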
Post-extraction, we used the dynamics of the SDF and related variables in a standard log-Gaussian pricing framework to value various cash flows, with a focus on the risk premiums on stripped cash flows, often termed dividend strips in the literature. The concept is simple: if, from the perspective of period 0, an investment pays off the uncertain stream {CF_t}_{t=1}^∞ into the indefinite future, then the stripped cash flow is the asset that pays just CF_t in period t > 0 and zero in all other periods. Recently, researchers have been investigating the term structure of the equity risk premiums on stripped cash flows to better understand the relationship between risk and reward at short- and long-term horizons. Asset-pricing models (van Binsbergen et al. 2012; Giglio et al. 2015) suggest that the term structure of risk premiums is upward sloping, with more distant cash flows earning higher risk premiums due to a long-run risk (Bansal and Yaron 2004) mechanism that makes investors fearful of volatility in the distant future. This prediction seems at odds with common sense intuition, but theory alone is not powerful enough to make an unambiguous prediction on the slope of the equity risk premium term structure. Backus et al. (2016) show how a wide range of levels and shapes of the term structures of claims can be achieved by modifying the dynamics of the pricing kernel, the cash flow growth, and their interaction. Empirically, discussions about the true average slope of the equity return term structure have not yet been settled (Cochrane 2017; Bansal et al. 2017; van Binsbergen et al. 2017), and reconciling asset-pricing models with the possible slopes of the term structure of equity returns has recently become a very active area of research. Of particular interest here is Croce et al. (2015), who developed a bounded rationality model with long-run risk that appears to explain our findings below.
Ex Post Stochastic Discount Factor
The ex post realized values of SDF_{t−1,t} were extracted annually for 1930-2015 using the methodology developed in Gallant and Hong (2007). The differences are threefold: the dataset is longer due to the passage of time, all of the Fama-French portfolios (Fama and French 1992, 1993) can be used because a missing data problem has been resolved, and the prior tilts values toward a specified yield curve instead of toward long-run risk dynamics (Bansal and Yaron 2004). In brief, the ideas are as follows.
For time t = 1, . . . , n = 86, where t = 1 corresponds to 1930 and t = n corresponds to 2015, denote the real gross returns on the 25 Fama-French 5 × 5 size and value portfolios with the vector R_st, denote the real gross annual return on the thirty-day T-bill with R_bt, denote the real per capita consumption growth with C_t/C_{t−1}, and denote the per capita labor income growth with L_t/L_{t−1}. Let x_t be a vector of length 28 containing these variables. Let x be an array with x_t as columns; x has the dimensions 28 by n = 86. Let θ_t denote the stochastic discount factor SDF_{t−1,t} and set θ = (θ_1, . . . , θ_86).
The vector θ is random and endogenous, so determining a likelihood p * (x | θ) for Bayesian analysis requires some care. The likelihood construction proceeds as follows.
We presume the existence of, but not knowledge of, a general equilibrium model with a financial sector. The general equilibrium model determines a joint probability space on which all the random variables that enter the model live and, hence, a marginal probability space on which the random variables (x, θ) ∈ X × Θ live. The marginal probability space determines a conditional distribution of x given θ. We presume that this conditional distribution has the density p o (x | θ). Ours is a partial equilibrium analysis, so any general equilibrium parameters that might be involved in an expression for p o (x | θ), were they known, do not affect our analysis and can be ignored or, to be pedantic, are fixed and calibrated by nature.
Let p(θ) denote the prior we intend to use for a partial equilibrium Bayesian analysis. The analysis will be with reference to the probability space (X × Θ, C_o, P_o). Denote the vector of the (conditional) moment-equation errors with

e_{t,t−1}(θ) = θ_t (R_st′, R_bt)′ − 1,  (1)

where 1 denotes a vector of 1's of length twenty-six. Define the instruments

V_{t−1} = (1, (R_{s,t−1} − 1)′, R_{b,t−1} − 1, C_{t−1}/C_{t−2}, L_{t−1}/L_{t−2})′,  (2)

where R_st − 1 denotes 1 subtracted from each element of R_st. Consider the moment conditions

m(x_t, x_{t−1}, θ_t) = V_{t−1} ⊗ e_{t,t−1}(θ),  (3)

where ⊗ denotes the Kronecker product, and their sample average

m̄_n(x, θ) = (1/n) Σ_{t=2}^n m(x_t, x_{t−1}, θ_t).

The length of the vector m(x_t, x_{t−1}, θ_t) is K = 754, so the number of overidentifying restrictions on θ_2, . . . , θ_86 is 669. Note that θ_1 is not yet identified because θ_1 does not appear in (3); it is identified by the prior as discussed later in this subsection.
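For concreteness, the mapping from a proposed θ_t and the data to the 754 moment conditions in (3) can be sketched as follows. This is a hypothetical helper under our own naming; the ordering of the Kronecker product and the exact instrument list are our assumptions:

```python
def moment_vector(theta_t, R_s, R_b, V_lag):
    """m(x_t, x_{t-1}, theta_t) = V_{t-1} (Kronecker) e_{t,t-1}(theta):
    pricing errors e = theta_t * gross return - 1 for each portfolio and the
    T-bill, interacted with every lagged instrument. In the paper there are
    26 errors and 29 instruments, giving 26 * 29 = 754 moments."""
    e = [theta_t * r - 1.0 for r in (list(R_s) + [R_b])]
    return [v * ei for v in V_lag for ei in e]
```

A θ_t equal to the reciprocal of every gross return would zero out the pricing errors exactly; in practice no such θ_t exists, and the Bayesian machinery below trades off these errors across assets and time.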
Following Gallant and Hong (2007), we assume that e_{t,t−1}(θ) has a factor structure. There is one error common to all the elements of θ_t R_st and twenty-six idiosyncratic errors, one for each element of (θ_t R_st, θ_t R_bt). Denote the resulting variance matrix with Σ_e (or with Σ_{e,t} if one wants to allow for heterogeneity, which makes no difference in what follows). A set of orthogonal eigenvectors U_e for Σ_e is easy to construct (Gallant and Hong 2007, p. 535) and can be used to diagonalize Σ_e. Similarly, U_v and Σ_v for V_t can be determined. Let H_t(θ) = (U_v ⊗ U_e)′ m(x_t, x_{t−1}, θ_t) with elements h_{i,t}(θ). Diagonalization implies that we can estimate the variance of H_t(θ) by a diagonal matrix S_n(θ) with the diagonal elements (1/n) Σ_{t=2}^n h²_{i,t}(θ). Let S_n^{−1/2}(θ) denote this matrix with the diagonal elements replaced by the reciprocals of their square roots.
The extraction of the ex post realization of the SDF is based on the random variable

Z(x, θ) = √n S_n^{−1/2}(θ) m̄_n(x, θ),  (4)

with a range Z defined on the aforementioned probability space (X × Θ, C_o, P_o). Z(x, θ) is the normalized sum of transformed draws (x_t, θ_t) and is asymptotically multivariate normal with a zero mean and identity variance under plausible regularity conditions on (X × Θ, C_o, P_o). Note, specifically, that θ_t is random and jointly distributed with x_t, so issues of uniformity in θ do not arise. Thus, it is reasonable to assume that Z follows the standard normal distribution Φ(z) with a density of φ(z).
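The random variable Z is just a studentized moment average: each averaged moment is scaled by its own estimated standard deviation and blown up by √n. A sketch with our own naming, assuming the diagonalized moments h_{i,t}(θ) are already in hand as rows of H:

```python
import math

def studentized_average(H):
    """H[t][i] = h_{i,t}(theta) for the n available periods. Returns
    z_i = sqrt(n) * mbar_i / sqrt(s_ii), where mbar is the sample average of
    the moments and s_ii = (1/n) * sum_t h_{i,t}^2 is the i-th diagonal
    element of the variance estimate S_n(theta)."""
    n = len(H)
    K = len(H[0])
    mbar = [sum(row[i] for row in H) / n for i in range(K)]
    s = [sum(row[i] ** 2 for row in H) / n for i in range(K)]
    return [math.sqrt(n) * mbar[i] / math.sqrt(s[i]) for i in range(K)]
```

Because each coordinate is scaled to unit variance, treating the result as standard normal is exactly the asymptotic approximation invoked in the text.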
The assumption that Z(x, θ) has a density of φ(z) induces a probability space (X × Θ, C, P), where C is the σ-algebra of preimages C = {C = Z^{−1}(B), B ⊂ Z, B Borel} and P[C = Z^{−1}(B)] = ∫_B dΦ(z). Define C* to be the smallest σ-algebra that contains all the sets in C plus all the sets of the form R_B = (X × B), where B is a Borel subset of Θ. Under a semipivotal assumption for (4) (namely, that {x : Z(x, θ) = z} is not empty for any choice of (z, θ) in the parameter space Θ and range space Z; a sufficient condition is that each element of Z is continuous in µ when some element x_it of x_t is replaced by x_it + µ for all t and is unbounded from above and below in µ), there is an extension of (X × Θ, C, P) to a space (X × Θ, C*, P*) on which the conditional density of x given θ is

p*(x | θ) = φ[Z(x, θ)]  (5)

(Gallant 2016a). This density is termed the "method of moments representation" of the likelihood and may be used for Bayesian inference in connection with the prior p(θ) (Gallant 2016a, 2016b). (Missing in (5) is a multiplicative Jacobian term that experience indicates has a negligible impact on computations when omitted (Gallant 2020).)
In short, the Bayesian method used in Gallant and Hong (2007) and here uses the moment conditions z = Z(x, θ) given by (4), takes (5) as the likelihood, and proceeds directly to Bayesian inference using a prior p(θ). Next, we describe the prior.
Let w_t = (log(θ_t), log(GDP_t/GDP_{t−1}))′, where GDP_t growth is observed for t = 1, 2, . . . , n = 86. GDP is not involved in the SDF extraction up to this point. It is now included as prior information regarding past business-cycle conditions. Consider the recursion

w_t = a + A w_{t−1} + u_t,  (7)

where the u_t are independent and bivariate normal with a zero mean and variance Σ_d. Markov chain Monte Carlo (MCMC) (Gamerman and Lopes 2006) is used in the Gallant and Hong (2007) method, which means that the proposed θ_t is available to compute w_t before the prior and likelihood need to be computed. From the w_t, the parameters of (7) can be determined by least squares. With the least-squares values replacing the parameters in (7), a yield curve for maturities one year through thirty years can be computed analytically from (7) conditional on a specified initial condition w_0; see Equations (12), (13) and (15) of Section 3. In particular, the one-year and 30-year yields, Y*_{1,t} and Y*_{30,t}, can be computed successively for w_0 = w_t, t = 1, . . . , n = 86. Our prior is

p(θ) = ∏_{t=1}^n φ[(Y*_{1,t} − 0.896)/1.00] φ[(Y*_{30,t} − 2.00)/1.00].  (8)

Note, in particular, that the prior identifies θ_1. With likelihood (5) and prior (8) in hand, Bayesian inference can be carried out using MCMC in the usual way; see, for example, Gamerman and Lopes (2006). After the transients died out, we ran an MCMC chain of length 8,000,000. The θ in the chain with the highest value of the posterior was selected as the estimate θ̂ of the ex post SDFs for the years 1930 through 2015. The estimate is plotted as Figure 1. The shaded areas are NBER recessions.
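To illustrate how a fitted recursion of the form (7) maps into the yields that enter the prior, the sketch below prices zero-coupon bonds by brute-force simulation: the t-maturity bond price is the expectation of the product of the first t SDFs. The paper instead uses the analytic formulas (12), (13), and (15); all names here are ours, and this Monte Carlo version is only illustrative:

```python
import math
import random

def mc_yield_curve(a, A, chol, w0, maturities, n_paths=4000, seed=0):
    """Yields (percent per year) implied by w_t = a + A w_{t-1} + u_t with
    u_t ~ N(0, chol chol'), where w[0] is the log SDF. The t-maturity
    zero-coupon price is E[exp(sum of the first t log SDFs) | w_0], estimated
    by averaging over simulated paths; the yield is -100 * log(price) / t."""
    rng = random.Random(seed)
    horizon = max(maturities)
    sums = {t: 0.0 for t in maturities}
    for _ in range(n_paths):
        w = list(w0)
        cum = 0.0  # running sum of log SDFs along this path
        for t in range(1, horizon + 1):
            z0, z1 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
            u = (chol[0][0] * z0, chol[1][0] * z0 + chol[1][1] * z1)
            w = [a[i] + A[i][0] * w[0] + A[i][1] * w[1] + u[i] for i in (0, 1)]
            cum += w[0]
            if t in sums:
                sums[t] += math.exp(cum)  # realized t-period discount
    return {t: -100.0 * math.log(sums[t] / n_paths) / t for t in maturities}
```

For example, a degenerate recursion that sets the log SDF to a constant −0.01 every period produces a flat yield curve at 1 percent per year, which is a useful sanity check on any implementation.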
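The MCMC loop itself is generic. A random-walk Metropolis step under our own naming, assuming a user-supplied log_post function (in the paper's setting this would sum the log method-of-moments likelihood (5) and the log yield-curve prior (8)), can be sketched as:

```python
import math
import random

def metropolis_step(theta, log_post, rng, proposal_sd=0.05):
    """One random-walk Metropolis update of a single coordinate of the SDF
    path theta, accepted with the usual log-ratio rule."""
    i = rng.randrange(len(theta))
    prop = list(theta)
    prop[i] += rng.gauss(0.0, proposal_sd)
    # 1 - random() lies in (0, 1], so the log is always finite
    if math.log(1.0 - rng.random()) < log_post(prop) - log_post(theta):
        return prop
    return theta
```

Iterating such steps yields the chain from which, after the transients die out, the highest-posterior θ is taken as the estimate; the paper reports a chain of length 8,000,000.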
Discounted Cash Flow Estimation
We now used the extracted SDF series to value cash flows for assets outside the span of returns used in the extraction step. For this part, we used annual data on corporate profits from various large sectors of the U.S. economy. We assembled annual data for seven sectors but for the shorter period 1959-2015, as data limitations precluded going any farther back. We also treated consumption as a cash flow, making a total of eight under consideration. These data were concatenated with the extracted SDF data and various macroaggregates for this shorter time span.
For the valuation step, consider the trivariate series

y_t = (log(CF_t/CF_{t−1}), log(GDP_t) − log(CF_t), log(SDF_{t−1,t}))′,  (9)

where CF_t denotes a cash flow payoff at time t, such as the annual corporate profits in year t; GDP_t denotes the gross domestic product; and SDF_{t−1,t} denotes the extracted stochastic discount factor θ̂_t of Section 2. Note that the second variable in the autoregression, log(GDP_t) − log(CF_t), plays no direct role in subsequent pricing, but it is included because it conveys information on future cash flows. The specification presumes co-integration between GDP and CF, which is discussed more fully in Section 4 below. The time zero present value of the cash flow CF_t is

PV_{0,t}(CF) = E[(∏_{s=1}^t SDF_{s−1,s}) CF_t | F_0],  (10)

where F_0 denotes the time 0 information set. (Note that in (10), the time zero value of the SDF must be unity; the time zero value of CF is irrelevant because we work in terms of ratios (see (19) below), and therefore, we will normalize it to be unity.) (In (10), the expectation E refers to the probability space (X × Θ, C_o, P_o) defined in Section 2. In (14)-(16) and thereafter, E refers to the VAR (11).) For a risk-free payoff of one real dollar at time t, the time zero present value is PV_{0,t}(1), obtained from (10) with CF_t replaced by 1. The series y_t is modeled with the VAR

y_t = b_0 + B y_{t−1} + e_t,  (11)

where the e_t are independent and trivariate normal with a zero mean and variance Σ_b. One can use (12) and (13), the conditional mean and variance formulas implied by the Gaussian VAR (11), together with (15) to compute the valuation operator PV_{0,t}(CF) in (14) and the expectation operator EV_{0,t}(CF) in (16), where E refers to expectation with respect to the VAR (11). We now describe the imposition of a yield curve prior on the estimation of the b_0, B, and Σ_b that appear in the VAR (11). Consider a state-space representation of VAR (11), given in (17). The estimation of (17) subject to the indicated parameter restrictions gives the same estimates b̂_0, B̂, and Σ̂_b as does the unconstrained estimation of (11). Let v_t denote the corresponding state vector, and note that the fourth and fifth elements of A x_t are sd f_{t−1,t} and ∆gdp_{t−1,t} = log(GDP_t) − log(GDP_{t−1}).
An implication is that we can insert the parameters b*_0, B*, and Σ*_b of (18) into Equations (12), (13), and (15) to compute the one-year yield, Y*_{1,t}, and the 30-year yield, Y*_{30,t}, with y_0 set to v_t successively for t = 1, . . . , n = 86, and impose the prior (8).
The computational procedure is, within an MCMC loop, for the proposed b 0 , B, and Σ b , to evaluate the likelihood implied by VAR (11); compute b * 0 , B * , and Σ * b as indicated by expression (18) from the proposed b 0 , B, and Σ b ; evaluate the prior (8); and use the likelihood and prior so computed to make the accept/reject decision of the MCMC loop.
Serendipitously, the state-space complications can be avoided because it turns out that the yields Y * 1,t and Y * 30,t obtained by applying Equations (12), (13) and (15) directly to the b 0 , B, and Σ b of VAR (11) are identical to those computed from the b * 0 , B * , and Σ * b of VAR (18). Apparently, the reason is that the only difference between the distributions of Av t and v t is the location parameter of their fifth element, and the location parameter of the fifth element is not involved in Equation (12), (13) or (15). For ourselves, we are more comfortable relying on having performed the computations using both (11) and (18) and obtaining identical results than relying on the distributional argument.
Valuation, Expectation, and Risk Premiums
Display (14) shows the valuation operator PV_{0,t}(CF) as the economic value in period 0 of the cash flow CF_t received in t; evidently, PV_{0,t}(•) is a linear operator on random variables realized at time t. Likewise, (16) shows the conventional statistically expected value operator EV_{0,t}(CF), also a linear operator on the same space of time t random variables where PV_{0,t}(•) operates. As usual, two linear operators on a space are connected via a Radon-Nikodym-style change in measure/density, which is the usual risk-neutral change of measure, noted but not used here.
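A brute-force way to see the two operators side by side is to simulate the VAR (11), price each stripped cash flow with the simulated SDF path, and also take its plain expectation. This is an illustrative sketch under our own naming, not the paper's analytic computation of (12)-(16):

```python
import math
import random

def strip_values(b0, B, chol, y0, T, n_paths=4000, seed=0):
    """Monte Carlo PV_{0,t}(CF) and EV_{0,t}(CF) for t = 1..T, with CF_0
    normalized to 1, under the trivariate Gaussian VAR
    y_t = b0 + B y_{t-1} + e_t, e_t ~ N(0, chol chol'), where
    y = (log CF growth, log GDP - log CF, log SDF)."""
    rng = random.Random(seed)
    pv = [0.0] * (T + 1)
    ev = [0.0] * (T + 1)
    for _ in range(n_paths):
        y = list(y0)
        log_cf = 0.0  # cumulative log cash flow
        log_m = 0.0   # cumulative log SDF
        for t in range(1, T + 1):
            z = [rng.gauss(0.0, 1.0) for _ in range(3)]
            e = [sum(chol[i][j] * z[j] for j in range(i + 1)) for i in range(3)]
            y = [b0[i] + sum(B[i][j] * y[j] for j in range(3)) + e[i]
                 for i in range(3)]
            log_cf += y[0]
            log_m += y[2]
            pv[t] += math.exp(log_m + log_cf)  # discounted payoff
            ev[t] += math.exp(log_cf)          # undiscounted payoff
    return [v / n_paths for v in pv[1:]], [v / n_paths for v in ev[1:]]
```

The gap between the two output series is precisely the risk adjustment: PV discounts each simulated payoff by the realized cumulative SDF, while EV does not.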
The (geometric) risk-free yield to maturity r^f_{0,t} at time 0 of the t-maturity zero-coupon bond is defined via the relationship e^{t r^f_{0,t}} × PV_{0,t}(1) = 1, because the invested amount PV_{0,t}(1) grows at the continuously compounded rate r^f_{0,t} up to $1 at time t. Equivalently, the rate is defined by r^f_{0,t} = −(1/t) log PV_{0,t}(1). Just as a coupon-bearing bond can be thought of as a portfolio of stripped payments valued as described immediately above, finance economists have become interested in "dividend strips", where the dividend asset that pays the owner the infinite stream {CF_s}_{s=1}^∞ is viewed as a portfolio of stripped payments. The value of each stripped payment is given by the pricing operation worked out above as PV_{0,t}(CF). Only if agents are neutral to risk would it be the case that PV_{0,t}(•) = e^{−t r^f_{0,t}} EV_{0,t}(•).
By analogy with the pure discount bond, we can define the geometric rate r_{0,t} at which the amount PV_{0,t}(CF) invested at time 0 grows continuously compounded to its statistically expected value at time t by way of

e^{t r_{0,t}} × PV_{0,t}(CF) = EV_{0,t}(CF).  (19)

Note that r_{0,t} is a number known at time 0 that pertains to a cash flow received at time t. The quantity

risk premium: r_{0,t} − r^f_{0,t}  (20)

is the excess (geometric) return over cash of the investment (stripped cash flow) that pays CF_t. The amount (20) represents the required rate of return above cash necessary to compensate for the economic risk embedded in the investment. To aid interpretation below, we recall from elementary asset pricing that in the iid log-Gaussian case, the one-period relationship is

EV_{0,1}(r_{0,1}) = r^f_{0,1} − Cov(∆c f_{0,1}, sd f_{0,1}),  (21)

using notation defined in (9). The risk premium in (21) is −Cov(∆c f_{0,1}, sd f_{0,1}), the fundamental notion in the finance of reward for bearing covariance risk. Expression (21) extends to cover (20) in the obvious way for the t horizon case.
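Putting (19) and (20) together, the premium is just an arithmetic transformation of the three numbers PV_{0,t}(CF), EV_{0,t}(CF), and PV_{0,t}(1); as a sketch, with our own function name:

```python
import math

def strip_risk_premium(pv_cf, ev_cf, pv_bond, t):
    """r_{0,t} - rf_{0,t}: the geometric rate at which PV_{0,t}(CF) grows to
    EV_{0,t}(CF), minus the riskless rate implied by the t-maturity bond
    price PV_{0,t}(1)."""
    r = math.log(ev_cf / pv_cf) / t    # defined by (19)
    rf = -math.log(pv_bond) / t        # defined by the bond relationship
    return r - rf                      # the premium (20)
```

For example, if a 10-year strip is priced at 70 percent of its expected payoff while the 10-year discount bond trades at e^{−0.2}, the premium is log(1/0.7)/10 − 0.02 ≈ 1.57 percent per year.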
Cash Flow Data
We now apply the preceding to the valuation of eight cash flows. There are seven real per capita industrial profit series and real per capita consumption of nondurables and services, annually, for 1959-2015. (For the cash flow sources and construction, see Appendix A.2 of the data appendix, Appendix A.) The eighth cash flow is obtained by treating measured consumption as a cash flow; i.e., we compute the risk premium on the (endowment) asset that pays out annual consumption. The basic statistics for the cash flows, labeled 1-8, are shown in Table 2. Some of the industrial cash flows are aggregates, but none is a complete aggregate of any of the others. For example, cash flow 1, Total Corporate profits, includes items such as transportation and utilities, which are not among the other categories because of a lack of consistent data over the entire sample period. The bottom section of Table 2 also shows the basic statistics for the extracted log-SDF and GDP growth processes.
These cash flows do not correspond to the payoffs of traded securities, but using the above methods, we can compute the risk premiums on the stripped cash flows using (19) and (20). Among other things, we can then examine issues such as the reasonableness of the risk premiums relative to the characteristics of the industries and their term structure at short- and long-term horizons.
The specification of (9) as a VAR presumes that log-cash flow and log-GDP processes are co-integrated with gdp t − c f t as the stationary error correction process. As a check, the first seven panels (excluding cash flow 8) of Figure 2 show time-series plots of the gdp t − c f t process for the seven industrial profit series, each of which appears to be reasonably treated as realizations of a stationary process.
We estimated autoregressions of the form

gdp_t − c f_t = c + ρ (gdp_{t−1} − c f_{t−1}) + u_t,

and for the seven industrial profit series, the estimates of ρ ranged between 0.71 and 0.93, with a median of 0.86, and generally, we rejected, quite strongly, H_0: ρ = 1 in favor of H_1: ρ < 1. For the consumption cash flow series, the results are different, as the consumption of nondurables and services grew steadily relative to GDP during the transition from a production to a service economy over our particular sample period. Thus, for the consumption cash flow, we cannot invoke co-integration, and we simply use gdp_t − gdp_{t−1} in place of gdp_t − c f_t as the predictor variable in the vector autoregression (11). The log consumption growth series is displayed in the lower-right corner of Figure 2.
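The stationarity check on the error correction term amounts to a simple AR(1) regression; a sketch (our own helper, not the paper's code):

```python
def ar1_ols(series):
    """OLS estimates (c, rho) in s_t = c + rho * s_{t-1} + u_t, used to judge
    whether a series such as gdp_t - cf_t behaves like a stationary process
    (rho well below 1) rather than a random walk (rho close to 1)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    rho = sxy / sxx
    return my - rho * mx, rho
```

Estimates of ρ around 0.86 with ρ = 1 rejected, as reported above, are what justify treating gdp_t − c f_t as a stationary error correction variable in (11).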
To verify the predictability, we estimated forecasting regressions of one-year-ahead cash flow growth on its own lag and the error correction variable gdp_t − c f_t for each of the seven industrial cash flows, and we found very strong evidence for additional predictability coming from the error correction variable. For consumption (the eighth cash flow), we performed the analogous regression with gdp_t − gdp_{t−1} as the second right-hand variable and found only mild evidence for additional predictability coming from it.
Risk Premiums
Using the methods described in Sections 3 and 4.1 above, we computed for each of the eight cash flows the implied risk premiums (20) at horizons t = 1, 2, . . . , 50 for each available year after lags. Since the risk premiums show negligible temporal variation, we only report and discuss the full sample averages. Table 3 shows the one-period computations, where the average Cov(∆c f_{t−1,t}, sd f_{t−1,t}) is the logarithmic SDF exposure of the cash flow. To ease interpretation, the table shows the average correlations as well. The negative of the covariance is the one-period risk premium calculated by way of (21) in this log-Gaussian framework. As seen from the table, all the cash flows carry positive risk premiums except for Retail Trade, which is plausibly seen to be a hedging cash flow. For easier interpretation, we convert the covariance exposure to beta exposure and plot the relationship between the risk premiums and the (negative) log-SDF beta exposure in Figure 3. The essentially exact linear relationship seen in the figure is a mechanical consequence of the computations, but nonetheless, it is interesting to note that the price of risk is very close to 0.05, meaning an increase in the average return of 5 percent per year per unit exposure of the cash flow to moves in −sd f_{t−1,t}. Far more interesting are the paths of the risk premiums in (20) for t forward into the future, which are shown in Figure 4 for t = 1, 2, . . . , 50. For the industrial cash flows 1-6, i.e., those other than the hedging cash flow, the risk premiums show some increase at short horizons but then decrease with the horizon, falling by a factor of one half from 1 to 50 years out; by contrast, for the consumption cash flow, the risk premium increases from about 1 percent per year to 4 percent per year when moving from 1 to 50 years out.
These results are exactly in line with what one would expect from Croce et al. (2015, p. 723). In their limited information (bounded rationality) model, there is long-run risk embedded in consumption, which under usual parameterizations thereby carries an increasing risk premium at longer horizons, just as seen in Table 2. There is also long-run risk embedded in individual cash flows, but it is obscured by a high level of cash flow noise that can be correlated with short-term consumption risk. Agents following optimal filtering rules thereby view the assets as much more covariance-risky in the short run than in the long run, and so we expect to observe higher risk premiums for the short run than the long run. It bears noting that these findings are model free in that no a priori economic theory of the discount factor was imposed in the estimation.
A final matter is the summability of the stripped cash flows, which relates to the issue of whether the asset that pays the entire stream {CF_s}_{s=1}^∞ is even sensible. Using basic computations such as those of Burnside (1998) for log-Gaussian models, in the iid case, the convergence of ∑_{t=1}^∞ PV_{0,t}(CF) is assured if condition (26) holds. In a general case, such as that considered by De Groot (2015), the summability conditions are more involved, as they involve interactions between conditional means and variances, but the basic intuition of (26) remains: the covariance between the cash flow growth and the log SDF has to be sufficiently negative to overcome any excess of the expected cash flow growth over the risk-free rate. In our case, the cash flows grow at implied rates around 2 percent per year, while the long-term interest rate prior centers on 2 percent, but the covariances in Table 3 are negative, so the sums converge numerically, albeit very slowly. An exception is Retail Trade, the hedging cash flow, where the average covariance is positive. Some numerical instability of the partial sums for this cash flow was seen if the range was extended to 100+ years, which is not surprising for an extrapolation so far beyond the range of the data.
Robustness
The objective of the paper was to adapt a Bayesian methodology (Gallant and Hong 2007) to a completely model-free data-only setting and then value important nontraded cash flows for the economic analysis of risk premiums. It only considers this specific objective and reaches interesting economic conclusions, but it is not exhaustive.
There are two issues regarding robustness that should be remarked upon: Is the methodology sensitive to the choice of assets? Is the methodology sensitive to the choice of prior?
As regards the assets, the methodology extracts the ex post SDF from the asset-pricing errors on 754 dynamic portfolio returns induced by instrumenting a smaller core set of asset returns. The core set, described in Appendix A, is representative of those used for evaluating asset-pricing models; see, for example, the review by Bryzgalova et al. (2020, p. 3). Methods for the extraction of an SDF can depend on the assets used for the extraction, e.g., Nieto and Rubio (2014). However, our surmise is that the large set of portfolio returns spans the factors on which all assets load, especially taking into account that the portfolios are dynamic and include instrumental variables such as consumption and labor income growth. Rather than to the assets, it is to the prior that the Gallant and Hong (2007) methodology is sensitive. This issue we have examined, as reported above.
A referee suggests other references that a reader might consider. Lewellen et al. (2010) and Ghosh et al. (2017) discuss other methods for the extraction of the SDF, rather than the valuation of nontraded cash flows; it would be interesting to extend these studies to our application, but that step is beyond the scope of this paper. Additionally, Gormsen (2020), Bansal et al. (2019), and Giglio et al. (2020) further examine evidence on the slope of the term structure of equity returns, without reaching a consensus. Our main finding concerns the contrast between the term structure of the risk premiums on consumption and that of the risk premiums on the industrial cash flows we considered.
Conclusions
We developed a model-free Bayesian procedure to extract the SDF process using a yield curve prior that enforces the historically-very-low U.S. short-and long-term interest rates. The prior thereby enforces known information, but no particular theory on the SDF process itself. Using annual data for 1959-2015, we used the extracted SDF to compute the implied stripped cash flow risk premiums on panel corporate profits for eight major industrial sectors and consumption. The magnitudes of the risk premiums on the stripped cash flows are plausible, and, with one exception, the risk premiums show a decreasing term structure for 1-50-year horizons. The exception is Retail Trade, which is found to be a hedging asset in the short run but not the long run. By contrast, the risk premiums on the stripped consumption cash flow are found to be positive and rather low in the short term but increase with the horizon to about 4 percent per year 50 years out. The observed term structures of the equity risk premiums generally confirm the limited information (bounded rationality) model of Croce et al. (2015).
Appendix A.1. SDF Extraction
In the extraction step, all the data were annual for the years 1930 through 2015. The raw data were converted from nominal to real using the annual consumer price index obtained from Table 2.3.4 on the Bureau of Economic Analysis web site. Conversions to per capita were performed by means of the mid-year population data from Table 7.1 on the Bureau of Economic Analysis web site.
The raw data for stock returns were value-weighted returns including dividends for the NYSE, AMEX, and NASDAQ from the Center for Research in Security Prices data on the Wharton Research Data Services web site (http://wrds.wharton.upenn.edu) (accessed on 23 February 2021). Likewise, the raw data for returns on U.S. Treasury 30-day debt were obtained from the Center for Research in Security Prices data on the Wharton Research Data Services web site.
Raw annual returns including dividends on the twenty-five Fama and French (1993) portfolios were obtained from Kenneth French's web site, http://mba.tuck.dartmouth.edu/pages/faculty/ken.french (accessed on 23 February 2021). The portfolios were the intersections of five portfolios formed on market equity and five portfolios formed on the ratio of book equity to market equity. The portfolios were for all NYSE, AMEX, and NASDAQ stocks for which equity data were not missing and book equity data were positive. The portfolios were constructed at the end of each June with breakpoints determined by the NYSE quintiles at the end of June. The complete details are on Kenneth French's web site. The advantage of the Fama-French portfolios here is that they appeared to isolate and exhaust the risk factors for holding equities (Fama and French 1992, 1993).
The raw labor income data were the "compensation of employees received" from Table 2.2 on the Bureau of Economic Analysis web site.
The Real Treasury Inflation Protected Securities (TIPS) yields displayed in Table 1 are annual averages of daily values from https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=realyieldAll (accessed on 23 February 2021). According to the Treasury, "These rates are commonly referred to as 'Real Constant Maturity Treasury' rates, or R-CMTs. Real yields on Treasury Inflation Protected Securities (TIPS) at 'constant maturity' are interpolated by the U.S. Treasury from Treasury's daily real yield curve. These real market yields are calculated from composites of secondary market quotations obtained by the Federal Reserve Bank of New York. This method provides a real yield for a 10 year maturity, for example, even if no outstanding security has exactly 10 years remaining to maturity".
Appendix A.2. Valuation
The raw data for the cash flow valuation step were annual data for 1959-2015, yielding 56 observations net of a provision for the initial lag.
The industrial cash flow data were annual corporate profits with inventory valuation adjustments and without capital consumption allowances for major sectors. The data were spliced together for consistency from Table B-6 of the 2017 Economic Report of the President and Table B-91 of the 2004 Economic Report of the President. The GDP data were from NIPA Table 1.1.5; the consumption of nondurable goods and services data were from NIPA Table 2.3.5; nominal data were converted to real using the implicit GDP deflator (Price Index for Gross Domestic Product) from NIPA Table 1.1.4. All the data series were converted to per capita using the total U.S. resident population plus armed forces overseas (annual averages of monthly estimates) obtained from FRED, https://fred.stlouisfed.org/ series/B230RC0A052NBEA (accessed on 23 February 2021).
Prevalence of Trypanosomiasis in Wild Rats in Banyuwangi District
The aim of this research was to determine the prevalence of trypanosomiasis in wild rats in Banyuwangi District. Sixty wild rats were trapped in four sub-districts (Banyuwangi, Srono, Songgon, and Tegaldlimo), in human residences, markets, and rice fields, from March until June 2017. Blood was taken after the rats were anaesthetized with ether. Parasites were examined in Giemsa-stained blood smears. Only 1 of the 60 blood samples showed Trypanosoma sp., giving a prevalence of trypanosomiasis in wild rats in Banyuwangi of 1.67%.
INTRODUCTION
Trypanosomiasis is a disease with an important role in human and veterinary medicine. It is caused by Trypanosoma sp., a protozoan parasite bearing a flagellum (Dobigny et al., 2010).
In Thailand, trypanosomes with morphological parameters quite similar to T. lewisi were found in an infant with fever, cough, and anorexia (Sarataphan et al., 2007). As mentioned by Shegokar et al. (2006), T. evansi was found in a human in India for the first time around 2004. These cases are possibly an effect of the presence of rodents and wild rats in human dwellings. As wild rat populations have grown, the animals increasingly live close to humans, and in some cases they become humans' main competitors for food, mostly through pre-harvest damage. According to Meerburg et al. (2009), crop losses in Malaysia have reached 5%, and the losses in Indonesia are far larger, reaching 17%. The large crop losses in both countries show that rodents can be a threat to the human food supply.
In Indonesia, trypanosomiasis is mostly found in livestock, infecting cattle; to date there have been almost no reports of trypanosomiasis in wild rats in Indonesia (Sim and Wiwanitkit, 2015). Studies on the prevalence of trypanosomiasis in wild rats caused by T. lewisi have been conducted in many countries since the mid-20th century, first in New Zealand and the USA; further infections have been reported from African and Asian countries, and in South America from Chile and Brazil (Linardi and Botelho, 2002).
According to Suwanti and Mufasirin (2014), who studied wild rat trypanosomiasis from 2011 until 2014, 7 out of 89 wild rats in Surabaya were infected with Trypanosoma sp.
Wild rats have become a serious problem for public health through rodent-borne diseases. They are recognized as vectors of many diseases, mainly zoonoses, and are known hosts of more than 60 zoonotic diseases that pose a top threat to human health (Blasdell et al., 2015; Meerburg et al., 2009). Among the rodent-borne diseases recorded worldwide (Nurisa and Ristiyanto, 2005), 14 are caused by protozoa. One of these is trypanosomiasis, which occurs mostly in the tropical areas of the world and can be transmitted to humans.
Banyuwangi District is also a suitable area for detecting trypanosomiasis in wild rats, because it is known as one of the endemic areas of trypanosomiasis: Sawitri et al. (2015) collected trypanosomiasis isolates from several areas of Indonesia chosen to represent endemic areas, including Banyuwangi. It has been decades since trypanosomiasis, or Surra, entered Banyuwangi District.
Banyuwangi District has a tropical climate, which is relevant to the occurrence of trypanosomiasis, a disease found mostly in tropical areas. In terms of territory and epidemiological conditions, the district therefore also has the potential to harbor trypanosomiasis in wild rats. The literature shows that many studies of parasite prevalence in wild rats have been conducted around the world, with long-running work in Taiwan, the United States of America, and Malaysia (Shafiyyah et al., 2012).
Therefore, since little research on this topic has been done in Indonesia, especially in Banyuwangi District, and in light of the background above, the authors considered that this research on Trypanosoma sp. infection of wild rats, which has zoonotic implications for humans, needed to be done in Banyuwangi District.
Samples
This research was a cross-sectional survey, using blood samples from 60 wild rats.
Preparation of blood collecting
Blood was collected after each wild rat was anaesthetized, by cardiac puncture using a 3 ml tuberculin syringe, and quickly transferred to an EDTA tube before clotting.
Preparation of blood smear
The blood smear was made in several steps. First, a drop of blood was placed with a pipette near the edge of the first object glass. A second object glass was then placed in front of the dropped blood, at an angle of about 45°, until the blood spread along its edge, and was pushed forward to form a thin layer of blood.
Preparation of blood staining
Blood smears were fixed in absolute methanol (96%) for 3 minutes. After fixation, they were stained with 10% Giemsa for 30 minutes in a staining jar, then washed with flowing water or sterile distilled water and dried.
Microscopic examination
Peripheral blood smears were observed under a microscope at 1000× magnification with oil immersion, and images were captured with OptiLab®.
Data analysis
Data were analyzed descriptively using the prevalence formula (the number of positive samples divided by the number of samples examined, expressed as a percentage).
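The descriptive calculation can be sketched as follows (a hedged illustration; the function name and the rounding to two decimals are choices made here, not taken from the paper):

```python
# Prevalence (%) = positive samples / examined samples * 100.
def prevalence(positives: int, examined: int) -> float:
    """Return prevalence as a percentage, rounded to two decimals."""
    if examined <= 0:
        raise ValueError("examined must be positive")
    return round(100.0 * positives / examined, 2)

# The study's figures: 1 positive out of 60 wild rat blood smears.
print(prevalence(1, 60))  # → 1.67
```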
RESULTS AND DISCUSSION
In this research, 60 wild rat blood samples were used to determine the prevalence of trypanosomiasis in wild rats in Banyuwangi District. The wild rats were taken from human residences, rice fields, gardens, and traditional markets in four sub-districts of Banyuwangi (Banyuwangi, Srono, Songgon, and Tegaldlimo) from March to June 2017. One of the 60 blood samples was positive, for a prevalence of 1.67%, as seen in Table 1 and Figure 1 below.
To give a clearer view of the Trypanosoma sp. captured in Figure 1, the morphology of the trypanosome found in the wild rat's blood is described in a sketch in Figure 2 below.
Based on the results, only 1 of the 60 blood samples taken from wild rats was positive, for an overall prevalence of 1.67% across the four representative sub-districts of Banyuwangi. This is lower than in Surabaya, where Suwanti and Mufasirin (2014), who examined 89 wild rats from 2011 until 2014, found a prevalence of 7.9%.
In the four chosen sub-districts, the one positive sample was found in a human residence area in Kalipait Village, Tegaldlimo Sub-district; no positives were found in the other sub-districts.
These figures show that results differ between areas and regions, indicating that the prevalence of trypanosomiasis in wild rats is affected not only by species or individual factors but also by external factors such as vector development and habitat conditions, including temperature and humidity.
According to Linardi and Botelho (2002), T. lewisi has a long, thin posterior end with a subterminal oval kinetoplast, and the nucleus is located in the anterior part of the body, with a free flagellum. The trypanosome found here has a large body; it might be present in an intermediate form, with migration of the kinetoplast and a granular body, a result that depends on the incubation period.
Only a single trypanosome appears in the figure; as mentioned by Desquesnes (2013), the form observed depends on the developmental condition. According to Shegokar et al. (2006), T. evansi was found in a human for the first time, and in 2010 a T. lewisi infection was found in a 37-day-old infant in India (Verma et al., 2011). Truc et al. (2013) also mention that human trypanosomiasis occurred in Malaysia in 1933 and in Sri Lanka in 1999, and that most of the records come from India.
Studies of trypanosomiasis infection in wild rats have rarely been done in Indonesia, and this was the first in Banyuwangi District. This should serve as a warning that trypanosomiasis infection in wild rats has the potential to spread to humans.
CONCLUSION
It can be concluded that the prevalence of trypanosomiasis in wild rats in Banyuwangi District was 1.67%, with T. lewisi the predicted species of the Trypanosoma sp. found. Trypanosoma sp. may still have the potential to cause outbreaks, especially in cattle, because Banyuwangi District is known as an endemic area of trypanosomiasis, or Surra, in cattle. It may also infect humans, since it was found in a human residence area.
Development of a Parcel-Level Land Boundary Extraction Algorithm for Aerial Imagery of Regularly Arranged Agricultural Areas
Boundary extraction of objects from remote sensing imagery has been an important research issue. The automation of farmland boundary extraction is particularly in demand for rapid updates of the digital farm maps in Korea. This study aimed to develop a boundary extraction algorithm by systematically reconstructing a series of computational and mathematical methods, including the Suzuki85 algorithm, Canny edge detection, and the Hough transform. Since most irregular farmlands in Korea have been consolidated into large rectangular arrangements for agricultural productivity, the boundary between two adjacent land parcels was assumed to be a straight line. The developed algorithm was applied over six different study sites to evaluate its performance at the boundary level and at the sectional area level. The correctness, completeness, and quality of the extracted boundaries were approximately 80.7%, 79.7%, and 67.0% at the boundary level, and 89.7%, 90.0%, and 81.6% at the area level, respectively. These performances are comparable with the results of previous studies on similar subjects; thus, the algorithm can be used for land parcel boundary extraction. The developed algorithm tended to subdivide land parcels with distinctive features, such as greenhouse structures or isolated irregular land parcels within the land blocks. The developed algorithm is currently applicable only to regularly arranged land parcels, and further study coupled with a decision tree or artificial intelligence may allow boundary extraction from irregularly shaped land parcels.
Introduction
Extracting object boundaries from remote sensing imagery has been applied to various fields, such as land planning [1], census studies [2], and cadastral map production [3,4]. In particular, accurate and up-to-date spatial analysis in agricultural areas is critical in terms of crop harvest management, resource planning, etc. Conventional methods based on extensive field investigation and manual analysis of images are time-consuming and costly. Automated analysis techniques based on remote sensing provide a cost-effective and efficient alternative, as they enable detailed investigation of a large area. Accordingly, the automation of the boundary extraction of object features has drawn attention from the various research fields of the environment, architecture, and agriculture.
Pixel-level classification of remote sensing imagery is insufficient to meet practical applications, despite the rapid advances in remote sensing technology.
This study's primary goal was to develop an algorithm that automatically extracts parcel boundaries from aerial imagery. In images where objects are clearly distinguishable from the background, object boundary extraction can result in correct land parcel delineation, even with a single application of a specific algorithm. However, for aerial imagery of paddies and uplands, it is challenging to derive correct land parcel results with only a single algorithm, due to the similarity of the image characteristics and the land distribution patterns. Therefore, it is necessary to use a set of algorithms that effectively delineate the agricultural land parcel boundaries.
Image Contour and Suzuki85 Algorithm
The contour of an object can be defined as a series of pixels among the object area's pixels, which are adjacent to the background area. Generally, the outermost pixel of a white object in a black background is identified as the outline. In the case of a hole-a black area that lies within a white object area-object pixels surrounding the hole can also be detected as outlines. Thus, the outline of an object can be divided into outer and inner parts. By applying these definitions to an image, an image contour can be defined as the boundary of an area with the same color or color intensity within an image.
The outline of an object can be represented by a series of points and, thus, saved in the form of a list of linked points using contour detection algorithms, such as the square tracing algorithm [27], Moore-Neighbor tracing [28], radial sweep [29], and Theo Pavlidis' algorithm [30]. Specifically, the Suzuki85 algorithm is based on a contour detection function in an open-source library supported by OpenCV [31]. However, the detection of contours with a raw image is ineffective as the individual pixels have different digital values. Thus, original images must be simplified for contour extraction efficiency by converting the pixel values into a binary format by applying a certain threshold value. The Suzuki85 algorithm sets a starting point of a contour by searching each row of a binary image and constructs contours by performing repeated recursive searches of adjacent points in both clockwise and counterclockwise directions [24,32].
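The border-following idea can be sketched in pure Python (a minimal Moore-neighbour-style tracer, not the Suzuki85 or OpenCV implementation; it assumes a single simple object without holes and stops when the starting pixel is revisited):

```python
def trace_outer_contour(img):
    """img: 2-D list of 0/1 values. Return the outer contour of the first
    object found, as a list of (row, col) points in clockwise order."""
    rows, cols = len(img), len(img[0])
    # 8-neighbourhood in clockwise order starting from "east".
    nbrs = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    # Row-by-row scan for the starting pixel, as in border-following schemes.
    start = next(((r, c) for r in range(rows) for c in range(cols) if img[r][c]),
                 None)
    if start is None:
        return []
    contour, cur, back = [start], start, 4  # pretend we arrived from the west
    while True:
        for k in range(8):
            d = (back + 1 + k) % 8          # resume search just past the back-direction
            r, c = cur[0] + nbrs[d][0], cur[1] + nbrs[d][1]
            if 0 <= r < rows and 0 <= c < cols and img[r][c]:
                cur, back = (r, c), (d + 4) % 8
                break
        else:
            return contour                  # isolated single pixel
        if cur == start:
            return contour                  # closed the loop
        contour.append(cur)

# A 2 x 2 block inside a 4 x 4 image: the contour is its four pixels.
demo = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(trace_outer_contour(demo))  # → [(1, 1), (1, 2), (2, 2), (2, 1)]
```

In the real pipeline this role is played by OpenCV's contour detection; the sketch only illustrates the clockwise neighbour search described in the text.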
Canny Edge Detection
Canny edge detection, developed by John F. Canny in 1986, is one of the most widely used edge detection algorithms. This algorithm includes a series of noise reduction processes, finding intensity gradients, non-maximum suppression, and hysteresis thresholding [25].
As edge detection is sensitive to image noise, a Gaussian filter can be used to reduce noise. In this study, a (2k + 1) × (2k + 1) Gaussian kernel was used, as presented in Equation (1) [33]:

G_{ij} = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{(i-(k+1))^2 + (j-(k+1))^2}{2\sigma^2}\right), \quad 1 \le i, j \le 2k+1, (1)

where G_ij is the (i, j) element of the Gaussian kernel, and σ is the standard deviation of the Gaussian kernel elements. The larger the Gaussian kernel size, the lower the detector's sensitivity to noise. A 5 × 5 kernel (σ = 1.4) was applied, as presented in matrix (2):

G = \frac{1}{159} \begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix}. (2)

By operating Equation (3) on each pixel of the image, Gaussian blurring was performed and noise was removed; the weight decreases as the distance (u, v) between the central position (x, y) of the kernel and the kernel position (x + u, y + v) increases:

M_{xy} = \sum_{u=-k}^{k} \sum_{v=-k}^{k} G_{u+k+1,\, v+k+1}\, f(x+u,\, y+v), (3)

where M_xy is the (x, y) element of the processed image obtained from the data matrix f. The gradient of the image from which the noise had been removed by the Gaussian filter was calculated by applying the 3 × 3 x- and y-direction Sobel kernels [34] (matrix (4)):

G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}, (4)

where G_x and G_y are the x- and y-direction Sobel kernels, respectively. The filters were applied using Equation (3), in the same way as the Gaussian filter. After that, the edge gradient and direction of each pixel were calculated using Equations (5) and (6):

G = \sqrt{G_x^2 + G_y^2}, (5)

\theta = \arctan(G_y / G_x), (6)

where G is the edge gradient of each pixel, and θ is the direction of the edge gradient. The entire image was then searched along the gradient direction to find the pixels with locally maximal gradient values, which represent the edge; pixels with values smaller than the maximum were removed by setting them to zero (Figure 1).
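The windowed filtering step can be sketched as follows (an assumed minimal implementation, not the paper's code): the same sliding-window sum serves for both the Gaussian and the Sobel kernels, shown here with the Sobel x kernel on a synthetic vertical edge; border pixels are left at zero for brevity.

```python
def filter2d(img, kernel):
    """Correlate a (2k+1) x (2k+1) kernel with a 2-D list image.
    Border pixels (within k of the edge) are skipped and stay zero."""
    k = len(kernel) // 2
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for x in range(k, rows - k):
        for y in range(k, cols - k):
            out[x][y] = sum(kernel[u + k][v + k] * img[x + u][y + v]
                            for u in range(-k, k + 1) for v in range(-k, k + 1))
    return out

sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
gx = filter2d(img, sobel_x)  # strong response along the vertical intensity step
```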
Figure 1. A graphic example of non-maximum suppression, where A, B, and C represent three gradient points. If the value at point A is the largest, then A becomes an edge pixel, and points B and C are nullified.
Hysteresis thresholding was used to determine whether the result of non-maximum suppression represents a definite edge. Two threshold values were set, the 'high' and 'low' thresholds. Edges composed of pixels with values greater than the 'high' threshold were considered valid, while edges with values smaller than the 'low' threshold were considered invalid and, thus, removed. When a pixel value was between the 'high' and 'low' thresholds, the edge was considered valid only if it was connected to a pixel with a value above the 'high' threshold.
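The double-threshold rule can be sketched in a few lines (an illustrative pure-Python version, not the paper's implementation; the `low`/`high` values below are example choices): pixels above `high` seed edges, and pixels between `low` and `high` survive only if 8-connected to a seed.

```python
def hysteresis(grad, low, high):
    """grad: 2-D list of gradient magnitudes. Return a 2-D boolean edge map."""
    rows, cols = len(grad), len(grad[0])
    edge = [[grad[r][c] >= high for c in range(cols)] for r in range(rows)]
    stack = [(r, c) for r in range(rows) for c in range(cols) if edge[r][c]]
    while stack:  # grow edges into the weak (between-threshold) pixels
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols and not edge[rr][cc]
                        and grad[rr][cc] >= low):
                    edge[rr][cc] = True
                    stack.append((rr, cc))
    return edge

grad = [[0, 40, 120],
        [0, 60,   0],
        [90, 0,   0]]
# 120 seeds an edge; 60 and 90 are weak but chained to it; 40 is below `low`.
result = hysteresis(grad, low=50, high=100)
```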
Hough Transform
The Hough transform is a method to extract straight lines by finding correlations between specific points on a Cartesian coordinate plane in digital image processing [35]. In the Cartesian coordinate system, a straight line with slope m and y-intercept b can be uniquely expressed as Equation (7):

y = m x + b, (7)

where m represents the line slope in the Cartesian coordinates and b is the y-intercept of the line. This straight line can be expressed as the coordinates (b, m) in the parameter space. However, a straight line of the form x = constant causes divergences, as the slope parameter m becomes infinite. Therefore, for computational reasons, Richard and Peter [36] proposed using the Hessian normal form

\rho = x \cos\theta + y \sin\theta, (8)

where ρ is the distance between the origin and the line and θ is the angle of the line's normal vector. When Equation (8) is plotted in Hough space for a single point (x_0, y_0), it becomes a sinusoidal curve (Figure 2), which represents all straight lines passing through (x_0, y_0). When this is applied to all points, the number of curves passing through an intersection of sinusoidal curves in Hough space indicates the number of points on a straight line. In other words, the junction of n curves at a single point in Hough space indicates n points on a straight line in the Cartesian coordinates.
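The voting scheme can be sketched as follows (an illustrative pure-Python accumulator with assumed quantization choices, integer ρ bins and 1° angle steps, not the paper's implementation):

```python
import math
from collections import Counter

def hough_peak(points, n_theta=180):
    """Vote over quantized (rho, theta) cells for each point and return the
    winning cell as (rho_bin, theta_index, votes)."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):              # theta sampled in [0, pi)
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho), t)] += 1         # quantize rho to integer bins
    (rho, t), votes = acc.most_common(1)[0]
    return rho, t, votes

# Four collinear points on y = x: the line passes through the origin
# (rho = 0) and its normal direction is around 135 degrees.
rho, t, votes = hough_peak([(1, 1), (2, 2), (3, 3), (4, 4)])
```

All four sinusoids intersect near (ρ = 0, θ ≈ 135°), so that cell collects the full four votes, which is the "junction of n curves" described in the text.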
Therefore, a straight line can be detected by obtaining the (ρ, θ) of the straight line passing through any two points (x_i, y_i) and (x_j, y_j), i ≠ j, in the point set S = {(x_i, y_i)}, i ∈ ℕ, of the binary image, and by determining that any (ρ, θ) appearing with high frequency is a real straight line.

Development of a Parcel Boundary Extraction Algorithm

The developed parcel boundary extraction algorithm consisted of image splitting, parcel contour detection, and image merging (Figure 3). The resolution of the orthographic imagery was 51 cm/pixel. Block-level contours were detected using the Suzuki85 algorithm. Most consolidated paddies and uplands had a regular, rectangular arrangement. Thus, based on the assumption that the boundaries dividing the parcels are composed of straight lines, the Hough transform was applied to detect parcel-level edges as straight-line forms, and these were extended to the block-level contours. The Suzuki85 algorithm was then reapplied to derive results grouped in a list data structure by parcel contour.
Image Splitting and Merging
Contour extraction and edge detection algorithms extract pixels whose gradient values are high relative to the overall image characteristics. Thus, these algorithms work better in edge extraction when applied to narrow areas rather than extensive areas. These deductive algorithms also tend to be more sensitive to boundaries that are clearly distinguishable from the surrounding land, such as mountains, residential areas, and roads, than to the boundaries of agricultural land, such as paddies and uplands. Therefore, it is advantageous to apply the algorithms to segmented images of an appropriate size to extract the land parcel boundaries.
As shown in Figure 3, the block-level contours were extracted first, and then the edges of the inner land parcels were suppressed and extended to the block boundaries. Comparisons between the block-level contours and internal edge list array were made repeatedly to check their match. Another advantage of the use of split image segments was the reduction of the comparison operations to increase the algorithm's computational efficiency.
In this study, original aerial imagery was split into several segments of the same size before applying the algorithms. This was because division to the same sized pieces allowed consistent results over the entire image points when using a fixed threshold value. The original image was divided into approximately 1024 pixels size in width and height, The resolution of the orthographic imagery was 51 cm/pixel. Block-level contours were detected using the Suzuki85 algorithm. Most consolidated paddies and uplands had a regular arrangement in the rectangular form. Thus, based on the assumption that the boundaries dividing the parcels are composed of straight lines, the Hough transform was applied to detect parcel-level edges as straight-line forms. These were extended to block-level contours. The Suzuki85 algorithm was reapplied to derive results grouped in list data structure by the parcel contours.
Block-Level Contour Extraction
To extract the contours representing block-level boundaries, an image was first converted into 8-bit grayscale according to the YCbCr color space model by applying Equation (9) [37]:

Y_xy = 0.299 R_xy + 0.587 G_xy + 0.114 B_xy, (9)

The Y component of this color space, which is the same as in the YIQ and YUV models, is widely used in grayscale representation. The grayscale image was then converted to 1-bit monochrome (Figure 4b) using a threshold value (Equation (10)):

BI_xy = 1 if Y_xy > T, and BI_xy = 0 otherwise, (10)

where Y_xy is the luma value (grayscale level) at point (x, y); T is the threshold value for image binarization; R_xy, G_xy, and B_xy are the red, green, and blue components at point (x, y) in the red, green, and blue (RGB) image, respectively; and BI_xy is the value at point (x, y) in the binary image.
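The conversion and binarization can be sketched as follows, assuming the standard BT.601 luma weights for the Y component named in the text (the function names are ours):

```python
import numpy as np

def to_luma(rgb):
    """BT.601 luma, the Y component shared by the YCbCr/YIQ/YUV models."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(gray, threshold):
    """1 where the luma exceeds the threshold, 0 elsewhere."""
    return (gray > threshold).astype(np.uint8)
```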
When a fixed value is applied as the binarization threshold, the threshold setting can be out of range under particular circumstances, causing processing errors. To solve this problem, the threshold for the binarization of each segmented image was determined such that the intra-class variance (Equation (11)) was minimized, or equivalently the inter-class variance (Equation (12)) was maximized, when the image pixels were classified into two categories, as described by Otsu's method [38]. Although minimizing the intra-class variance is mathematically equivalent to maximizing the inter-class variance, it is computationally more efficient to apply Equation (12):

σw² = (α σ1² + β σ2²) / (α + β), (11)

σb² = α β (μ1 − μ2)² / (α + β)², (12)
where α represents the number of pixels with values smaller than the threshold, β is the number of pixels with values greater than the threshold, μ1 and σ1² are the mean and variance of the pixels smaller than the threshold, and μ2 and σ2² are those of the pixels greater than the threshold, respectively. The image was then binarized based on the threshold determined with Equation (12), which maximizes the inter-class variance. After that, block-level contours were extracted using the Suzuki85 algorithm (Figure 4c).
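Otsu's threshold selection described above can be sketched as follows; a plain NumPy implementation that sweeps all 256 candidate thresholds and keeps the one maximizing the inter-class variance (the histogram-based loop is our choice, not the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing inter-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    grand_sum = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, -1.0
    cum_count, cum_sum = 0, 0.0
    for t in range(256):
        cum_count += hist[t]
        cum_sum += t * hist[t]
        alpha, beta = cum_count, total - cum_count  # class pixel counts
        if alpha == 0 or beta == 0:
            continue
        mu1 = cum_sum / alpha
        mu2 = (grand_sum - cum_sum) / beta
        # inter-class variance, up to a constant factor of 1/(alpha+beta)^2
        between = alpha * beta * (mu1 - mu2) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```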
As shown in Figure 4c, all contour lines were extracted without a hierarchy relationship. A minimum number of pixels to draw a contour line was applied to extract the block boundaries while minimizing the computational memory usage.
The contours extracted from the image contained detailed coordinate information on the shape of objects. These contours were simplified into polygons by removing redundant coordinates using the Ramer-Douglas-Peucker algorithm [39,40]. All extracted contours were pruned into simpler object boundaries using a maximum distance limit of about 0.0001. The final contour was determined as the inner area of a contour whose size approximated that of the image block (Figure 4d).
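The simplification step can be sketched with a plain recursive Ramer-Douglas-Peucker implementation; vertices whose perpendicular distance to the chord of their segment is below the limit are dropped (the paper's limit of about 0.0001 applies in its own coordinate units):

```python
import math

def rdp(points, epsilon):
    """Simplify a polyline: keep only vertices farther than epsilon
    from the chord between the segment endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # perpendicular distance from p to the line through the endpoints
        if chord == 0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / chord

    d_max, i_max = 0.0, 0
    for i in range(1, len(points) - 1):
        d = dist(points[i])
        if d > d_max:
            d_max, i_max = d, i
    if d_max > epsilon:
        # keep the farthest vertex and recurse on both halves
        left = rdp(points[:i_max + 1], epsilon)
        right = rdp(points[i_max:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```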
Parcel-Level Edge Extraction from Block-Level Contours
The Canny edge detection algorithm was applied to the grayscale images obtained from Equation (9) (Figure 5a). For paddies and uplands, sensitive edge extraction was required, as both showed a similar intensity in grayscale images. Therefore, a minimum Sobel kernel of 3 × 3 was applied to the gradient calculation (Equation (4)). The values of 80 and 240 were respectively used as low and high hysteresis thresholds through repeated trial and error applications.
The Hough transform was used to extract the parcel edges within the block. Although the Hough transform can be applied to the entirety of pixels within a block, a probabilistic Hough transform that performs a Hough transform on pixels randomly extracted from all target pixels was adopted in this study for computational efficiency. This was because the entire pixel application becomes rapidly inefficient with the increase in image size.
An edge detection resolution of 1-pixel length and 1° angle was set in converting the line segment from the Cartesian coordinate to the Hough domain. Edges with a size less than 25 pixels or within 3 pixels of a mutual distance were ignored, while edges that passed through more than 60 valid pixels were determined as valid.
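The vote-accumulation idea behind these parameters can be illustrated with a plain (non-probabilistic) Hough transform sketch; the 1-pixel/1° resolution and the 60-vote threshold follow the text, while the 25-pixel minimum length and 3-pixel gap handling of the probabilistic variant are omitted here:

```python
import numpy as np

def hough_line_peaks(points, rho_res=1.0, theta_step_deg=1.0, votes=60):
    """Accumulate (rho, theta) votes for edge pixels and return the cells
    passed through by at least `votes` pixels, as (rho, theta_deg) pairs."""
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_step_deg))
    pts = np.asarray(points, dtype=float)
    max_rho = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + 1.0
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, thetas.size), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in pts:
        rhos = x * cos_t + y * sin_t              # signed distance per theta
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(thetas.size)] += 1
    peaks = []
    for i, j in zip(*np.where(acc >= votes)):
        peaks.append((i * rho_res - max_rho, np.degrees(thetas[j])))
    return peaks
```

A horizontal row of 80 collinear pixels, for example, produces a single strong peak near theta = 90°.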
Parcel Contour Extraction
A specific parcel boundary within a block was extracted by extending a line segment from the Hough transform process to the block contours. The hierarchical relationship between the edges and contours was identified using a ray casting algorithm [41] in combination with an inside-polygon test [42] for all pairs of block-level contours and edges within the segment. An arbitrary ray was cast from a point, and the point was determined to lie within the contour when the ray crossed the contour polygon an odd number of times. An edge was retained and extended to the contour when both of its endpoints lay within the same block contour; in other words, an edge with either endpoint outside that block contour was removed (Figure 5b).
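The odd-crossing test can be sketched with a standard ray casting implementation (the function name is ours):

```python
def point_in_polygon(px, py, polygon):
    """Ray casting: cast a horizontal ray from (px, py); the point lies
    inside when the ray crosses the polygon boundary an odd number of times."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does this edge straddle the ray's y level?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside
```

Running the test on both endpoints of an edge against every block contour yields the hierarchy described above.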
To remove noise edges that did not match the parcel arrangement direction, the angular range [0, π] was divided into π/18 rad intervals, and the number of edges in each interval was counted from the angles of all edges. Based on the regularity of the parcel arrangement, only the edges along the direction of the majority interval were retained, and the edges with deviating angles were removed (Figure 5c).
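The angular noise filter can be sketched as follows, assuming the modal π/18-rad bin defines the parcel arrangement direction (a simplification; perpendicular boundaries of rectangular parcels would need a second pass):

```python
import math

def filter_by_dominant_angle(edges, n_bins=18):
    """Keep only edges whose direction falls into the most populated
    pi/n_bins angular bin. `edges` are ((x1, y1), (x2, y2)) pairs."""
    def angle(e):
        (x1, y1), (x2, y2) = e
        return math.atan2(y2 - y1, x2 - x1) % math.pi  # fold into [0, pi)

    def bin_of(e):
        return int(angle(e) / (math.pi / n_bins)) % n_bins

    counts = {}
    for e in edges:
        counts[bin_of(e)] = counts.get(bin_of(e), 0) + 1
    dominant = max(counts, key=counts.get)
    return [e for e in edges if bin_of(e) == dominant]
```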
The line density was then adjusted by merging overlapping inner edges with similar inclinations. The distances AA′ and BB′ between the corresponding endpoints of two edges AB and A′B′ were calculated (Figure 6), and one edge was removed if both distances (AA′, BB′) were smaller than 5 pixels (Figure 5d). The 5-pixel value was determined by trial and error to eliminate redundant finer boundaries at the given resolution of the images used in this study.
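This endpoint-distance rule can be sketched as follows; the 5-pixel threshold follows the text, while the greedy keep-first-seen strategy is our assumption:

```python
import math

def regulate_density(edges, min_dist=5.0):
    """Drop an edge when both of its endpoints lie within `min_dist` pixels
    of the corresponding endpoints of an already-kept edge."""
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    kept = []
    for a, b in edges:
        redundant = False
        for a2, b2 in kept:
            # compare both endpoint orderings (AB vs A'B' and vs B'A')
            if (d(a, a2) < min_dist and d(b, b2) < min_dist) or \
               (d(a, b2) < min_dist and d(b, a2) < min_dist):
                redundant = True
                break
        if not redundant:
            kept.append((a, b))
    return kept
```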
The Suzuki85 algorithm was then applied once more to finalize the valid block-level contours and parcel-level edges (Figure 7b). Thereby, all the extracted block-level and parcel-level edges eventually became the parcel-level boundaries, and their coordinate information was stored (Figure 7c).
Study Site
The hues of aerial imagery may vary depending on the surface cover conditions or aerial photographing times. Images of various surface cover conditions are required to evaluate the developed algorithm's performance in extracting parcel boundaries from aerial images. In this study, six evaluation areas were selected based on the distribution of land parcels, such as paddies, uplands, and greenhouses as well as surface color hues (Figure 8), as edge extraction is highly influenced by the image characteristics of the objects.
The study sites with various surface hues (corn-silk, khaki, brown, green, corn-silk and green, and olive) were selected (Figure 9) to verify the applicability of the developed algorithm for a range of different aerial images. In particular, Hwasun, with two different surface hues of corn-silk and green, was included as a study site, since a non-maximum suppression process (Figure 1) through gradient operation tends to be affected heavily by the entire image colors.

The study sites with different land cover types were also selected to evaluate the boundary extraction. All the study sites included paddy and upland as the primary land cover, while the two sites of Hwasun and Miryang were chosen to assess the effect of greenhouse areas on parcel extraction (Figure 10).
Parcel Boundary Extraction
As shown in Figure 11, the boundaries between paddy and upland in the original imagery were matched well with the boundaries extracted by the developed algorithm for all the study sites. Although there were rare cases that omitted the boundaries between the parcels with very similar colors, the developed algorithm demonstrated good performance in boundary extraction regardless of the surface color hues, indicating the applicability for different soil colors and aerial photography time.
There are some parcel boundaries that are polylines, arcs, or curved lines due to various environmental and artificial factors. Thus, some anomalies appeared between the straight lines obtained through the Hough transform and the actual parcel boundary shapes. The first anomaly was finely broken lines that resulted from farm entry road features (Figure 12a). These can be approximated to a straight line of the rectangular boundaries by the density regulation of the parcel contour extraction process. The second anomaly was over-divided boundaries (white lines in Figure 12b). This case was rather rare and occurred when isolated curved features, such as forests or unconsolidated small segments, existed within the block-level boundary. Most of these features can be removed by the noise removing process, as described in Section 2.2.4. The third anomaly was curvilinear block-level contours (Figure 12c). These were not affected by the Hough transform and could be extracted well by the Suzuki85 algorithm.

The developed algorithm also performed well in extracting parcel boundaries for the sites with greenhouses by clearly differentiating greenhouses from paddies and uplands (Figure 13). However, the parcels with greenhouses tended to be dissected further, with individual greenhouse structures treated as independent parcels. This is because distinctive greenhouse structures are aligned in parallel to the long side of a land parcel, and thereby the algorithm tends to render individual greenhouses recognizable as independent land parcels. Considering the unique and regular appearance of greenhouses, the identification of an individual greenhouse as a parcel may be agreeable, or multiple structures may be merged into a single parcel with an additional algorithm in the future.
Boundary Extraction Accuracy Assessment
An official farm map was used as the reference land parcel boundary data for the algorithm performance assessment. The farm map is the electronic version of the national farmland registration provided by the Ministry of Agriculture, Food and Rural Affairs of Korea. This map was generated and regularly updated by onscreen manual digitizing of farmland information using the aerial photographs and the national cadastral map along with extensive field inspection for verification. Four different major farm categories of paddies, uplands, orchards, and greenhouses were established in the farm map as the shape data.
The algorithm's parcel boundary extraction performance was evaluated in two aspects: the boundaries themselves and the sections formed by the boundaries.
Matching the Extracted Boundaries with the Reference Boundaries
The boundary of a parcel extracted by the developed algorithm must be matched with the respective boundary in the farm map to evaluate the model accuracy. For this, the maximum overlap area method was applied to compute the coincidence degree O_ij between all pairs of extracted and reference boundaries [43], where A_e,i denotes the area of the i-th extracted boundary and A_r,j the area of the j-th reference boundary. The extracted boundary with the maximum coincidence degree was matched to the respective reference boundary.
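Since the coincidence-degree equation is not reproduced here, the sketch below uses an intersection-over-union style overlap measure on axis-aligned boxes as a stand-in; the paper may normalize the overlap area differently, and real parcels would use polygon intersection rather than rectangles:

```python
def rect_area(r):
    """Area of an axis-aligned box r = (xmin, ymin, xmax, ymax)."""
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def coincidence_degree(extracted, reference):
    """Hypothetical IoU-style stand-in for the coincidence degree O_ij."""
    ix0, iy0 = max(extracted[0], reference[0]), max(extracted[1], reference[1])
    ix1, iy1 = min(extracted[2], reference[2]), min(extracted[3], reference[3])
    inter = rect_area((ix0, iy0, ix1, iy1))
    union = rect_area(extracted) + rect_area(reference) - inter
    return inter / union if union > 0 else 0.0

def match_boundaries(extracted_list, reference_list):
    """For each reference parcel, pick the extracted parcel with the
    maximum coincidence degree."""
    return [max(range(len(extracted_list)),
                key=lambda i: coincidence_degree(extracted_list[i], ref))
            for ref in reference_list]
```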
Boundary Level Accuracy Assessments
The extracted boundaries were assessed quantitatively by applying a buffer overlay method [44][45][46]. When the extracted boundaries are overlaid on the buffer around the reference boundaries, those overlapping the buffer are true positives (TP); otherwise, they are false positives (FP). Similarly, by overlaying the reference boundaries on the buffer around the extracted boundaries, true negatives (TN) and false negatives (FN) were determined. TP, FP, FN, and TN are demonstrated in Table 1 and Figure 14 [47].

Table 1. Definitions of true positive, false positive, false negative, and true negative results regarding the extracted results and the reference.
                         Reference
Extracted Results        Positive        Negative
Positive                 TP              FP
Negative                 FN              TN
The International Association of Assessing Officers (IAAO) proposed a buffer width limit of 2.4 m for sufficient accuracy in the boundaries of rural areas [48]; a buffer width of 2.0 m was used in this study. TP, FP, and FN were calculated by converting the extracted (or reference) polygon boundaries into line features and overlaying them on the buffer of the paired reference (or extracted) boundaries. The calculated lengths of TP, FP, and FN were then used to evaluate the completeness, correctness, and quality of the extracted boundaries using Equations (14)-(16):

Completeness = TP / (TP + FN), (14)

Correctness = TP / (TP + FP), (15)

Quality = TP / (TP + FP + FN). (16)

Each indicator has a value between 0 and 1; the closer to 1, the better the algorithm's performance. The correctness, completeness, and quality of the extracted boundaries were calculated for the entire parcels of the respective study sites and are presented in Table 2. In general, correctness showed greater values than completeness, which is expected because the reference boundaries better reflect the non-smooth curves of reality. In particular, the correctness, completeness, and quality for the Hwasun and Miryang sites, which included greater areas of greenhouses, were relatively low. Greenhouses have distinctive image characteristics compared with paddies and uplands, which leads to clear identification and more subdivision of the boundaries between greenhouses than in the reference boundaries (Figure 13). This was the primary reason for the high rates of FP and FN, resulting in smaller quality values for the extracted boundaries. The mean correctness, completeness, and quality over the six study sites were 80.7%, 79.7%, and 67.0%, respectively.
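The three indicators follow directly from the TP/FP/FN lengths; the sketch below uses the standard buffer-overlay definitions, which match the value ranges and behavior described in the text:

```python
def completeness(tp, fn):
    """Matched reference length over all reference length."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Matched extracted length over all extracted length."""
    return tp / (tp + fp)

def quality(tp, fp, fn):
    """Combined measure penalizing both false and missed boundaries."""
    return tp / (tp + fp + fn)
```

With, say, 80 m of matched boundary, 20 m of false extraction, and 20 m missed, all three drop out of the same three lengths.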
Considering the previous study results with the respective values of 73.3%, 73.0%, and 59.7% by Khadanga et al. [17], the developed algorithm performed reasonably well in boundary extraction.
Section Level Accuracy Assessments
To analyze the developed algorithm's accuracy on the section level, two measures in terms of the area and the number of extracted boundaries were applied. These measures are methods that can be used to analyze the accuracy of single feature extraction, such as boundary extraction based on object-based image analysis (OBIA) [43].
The area-based accuracy assessment method measured the correctness, completeness, and quality of the extracted boundaries. These were calculated using Equations (14)-(16) for the entire parcels based on the results exemplified in Table 1 and Figure 15.
The correctness, completeness, and quality of the extracted sections were calculated for the entire parcels of the respective study sites and are presented in Table 3. The mean correctness, completeness, and quality of the area-based assessments were 89.7%, 90.0%, and 81.6%, respectively, indicating good performance of the algorithm. Some of the overly dissected parcels were removed during the density regulation process, while thresholding resulted in a reduction of completeness and quality.
The number-based accuracy assessment method measured the correct, false, and missing rates based on the number of boundaries, where N_c, N_f, and N_m denote the numbers of correctly extracted, falsely extracted, and missed boundaries. Each measure has a value between 0 and 1; the closer the correct rate is to 1, and the closer the false and missing rates are to 0, the better the algorithm's performance. The coincidence degree was applied as the criterion for determining whether an extracted boundary was correct, false, or missed, using a coincidence degree threshold of 0.8; an extracted boundary that was not matched to any reference boundary was determined to be false. The correct, false, and missing rates for each study site were calculated from the numbers of correct, false, and missed boundaries (Table 4). The resulting values of the area- and number-based accuracy measures were comparable with the results of previous studies [17,49,50]. Thus, the developed model can be used to facilitate rapid updates of farm maps through the automation of farmland boundary extraction. However, the current algorithm is only applicable to regularly arranged land parcels, and further development is required for more general application to non-regular land shapes, possibly with the help of artificial intelligence.
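Since the rate equations are not reproduced here, the sketch below uses one plausible formulation consistent with the stated 0-to-1 ranges: correct and false rates among all extracted boundaries, and the missing rate among all reference boundaries. The paper's exact definitions may differ:

```python
def number_based_rates(n_correct, n_false, n_missed):
    """Hypothetical number-based rates from counts N_c, N_f, N_m."""
    extracted = n_correct + n_false   # all extracted boundaries
    reference = n_correct + n_missed  # all reference boundaries
    return {
        "correct": n_correct / extracted,
        "false": n_false / extracted,
        "missing": n_missed / reference,
    }
```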
Conclusions
In an effort to automate parcel boundary extraction from aerial images in agricultural areas, we developed a parcel-level boundary extraction algorithm in this study, and its performance was evaluated over six study sites. The set of computational and mathematical methods used for the developed algorithm include the Suzuki85 algorithm, Canny edge detection, and Hough transform.
In the assessment of agreement between the extracted and reference boundaries, the developed algorithm demonstrated 80.7%, 79.7%, and 67.0% correctness, completeness, and quality, respectively. The area-based accuracy measures of correctness, completeness, and quality were 89.7%, 90.0%, and 81.6%, respectively. These results were comparable to or better than those of previous studies, and thus the developed algorithm can be used for farmland parcel boundary extraction.
Since the developed algorithm is based on the assumption that land parcel boundaries are straight lines, a cautious approach should be taken in applications with non-regularly shaped land parcels. The developed algorithm tended to subdivide land parcels further when distinctive features, such as greenhouse structures or isolated land parcels, were present within the land blocks.
The developed boundary extraction algorithm is currently only applicable to regularly arranged agricultural lands. A wide range of applications is possible by selectively extracting the boundaries of various objects as well as agricultural parcels. For applications beyond the regular shaped boundaries, further study, potentially with a decision tree or artificial intelligence, is needed.
|
v3-fos-license
|
2019-11-14T17:07:19.260Z
|
2019-11-01T00:00:00.000
|
211650307
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/531FD6809AE24D9E45424A7DA441E827/S0020743819000679a.pdf/div-class-title-muscular-muslims-scouting-in-late-colonial-algeria-between-nationalism-and-religion-div.pdf",
"pdf_hash": "170d61912456bc54b8fa9747985e8ba7252aea69",
"pdf_src": "Cambridge",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2660",
"s2fieldsofstudy": [
"History",
"Sociology"
],
"sha1": "de377ddd4caca58f47f85151ebafe23fd8e82449",
"year": 2019
}
|
pes2o/s2orc
|
MUSCULAR MUSLIMS: SCOUTING IN LATE COLONIAL ALGERIA BETWEEN NATIONALISM AND RELIGION
Abstract The Islamic reformist movement in Algeria is often seen as a precursor to the independence movement, in which religion was supposedly integrated into nationalist identity politics. Focusing on the Muslim scout movements between the 1930s and 1950s, this article challenges this view by arguing that Islam continued to play a role beyond that of an identitarian marker. Influenced by Christian youth movements, the Muslim scouts developed ideas of a “muscular Islam” that remained central even after the movement split in two—one association close to the major nationalist party and another linked to the reformists.
Apart from this interpretation, which perceives Islamic reformism as a sort of cultural nationalism that preceded "actual" nationalism, the prominence of Islam in political discourse is normally seen as pertaining to a later period in the history of the Middle East and North Africa, when Arab nationalist-cum-socialist regimes were beginning to lose their legitimacy. 5 Youth movements with an explicitly religious character especially, including those that are the focus of this article, are often described as more recent phenomena of an "Islamist" or even "post-Islamist" era in Middle Eastern history since the 1990s. 6 In contrast to these trends, Peter Wien has cautioned against an easy conflation of 20th-century Arab nationalism with secularism. 7 And with regard to Algeria, Shoko Watanabe has recently refuted the interpretation of Islamic reformism as a mere precursor to the ultimately successful nationalist independence movement of the National Liberation Front (Front de libération nationale, FLN) and convincingly shown that the ʿulamaʾ, and particularly their youth movements, continued to follow a political direction autonomous-and clearly discernible-from the nationalists well after World War II. 8 Although in principle this article subscribes to Watanabe's argument about the independent role of the Islamic reformists and the distinctiveness of their political vision from the nationalist one, it takes the issue further by investigating the role of religion in the major youth movements close to the reformist AUMA and to the nationalist party, respectively the Boy scouts musulmans algériens (BSMA) and the Scouts musulmans algériens (SMA). 
It argues that Islam remained fundamental for the SMA as well, which would provide the radical nationalist Algerian People's Party/Movement for the Triumph of Democratic Liberties (Parti du peuple algérien/Mouvement pour le triomphe des libertés démocratiques, PPA-MTLD) 9 with its cadres, and thus sheds new light on the 1948 split in the Algerian Muslim scout organization that gave rise to two competing associations. The article also examines the references and influences that shaped the Muslim scout movements from their inception in the 1930s until the war of independence from 1954. It argues that, although Muslim scouting emerged during the heyday of paramilitary youth movements, which were sometimes inspired by European fascism, in the Middle East (and beyond) the major influence on the groups in question came from religious associations such as the Muslim Brotherhood and particularly the initiatives of Christian missionaries. Within this context, Muslim Algerian scouts developed their own idea of a "muscular Islam."
ALGERIAN YOUTH MOVEMENTS BETWEEN NATIONALISM AND RELIGION
In Algeria, indigenous scouting emerged in the bustling atmosphere that followed the 1930 anniversary celebrations of French conquest. Many scout leaders recall the Centenaire, with its impressive scout camp, as an important stimulus. 10 As one scout observed, "Algerian youth, for their part, were finally on the move: sports clubs came into life pretty much everywhere, cultural and theatrical associations developed; we witnessed the arrival of the first Egyptian and Indian movies. But, for my part, I gravitated towards Muslim scouting." 11 The first Muslim Algerian scout association is said to have been the troop al-Fallah from Algiers, established in 1935. Additional groups emerged over the following years in cities and towns with a sizable Muslim population, from Tlemcen and Mostaghanem in the West to Sétif and Constantine in the East, but also in the Southern Territories, in the town of Laghouat. 12 Together, in 1939, these groups formed the Federation of Muslim Algerian Scouts (commonly referred to as Scouts musulmans algériens, SMA). 13 In 1948, the SMA split when the nationalist tendency imposed its views on the federation, propelling proponents of a nonpolitical stance to create the Boy scouts musulmans algériens. Both the SMA and the BSMA continued to use the Arabic designation al-Kashshafa al-Islamiyya al-Jazaʾiriyya afterwards. 14 By all accounts, Mohamed Bouras (Muhammad Buras), who was close to the Islamic reformist Progress Club (Nadi al-Taraqqi or Cercle du progrès), founded the first Muslim boy scout movement. In 1941, French authorities charged Bouras, who had continued to play a leading role in the SMA, with treason and executed him; he thereby became the first "martyr" of the scout movement and one of the symbols of Algerians' fight against colonialism. 15 The association in Constantine evolved under the aegis of Ben Badis, often depicted as the Algerian Muhammad ʿAbduh. 
16 The SMA and, after 1948, the BSMA were thus considered part of the is ̣lāh ̣ movement of Islamic reform around Ben Badis, who was credited not only with the establishment of the private Arabic education system, but also with the foundation of the soccer team Mouloudia Olympique de Constantine. 17 After the split, the SMA organization provided the basic structure for the militant movement centered around the PPA-MTLD, which had demanded independence for the first time in May 1945 and then started the armed insurrection nine and a half years later. 18 Many features that were central to scouting in general acquired nationalist or religious meaning for the Muslim scouts in the specific context of Algeria during the last decades of French rule. One such feature was the centrality of the experience of nature for a healthy lifestyle for town boys and girls, allegedly corrupted by their urban environment. In the case of the Muslim Algerian Scouts, this idea was linked, beyond the general concern for outdoor activities present in all scout movements, to a nationalist drive to get to know one's country. Knowledge of the nation as a tangible entity made up of a particular topography, which a young scout was supposed to acquire, pertained to different levels of experience: one was the beauty of the landscape that might elicit pride. On another level, for adolescents from urban, middle-class backgrounds, taking note of the harsh realities of life in the countryside under colonial exploitation might also help to raise nationalist consciousness. Finally, there was the inquiring impetus of scouting, which, according to scout educators, should lead to a thorough study of the country to individuate the challenges and potentials of the new nation that was to be built. 19 It was often assumed that scouts should lead others by example and, in turn, follow the example of great historical figures. 
Traditions of religious sainthood, in this context, were combined with the modern idea of great individuals as the actors of history. 20 In the French associations active in Algeria, these figures included medieval knights as well as modern colonial conquerors or men of science. 21 The Muslim scouts, for their part, placed themselves in their own tradition, which was, above all, an Islamic one. Apart from Bouras, the actual founder of the movement, Ben Badis became a venerated founding figure after his death in 1940. 22 Beyond the immediate Algerian context, the intellectual ancestry claimed by the scouts can clearly be identified with the wider movement of Islamic reform: the BSMA paper al-Hayat (Life) printed a whole series about ʿAbduh, but a paper close to the nationalist SMA also published several articles on the former modernist mufti of Egypt. 23 The Muslim scouts also integrated venerated scholarly figures such as Avicenna (Ibn Sina) and Ibn Khaldun into their pantheon of role models who could represent the enlightened tradition of Islamic culture. 24 Another contemporary example would be the young King Faruq of Egypt who used to appear in public as head of his country's scout movement. 25 For scouts linked to a movement of national liberation civic education was, of course, of paramount concern. The association was described as a "school of patriotism," and its members as "the soldiers of the future" and the leaders of the independent state to come. 26 But civic and moral education, or national and religious consciousness, were intertwined in the Muslim scout movement. Mohammed (Muhammad) Harbi, himself a young scout at the time, founded a sports club in his hometown of Philippeville (Skikda) and, eventually, became an important historian of the nationalist movement (though a leftist dissident one). 
He remembered how he became politically socialized during the 1940s: It was also through scouting that I was initiated into certain hadiths repeated over and over by Muslim reformism and taken up by the nationalists-such as "Love of the homeland is part of the faith"-or Qurʾanic verses calling for the refusal of all determinism and fatalism: "Say: Act! God will judge your action, as will the Prophet and the believers"; and above all: "God will not change the state of a community, if it does not reform itself first." In the name of these verses, my generation had to shake off parental tutelage. 27 From this passage, it is clear that nationalists employed religious symbolism as a mobilizing force. The reference to Islam was certainly part of a politics of identity-the colonized "natives" were commonly referred to and referred to themselves not as Algerians or Arabs, but as Muslims, in contrast to the French Algerians of diverse Christian European origins and to the indigenous Jews, most of whom also held full French citizenship. Hence, Muslim identity apparently overrode national identification, at least for a certain time. Messali Hadj (Masali al-Hajj) characterized the discourse at the beginning of Algerians' anticolonial activism in the 1920s, again mentioning the famous hadith: "We did not realize that we were animated by nationalist sentiments. In our conversations in France, we never used the word 'nationalism. ' We just said to express our sentiments during the discussions: 'Love of the homeland or the country is an act of faith.' 'Hubb al-watan min al-iman.'" 28 According to Harbi, Messali himself even thirty years later, when he was the undisputed leader of the nationalist movement, played on religious references and styled himself as an eschatological savior figure to appeal to popular constituencies. 
29 That many sports clubs, as well as both indigenous scouting associations, used the term "Muslim" in their names marked them, first of all, as communal organizations of a distinct demographic group in the settler colony. 30 Besides their self-identification as Muslim, most sports and scout associations employed Arabic names. Although the use of Arabic might not be surprising in the context of national identity building-leaving aside the intricate question of Kabyle or Amazigh identity-the program of Arabization was a central component of the Islamic is ̣lāh ̣ movement that operated the private Arabic school system. 31 Again, the national and the religious were closely knit together. The recollections of one militant show that in the mid-1940s a boy scout could adhere to Islamic reformism and contribute to its Arabic education system while also being a member of a nationalist party and participating in paramilitary training. 32 In a historical context where all major social and political forces-radical nationalists and communists as well as Islamic reformists, and eventually even the colonial administration-called for modernization, the transformation of society, and a break with tradition, religion was clearly part of a process that James McDougall has described as "the invention of authenticity." 33 This insistence on national-cum-religious authenticity also translated into concrete practices. For example, the French scouts in Algeria did not make reference to the actual colonial setting for their imagery of adventure, but to a certain romanticism of native Americans or Rudyard Kipling's Jungle Book-which seems suited for metropolitan youth rather than for people in a situation where the "exotic" non-European was nearby. 
34 Contrary to this kind of exoticism, the Muslim scouts tried to take edifying examples only from what they regarded as their own cultural tradition, like when they substituted the figure of Hayy bin Yaqzan, devised by the Muslim philosopher Ibn Tufayl, for Mowgli. 35 Whereas European scouts occasionally dressed up as "Indians," their Muslim counterparts, during a visit to the Andalusian town of Granada, presented themselves in a sort of "traditional" Arab garb, which seems closer to Orientalist imagination than to anything Arabs would wear in contemporary Algeria. 36 This quest for authenticity was especially obvious in connection with the question of women's "emancipation." 37 Many authors argued that Muslim women's social status should be ameliorated as a necessary part of social modernization. "Emancipation" was not to be understood as a demand for Westernization, but explicitly as an authentic way of female liberation in line with cultural and religious values. 38 Even a writer such as Zhour Ounissi (Zuhur Wanisi), who would become the first female government minister in independent Algeria, made her claims for women's equality as important constituent parts of the scout movement, as well as of "the body of every nation" ( jasad kull umma) as a whole, in early Islamic history. 39
BOY SCOUTS, FASCISTS, AND MISSIONARIES
After its establishment in Britain by General Sir Robert Baden-Powell in 1907, scouting rapidly spread to France and, over the following decades, developed in most parts of the French colonial empire, as well as in the Arab world. 40 The history of scouting and sports exemplifies the ambivalence of the colonial situation: 41 first introduced by missionaries or colonial officials in the framework of their civilizing mission ideology, new educational models, leisure practices, and new structures for community organization were quickly adopted by those sectors of indigenous society that aimed at anticolonial reform and national self-empowerment. 42 In the words of Daniel Denis, sports and scouting in a colonial setting thus became "ruses of History." 43 The 1930s, the decade in which Muslim scouting developed in Algeria, witnessed the mushrooming of various nationalist youth movements in the Middle East and North Africa. Many of these organizations, which often had a paramilitary character, were linked to new movements in the early era of mass politics and represented mainly an emerging young urban middle class, known in the Arab East as the effendiyya. 44 Some of the paramilitary associations were certainly influenced by contemporary fascist aesthetics and political styles. Yet despite several prominent personalities or organizations having connections to Fascist Italy and National Socialist Germany, there were virtually no parties or movements in the Arab world that can be described as truly fascist. 45 Nevertheless, the question remains whether Algerian Muslim scouting at the time of its emergence might have been influenced to some extent by "fascistic" elements in terms of aesthetics or organization. In the sources, there is no evidence to support the idea of any fascist influence on the movement. 
In contrast to youth organizations in Egypt and the Mashriq, which sometimes styled themselves deliberately after fascist examples and were founded explicitly as paramilitary wings of nationalist parties, 46 the SMA and BSMA clearly remained boy scout associations, following the rules established by Baden-Powell and recognized by the French and international scouting federations. Their symbols, the fleur-de-lis and crescent and star, reflected their participation in the world scout movement as well as an emphasis on Islamic identity; both symbols were far from any fascist iconography. Götz Nordbruch has shown that in discussions about fascism during the interwar years Islam was often presented as essentially incompatible with the adoption of Italian or German models in the Middle East. 47 In 1930s Algeria, Islamic reformists and even many nationalists clearly saw their future in the framework of French republicanism and not in a radical alternative that fascism might offer, in particular after the leftist Popular Front came to power in 1936. 48 In 1941, Bouras was executed on charges of collusion with Germany and Italy. Even if the accusations were true-all Algerian accounts insist they were fabricated 49 -it is unlikely that this reflected any ideological proximity between the SMA and fascism. It seems, rather, that Bouras attempted to use the French defeat in World War II to the advantage of the Algerian cause, just as Tunisia's ruler Moncef (Munsif) Bey did at the time. 50 Although Algerians, including the scouts, tried to come to terms with the Vichy Regime during this period, the quasifascist youth movements created under the dictatorship of Marshal Philippe Pétain did not serve as examples to young activists for their own associations. This is stressed, for instance, by Hocine Aït Ahmed (Husayn Ait Ahmad), who would become a major leader in the PPA-MTLD and then the FLN. 
Reflecting on his schooldays under Vichy, he recalled: "Like my fellow pupils, I appreciated that our afternoons were now dedicated to sports, but I was neither active in a youth movement nor did I sing Maréchal, nous voilà." 51 Although Aït Ahmed was active in a youth movement-the Muslim Scouts-he did not link this to the state organizations set up by the right-wing regime. In fact, in Algeria fascist sympathies and right-wing extremism, especially anti-Semitism, were widespread among the settler population, which made them all the more unattractive to anticolonial activists. 52 Another possible influence could have been the workers' sports movement, which was also important during the interwar period. After all, Messali's first nationalist organization had been set up in the late 1920s as an affiliate of the French Communist Party (Parti communiste français, PCF), and even the ʿulamaʾ repeatedly collaborated closely with the PCF's Algerian offspring. 53 Workers' sports were well established in Algeria, to the point that the country-though legally part of France-was admitted with a separate delegation to the 1936 Peoples' Olympiad in Barcelona. 54 But here, too, there is no indication in the various recollections and contemporary reports that socialist athleticism might have exercised any influence on the emerging Muslim scout movement, which was firmly anchored in the educational and associational environment that was being constructed by Islamic reformists.
Ben Badis's iṣlāḥ movement focused on strengthening young Algerians' Arab-Islamic identity through private schools, cultural circles, sports clubs, and scout troops. 55 An important model for distinctly Islamic youth organizations was provided by the Muslim Brotherhood, founded in 1928 by the Egyptian teacher Hasan al-Banna. 56 The Algerian paper al-Manar (The Lighthouse), for instance, which was close to the SMA, printed articles by the most prominent ideologues of the Brotherhood, Banna and Sayyid Qutb. 57 Although the Muslim Brothers, with their uniformed wing the Jawwala, were certainly part of the phenomenon described above of more or less radical youth organizations in the Arab world, Banna explicitly rejected fascism as a model. 58 In his pedagogy, he was actually much closer to the scout ethos and ideas derived from British "muscular Christianity." 59 Beth Baron has even argued that the foundation of the Muslim Brotherhood was a direct response to the activities of Christian missionaries in Egypt, at once a countermeasure to their increasing influence in the fields of charity and education and an initiative modeled on their example. 60 A similar connection can be found in the case of Algerian Muslim scouting. Christian missionary societies had been present in the colony since the 19th century, the most important of which was the Society of the Missionaries of Africa, commonly known as the White Fathers (Pères Blancs) and White Sisters (Soeurs Blanches). 61 Muslim boys and girls had been enrolled in scout troops by these missionaries from the mid-1930s as part of their Moral Assistance to North African Natives program (Assistance morale aux indigènes nord-africains). Later, the French Catholic scout movement Scouts de France (SDF, for boys) and Guides de France (GDF, for girls) incorporated them and thus became the only French association to create special sections for Algerian Muslims.
62 Besides the Catholic societies, Protestant missionaries were active in colonial Algeria, and they had their own scout association, the Éclaireurs unionistes de France (EUDF). 63 In his autobiographical novel Le fils du pauvre (The Poor Man's Son) from 1950, Mouloud Feraoun (Mulud Firʿawn) relates an experience with Protestant missionaries and the EUDF: when Menrad, the protagonist, moves from the countryside to the city to pursue his studies, he stays in a student home run by a missionary society, because he cannot afford otherwise. There, he has to take part in community prayer and bible lessons, but without having to undergo baptism or renounce his Muslim faith. In fact, he feels uncomfortable not during church service, but during the outings of the scout troop affiliated with the mission. Coming from a poor rural background, where people are "in the fresh air" all day anyway, he fails to understand the purpose of hiking: "Menrad was stunned that serious persons, like the missionary, would waste their time on such childish things. The shepherds from his village, then, practiced scouting without knowing it?" 64 Feraoun here hints at the fact that the most lasting experiences pupils would acquire in missionary institutions were perhaps not religious teachings based on the gospels, but rather new leisure practices and community activities. In fact, one of the early indigenous scout troops in Algeria emerged precisely in the environment described by the Kabyle writer. In Tizi-Ouzou, the urban center of Kabylia, during the late 1930s, the future prominent SMA leader Salah Louanchi (Salah al-Wanshi) and fellow students developed the Muslim association al-Hilal out of a scout unit based at a mission station of the French Reformed Church and first organized by a member of the EUDF. 65 Sometimes, the Muslim Scouts explained the content of scouting with the term "scout" as an acronym: serve (servir), believe (croire), obey (obéir), unite (unir), and work (travailler). 
66 For some, this emphasis on strict discipline and obedience might again evoke fascist precedents, but, in fact, it was taken from the paper of the North African branch of the Protestant EUDF. 67 Although French Protestants found themselves in a more difficult situation vis-à-vis the laicist state and a predominantly Catholic society than their Anglo-Saxon coreligionists, in Algeria they competed actively with other Christian missionaries in the realms of charity and education. 68 Apart from mission schools and the Éclaireurs unionistes, there was another Protestant institution that had a strong impact on the development of ideas on muscular religion: the Union chrétienne de jeunes gens (UCJG), the French branch of the Young Men's Christian Association (YMCA). 69 As part of Protestant missionary efforts, the YMCA had spread to many countries across the globe in the first half of the 20th century. 70 The training, not least the physical education, dispensed in this association influenced quite a few political activists of different religious and ideological persuasions. 71 The most prominent Algerian member of the YMCA during the colonial period was certainly the Muslim reformist thinker Malek Bennabi (Malik bin Nabi), who, after independence, would become a major source of inspiration for Algerian Islamists. 72 This intellectual from Constantine joined the Protestant youth organization shortly after settling in Paris in 1930. 73 Bennabi, at the time a twenty-five-year-old student with the occasional job, explained how he ended up there rather apologetically and insisted that he identified himself as a Muslim while entering the association. 74 In his programmatic anticolonial writings, Bennabi was not interested in sports and mentioned scouting only briefly. 
75 Although he saw the YMCA more as a site for intellectual discussion, 76 it is unlikely that he was completely untouched by the physical activities being pursued at the place where basketball was first introduced to France. 77 In any case, what he liked-and deemed necessary for his fellow North Africans-were "lessons in efficiency, style, or in one word: civilization." 78 This is completely in line with the focus on ethics and good conduct, regardless (to a certain extent) of a specific religious creed, which many Protestant missionaries espoused. 79 In fact, Bennabi seems to have combined his experience in the YMCA with his admiration for the Egyptian Muslim Brotherhood in formulating his ideas about reform and the education of young generations, which favored individual morality and rationalism, austerity, and activism. 80 While living in Paris during the 1930s, Bennabi married a French Catholic who then converted to Islam, which was not uncommon in the circles of Maghribi immigrants. The liaison between the Catholic student activist and girl guide leader Anne-Marie Chaulet and Salah Louanchi, an Algerian independence activist and SMA leader, is more exceptional, as it happened in 1950s Algiers where relations between communities were much more tense than in the metropole. But it exemplifies the personal as well as institutional connections of Christian and Muslim youth groups up to the very end of French rule in Algeria. 81 In 1953, representatives of the Catholic student union and the SDF/GDF, on the one hand, and from both Muslim scout associations, on the other, established the Association of Algerian Youth for Social Action (Association de la jeunesse algérienne pour l'action sociale, AJAAS) in an attempt to find common ground in the face of increasing political conflict and a colonial administration unwilling to reform.
Members of the AJAAS included, apart from Anne-Marie and Salah Louanchi, Fanny Colonna, who would become a well-known sociologist, as well as two other founding figures of the Algerian scout movement who were also active in the nationalist PPA-MTLD: Omar (ʿUmar) Lagha and Mahfoud Kaddache, the future historian of Algerian nationalism. Together, they had established their own scout association in the late 1930s, which then joined the SMA. At the time of their activities with the AJAAS, they published a scout paper, La Voix des Jeunes (The Voice of the Young). The AJAAS also had its own journal, edited by the historian André Mandouze, a famous exponent of the Catholic left and former Resistance fighter. 82 The literature on Christian missions in the wider Arab world focuses mainly on their influence on the cultural Arab renaissance (nahd ̣a) of the late 19th century and the emergence of Arab nationalism. 83 In this view, it was not the diffusion of the Christian faith as such that represented the influence of the missionaries, but the entrenchment of certain values, such as individual morality in a strong community, self-help, rationality, orderly conduct, and a work ethic, or a modern lifestyle in general. The methods of education to transmit these values pertained to the current of "muscular Christianity" that was emerging from the 19th century mainly in the context of Anglo-Saxon Protestantism with emblematic movements such as the boy scouts and the YMCA. 84 Although the activities of various Protestant missionary societies in the Middle East are relatively well studied in the context of "muscular" religion, 85 less attention has been paid to Catholic actors. 
86 And yet, in France, the patronages of different monastic orders, which had started to work among the emerging industrial working class in the early 19th century, propagated a similar "civilized," orderly, and healthy lifestyle for the metropolitan poor as they did for supposedly ignorant and backward colonial populations. The patronages picked up on the Jesuit tradition of education through games and became prominently involved in the development of modern sports in France. 87 In general, the literature has dedicated less interest to Christians' strictly religious influences on Islamic reformist youth movements. 88 Especially with regard to the multiconfessional societies of the Mashriq, the scholarship often sees religious denominations as identitarian markers. 89
MUSCULAR RELIGION IN A COLONIAL SITUATION
Similarly, in stressing the nationalist potential of scouting, 90 scholars have neglected the role of religion. But was there more to religious references than an "invention of authenticity"? Was there something like "muscular Islam," comparable to the muscular Christianity of European scouts and the YMCA? Returning to the subject of exemplary figures to emulate, it seems fair to say that the ultimate role model for Muslim scouts remained the Prophet Muhammad, "the original über-scout" 91 -similar to the "great scoutmaster Jesus" for their Christian counterparts: 92 The Prophet Mohammed (God's grace be upon him), conscious of the importance of youth and its determining role for propagating the new religion, bases his policy on a collaboration with this section of the population, which is more receptive towards social reform. He establishes paramilitary and military training for his young companions. . . . One of the Prophet's (God's grace be upon him) hadiths recalls it in no uncertain terms: "my triumph has been assured by the young." He never missed an opportunity to remind his companions in jihad: "teach your children to swim, to shoot arrows and to ride a horse." 93 A central figure in the development of notions of "muscular Islam" within the Algerian Muslim scout movement was Mahmoud Bouzouzou (Mahmud Buzuzu). From the end of World War II, Bouzouzou, a former student in an iṣlāḥī Arabic school, served as head spiritual guide (murshid) to the SMA and continued to lead this association after it aligned itself completely with the PPA-MTLD in 1948. Though he published mainly in French, in 1951 he started his own Arabophone newspaper titled al-Manar, in reference to Rashid Rida's famous Islamic reform journal published in Egypt. Bouzouzou also wrote a short biography of Muhammad for his scouts, in which he outlined in accessible language the exemplary traits of the Prophet.
94 In an article for Lagha's and Kaddache's paper La Voix des Jeunes on the occasion of the mawlid, the SMA murshid depicted the founder of Islam as a man of action and even credited him with introducing the spirit of individual responsibility long before European Enlightenment philosophers. 95 Islam was foundational in Bouzouzou's educational project: "Islam represents the most important spiritual value of our homeland. Legislation, traditions, individual, social, and family life, in one word, the manners and institutions of the Algerian derive from Islam." 96 But apart from such a statement, which could still be interpreted as a claim to "authentic" identity, the murshid often stressed the activist qualities of religion: for him, Islam was primarily a way of life; reform was not about the intricacies of theological debate, but about individual practice, "not philosophizing about the Koran, but living it." 97 In one article, the spiritual leader of the SMA explained the importance of practice as opposed to theory in a more pointed way. He argued that it was imperative to care for one's body, insofar as it is the home of the spirit to which it is closely linked: To acquire the spiritual force of Gandhi together with the physical force of Joe Louis, should not leave any man of tomorrow indifferent. To be spiritually and physically strong is a quality our Prophet requires from us, when he says: "God prefers the strong over the weak believer and loves him more." (The same holds true for a people). 98 This quote, which brings together the Prophet Muhammad and American boxing star Joe Louis-then the uncontested world heavyweight champion-represents a striking example of the notion of muscular Islam.
Another Muslim intellectual-cum-scout leader was Chikh Bouamrane (Shaykh Bu ʿAmran) from the rival BSMA, which stayed close to the reformist AUMA. In independent Algeria's clerical hierarchy Bouamrane would rise to the chairmanship of the Higher Islamic Council, the state authority issuing official fatwas. In 1951, he argued in an article in the BSMA organ al-Hayat that religion was about "fighting the spirit of abandonment, systematic pessimism and egoism." Bouamrane even demanded a "missionary spirit" as the basis for a renewed Islam, understood as the way of life of a reformed society. 99 Regarding ideas of muscular religion on the part of this Muslim scout leader, a clear influence from Catholic associations is detectable. For example, a priest working with the Scouts de France wrote under the heading "Praying Means Acting" that: I am also supporting every boy who, raised as a scout, has acquired a real sense of physical activity. From the beginning of his training he willingly has to employ his body in the search for God. . . . Apart from these educative values with their personal as well as social usefulness, physical activities within prayer represent a true religious value. Through them, we recognize that everything inside us is God's: our body as our soul, that both belong to Him, that both are at His service. 100 In this passage, the central feature of muscular religion comes out very clearly: the fusion of body and soul, the inherent spiritual value of physical activity. As already mentioned, besides the scout movements, the patronages were the main sites of Catholic physical education, not only in France but also in Algeria. At the 1930 Centenaire, colonial and metropolitan patronages staged impressive manifestations. 101 They especially promoted team sports; the Spartiates from Oran (a patronage of the Salesians of Don Bosco), for instance, even won the title of basketball champions of the French Union.
102 In general, they also served a social and political function: The church, in the face of an anticlerical Republic, was developing a network of resistance towards laicization, by multiplying associations to organize the believers in all sectors of life. The patronage, from this point of view, was only one link in a chain, ranging from private schools to Christian trade unions, societies for mutual aid, cooperatives and various associations. Together, they constituted a sort of Catholic counter-society that aimed at preparing the establishment of a Christian social order which would spring from the ruins of modern society. . . . From this perspective, the practice of sports appears as a way of preparing oneself for the conflict, also by disarming an anticlerical rhetoric which depicted religion as something for women. To the image of an effeminate Christianity was, thus, opposed one of a virile Christianity, capable of defending itself against its adversaries. 103 A Catholic counter-society obviously stood in opposition to the laicist French Republic. During the interwar period, the Scouts and Guides de France were part of an ultraconservative current in the French political sphere, their honorary patron being the staunch royalist and first resident-general of the protectorate of Morocco, Marshal Hubert Lyautey. 104 Consequently, SDF leaders, who had long been recommending figures such as Jeanne d'Arc and the crusader king Louis IX (who in the 13th century led two holy wars against North African Muslims) as role models for their young scouts, 105 were quick to embrace the ideology of the "national revolution" from 1940, as their publications from the Vichy period prove. 106 In contrast to this tendency, the scouts who were more in line with republican values found a home in the laicist Éclaireurs de France (EDF). Nonetheless, it was initially the White Fathers and White Sisters and then the SDF/GDF that showed the most openness toward Muslim scouting. 
During the war of independence, among the most vocal critics of colonialism were many Christians, including not only the group that cofounded the AJAAS but also the archbishop of Algiers (whom the settler ultras would accordingly nickname "Mohammed" Duval). 107 Despite the right-wing tendencies inherent in Catholic scouting, Algerian Muslims would find a model in French muscular Christianity. The iṣlāḥ movement showed a strikingly similar process to the one described earlier: as already mentioned, private schools, cultural circles, social movements, scout troops, and sports teams were all part of the efforts of the reformists around Ben Badis. The difference was that Islamic reformists did not at all perceive themselves as conservative, but rather as renewers of religion against traditionalist authorities. 108 Despite their insistence on masculinity, which formed an integral part of ideas of muscular religion, the AUMA was very concerned with the reform of women's status in society and included girls in relatively elevated numbers in their schools and scout troops. 109 Nevertheless, the idea of forming new, virile Muslim subjects out of a people that had hitherto been stuck in a somehow "effeminate" passivity (as Orientalist discourse would have it) and of constructing a counter-society that sooner or later would assert itself against the dominant, in this case colonial, order certainly lay at the heart of the reformist project.
CONCLUSION
Considering the pronouncements of Bouzouzou and others, it is obvious that the split between the SMA and BSMA did not constitute a break between a secular nationalist and an Islamic reformist movement. Religion retained its central place in the SMA after 1948; the possibility of creating a purely secular scout movement after the model of the EDF, which was also very active in Algeria, apparently never played a role. 110 In fact, Bouzouzou stated categorically that "Our name 'S.M.A.' excludes any idea of irreligion and shows well that our movement has a particular confession: Islam." 111 Not only "the famous murshid," 112 but also Kaddache, who certainly was a nationalist activist and not a religious scholar, repeatedly stressed that Islam must not be solely an identitarian marker employed occasionally for political legitimacy. Both emphasized the foundational role of faith in the social and moral reform that the movement envisaged. 113 On the other hand, after the bloody clashes of May 1945 between nationalist militants and French security services and the electoral fraud related to the vote for the Algerian Assembly three years later, loyalty to France was no longer an option. Nationalism, in other words, had permeated all the various movements.
BSMA members explained the 1948 split as the takeover of the original Scouts musulmans algériens by the PPA-MTLD, which wanted to use the existing structure, with its possibilities for paramilitary training, for a future insurrection. A new generation of independence activists from this party no longer contented themselves with the gradual construction of a counter-society and no longer wanted to restrict political activity to protests or international networking. These young activists around Kaddache, Lagha, Louanchi, Harbi, and others wanted to get rid of colonialism as soon as possible and, if necessary, by force. For them, the scout movement should, in the words of Kaddache, "continue its role as a nationalist revolutionary preschool." 114 Talk about the scouts as the "soldiers of the future" remained no dead letter: Aït Ahmed recalled that, in the 1940s, young nationalists actually prepared themselves for an insurrection "under the pretext of scout exercises." 115 The prominent SMA leader Lagha was even killed during the so-called battle of Algiers in 1957. 116 This was accompanied by a reconfiguration of organizational structure: the network of loosely connected scout troops that had been established around reformist circles in the 1930s gave way to a more centralized system, following the hierarchical bureaucratic model of the nationalist party. 117 From the BSMA perspective, the question was whether to become a mere instrument of a single party and its struggles or continue the scouts' core work of social reform above partisan political considerations. 118 Founders of the new, supposedly nonpolitical association could nonetheless be committed nationalists and some had also been active in the PPA. 119 Like the leaders of reformism, they saw their position as transcending politics and the divergences between parties-which does not mean that they did not have political goals and an anticolonial agenda. 
120 Watanabe has argued that the reformists' political vision differed markedly from the nationalist one, in particular with regard to the BSMA, whose members they saw neither as soldiers nor as political activists. 121 On the other hand, their insistence on being not one party among others but above politics might not have been all that different from the perspective of the nationalists, who perceived themselves as the only true representatives of the Algerian people, a view that would, not incidentally, lead to the absorption of all other political currents, including the AUMA, into the FLN as the new single party after 1955. 122 Harbi, who was a member of the PPA-MTLD and then the FLN, stressed that his party did not have a political doctrine apart from nationalism, and that its members considered themselves the leaders of their people towards modernity. 123 Seen in this light, the split of the Algerian Muslim scout movement was part of a struggle for leadership between the AUMA and the PPA-MTLD, with their different ideas about politics. It was certainly not a confrontation between religion and secularism.
Although the argument presented here has focused on the Muslim Scouts and not on Algerian nationalism as a whole, it is obvious that, as far as this youth movement was concerned, religion remained a central tenet, beyond its function in identity politics, even for the allegedly secular nationalist current. This also means that Muslim youth movements embracing ideas of muscular Islam are no recent phenomenon. Taking the role of religion seriously, it is no coincidence that Muslim scouts were first active in Protestant and Catholic troops. Despite its common association with Protestantism, the French Catholic church, too, with the Scouts and Guides de France or the patronages, was very active in promoting a sort of muscular Christianity, in metropolitan France as well as in Algeria. The involvement of Muslim scouts in the SDF/GDF shows the potential of colonial ambivalence: in a "ruse of History," even a movement that venerated medieval crusaders could serve as a model for devout Muslims in their effort to build their own counter-society.
Highly indistinguishable and strongly entangled photons from symmetric GaAs quantum dots
The development of scalable sources of non-classical light is fundamental to unlocking the technological potential of quantum photonics. Semiconductor quantum dots are emerging as near-optimal sources of indistinguishable single photons. However, their performance as sources of entangled-photon pairs is still modest compared to parametric down-converters. Photons emitted from conventional Stranski–Krastanov InGaAs quantum dots have shown non-optimal levels of entanglement and indistinguishability. For quantum networks, both criteria must be met simultaneously. Here, we show that this is possible with a system that has received limited attention so far: GaAs quantum dots. They can emit triggered polarization-entangled photons with high purity (g(2)(0) = 0.002±0.002), high indistinguishability (0.93±0.07 for 2 ns pulse separation) and high entanglement fidelity (0.94±0.01). Our results show that GaAs might be the material of choice for quantum-dot entanglement sources in future quantum technologies.
Supplementary Figure 1 Evaluation of the visibility of two-photon interference. Two-photon interference measurement of a representative quantum dot. The dashed coloured lines are the fits of peak 0 (red), 1 (violet), 2 (blue), 3 (green) and 4 (orange), respectively. The black line is the sum of all single peak fits. The areas under the peaks 1, 2 and 3 are used for the calculations of the two-photon interference visibility.
In order to extract the visibility of two-photon interference, we fit the experimental data with a sum of five equidistant peaks on a constant background, where y_0 is the offset, A_i the area of peak i, x_0 the position of the first peak, w the common width of the peaks and d the temporal distance between the peaks. From the experimental conditions we expect the distance d between the peaks to be the same.
Furthermore, all the peaks should have the same width, which is mainly determined by the time jitter of the avalanche photodiode (500 ps). In general, one would also expect peaks 0 and 4, as well as peaks 1 and 3, to be equal in intensity. However, we have to consider the slightly different intensity of the two excitation pulses as well as the different detection efficiencies of the two fibre outputs. This is taken into account by leaving the A_i as free parameters.
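The fit just described (an offset plus five equidistant peaks of common width, with the areas A_i left free) can be sketched as follows. This is a minimal illustration on synthetic data: the Gaussian peak shape is an assumption (the functional form is not reproduced in this note), and all numerical values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def five_peaks(x, y0, x0, w, d, A0, A1, A2, A3, A4):
    """Offset y0 plus five equidistant peaks of common width w and spacing d.
    A Gaussian profile is assumed; A_i is the *area* of peak i."""
    y = np.full_like(x, y0, dtype=float)
    for i, A in enumerate((A0, A1, A2, A3, A4)):
        y += A / (w * np.sqrt(2.0 * np.pi)) * np.exp(-((x - x0 - i * d) ** 2) / (2.0 * w ** 2))
    return y

# Synthetic coincidence histogram: peaks every 2 ns, 0.5 ns width (APD jitter)
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 12.0, 1400)
truth = dict(y0=0.01, x0=0.0, w=0.5, d=2.0, A0=1.0, A1=1.0, A2=0.3, A3=1.0, A4=1.0)
data = five_peaks(x, **truth) + rng.normal(0.0, 0.005, x.size)

# Fit with all nine parameters free, starting from a rough initial guess
p0 = [0.0, 0.1, 0.4, 2.1, 0.8, 0.8, 0.5, 0.8, 0.8]
popt, _ = curve_fit(five_peaks, x, data, p0=p0)
areas = popt[4:]
print(areas)  # fitted areas A0..A4
```

The fitted areas of peaks 1, 2 and 3 are then the inputs to the visibility calculation described in the next note.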
We then calculated the two-photon interference visibility from the fitted peak areas. To correct for the imperfections of the beam splitter, we measured the mode overlap (1 − ε) = 0.96 ± 0.01 (using the fibre beam splitter to perform a Michelson measurement on the shaped excitation laser), the transmission coefficient T = 0.48 ± 0.005 and the reflection coefficient R = 0.52 ± 0.005 (using a power meter), and calculated the corrected visibility (see the Supplementary information of refs 1 and 2), where we set g(2)(0) = 0.
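The visibility expressions themselves are not reproduced in this excerpt. As a hedged numerical sketch, one can use the commonly quoted forms: a raw visibility V = 1 − 2A2/(A1 + A3) from the areas of peaks 1, 2 and 3, and a correction that divides out the beam-splitter limit 2RT(1 − ε)²/(R² + T²). The exact expressions of refs 1 and 2 may differ in detail.

```python
def raw_visibility(A1, A2, A3):
    # Suppression of the central peak relative to the mean of the side peaks
    return 1.0 - 2.0 * A2 / (A1 + A3)

def corrected_visibility(v_raw, eps, R, T):
    # Divide out the beam-splitter limit 2RT(1 - eps)^2 / (R^2 + T^2),
    # valid under the assumption g2(0) = 0.
    return v_raw * (R**2 + T**2) / (2.0 * R * T * (1.0 - eps)**2)

# Illustrative areas; eps, R, T as quoted in the note
v_raw = raw_visibility(1.0, 0.3, 1.0)
v_corr = corrected_visibility(v_raw, 0.04, 0.52, 0.48)
print(round(v_raw, 3), round(v_corr, 3))
```

With a near-balanced beam splitter and high mode overlap, the correction raises the visibility only by a few percent, as the example shows.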
Supplementary Note 3: Decay time and coherence time measurements
We measured the decay times of the exciton (X) and the XX under two-photon excitation using a detector with a time resolution of around 50 ps. The acquired data are presented in Supplementary Figure 3. The interference fringe contrast was fitted with an exponential decay, where V is the interference fringe visibility, A the amplitude, y_0 the offset and T_2 the decay constant. The parameters y_0 and A, which under ideal experimental conditions should be 0 and 1, are left as free parameters because of small imperfections of the mode overlap. The calculated coherence times T_2 for X and XX are 305±39 ps and 109±15 ps, respectively. From the lifetimes and coherence times we can obtain an estimate for the visibility of two-photon interference (see ref. 4), which is found to be V_TPE = 0.6 and V_TPE = 0.4 for X and XX, respectively. The data from the two-photon interference experiment (see Supplementary Table 1) yield a higher visibility for the same QD (V_TPE = 0.69 and V_TPE = 0.76 for X and XX, respectively). This discrepancy can be explained by the presence of decoherence processes on a time scale that is larger than the time delay between the two laser pulses used to generate the interfering photons. A closer inspection of the difference between the determined values indicates that the effect on the XX is more dominant. A possible explanation is that the XX is more sensitive to spectral diffusion mediated by temporarily charged defects than the X state, but additional investigations (see for example ref. 5) are needed to test this hypothesis.
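The estimate cited from ref. 4 is, in its commonly quoted form, the coherence-limited visibility V = T2/(2 T1). A minimal sketch (the T1 lifetimes below are illustrative assumed values, chosen so that the measured T2 values reproduce the stated estimates; they are not figures quoted in this excerpt):

```python
# Coherence-limited estimate of the two-photon-interference visibility,
# in its commonly quoted form V = T2 / (2 * T1).
def tpe_visibility_estimate(T1_ps, T2_ps):
    return T2_ps / (2.0 * T1_ps)

# Illustrative lifetimes (assumptions, not quoted in this excerpt):
v_x = tpe_visibility_estimate(254.0, 305.0)   # X:  T2 = 305 ps
v_xx = tpe_visibility_estimate(136.0, 109.0)  # XX: T2 = 109 ps
print(round(v_x, 2), round(v_xx, 2))
```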
Supplementary Note 4: Source of entanglement degradation
Although a fidelity of 0.94 is high, it is not yet perfect. We have investigated this in more detail by using a simple model according to ref. 6, which gives the density matrix of the model system in the [H_XX H_X, H_XX V_X, V_XX H_X, V_XX V_X] basis. Using the measured lifetime of the exciton transition for T_1, the value of g(2)(0) to estimate k, and considering that the effect of cross-dephasing can be safely neglected (ref. 6), the only unknown parameter entering Supplementary Equation 7 is the spin-scattering time. Here, we assume that T_SS is mainly determined by the Fermi-contact interaction between the confined electron and the nuclear spins (ref. 7), while the heavy-hole dephasing related to a dipole-dipole interaction (refs 7, 8) is assumed to be weaker and is therefore not considered. Taking the values from the literature (T_SS = 15 ns for GaAs QDs, ref. 9), we can therefore estimate the behaviour of the entanglement fidelity as a function of the FSS. This is shown in Supplementary Figure 4(a), with and without including the effect of the background laser (as estimated by the value of the auto-correlation function for X, that is, k = 1 − g(2)(0) = 0.975; see Supplementary Figure 5). The obtained experimental data are in agreement with the theoretical curve and support the supposition that the reduced value of fidelity for QD2 is due to the background of laser photons. Most importantly, this figure highlights that near-unity values (0.99) of entanglement fidelity can be obtained in QDs with suppressed FSS (S = 0). This is in contrast to what is reported for InGaAs QDs, where maximum values of around 0.9 were predicted (ref. 10). It is therefore interesting to compare the two systems directly using the very same model. Supplementary Figure 4(b) shows the results of such a comparison, obtained by using the literature values for the spin-scattering times (T_SS = 1.9 ns for InGaAs, ref. 11) and by assuming identical lifetimes.
The calculations indeed confirm that the maximum entanglement fidelity that can be reached in InGaAs QDs is roughly 10% lower than in GaAs QDs, that is, bound to values around 90%. Moreover, the calculations also highlight the importance of having short X lifetimes to reach high values of entanglement at non-zero FSS, although a combination of FSS=0 and short X lifetime is the key to reach the ideal levels of entanglement needed by the envisioned applications. From this perspective, another potential source of technical problems is the rejection of the stray light.
As the laser is spectrally separated from the X and XX lines, a fibre Bragg grating could be used to filter out the laser emission (ref. 12).
Alternatively, rejection of the stray light can be achieved by decoupling excitation and collection. This solution, which has already been employed in refs 13 and 14, can also be used in combination with different concepts for on-chip quantum optics (ref. 15).
Supplementary Note 5: Evaluation of the entanglement fidelity
For calculating the fidelity from the six cross-correlation measurements reported in Fig. 3(b) and (c) of the main text, the raw counts at g(2)(0) within a time window of 1.6 ns (the bunching peak lies within this window) are summed up for all polarization settings. The degree of correlation in basis μ (linear, diagonal and circular) is calculated via C_μ = (g(2)_co − g(2)_cross) / (g(2)_co + g(2)_cross), where g(2)_co is the co- and g(2)_cross the cross-polarized correlation measurement, respectively. The resulting values are listed in Supplementary Table 2. The fidelity is given by (ref. 6) f = (1 + C_linear + C_diagonal − C_circular)/4, which yields f = 0.88 ± 0.01 and f = 0.94 ± 0.01 for QD2 and QD3, respectively. The errors are calculated by assuming a Poisson distribution for the correlation counts and propagated by Gaussian error propagation.
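As a numerical sketch of this procedure, assuming the standard definitions C_μ = (co − cross)/(co + cross) and f = (1 + C_linear + C_diagonal − C_circular)/4 with Poissonian counting errors propagated in quadrature (all counts below are hypothetical, not data from the experiment):

```python
import numpy as np

def degree_of_correlation(n_co, n_cross):
    """C = (co - cross) / (co + cross) from raw coincidence counts,
    with Poissonian errors (sqrt(N)) propagated in quadrature."""
    s = n_co + n_cross
    C = (n_co - n_cross) / s
    dC = 2.0 / s**2 * np.sqrt(n_cross**2 * n_co + n_co**2 * n_cross)
    return C, dC

def fidelity(C_lin, C_diag, C_circ, dC_lin, dC_diag, dC_circ):
    f = (1.0 + C_lin + C_diag - C_circ) / 4.0
    df = np.sqrt(dC_lin**2 + dC_diag**2 + dC_circ**2) / 4.0
    return f, df

# Hypothetical summed counts within the 1.6 ns window, per basis:
C1, e1 = degree_of_correlation(1900, 100)   # linear (correlated)
C2, e2 = degree_of_correlation(1850, 150)   # diagonal (correlated)
C3, e3 = degree_of_correlation(120, 1880)   # circular (anticorrelated)
f, df = fidelity(C1, C2, C3, e1, e2, e3)
print(round(f, 3), "+/-", round(df, 3))
```

Note the sign convention: for the expected Bell state the circular basis is anticorrelated, so C_circular enters the fidelity with a minus sign.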
On the Effects of Social Class on Language Use: A Fresh Look at Bernstein's Theory
Basil Bernstein (1971) introduced the notion of the Restricted and the Elaborated code, claiming that working-class speakers have access only to the former but middle-class members to both. In an attempt to test this theory in the Iranian context and to investigate the effect of social class on the quality of students' language use, we examined the use of six grammatical categories, including noun, pronoun, adjective, adverb, preposition and conjunction, by 20 working-class and 20 middle-class elementary students. The results of Chi-square operations at p<.05 corroborated Bernstein's theory and showed that working-class students differed from middle-class ones in their language use. Consistent with Bernstein's theory, the results obtained for the use of personal pronouns indicated that middle-class students were more person-oriented and working-class ones more position-oriented. Findings thus call for teachers' deliberate attention to learners' sociocultural variation to enhance mutual understanding and pragmatic success.
Introduction
The relationship between language and social class is both theoretically and empirically a key issue in critical discourse studies and sociolinguistic research. A major concern in the analysis of language and social class has been how language variation acts as a marker and instrument for social and racial stratification. As a result, language has been analyzed variously by linguists and sociologists. In the 1970s, the British sociologist Basil Bernstein conducted a study of working- and middle-class children. He argued for the existence of two quite distinct varieties of language use in society: the elaborated code and the restricted code, which he claimed accounted for the relatively poor performance of working-class pupils in language-based subjects even though they scored just as well as their middle-class peers in mathematical subjects. According to Atherton (2002), the essence of the distinction between the two codes lies in what each kind of language is suited for. The restricted code works better than the elaborated code in situations where there is a great deal of shared and taken-for-granted knowledge in the group of speakers. This code is economical and rich, conveying a vast amount of meaning with few words, each of which has a complex set of connotations and acts like an index, pointing the hearer to a lot more information which remains unsaid. On the contrary, the elaborated code spells everything out, not because it is better, but because it is necessary so that everyone can understand it. It has to elaborate because the circumstances do not allow the speaker to condense. The elaborated code works well in situations where there is no prior or shared understanding and knowledge, where more thorough explanation is required. If one is saying something new to someone s/he has never met before, s/he would most certainly communicate it in the elaborated code (Spring, 2002). The sections that follow aim at shedding more light on Bernstein's theory through analyzing the effects of social class on language use in general and on his proposed dichotomies between the two linguistic codes and modes of socialization (personal and positional) in particular.
Theoretical Framework
Bernstein's (1971) theory can be explained in terms of three basic concepts: language codes, class, and control. He reformulated the Restricted and Elaborated codes. The restricted code "employs short, grammatically simple, and often unfinished sentences of poor syntactic form; uses few conjunctions simply and repetitively; employs little subordination; tends toward a dislocated presentation of information; is rigid and limited in the use of adjectives and adverbs; makes infrequent use of impersonal subject pronouns; confounds reasons and conclusions; uses idioms frequently and makes frequent appeals to 'sympathetic circularity'" (Wardhaugh, 1992: 317). In contrast, the elaborated code "makes use of accurate grammatical order and syntax to regulate what is said; uses complex sentences that employ a range of devices for conjunction and subordination; employs prepositions to show relationships of both a temporal and logical nature; shows frequent use of the pronoun I; uses with care a wide range of adjectives and adverbs; is likely to arise in a social relationship which raises the tension in its members to select from their linguistic resources a verbal arrangement which closely fits specific referents" (Wardhaugh, 1992: 317).
'Control' refers to the role of families and their social control, the way decisions are made in families, and the relationships among the members. Bernstein (1972b) made a distinction between position-oriented and person-oriented families. In the former, language use is closely related to such matters as close physical contact among the members, a set of shared assumptions, and a preference for implicit rather than explicit meaning in communication. In person-oriented families, on the other hand, language use depends less on these factors, and communication is more explicit and context-free. That is, it is less dependent for interpretation on such matters as physical surroundings. According to Bernstein, position orientation leads to a strong sense of social identity with some loss of personal autonomy, whereas person orientation fosters personal autonomy (Wardhaugh, 1992, p. 360). Finally, Bernstein used Brandis's (1970) Social Class Index, through which he analyzed the working class and the middle class by considering the frequencies of use of grammatical categories. The present study also uses these concepts and frameworks in its investigation of the relationship between language use and one's social class.
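As an illustration of the kind of analysis described (frequencies of grammatical categories compared across class groups with a Chi-square test at p < .05), the following is a minimal sketch; the counts are hypothetical, not data from the study:

```python
from scipy.stats import chi2_contingency

# Hypothetical frequency counts for the six grammatical categories
# (noun, pronoun, adjective, adverb, preposition, conjunction)
# in the language samples of the two class groups:
working_class = [310, 120, 45, 30, 95, 60]
middle_class = [280, 170, 80, 55, 120, 90]

# 2 x 6 contingency table: rows = class group, columns = category
chi2, p, dof, expected = chi2_contingency([working_class, middle_class])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Reject H0: category use differs between the two class groups.")
```

A significant result of this kind indicates only that the distribution over categories differs by class; follow-up inspection of individual categories (as done for personal pronouns in the study) is needed to interpret the direction of the difference.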
Review of the Literature
Bernstein's theory of language codes is perhaps one of the most challenging theories in sociolinguistics in that it received both support and criticism in the field. Influenced by his ideas, many researchers have commented on the different ways in which adults from various social classes respond linguistically to their children. Hess and Shipman (1965) studied middle-class and lower working-class mothers helping their four-year-old children in either block-sorting tasks or the use of Etch-A-Sketch. The study revealed important differences, with the middle-class mothers far better able to help or instruct their children than the lower working-class ones, who were unable to offer much assistance to their children. Robinson and Rackstraw (1967) also found that middle-class mothers, far more often than the lower working-class mothers, tried to answer their children's Wh-questions (which are considered information-seeking questions) with genuine explanations. Bernstein and Henderson (1969) reported social class differences in the emphasis placed on the use of language in two areas of children's socialization: interpersonal relationships and the acquisition of basic skills. The results showed that middle-class mothers placed much greater emphasis on the use of language in the person area, relative to their working-class counterparts, whereas working-class mothers put greater emphasis on the use of language in the transmission of basic skills. Newson and Newson (1970) found that working-class mothers invoke authority figures such as police officers in threatening their children. Cook (1971) found that lower working-class mothers used more commands with their young children and more often relied on their positional authority to get their way than did middle-class mothers, who preferred to direct their children's attention to the consequences of what they were doing. To search for a relationship between social class and mothers' speech, Henderson (1972) investigated the language used by a hundred mothers to their seven-year-old children. The mothers were divided into middle-class and working-class groups. He reported that, relative to the working-class mothers, the middle-class mothers favored the use of abstract definitions, explicit rather than implicit definitions, and information-giving strategies in answering children's questions. They also used language to transmit moral principles and to indicate feelings. In Jay, Routh and Brantley's (1980) study, twenty-five mothers of all social class levels were asked to tell, as if to a six-year-old child, stories suggested by several cartoon picture sequences. These stories were then played to a hundred six-year-old children of high and low social class levels, who were then asked standard comprehension questions about their content. An analysis of the comprehension scores revealed a significant main effect of the social class of the adult speakers and of the social class of the child listeners. In a more recent study, Rodríguez and Hines Montiel (2009) tried to describe and compare the communication behaviors and interactive reading strategies used by Mexican American mothers of low and middle socioeconomic status (SES) backgrounds during shared book reading with their preschool children. Significant differences between the SES groups were revealed regarding the frequency of specific communication behaviors. Middle-SES mothers used positive feedback and yes/no questions more often than did low-SES mothers. Mexican American mothers also used a variety of interactive reading strategies with varying frequencies, as measured by the Adult/Child Interactive Reading Inventory. They enhanced attention to text some of the time, but rarely promoted interactive reading/supported comprehension or used literacy strategies. All the above-mentioned studies were concerned with how adults from different social classes respond linguistically to their children, and their results are consistent with Bernstein's. Moreover, reference can be made to many studies and programs which addressed language for children and socialization. Likewise, in the available literature, references have been made to studies that differentiated between restricted and elaborated language codes and addressed the consequences they hold for those who use them. Williams (1969) tried to determine whether statistically reliable social class differences could be found in the degrees and types of syntactic elaboration in the speech of selected Negro and White, male and female, fifth- and sixth-grade children from whom language samples had been obtained in the Detroit Dialect Study. The corpus of some 24,000 words represented the speech of children selected from relatively low and middle ranges of a socioeconomic scale used in the original study. A quantitative description of syntactic elaboration was obtained by using a modified immediate-constituents procedure which provided coding of the structural divisions of English sentences. The results indicated that children from the higher-status sample tended to employ more, and more elaborated, syntactic patterns. Such status differences generally prevailed across the sexes, but did vary across the levels of a topical variable and the race variable. Lareau (2002) examined the effects of social class on the interaction inside the home upon ten-year-old black and white children. The results showed that middle-class parents emphasized concerted cultivation through efforts to foster children's talents via organized leisure activities and extensive reasoning. Working-class and poor parents appeared to accept the accomplishment of natural growth, providing conditions under which children can grow but leaving leisure activities to children themselves. These parents also used directives rather than reasoning. Middle-class children, both white and black, were gaining an emerging sense of entitlement from their family life. Working-class and poor children did
not display the same sense of entitlement or advantages.Aarefi (2008) investigated the difference between linguistic-cognitive skills in Turkish and Kurdish students with Farsi as their mother tongue from different economical-social backgrounds, using Vygotsky's theory of general cognitive development and Bernstein's theory of social class and differences in speech quality.She found that the average number of words the middle socioeconomic children level used was far higher than the average number of words the children from low socioeconomic class used.The language skill in using words by the Turkish and Kurdish speaking children had no relationship with their cultural backgrounds.There was also a significant difference between the parents' level of education; children whose parents had a higher level of education used more words in writing.Aliakbari et al. (2012) conducted a research project on fifth graders in Tehran, Iran and analyzed both the language and the social class data.The results of the correlation analyses indicated a significant relationship between the total social class scores and certain grammatical categories.The relationships between the language data and the social class factors also displayed a similar trend.They, thus, concluded that their findings supported Bernstein's theory to a great extent.In spite of the fact that many studies confirmed Bernstein's ideas, there are also some critics in the literature.Rosen (1972) criticized Bernstein on the grounds that he had not looked closely enough at working-class life and language.Labove (1972) argued that one cannot reason from the kind of data presented by Bernstein that there is a qualitative difference between the two kinds of speech Bernstein describes, let alone a qualitative difference that would result in cognitive and intellectual differences.Cooper (1976) examined aspects of Basil Bernstein's sociolinguistic account of educational failure empirically.Two groups of students from the first 
year of an upper school in England, one with primarily non-manual backgrounds, the other with primarily manual backgrounds, were observed in math and science classrooms, through informal discussions with teachers, and through school records and reports, to determine which of Bernstein's two codes appeared to underlie the disciplinary and pedagogic technique of the teachers of the classes observed. The findings showed that, in terms of indicators of both regulative and instructional content, the observed math and science curricula appeared to be predicated on a restricted rather than an elaborated code for both classes of students. He concluded that Bernstein's emphasis on certain pupils' lacking an elaborated code as an account of working-class failure and middle-class success is misplaced. Thorlindsson (1987) also attempted to test Bernstein's sociolinguistic model empirically. The relationships among all the major variables of the model, including social class, family interaction, linguistic elaboration, IQ, and school performance, were examined. The correlations among social class, family interaction, IQ, and school performance were along the lines hypothesized by Bernstein, whereas linguistic elaboration did not play its predicted role. The empirical results indicated that an important revision of the model was needed. The findings thus suggested that a clear distinction should be made between cognitive and pragmatic aspects of the sociolinguistic codes, and between macro and micro elements of social structure. Bolander (2009), assessing the relevance of Bernstein's theory for German-speaking Switzerland, showed that the uptake of Bernstein's outlook was and continues to be minimal in the Swiss German context and explored reasons for this conclusion. Acknowledging that certain aspects of Bernstein's theoretical outlook are potentially relevant for the Swiss German context in light of contemporary studies which highlight a connection between social background and
differential school achievement, he concludes that they need to be reassessed in light of an awareness of the variety of interdependent factors which can and do influence the performance of children and adolescents at school. As posited earlier, and as is clear from the literature reviewed, Bernstein's theory has attracted the attention of many researchers and sociolinguists. Yet, in spite of all these studies, one cannot determine with certainty how social class affects language use.
Focus of the Study
Bernstein claims that working-class students have access only to restricted codes while middle-class students have access to both restricted and elaborated codes, because middle-class members are geographically, socially, and culturally mobile. His theory has inspired a good number of studies. To take a different measure in this regard, the present study investigates the use of the grammatical categories of noun, pronoun, adjective, adverb, preposition, and conjunction among working-class and middle-class children. The results of this study are hoped to raise teachers' understanding of the effect of social class on students' language use and to help determine whether they should consider it in their educational programs.
Research Questions
This study seeks answers to the following questions:
1. Does social class affect one's use of grammatical categories in L1 writing?
2. How different are middle- and working-class students in their social control with reference to their use of personal pronouns?

Methodology

Participants

One hundred female students aged between 9 and 11 took part in the study. They were third- or fourth-grade elementary students in the city of Eivan in the province of Ilam, in western Iran. Students at these levels were selected because practicing writing tasks, the channel of instrumentation in this study, is part of the educational programs at these levels. Of these 100 participants, based on a social class questionnaire, 20 middle-class and 20 working-class students were selected.
Instruments
In conducting the present study, two instruments were adopted to collect the data. To determine students' social class, a converted version of Wilftang's (1990) questionnaire was administered. Different views on the factors to be included in determining one's social class were considered, and several open-ended questions were added to make it suitable for the context of the study. After translation and revision, it was piloted, re-examined, and finally administered as an 11-item social class questionnaire (a copy of which is provided in Appendix A), comprising 10 multiple-choice questions with a variable number of choices and one open-ended question (each choice is indicative of a different level of social class). The questionnaire was completed by the students themselves. Because some students avoided stating their fathers' jobs, it was also completed by their parents to verify the correctness of the answers. The other instrument was a set of picture sequences which required the students to write a story within an equal time span, in order to examine differences in their language use. These were the same picture sequences used by Bernstein in his original study (a copy is provided in Appendix B). Bernstein used a similar analysis in his own studies, but with a verbal rather than a written description of the picture cards.
Data Collection
The social class questionnaire was administered to the students, who were already familiar with writing tasks. They then received the selected pictures and wrote their stories within an equal time span. All the grammatical categories of noun, pronoun, adjective, adverb, preposition, and conjunction were counted manually by the researchers. To ensure the reliability of the scoring, a correlation coefficient was computed for each category. The results, which ranged from .79 to .88, were evaluated as moderate reliability, in line with Farhady, Ja'farpur, and Birjandi (2006). To check whether the differences between the frequencies of the grammatical categories for the working-class and middle-class groups were significant, separate chi-square tests were run. Moreover, to determine the subjects' social control, the uses of personal pronouns by both groups were compared and their frequencies computed as well.
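The per-category comparison described above can be sketched as a one-degree-of-freedom chi-square test. The function below is a minimal illustration of this procedure; the frequencies used are hypothetical placeholders, not the study's actual counts.

```python
def chi_square_two_groups(f_middle, f_working):
    """Goodness-of-fit chi-square comparing two observed category
    frequencies against the null hypothesis of equal use
    (expected count = mean of the two observed counts)."""
    expected = (f_middle + f_working) / 2
    return ((f_middle - expected) ** 2 + (f_working - expected) ** 2) / expected

# Illustrative frequencies for one grammatical category (hypothetical)
chi2 = chi_square_two_groups(240, 180)
critical = 3.841  # chi-square critical value for df = 1, alpha = .05
print(round(chi2, 2), chi2 > critical)  # → 8.57 True
```

With these placeholder counts the observed χ² exceeds the critical value, which is the pattern Table 2 reports for all six categories.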
Results
Using SPSS software, descriptive statistics including the frequency, mean, and standard deviation of each category were computed for the two groups of participants. As can be seen in Table 1 below, the means and standard deviations of the two groups differed. To answer the first question of the study, all the linguistic categories in the writings of both groups were counted. Then, six chi-square tests were run to compare the differences between the frequencies of the grammatical categories. As is noticeable from the results in Table 2, for all six grammatical categories the observed χ² is greater than the critical χ². Accordingly, it can be claimed that the participants' social class influenced their language use. To determine the students' social control and answer the second question, the use of personal pronouns by the two social classes was analyzed. As Table 3 indicates, the frequency of the use of personal pronouns by the middle-class subjects is higher than that of the working-class participants. The third-person plural pronoun 'they' and the first-person singular 'I' had the highest frequencies among middle-class students. The second-person plural 'you' and the third-person singular 'he/she' had the lowest frequencies. For the working-class members, the most frequently used pronouns were 'they' and the first-person plural 'we'. To find out whether the differences in the uses of the personal pronouns were significant, six chi-square tests were run. The difference was significant only for the use of the first-person singular 'I'. These results corroborate Bernstein's theory to some extent, which maintains that users of the elaborated code make frequent use of the pronoun 'I' and are person-oriented, while users of the restricted code are position-oriented. The working-class participants gave more importance to the third-person plural and the first-person plural, which signifies that they paid more attention to group work and shared assumptions and were more position-oriented. The frequency of the first-person singular pronoun 'I' among the middle-class subjects indicated that they are more person-oriented.
Discussion and Conclusion
This study took a fresh look at Bernstein's theory and the question of whether social class differences can produce different language use. To this aim, the frequencies of use of the grammatical categories of noun, pronoun, adjective, adverb, preposition, and conjunction by 20 working-class and 20 middle-class elementary students were compared. The chi-square results corroborated Bernstein's theory regarding the effect of social class on language use. The findings of the study can be explained by reference to Bernstein's elaborated and restricted codes: working-class students have access only to the restricted codes, which they acquire in a socialization process whose values reinforce such codes, whereas middle-class students have access to both restricted and elaborated codes. The other question of this study concerned the social control of the middle- and working-class students based on their use of personal pronouns. The most salient result was the use of the first-person singular pronoun 'I' by middle-class students. The results again support Bernstein's theory on the grounds that working-class members are more position-oriented and give more attention to group work and shared assumptions, while middle-class students are far more person-oriented and tend towards personal autonomy.
The results accordingly corroborated Bernstein's theory in that restricted and elaborated codes are indicative of different social classes. They also show how complex the educational matters are that teachers should consider. The findings imply that teachers and program developers should take learners' social class differences into account, design curricula that help working-class students achieve elaborated codes, and look for ways to prevent the waste of students' talent in the lower social classes.
Table 1. Descriptive statistics for the use of grammatical categories among the two social classes
Table 2. Chi-square results for comparing the frequencies of grammatical categories of the groups
Table 3. Frequency of the use of personal pronouns among the groups
Biocompatible 5-Aminolevulinic Acid/Au Nanoparticle-Loaded Ethosomal Vesicles for In Vitro Transdermal Synergistic Photodynamic/Photothermal Therapy of Hypertrophic Scars
Biocompatible 5-aminolevulinic acid/Au nanoparticle-loaded ethosomal vesicles (A/A-ES) are prepared via ultrasonication for synergistic transdermal photodynamic/photothermal therapy (PDT/PTT) of hypertrophic scars (HS). Using ultrasonication, Au nanoparticles (AuNPs) are synthesized and simultaneously loaded into ethosomal vesicles (ES) without any toxic agents, and 5-aminolevulinic acid (ALA) is also loaded into the ES with an entrapment efficiency (EE) of 20%. The prepared A/A-ES displays strong absorbance at 600-650 nm due to the plasmonic coupling effect between neighboring AuNPs within the same vesicle, so that a 632-nm laser can simultaneously stimulate A/A-ES to produce heat and enhance the quantum yield of reactive oxygen species (ROS). An in vitro transdermal penetrability study demonstrates that A/A-ES acts as a highly efficient drug carrier, enhancing the penetration of both ALA and AuNPs into HS tissue. Taking human hypertrophic scar fibroblasts (HSF) as therapeutic targets, synergistic PDT/PTT of HS shows that A/A-ES enhances the quantum yield of ROS through the photothermal effect and the localized surface plasmon resonance (LSPR) of the AuNPs, resulting in a high level of apoptosis or necrosis. In short, the prepared A/A-ES shows better synergistic PDT/PTT efficiency against HSF than individual PDT or PTT, an encouraging perspective for the treatment of HS. Electronic supplementary material The online version of this article (10.1186/s11671-017-2389-x) contains supplementary material, which is available to authorized users.
Background
Hypertrophic scar (HS), a common and often unavoidable problem after cutaneous dermal injury, has a much thicker fibrotic dermis than normal skin [1,2]. Histopathologically, HS displays an increased number of hypertrophic scar fibroblasts (HSF), which are arranged in wavy patterns, oriented toward the epithelial surface, and form nodular structures [3]. Although various treatments are available clinically, HS treatment still faces many challenges due to numerous limitations. Intralesional injection therapy is widely used in clinical practice; however, it is limited by both uncomfortable procedures and side effects such as permanent hypopigmentation and skin atrophy [4]. Pressure therapy is limited by side effects such as tissue ischemia and decreased tissue metabolism [5]. To overcome these limitations, laser therapy, a topical and non-invasive modality, has been developed and applied to HS treatment for more than 25 years, taking advantage of laser irradiation [6]. Generally, laser therapy can be divided into photodynamic therapy (PDT) and photothermal therapy (PTT), based on different principles.
PDT has been used to treat HS, with the advantages of high selectivity and few side effects [7]. Its principle involves two steps: (a) photosensitizers preferentially accumulate in HSF, and (b) under irradiation with an appropriate laser, the photosensitizers produce cytotoxic reactive oxygen species (ROS), which lead to the apoptosis of HSF [8,9]. Among various photosensitizers, 5-aminolevulinic acid (ALA) has proven to be an excellent candidate for local treatment in dermatology without significant side effects. Therefore, ALA-based PDT (ALA-PDT) has been widely used in HS treatment, with marketing permission from the US Food and Drug Administration in 2010 [10]. However, its efficiency is controversial because of two limitations: (a) the poor penetrability of ALA into both HS tissue and HSF, and (b) the low quantum yield of ROS. To produce a marked effect, a high dose of ALA or a high laser power is applied in the clinic. Unfortunately, high-dose ALA damages the sebaceous glands and epidermis, and high-power laser tends to injure healthy tissue. Therefore, much attention has been paid to enhancing the penetrability of ALA and the quantum yield of ROS in PDT treatment of HS. Recently, ethosomal vesicles (ES), a specifically designed type of liposome, have been found to be able to overcome the barrier in HS for topical delivery, achieving significant progress [11,12]. In our prior work, ALA-loaded ES (ALA-ES) was capable of delivering much more ALA into HS than a traditional hydroalcoholic solution system [13]. ES can therefore enhance the penetrability of ALA to improve the PDT efficacy in HS. Meanwhile, a new synergistic treatment modality combining PDT with PTT holds promise to enhance both the quantum yield of ROS and the treatment efficacy in HS.
PTT is also an extraordinary theranostic approach for various diseases [14,15], and it has been successfully applied in the clinical treatment of HS [16]. Its mechanism involves harvesting light energy and generating heat, resulting in tissue vaporization, coagulation, HSF apoptosis, and collagen denaturation. However, PTT has severe side effects in HS treatment, such as oozing, ulceration, and burning discomfort, owing to its poor selectivity toward HS tissue under high-power laser [4]. Recently, PTT combined with nanotechnology has been regarded as a potentially highly selective and minimally invasive HS treatment based on the photothermal effect. More importantly, with Au nanoparticles (AuNPs) as effective photo-absorbing agents, PTT has been confirmed to enhance the quantum yield of ROS for two reasons: (a) thermal PDT significantly increases apoptotic cell death by enhancing the generation of ROS in a temperature-dependent manner [17], and (b) AuNPs can conjugate with ALA and enhance the quantum yield of ROS through localized surface plasmon resonance (LSPR) [18,19]. Therefore, ALA/AuNP-based synergistic photodynamic/photothermal therapy (PDT/PTT) holds promise for overcoming the current limitations of both PDT and PTT in HS treatment.
Recently, AuNP-based synergistic PDT/PTT has been widely used in various cancer therapies via injection [20,21]. Unlike cancers, HS is suitable for topical administration [22]. However, the collagen bundles in the HS dermis present great barriers to the penetration of ALA and AuNPs, which restricts the synergistic PDT/PTT treatment efficiency for HS. How to make ALA and AuNPs penetrate into HS simultaneously is therefore critical to synergistic PDT/PTT with maximum therapeutic efficacy and minimum side effects [23,24]. Furthermore, a suitable ALA/AuNP-based synergistic PDT/PTT system should also satisfy the following conditions: (a) the AuNPs can generate heat under the He-Ne laser used in ALA-PDT, and (b) the delivery system should be highly biocompatible. However, the various reported photosensitizer/AuNP systems cannot be applied to topical transdermal delivery and HS treatment, owing to limited penetrability and poor biocompatibility [25].
Herein, ALA/AuNP-loaded ES (A/A-ES) with excellent biocompatibility and penetrability is developed for synergistic PDT/PTT of HS. The biocompatible A/A-ES is prepared by loading both AuNPs and ALA into ES via an ultrasonication process without any toxic agent. The prepared A/A-ES shows strong absorbance in the range of 600-650 nm, as a result of the plasmonic coupling between neighboring AuNPs co-loaded in the same A/A-ES. This enables a He-Ne laser to stimulate A/A-ES to simultaneously generate heat and ROS, which promotes HSF apoptosis. A/A-ES displays excellent penetrability, simultaneously delivering ALA and AuNPs into HS in the in vitro study. Finally, taking HSF as the target, the in vitro PDT/PTT efficiency for HS is investigated via the accumulation of intracellular protoporphyrin IX (PpIX), the quantum yield of ROS, and the apoptosis of HSF. The penetration into HSF is also observed by TEM. Owing to the synergistic effect, A/A-ES facilitates the simultaneous penetration of both ALA and AuNPs into HS and HSF, causing a higher level of cell apoptosis than individual PTT or PDT. In short, A/A-ES is a promising transdermal delivery system for topical ALA and AuNP administration, has great potential in synergistic PDT/PTT of HS, and opens a new window for HS treatment.
Results and Discussion
The Characterization of A/A-ES

Ultrasonication was the key step in preparing A/A-ES for two reasons: (a) AuNPs could be formed via ultrasonication without any toxic agent, which endowed A/A-ES with biocompatibility; (b) ultrasonication could rearrange the lipid bilayers to form more vesicles with small sizes and relatively larger internal cores, which could load more ALA and AuNPs. In this work, AuNPs were formed according to the following scheme: (a) highly reactive H• and OH• radicals were generated within the cavitation bubbles by the homolysis of H2O (Eq. 1), (b) these radicals could abstract the alpha H of CH3CH2OH to form the reducing radical CH2•CH2OH (Eq. 2), and (c) during pyrolysis within the bubbles, the radical CH2•CH2OH could reduce Au3+ to form AuNPs (Eq. 3) [26].
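Written out, the three steps above correspond schematically to the following reactions. This is a reconstruction from the prose description (the original Eqs. 1-3 are not reproduced in this excerpt); ")))" denotes ultrasonic irradiation, and Eq. 3 is left unbalanced, with the organic oxidation products unspecified.

```latex
\begin{align}
\mathrm{H_2O} &\xrightarrow{\,)))\,} \mathrm{H^{\bullet}} + \mathrm{OH^{\bullet}} \tag{1}\\
\mathrm{CH_3CH_2OH} + \mathrm{H^{\bullet}} &\longrightarrow \mathrm{CH_2^{\bullet}CH_2OH} + \mathrm{H_2} \tag{2}\\
\mathrm{Au^{3+}} + \mathrm{CH_2^{\bullet}CH_2OH} &\longrightarrow \mathrm{Au^{0}} + \text{oxidation products} \tag{3}
\end{align}
```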
A/A-ES was first verified by UV-Vis spectroscopy (Fig. 1a). It had a strong absorbance in the range of 600-650 nm, as a result of the plasmonic coupling between neighboring AuNPs in the same A/A-ES [20]. Therefore, 632-nm laser irradiation could be used for simultaneous PDT and PTT of HS. Furthermore, A/A-ES exhibited a relatively narrow size distribution with an average size of 166 ± 83 nm, according to the DLS analysis in Fig. 1b. Interestingly, the two size distributions were attributed to unloaded AuNPs and A/A-ES, and the great difference between the two distributions suggested that the amount of A/A-ES was much larger than that of unloaded AuNPs. The PDT efficiency depended on the amount of ALA loaded in A/A-ES. Benefiting from the transmembrane pH gradient active loading method, the EE of ALA was 20%, higher than the values in reported works (less than 10%) [27]. The morphology of A/A-ES was also studied. In the SEM images (Fig. 1c), A/A-ES appeared as intact spherical lamellar vesicles about 200 nm in size, and AuNPs could clearly be observed loaded in the ES. Besides the AuNPs, the lamellae extended to the AuNP surface in Fig. 1d, which is characteristic of ES [28,29]. Furthermore, A/A-ES loaded with different numbers of AuNPs had similar sizes (Additional file 1: Figure S1). Therefore, ultrasonication adjusted A/A-ES into a stable and deformable structure, which facilitates squeezing through narrow spaces in HS. In summary, A/A-ES was successfully prepared with a 20% EE of ALA and strong absorbance at 600-650 nm. Its morphology is also very conducive to penetrability, consistent with the in vitro PDT/PTT study described below.
In Vitro Transdermal Penetrability Study of A/A-ES
The retention of A/A-ES was an important parameter for evaluating its penetrability and treatment efficiency. Therefore, the retained amounts of both ALA and AuNPs in HS at different times were investigated using Franz diffusion cells. As shown in Fig. 2a, both ALA and AuNPs rapidly reached their maximum retention within the first 2 h, owing to the penetration-enhancing function of ES. After reaching the maximum, the retentions of both ALA and AuNPs continuously declined as A/A-ES penetrated through the whole HS. The results indicated that A/A-ES had sufficient penetrability. Relative to the applied dose of ALA (2 mg), 48% of the ALA was retained in HS tissue, which is favorable for PDT of HS. Furthermore, the identical retention profiles of ALA and AuNPs suggested that both were loaded in the ES, consistent with the microscopy results. Accordingly, 2 h was a suitable administration time for topical use, giving the maximum retained amount of A/A-ES. In our previous work, ES was shown to be a highly efficient drug carrier that enhances drug penetration into HS tissue [13]. Therefore, the distribution and behavior of A/A-ES in HS were also studied by TEM in this work. As shown in Fig. 2b, A/A-ES with intact structure was found in the dermis, indicating that A/A-ES could penetrate through the epidermis into the HS dermis in a stable form. In the lower dermis shown in Fig. 2c, the ES and AuNPs were observed in a separated state, suggesting that A/A-ES would release both ALA and AuNPs. Interestingly, AuNPs aggregated in the dermis even though they were no longer loaded in ES. Furthermore, more AuNPs were found to accumulate in the dermis in Fig. 2d, which could provide plasmonic coupling between neighboring AuNPs to harvest light energy and generate heat.
In brief, the in vitro transdermal penetrability study demonstrated that A/A-ES is a highly efficient drug carrier that enhances the penetration of both ALA and AuNPs into HS tissue, and the aggregated AuNPs in the dermis favor heat generation [20]. Therefore, A/A-ES displays great potential in synergistic PDT/PTT for HS.
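The 48% retention figure above is simply the retained mass expressed as a fraction of the applied dose. The sketch below illustrates that calculation; the applied dose (2 mg) comes from the text, while the retained mass is back-calculated for illustration rather than being a reported measurement.

```python
def retention_percent(retained_mg, applied_mg):
    """Retention (%) = mass recovered from tissue / mass applied x 100."""
    return retained_mg / applied_mg * 100

applied = 2.0    # mg ALA applied to the donor compartment (from the text)
retained = 0.96  # mg ALA extracted from HS tissue (illustrative value)
print(retention_percent(retained, applied))  # → 48.0
```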
In Vitro PDT/PTT of HSF

Biocompatibility Assay
Although the biocompatibility of AuNPs has been well proven in reported works, the biocompatibility of A/A-ES toward HSF was also studied here [30,31]. Different concentrations of ALA-ES, Au-ES, and A/A-ES (ALA concentrations from 0.1 to 10 mM; Au-ES had the same AuNP concentration as A/A-ES) were incubated with HSF for 12 h without irradiation. The results showed no dark cytotoxicity at concentrations of up to 2.0 mM, with cell survival rates above 90%. At concentrations higher than 2.0 mM, a slight decrease in cell survival rates was detected. These results showed that A/A-ES had excellent biocompatibility, so the subsequent PDT/PTT studies were carried out at a concentration of 2.0 mM (ca. 14% A/A-ES in culture medium, v/v; Fig. 3).
PDT/PTT for HSF
A/A-ES could overcome surface permeability barriers through fusion with the HSF membrane, liberating the ALA and AuNPs directly into the cell cytoplasm [32]. According to the mechanism of ALA-PDT, ALA released from A/A-ES is converted to PpIX in the HSF cytoplasm; under laser irradiation, PpIX produces cytotoxic ROS, leading to cell apoptosis. Therefore, CLSM was used to study the accumulation of both PpIX and ROS (Fig. 4) [33,34]. Before laser irradiation, the red fluorescence of PpIX was mainly distributed in the cytoplasm of HSF. PpIX in HSF treated with ALA-ES and A/A-ES was much more abundant than the autologous PpIX in HSF treated with Au-ES. Moreover, ROS was hardly detected in any HSF without laser irradiation, as expected. After laser irradiation, the PpIX intensities in HSF treated with ALA-ES and A/A-ES decreased, and ROS in these cells was easily detected with strong intensity. Meanwhile, the HSF treated with Au-ES showed no response in PpIX or ROS because they lacked sufficient autologous PpIX. Interestingly, a comparison of the ROS intensities showed that A/A-ES promoted more ROS generation than ALA-ES, which was attributed to the AuNPs. Furthermore, the cell morphology provided more information. The HSF treated with ALA-ES retained normal morphology, while the HSF treated with Au-ES displayed unhealthy protrusions from the plasma membrane. In contrast, the HSF treated with A/A-ES showed protruding and retracting "blebs," a feature of dying cells [35]. These differences in ROS generation and cell morphology were attributed to the AuNP-based PTT, which was also investigated by infrared imaging (Fig. 5). According to the mechanism of AuNP-PTT, AuNPs in the HSF cytoplasm can absorb the 632-nm laser and generate enough heat under irradiation to induce apoptosis or necrosis. Therefore, the photothermal effects of ALA-ES, Au-ES, and A/A-ES were monitored using an infrared thermal imaging camera.
Compared with ALA-ES, Au-ES and A/A-ES reached obviously higher temperatures upon irradiation (41.3°C for Au-ES and A/A-ES vs. 36.5°C for ALA-ES). After the laser was removed, all temperatures quickly declined to normal values within 1 min, suggesting that the laser irradiation treatment could be safe [36]. Therefore, the AuNPs loaded in ES provided an effective PTT, as also shown by the apoptosis and necrosis assay. In summary, A/A-ES enhanced the quantum yield of ROS and provided a photothermal effect, achieving excellent PDT/PTT synergistic treatment efficiency for HSF.
Apoptosis and Necrosis Assay
The efficiency of PDT/PTT was further studied via the apoptosis and necrosis of HSF treated with ALA-ES, Au-ES, and A/A-ES under laser irradiation. An apoptosis assay was carried out using flow cytometry analysis with Annexin V-FITC and propidium iodide (PI) double staining (Fig. 6). The control results showed that laser irradiation alone did not affect cell viability (Fig. 6a). Before irradiation, ALA-ES, Au-ES, and A/A-ES all displayed good biocompatibility. After irradiation, the proportions of necrosis and apoptosis differed significantly among HSF treated with ALA-ES, Au-ES, and A/A-ES. Briefly, the HSF treated with A/A-ES showed the highest fractions of both necrosis and apoptosis, consistent with the CLSM results. In Fig. 6e, the statistical analysis of the experiments revealed that necrotic cell death increased to 61.8% with the A/A-ES treatment, indicating that A/A-ES had better synergistic PDT/PTT efficiency for HSF than individual PDT (47.7% necrotic cell death) or PTT (24.3% necrotic cell death). Interestingly, the results also indicated that PDT played a more effective role in HS treatment than PTT, and that AuNP-based PTT could assist the PDT effect. These results might be explained by A/A-ES enhancing the quantum yield of ROS while providing a photothermal effect, achieving excellent PDT/PTT synergistic treatment efficiency for HSF. Although the EE of ALA in A/A-ES was much lower than that in ALA-ES (20 vs. 54%), the necrotic cell death of A/A-ES and ALA-ES was similar (61.8 vs. 78%). This result can be explained by A/A-ES enhancing the quantum yield of ROS through the photothermal effect and the LSPR of the AuNPs.
Visualization of A/A-ES in HSF
The detailed changes in HSF morphology and structure caused by PDT/PTT were also investigated by both light microscopy and TEM (Fig. 7). Before irradiation, HSF treated with A/A-ES grew well, with normal morphology and firm adherence, indicating that A/A-ES had excellent biocompatibility as expected (Fig. 7a). In the TEM image, besides the various organelles of normal cytoplasm, the treated HSF contained many AuNP aggregates in the cell cytoplasm (the blank frames in Fig. 7c). This can be explained by the fusion of A/A-ES with the cell membrane delivering more AuNPs and ALA into HSF. Therefore, the AuNPs could act as a more effective photothermal source due to the stronger plasmonic coupling effect and could enhance the quantum yield of ROS through LSPR. Interestingly, as shown in the dashed frame in Fig. 7c, some AuNPs were outside the HSF due to exocytosis, which demonstrated the excellent biocompatibility of A/A-ES once more. After irradiation, HSF displayed a feature of dying cells, namely protrusions from the plasma membrane (Fig. 7b) [37]. Owing to ROS and the photothermal effect, swollen mitochondria and ruptured outer membranes, further indicators of HSF death, were found in the HSF cytoplasm (Fig. 7d) [35]. Furthermore, ES was also found with its characteristic membrane structure (red frames
Conclusions
Biocompatible A/A-ES was facilely prepared for in vitro synergistic PDT/PTT of HS by permeating into HS and destroying HSF. Using ultrasonication, AuNPs were synthesized and simultaneously loaded in the absence of any toxic agents. A/A-ES showed strong absorbance at 600-650 nm due to the plasmonic coupling effect between neighboring AuNPs in the ES, with a high EE for ALA (ca. 20%). The in vitro transdermal penetrability study demonstrated that A/A-ES is a highly efficient drug carrier that enhances the penetration of both ALA and AuNPs into HS tissue. The in vitro PDT/PTT study on HSF indicated that A/A-ES could enhance the quantum yield of ROS via the photothermal effect and the LSPR of the AuNPs, causing a high level of cell apoptosis or necrosis. In summary, biocompatible A/A-ES had better synergistic PDT/PTT efficiency for HSF than individual PDT or PTT, an encouraging perspective for the treatment of HS. Further work will focus on the in vivo study of synergistic PDT/PTT for HS in scar models, and the relevant work is ongoing.
The Preparation of A/A-ES
One hundred eighty milligrams of phosphatidylcholine (PC, 95.8% soybean lecithin, Lipoid GmbH, Germany) was dissolved in 1.8 mL CH3CH2OH; then 0.6 mL HAuCl4 (10 mM, Aladdin, Shanghai, China) and 3.6 mL ALA-citrate buffer solution (CBS, 0.01 M, 12 mg ALA, pH 4.0) were added dropwise, in turn, to the PC solution. The mixture was stirred at 700 rpm for 10 min to prepare the precursor solution. As shown in Scheme 1, the precursor solution was placed in an ultrasonic environment at 200 W for 30 min, until it turned a brilliant wine-red color. The reaction solution was then centrifuged (8000 rpm, 20 min) to remove the residual HAuCl4 and PC. Finally, the deposit was redispersed in 3 mL ALA hydroalcoholic solution (ALA-HA, 2 mg/mL ALA, 30% ethanol) and incubated by a transmembrane pH gradient active loading method according to our prior work [13]. During incubation, plenty of the exterior un-ionized ALA diffused
The Characterization of A/A-ES
A/A-ES was negatively stained with phosphotungstic acid (1.5 wt%) and then observed by transmission electron microscopy (TEM, JEOL, Japan, accelerating voltage of 120 kV). A/A-ES was also examined by scanning electron microscopy (SEM, JEOL, Japan, accelerating voltage of 10 kV). The A/A-ES size distribution was determined by dynamic light scattering (DLS) analysis on a NiComp 380ZLS inspection system (Nicomp, USA). ALA was determined by a fluorescamine derivatization approach, detailed in Additional file 1. The entrapment efficiency (EE) of ALA, determined by an ultrafiltration method, is shown in Additional file 1. Finally, UV-Vis spectra of A/A-ES were recorded.
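The entrapment efficiency determined by ultrafiltration follows the standard mass balance: entrapped drug is the total drug minus the free drug recovered in the ultrafiltrate. A minimal sketch of that calculation; the total of 12 mg ALA comes from the preparation section, while the free-drug figure below is purely illustrative, not a value from the paper:

```python
def entrapment_efficiency(total_ala_mg, free_ala_mg):
    """EE (%) = (total ALA - free ALA in the ultrafiltrate) / total ALA x 100."""
    if total_ala_mg <= 0:
        raise ValueError("total ALA must be positive")
    return (total_ala_mg - free_ala_mg) / total_ala_mg * 100.0

# 12 mg total ALA; a hypothetical 9.6 mg free in the ultrafiltrate
# reproduces the ~20% EE reported for A/A-ES.
print(round(entrapment_efficiency(12.0, 9.6), 1))  # 20.0
```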
In Vitro Penetrability Study by Franz Diffusion Cells
The penetrability of A/A-ES was studied using Franz diffusion cells with a 2.8 cm² effective permeation area. The cells, comprising donor and receptor compartments, were maintained at 37°C by a circulating water bath. HS tissues were collected with informed consent at Shanghai Ninth People's Hospital, in accordance with the ethical guidelines of the 1975 Declaration of Helsinki and with the approval of Shanghai Ninth People's Hospital. Fresh HS tissue without fatty tissue (less than 24 h after excision) was mounted on the receptor compartment with the stratum corneum facing the donor compartment. One milliliter of A/A-ES was added to the donor compartment, which was then covered with parafilm to prevent evaporation. After penetration for different times, HS tissues were washed promptly to remove residual A/A-ES from the HS surface. To quantify the amounts of ALA and AuNPs retained in HS, the tissues were cut into small pieces, and ALA was extracted by dialysis in PBS for 24 h. The extract solutions were analyzed for the retained amount of ALA in HS tissue. The HS tissues remaining in the dialysis bags were also analyzed for the retained amount of AuNPs by inductively coupled plasma-mass spectrometry (ICP-MS). After permeation with ALA-ES for 2 h, HS tissue was washed, prefixed, dehydrated, infiltrated, and post-fixed. After embedding in epoxy resins, samples were cut into ultrasections (50 nm thickness, perpendicular to the epidermis) and observed by TEM at an accelerating voltage of 120 kV.
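The "retention amount" recovered from the dialysis extract reduces to extract concentration times extract volume, typically normalized per gram of tissue. A sketch of that arithmetic with hypothetical numbers (no concentrations, volumes, or tissue masses are given in the text):

```python
def retained_ala(extract_conc_ug_per_ml, extract_volume_ml, tissue_mass_g):
    """ALA retained in HS tissue, reported per gram of tissue (ug/g)."""
    total_ug = extract_conc_ug_per_ml * extract_volume_ml
    return total_ug / tissue_mass_g

# hypothetical: 5 ug/ml measured in 10 ml of PBS dialysate, from 0.5 g of tissue
print(retained_ala(5.0, 10.0, 0.5))  # 100.0 ug/g
```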
In Vitro PDT/PTT for HSF Cell Culture HSF were isolated and cultured by a common method as follows: fresh HS tissue pieces (1 mm³, less than 6 h after excision) were digested using collagenase type I (Invitrogen, USA) to obtain a single-cell suspension. The HSF were grown in Dulbecco's Modified Eagle Medium (DMEM, Invitrogen, USA) containing 10% fetal bovine serum (FBS, Gibco, USA) at 37°C and 5% CO₂. The culture medium was changed every 3 days, and cells were passaged at 80% confluence. Cells at passages two and three were used in the following experiments.
Biocompatibility Assay
To evaluate the biocompatibility of A/A-ES, HSF were seeded in 96-well plates at 2 × 10³ cells/well. The culture medium was replaced with FBS-free medium containing freshly prepared ALA-ES, Au-ES, or A/A-ES at different concentrations. After 12 h, cell viability was measured using a cell counting kit-8 (CCK-8, Dojindo, Japan) following the manufacturer's instructions.
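CCK-8 viability is conventionally normalized to the untreated control after blank subtraction. A sketch of that arithmetic; the absorbance values below are hypothetical, not measurements from the paper:

```python
def cck8_viability_percent(a_sample, a_control, a_blank):
    """Viability (%) = (A_sample - A_blank) / (A_control - A_blank) x 100."""
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

# hypothetical 450 nm absorbances: treated well 0.85, untreated control 0.95,
# medium-only blank 0.10
print(round(cck8_viability_percent(0.85, 0.95, 0.10), 1))  # 88.2
```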
PDT/PTT Procedure HSF were seeded in 12-well plates at 4 × 10⁴ cells/well. After 12 h, the culture medium was replaced with FBS-free medium containing freshly prepared ALA-ES, Au-ES, or A/A-ES (14%, v/v) for 6 h. After treatment, HSF were washed with PBS and incubated in culture medium for 1 h. They were then irradiated with a He-Ne laser (632 nm wavelength, 40 mW/cm², Shanghai Institute of Laser Technology, China) for 20 min. The culture medium was then replaced with fresh DMEM containing 10% FBS for another 24 h in preparation for subsequent experiments. Furthermore, HSF treated with A/A-ES and irradiation were prefixed, dehydrated, and embedded to prepare ultrasections for TEM examination.
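The light dose delivered by this protocol follows directly from the stated parameters (40 mW/cm² for 20 min), since fluence is irradiance times exposure time. A quick check:

```python
def fluence_j_per_cm2(irradiance_mw_per_cm2, exposure_min):
    """Light dose (J/cm^2) = irradiance (mW/cm^2) x exposure time (s) / 1000."""
    return irradiance_mw_per_cm2 * exposure_min * 60.0 / 1000.0

print(fluence_j_per_cm2(40, 20))  # 48.0 J/cm^2 for the He-Ne protocol above
```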
Intracellular PpIX and ROS Generation Assay
Intracellular PpIX accumulation and ROS generation in HSF were detected using confocal laser scanning microscopy (CLSM, Leica TCS SP5, Germany). The ROS generation assay was performed using DCFH-DA according to the manufacturer's instructions. The coverslip with cells was mounted on a glass slide and observed at 405 nm excitation/635 nm emission for PpIX and 488 nm excitation/560 nm emission for ROS. All data were analyzed with LAS AF software.
Apoptosis and Necrosis Assay
The apoptosis and necrosis of HSF were analyzed by flow cytometry after Annexin V-FITC and propidium iodide (PI) double staining. Samples were prepared according to the protocol of the Annexin V-FITC/PI apoptosis detection kit and then analyzed on a BD FACSCalibur (BD Biosciences, Mountain View, USA). Data analysis was performed with FlowJo 7.6 software.
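FlowJo resolves the double-stained events into the usual four quadrants. The gating logic can be sketched as below; the fluorescence cutoffs are hypothetical placeholders, in practice set against unstained and single-stained controls:

```python
def classify_event(annexin_v, pi, annexin_cut=1000.0, pi_cut=1000.0):
    """Quadrant logic for Annexin V-FITC / PI double staining."""
    a_pos, p_pos = annexin_v >= annexin_cut, pi >= pi_cut
    if a_pos and p_pos:
        return "late apoptotic"   # Annexin V+ / PI+
    if a_pos:
        return "early apoptotic"  # Annexin V+ / PI-
    if p_pos:
        return "necrotic"         # Annexin V- / PI+
    return "live"                 # Annexin V- / PI-

print(classify_event(5000.0, 200.0))  # early apoptotic
```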
Statistical Analysis
Data are presented as mean ± SD unless otherwise stated. Statistical significance was determined using a two-tailed Student's t-test (P < 0.05) unless otherwise stated.
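For two independent groups, the two-tailed Student's t-test reduces to the pooled-variance statistic below. A dependency-free sketch; in practice `scipy.stats.ttest_ind` returns the same t statistic along with the two-tailed p-value:

```python
import math

def student_t(sample_a, sample_b):
    """Pooled-variance t statistic for two independent samples
    (equal-variance Student's t-test, df = n_a + n_b - 2)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))

print(round(student_t([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]), 3))  # -3.674
```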
Additional file
Additional file 1: Figure S1.
The RING ubiquitin E3 RNF114 interacts with A20 and modulates NF-κB activity and T-cell activation
Accurate regulation of nuclear factor-κB (NF-κB) activity is crucial to prevent a variety of disorders including immune and inflammatory diseases. Active NF-κB promotes IκBα and A20 expression, important negative regulatory molecules that control the NF-κB response. In this study, using two-hybrid screening we identify the RING-type zinc-finger protein 114 (RNF114) as an A20-interacting factor. RNF114 interacts with A20 in T cells and modulates A20 ubiquitylation. RNF114 acts as a negative regulator of NF-κB-dependent transcription, by stabilizing not only the A20 protein but also IκBα. Importantly, we demonstrate that in T cells, the effect of RNF114 is linked to the modulation of T-cell activation and apoptosis but is independent of cell cycle regulation. Altogether, our data indicate that RNF114 is a new partner of A20 involved in the regulation of NF-κB activity that contributes to the control of signaling pathways modulating the T cell-mediated immune response.
Nuclear factor-κB (NF-κB) is a principal transcriptional regulator playing a pivotal part in innate and adaptive immunity, inflammation, development, cell proliferation and survival.1,2 Defects in the regulation of NF-κB-dependent gene expression contribute to a variety of diseases, including inflammatory and autoimmune diseases, neurological disorders and cancer.3-5 Therefore, activation of NF-κB is tightly regulated by several NF-κB target genes such as IκBα, A20 and CYLD that function as inhibitors in a negative feedback loop.6-10 A20 (also known as TNFAIP3) is a cytoplasmic zinc-finger protein that was originally identified as a tumor necrosis factor (TNF)-inducible protein and has been characterized as a dual inhibitor of NF-κB activation and cell death.11 In most cell types, basal A20 expression is very low but its transcription is rapidly induced upon NF-κB activation. The essential role of A20 in the regulation of NF-κB and apoptotic signaling was clearly demonstrated with the generation of a complete A20 knockout mouse.12,13 Mice deficient for A20 are hypersensitive to TNF and die prematurely because of severe multiorgan inflammation and cachexia.12 However, the antiapoptotic function of A20 is not a general feature, as A20 only protects some cell types from specific death-inducing agents.14 Protein ubiquitylation plays an important role in the regulation of the NF-κB pathway, not only by controlling the stability of the factors integrating this signaling cascade but also their activity. Little is known about the molecular mechanisms that regulate the ubiquitin-editing and NF-κB-inhibitory functions of A20. To date, two enzymatic activities have been associated with A20, a C-terminal ubiquitin ligase and an N-terminal de-ubiquitylating activity, acting on targets such as RIP and promoting their degradation.15
A number of A20-interacting proteins, including TAX1BP1, Itch and RING-type zinc-finger protein 11 (RNF11), are known to be required for A20 to terminate NF-κB signaling.16-18 Interestingly, the expression, biological activities and mechanism of action of A20 are likely dependent on the cellular context as well as the stimulus involved.14 Indeed, in lymphoid cells, A20 is constitutively expressed and its expression is reduced after T-cell receptor (TCR) stimulation, owing to activation of the paracaspase MALT1 as well as to proteasome degradation.19 In addition, in mesenchymal stromal cells, we have recently demonstrated that A20 is constitutively expressed and its expression is reduced after TNFα stimulation because of its proteasome-induced degradation.20 In humans, polymorphisms within the A20 genomic region predispose individuals to autoimmune diseases such as systemic lupus erythematosus, Crohn's disease and psoriasis.21 To identify new psoriasis susceptibility loci, a genome-wide association study (GWAS) of 1409 psoriasis patients and 1436 controls was carried out.22 Next to single-nucleotide polymorphisms (SNPs) in genes involved in IL-23 signaling, loci including A20, ABIN-1 (also known as TNFAIP3-interacting protein 1 (TNIP1)) and RNF114 showed strong association with psoriasis.22,23 RNF114 belongs to a recently defined family of RING (really interesting new gene) domain E3 ubiquitin ligases, characterized by the presence of three zinc-fingers and one ubiquitin-interacting motif (UIM).24,25 RNF114, also known as zinc-finger protein 313 (ZNF313), efficiently binds K48- and K63-linked polyubiquitin chains in vitro and in vivo and possesses E3 ubiquitin ligase activity. RNF114 is a soluble cytosolic protein that can be induced by interferons and synthetic dsRNA.
Real-time PCR analysis demonstrated that RNF114 is clearly expressed in disease-relevant cell types, including CD4+ T lymphocytes, dendritic cells and skin, and also in testis, pancreas, kidney and spleen, indicating that the activity of the RNF114 protein is unlikely to be restricted to the immune system.26,27 Recently, it was observed that RNF114 has a mitogenic function and that its deregulation can disturb cell cycle control mechanisms and thus influence the cellular stress response. RNF114 expression is reduced at the G1 phase but increased at the S and G2/M transitions, suggesting that its elevation may drive a G1 to S transition of the cell cycle.28 Using a two-hybrid approach, we found that RNF114 was able to interact with A20. Therefore, the goal of this work was to determine the role of this interaction in the stability and activity of A20 and to explore its impact on the regulation of NF-κB-dependent functions.
Results
RNF114 interacts with A20. To find new A20-interacting proteins, a yeast two-hybrid screening was performed using a human thymocyte (CD4+CD8+) cDNA library and a full-length form of A20 (Hybrigenics, Paris, France). In this screening, three of the proteins found, A20 itself, ABIN-1 and 14-3-3, had already been described as A20-interacting proteins.29-31 The RING finger protein RNF114 was identified as a novel interacting protein. This interaction was confirmed using different approaches. First, a pull-down experiment using GST-A20 or GST fusion proteins and lysates of human embryonic kidney 293 (HEK293) cells overexpressing FLAG-RNF114 or FLAG-A20 (Figure 1a) was performed, as well as co-immunoprecipitation assays using HEK293 cells transfected with FLAG-A20WT in the presence or absence of AU5-RNF114 (Figure 1b). The immunoprecipitation experiment with anti-AU5 antibody showed a clear interaction between FLAG-A20WT and AU5-RNF114 only when both proteins were present. No signal was ever detected in the immunoprecipitation control, indicating that this interaction was specific (Figure 1b). These controls were included only in the first figure to simplify the rest of the figures. TNFα stimulation stabilizes FLAG-A20WT, favoring its interaction with AU5-RNF114 (Figure 1c).
To define which part of A20 was involved in its interaction with RNF114, different constructs of A20 were made. In the first experiment, we observed that the C-terminal part of A20 (390-790), containing the E3 ligase domain, was involved in the interaction with RNF114 (Figures 1d and e). To better define the interaction domain, truncated forms of the C-terminal part were made. Altogether, the results shown in Figure 1e demonstrate that zinc-fingers 4, 5, 6 and 7 of A20 contribute to creating a stable interaction with RNF114.
Finally, to further confirm the association between the two proteins, we checked their interaction at the endogenous level in the absence of any exogenous expression. As A20 is expressed under basal conditions in T cells, we evaluated the interaction between these two proteins in Jurkat T cells by performing a co-immunoprecipitation experiment using anti-A20- or anti-RNF114-specific antibodies. We confirmed the association between these two proteins in reciprocal experiments, even if the interaction was more obvious when the anti-RNF114 antibody was used to co-immunoprecipitate A20 (Figure 2a). This result suggests that only a fraction of A20 is associated with RNF114. However, we cannot exclude that those differences reflect the capacity of each antibody to recognize and bind these interacting molecules (Figure 2a). We then checked whether the interaction between these two proteins was modified after stimulation.

Figure 1. RNF114/ZNF313 interacts with A20. (a) Pull-down experiment using GST-A20 or GST fusion proteins and lysates of HEK293 cells transfected with FLAG-A20WT or FLAG-RNF114 (* indicates a nonspecific band). (b) HEK293 cells were transfected with FLAG-A20WT and, when indicated, with AU5-RNF114. AU5-RNF114 immunoprecipitation was used to confirm the interaction with FLAG-A20. (c) HEK293 cells were transfected with AU5-RNF114 and FLAG-A20WT as indicated. Cells were treated with TNFα for 20 min and lysates were subjected to anti-AU5 immunoprecipitation. (d) HEK293 cells were transfected with different forms of FLAG-A20 (WT; N-terminal: 1-390; C-terminal: 390-790) and AU5-RNF114 to determine which domains were involved in the interaction between A20 and RNF114. (e) Different constructs of A20 were prepared to define its interaction domain with RNF114. Results of immunoprecipitation experiments are shown. The symbol '−' indicates no interaction and '+' indicates interaction.
For that purpose, Jurkat T cells were stimulated as indicated with TNFα or CD3/CD28 antibodies. We observed that the association increased after TNFα stimulation (Figure 2b), likely as a consequence of an increase in A20 levels after such stimuli. Interestingly, after TCR stimulation, we observed an increase in the A20-RNF114 interaction and also a striking modification of the molecular weight of the A20 associated with RNF114 (Figure 2b). These results indicate that under these stimulation conditions, the fraction of A20 able to interact with RNF114 was post-translationally modified. This modified form of A20 is not detectable in the whole lysate extract (INPUT) or after A20 immunoprecipitation (data not shown), supporting the notion that this modified form of A20 specifically bound to RNF114 is a small fraction of the total A20 protein pool. According to the shift in molecular weight of the modified A20, the main band could correspond to modification by a member of the ubiquitin family rather than phosphorylation, which would be more difficult to resolve on a 10% polyacrylamide gel. Furthermore, after CD3/CD28 stimulation we also observed multiple slowly migrating forms of A20, disposed in a pattern that is more typical of polyubiquitylation (Figure 2b). However, treatment with the proteasome inhibitor MG132 did not affect the amount or the accumulation of high-molecular-weight forms of A20 (data not shown). Altogether, the different experiments presented in these figures clearly demonstrate that RNF114 specifically interacts with A20, and perhaps more specifically with a modified fraction of total A20.
Effect of RNF114 on A20 ubiquitylation. RNF114 belongs to a novel family of ubiquitin ligases with zinc-fingers and a ubiquitin-binding domain, like the T-cell regulator RNF125/TRAC-1.24,25 Therefore, we investigated whether RNF114 could promote A20 modification by coexpressing His6-ubiquitin and FLAG-A20 in HEK293 cells in the presence or absence of AU5-RNF114. As shown in Figure 3a, A20 is modified with ubiquitin in the absence of AU5-RNF114; however, ubiquitylated A20 significantly increased when RNF114 was expressed. In addition, we observed that RNF114 increases the expression level of A20 in the absence of His6-ubiquitin (Input, anti-FLAG; Figure 3a). Ubiquitylation of RNF114 itself can also be seen under these conditions (Figure 3a). As TCR stimulation induced a modification of the molecular weight of A20 bound to RNF114, suggesting a post-translational modification of A20 (Figure 2b), we checked whether RNF114-induced A20 ubiquitylation was increased after phorbol 12-myristate 13-acetate (PMA)/ionomycin treatment in HEK293 cells. Our results revealed that the level of ubiquitylated A20 increased in the presence of RNF114 and, interestingly, this effect was more pronounced after PMA/ionomycin treatment, considered a 'TCR-like' stimulus (Figure 3b). These results indicate that RNF114 promotes the ubiquitylation of A20.

Figure 2. Interaction between endogenous RNF114 and A20 in T cells. (a) Jurkat T cells were used to confirm the endogenous interaction between the two proteins. Co-immunoprecipitations of A20 and RNF114, using anti-A20 and anti-RNF114 antibodies, are shown. (b) Co-immunoprecipitation between A20 and RNF114 in Jurkat T cells stimulated with TNFα or CD3/CD28 for the indicated times, using anti-RNF114 antibodies.
RNF114 induces the stabilization of NF-κB regulators.
To explore the role of RNF114 in the NF-κB pathway, we evaluated the effects of RNF114 overexpression on the stability of two NF-κB regulators, A20 and IκBα. The blot (Figure 4a, left panel), as well as the corresponding quantification (Figure 4a, right panel), shows an accumulation of endogenous IκBα and A20 in the presence of increasing amounts of RNF114. The effect of RNF114 on IκBα stability was also evaluated in a pulse-chase experiment in the presence of 10 μg/ml of cycloheximide (CHX) (Figure 4b). We observed that IκBα was better stabilized when low levels of RNF114 were expressed, suggesting that other possible targets could be affected when high doses of RNF114 are used. To confirm the implication of RNF114 in A20 and IκBα stability, we used a GFP-expressing lentiviral vector to transduce human Jurkat T cells with specific RNF114-shRNA. We used two different shRNA sequences against RNF114 (shRNF1 and shRNF2) to reduce possible off-target effects. The percentage of infection obtained was ~90% for the two tested constructs, as well as for the control (Figure 4c, left panel), and the efficiency of endogenous RNF114 knockdown was confirmed by western blot (Figure 4c, right panel). When the expression of RNF114 was knocked down in Jurkat T cells, we observed a slight but consistent decrease of A20 and IκBα expression. The corresponding western blot and its quantification are shown in Figure 4d. Altogether, the results presented here demonstrate that RNF114 plays a role in the regulation of IκBα and A20 stability. In addition, these results suggest that RNF114-induced A20 ubiquitylation is responsible for its stabilization rather than its degradation.
Effect of RNF114 on NF-κB-dependent transcription. Because of the important role of A20 in the regulation of NF-κB, we evaluated the effects of RNF114 on the function of this transcription factor. Luciferase assays were performed in both HEK293 (Figure 5a) and Jurkat T (Figure 5b) cells using the NF-κB reporter (3-κB enhancer ConA-luciferase plasmid).32 Overexpression of RNF114 significantly attenuated TNF-induced NF-κB activation in HEK293 and Jurkat T cells, as well as TCR-induced activation in Jurkat T cells (Figures 5a and b). To confirm the implication of RNF114 in the regulation of NF-κB, we used the Jurkat T cells described above, stably transduced with shRNF114 (Figure 4c). As can be observed in Figure 5c, knockdown of endogenous RNF114 enhanced TCR- as well as TNFα-induced NF-κB activation in Jurkat T cells. These results confirmed that RNF114 is a negative modulator of the NF-κB transcription pathway, acting through the stabilization of the A20 and IκBα inhibitors. The role of RNF114 might be important in cellular contexts or situations where alternative means to moderate the NF-κB pathway are required.
RNF114 is a regulator of TCR signaling. To determine whether RNF114 acts as a modulator of T-cell function like its paralog TRAC-1, knockdown experiments were performed using the two shRNF114 constructs. First, Jurkat T cells were treated overnight with TNFα (15 ng/ml), known to induce T-cell apoptosis. Apoptotic events were evaluated by FACS analysis using Annexin V and 7AAD staining (Figure 6a), as well as by western blot using anti-caspase-7, caspase-9 or cleaved-PARP antibodies (Figures 6a and b). When RNF114 was knocked down, TNFα-induced apoptosis decreased (Figure 6a). A mild but consistent effect was also observed on PARP or caspase-7 and -9 cleavage when cells were treated for 6 h with both TNFα and CHX (Figure 6b), indicating that RNF114 contributes to the regulation of T-cell apoptosis. However, RNF114 is not involved in the regulation of the cell cycle in Jurkat T cells (Figure 6c). We then examined the effect of RNF114 knockdown on the regulation of T-cell activation.
For that purpose, Jurkat T cells were treated overnight or not with CD3/CD28 antibodies or with PMA/ionomycin and stained with anti-CD69 or anti-CD25 antibodies, respectively, for FACS analysis. As shown in Figure 7, RNF114 knockdown induced a significant and reproducible increase of CD69 (Figure 7a) and CD25 (Figure 7b) expression, indicating that RNF114 is a negative regulator of T-cell activation.
Discussion
The regulation of the transcription factor NF-κB by post-translational modifications with ubiquitin or ubiquitin-like proteins has been of increasing interest in recent years. NF-κB not only plays a crucial role in the regulation of immune and inflammatory responses, but also ensures basic functions during cell differentiation. In this study, we identified RNF114 as a new protein interacting with the NF-κB inhibitor A20. The domain involved in the interaction between the two proteins lies inside the E3 ligase domain of A20, suggesting that RNF114 takes part, like RNF11 or TAX1BP1, in the A20 ubiquitin-editing complex. Moreover, overexpression and silencing experiments demonstrate that RNF114 is involved in the regulation of NF-κB activity. However, the inhibitory function of RNF114 on TNFα- or TCR-induced NF-κB activation is not as strong as that mediated by A20 or IκBα. The mild effects of RNF114 on NF-κB activation can be explained, at least in part, by the stabilization of the A20 or IκBα inhibitors. Other studies seem to suggest that RNF114 overexpression could have an activating effect on NF-κB activity.33 This apparent discrepancy with our results might be due to experimental differences (different cell lines, stimuli, luciferase reporters) as well as different RNF114 expression levels. Indeed, we have evidence (data not shown) that the effect of RNF114 is dose dependent, indicating that regulation of its expression level or its post-translational modification may also be important. In fact, we observed that RNF114 is also ubiquitylated (Figure 3a), SUMOylated (data not shown) and stabilized/destabilized depending on its level of expression (Figure 4b). Therefore, we hypothesize that RNF114 activity and function might be, as for A20, tissue and stimuli dependent.
Overexpression of RNF114 increases the stability of A20 and IκBα, and this could be the mechanism by which RNF114 regulates the NF-κB pathway. A similar mechanism has recently been shown for the protein XAF1.28 However, in the case of A20, RNF114 overexpression increases A20 modification with ubiquitin without causing its degradation. Interestingly, the form of A20 bound to RNF114 in T cells after TCR stimulation undergoes a mobility shift to a form with higher molecular weight, suggesting that under these conditions RNF114 can induce A20 modification and affect its activity.
Finally, we demonstrate that in T cells, RNF114 does not appear to be involved in the regulation of the cell cycle, but rather in the regulation of T-cell apoptosis and activation, as its knockdown induces a decrease of TNFα-induced cell death and a significant increase of CD69 and CD25 expression. Taken together, the results presented here show that RNF114 is a novel A20-interacting protein that is able to fine-tune NF-κB activity in T cells stimulated with TNFα or anti-CD3/CD28 antibodies. RNF114 induces an increase of A20 ubiquitylation that in turn modifies the A20 protein half-life. In T cells, RNF114 appears to be a modulator of A20 function and of the NF-κB activity required to regulate T-cell activation. Therefore, RNF114 represents a new candidate target for pharmaceutical strategies to control the activation of NF-κB without suppressing its full capacity as a transcriptional activator. Studying the mechanisms that allow fine-tuning of this pathway represents an alternative way to modulate immune and inflammatory responses.

Sciences, Hercules, CA, USA) as previously described.34 To measure transcriptional activity, cells were co-transfected with an NF-κB-luciferase reporter plasmid (3-κB enhancer ConA-luciferase plasmid, or 3-EnhConA32) together with plasmids expressing FLAG-A20, AU5-RNF114 or IκBα. Luciferase activity was measured as previously described.35 Cells were stimulated with either 10 ng/ml of TNFα (R&D Systems, Minneapolis, MN, USA), a mixture of CD3 (OKT3) and CD28 antibodies (BD Biosciences, Franklin Lakes, NJ, USA), or PMA (20 ng/ml, SIGMA, St. Louis, MO, USA)/ionomycin (1 μM, Calbiochem, La Jolla, CA, USA) for the indicated times.
Depletion of endogenous RNF114 expression was achieved by RNA interference. The lentiviral shRNA expression plasmids were purchased from Open Biosystems (Thermo Scientific, Denver, CO, USA). Viral particles were produced as previously described by the Viral Vector Platform at Inbiomed Foundation.36 Jurkat T-cell transduction was carried out at a multiplicity of infection of 10 in order to achieve 100% infection. For the luciferase experiment, cells were transfected with an NF-κB-luciferase reporter plasmid (3-κB enhancer ConA-luciferase plasmid) and luciferase activity was measured later.32

Flow cytometry. For cell cycle analysis, Jurkat cells were washed with cold PBS and fixed with 70% ethanol overnight. Cells were then washed twice with PBS and resuspended in PBS containing 5 μg/ml propidium iodide (PI) and 10 μg/ml RNase A (Sigma-Aldrich, St. Louis, MO, USA). Cell cycle analysis was performed on GFP (530/30BP)-positive and live cells, excluding doublets.
To track T cells undergoing apoptosis, Jurkat cells were treated overnight with TNFα (15 ng/ml, R&D) alone or for 6 h in combination with CHX (10 μg/ml, SIGMA). Co-staining with Annexin-V-PE (BD Biosciences; 585/42BP) and 7-aminoactinomycin D (7AAD, SIGMA; 670LP) was performed to differentiate early and late apoptotic as well as necrotic cells. The percentage of each population was analyzed by flow cytometry gated on GFP (530/30BP)-positive cells, excluding doublets.
For activation assays, T cells were cultured as described above and stimulated for 18 h with anti-CD3 and anti-CD28 antibodies or PMA/ionomycin to check respectively CD69 and CD25 expression. CD69-PE (BD Biosciences; 585/42BP) and CD25-PE (ImmunoStep, Salamanca, Spain; 585/42BP) expressions were measured by FACS, gated on GFP-positive cells, excluding doublets and dead cells.
Data represent the mean of three independent experiments done in triplicate. A total of 10 4 events were counted for each sample. Data were collected on a FACSCanto (BD Biosciences) and were analyzed using FlowJo software (www.flowjo.com).
Co-immunoprecipitation experiments were performed using Protein-G crosslinked with 4 μg of antibody per point of FLAG, AU5, A20 or RNF114 antibodies to immunoprecipitate exogenous or endogenous proteins, as indicated. In all cases, cells were lysed for 15 min on ice in 50 mM sodium fluoride, 5 mM tetra-sodium pyrophosphate, 10 mM β-glycerophosphate, 1% Igepal CA-630, 2 mM EDTA, 20 mM Na₂HPO₄, 20 mM NaH₂PO₄ and 1.2 mg/ml Complete protease inhibitor cocktail (Roche, Indianapolis, IN, USA). His6-ubiquitylated or SUMOylated proteins were purified under denaturing conditions by Ni²⁺ chromatography as previously described.37

Cloning. Human ZNF313/RNF114 cDNA (RZPD) was PCR-subcloned into pGEX6 and AU5 plasmids. The A20 gene was amplified from cDNA of purified T cells using the primers 5′-ACAAACGAATTCATGGCTGAAGTCCTTC-3′ and 5′-GCCGAGGAATTCTTAGGGGCAGTTGGGCGTTTC-3′ and cloned into a pCEFL FLAG vector. Primer sequences for the deletion constructs are available upon request. The accuracy of all cloning and mutagenesis procedures was verified by sequencing.
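Both A20 primers quoted in the Cloning section contain the sequence GAATTC, the EcoRI recognition site, presumably introduced for insertion into the vector (an assumption, since the enzyme is not named in the text). A quick sanity check of the sequences as quoted, with extraction whitespace removed:

```python
# Primer sequences as quoted in the Cloning section (whitespace removed)
FWD = "ACAAACGAATTCATGGCTGAAGTCCTTC"
REV = "GCCGAGGAATTCTTAGGGGCAGTTGGGCGTTTC"
ECORI = "GAATTC"  # EcoRI recognition sequence

for name, primer in (("forward", FWD), ("reverse", REV)):
    pos = primer.find(ECORI)
    print(name, "EcoRI site at index", pos)  # both primers: index 6
```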
All experiments presented in this manuscript were done at least in triplicate. For luciferase experiments and FACS analysis, data represent the mean of at least three independent experiments done in triplicate.
Regulation of viral gene expression by duck enteritis virus UL54
Duck enteritis virus (DEV) UL54 is a homologue of human herpes simplex virus-1 (HSV-1) ICP27, which plays essential regulatory roles during infection. Our previous studies indicated that DEV UL54 is an immediate-early protein that can shuttle between the nucleus and the cytoplasm. In the present study, we found that UL54-deleted DEV (DEV-ΔUL54) exhibits growth kinetics, a plaque size and a viral DNA copy number that are significantly different from those of its parent wild-type virus (DEV-LoxP) and the revertant (DEV-ΔUL54 (Revertant)). Relative viral mRNA levels, reflecting gene expression, the transcription phase and the translation stage, are also significantly different between DEV-ΔUL54-infected cells and DEV-LoxP/DEV-ΔUL54 (Revertant)-infected cells. However, the localization pattern of UL30 mRNA is obviously changed in DEV-ΔUL54-infected cells. These findings suggest that DEV UL54 is important for virus growth and may regulate viral gene expression during transcription, mRNA export and translation.
In the present study, we first identified and characterized the DEV-ΔUL54 and DEV-ΔUL54 (Revertant) constructs34. Based on the results regarding the growth curve, plaque area and viral genomic DNA copy number, we found that DEV UL54 is important for virus growth. The viral mRNA levels, and particularly the total RNA, nuclear RNA and ribosome-nascent chain complex (RNC)-containing RNA levels, were then analysed by real-time PCR to determine the effects of DEV UL54 on viral gene expression, transcription and translation. Furthermore, the localization of UL30 mRNA in DEV-ΔUL54, DEV-ΔUL54 (Revertant) and DEV-LoxP was examined by fluorescence in situ hybridization (FISH). The results showed that DEV UL54 could inhibit or enhance viral gene expression, transcription and translation and promote the export of UL30 mRNA. Our results thus help to address a gap in the field of research on DEV UL54 function.
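Relative mRNA levels from real-time PCR are commonly computed with the 2^-ΔΔCt method, normalizing the target gene to a reference gene and expressing it relative to a calibrator sample. The exact quantification model is not stated in the text, so this is an assumption; a sketch with hypothetical Ct values:

```python
def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: target gene normalized to a reference gene and expressed
    relative to a calibrator sample (e.g. DEV-LoxP-infected cells)."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# hypothetical Ct values: a viral gene in DEV-dUL54-infected cells
# against the DEV-LoxP calibrator, both normalized to a reference gene
print(rel_expression(24.0, 18.0, 22.0, 18.0))  # 0.25 -> 4-fold lower than calibrator
```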
DEV UL54 is important for viral replication.
To investigate the functional roles of DEV UL54, we first constructed DEV CHv-BAC-ΔUL54 and DEV CHv-BAC-ΔUL54 (Revertant) by genetic manipulation of DEV CHv-BAC-G 34 (Fig. 1A, lower panels). With the help of the Cre-LoxP system, we obtained DEV-ΔUL54 and DEV-ΔUL54 (Revertant) after removing the EGFP-BAC tag (Fig. 1A, upper panels). DEV-ΔUL54 and DEV-ΔUL54 (Revertant) were identified by PCR (Fig. 1B), indirect immunofluorescence assay (IFA) (Fig. 1C) and Western blotting (Fig. 1D). The PCR results showed that the target fragments of DEV CHv-BAC-ΔUL54/DEV CHv-BAC-ΔUL54 (Revertant) and DEV-ΔUL54/DEV-ΔUL54 (Revertant) were approximately 10000 bp and 1700 bp, respectively, in size. In the IFA and Western blot analyses, no specific green fluorescence or specific band was observed for DEV-ΔUL54, which was not the case for DEV-ΔUL54 (Revertant). DEV UL13 was chosen as a control to show that the cells were successfully infected. Taken together, these results implied that the DEV UL54 deletion (DEV-ΔUL54) and revertant (DEV-ΔUL54 (Revertant)) were successfully constructed.
Analyses of the growth curve and a plaque assay revealed that DEV-ΔUL54 could efficiently grow in duck embryo fibroblasts (DEFs) ( Fig. 2A) while producing a smaller plaque size (Fig. 2B,C). Obvious recovery of DEV-ΔUL54 (Revertant) compared with DEV-ΔUL54 was found, suggesting that the defects resulted from the lack of the UL54 gene.

Real-time PCR analysis of total mRNA indicated that UL54 promoted the viral mRNA expression of UL30 and gC, and an increase was observed for UL48 and gD, except in the early stage of infection. UL19 could be both positively and negatively regulated by UL54, although the negative activity was dominant. Finally, UL54 could enhance or repress gK expression in early infection. Considering these results together, we concluded that the DEV UL54 gene could inhibit or augment viral mRNA expression.
The transcriptional mRNA levels of UL19, UL30, UL48, gC, gD, and gK were then analysed by RT-PCR after performing a nuclear run-off assay. As shown in Fig. 5, the mRNA transcription levels of all candidate genes were significantly lower in DEV-ΔUL54 than in DEV-LoxP/DEV-ΔUL54 (Revertant), except for UL30 at 12 h. This result indicated that DEV UL54 predominantly promoted viral gene transcription during infection.
As shown in Fig. 6, the mRNA translation levels of the target genes were significantly lower in DEV-ΔUL54 than in DEV-LoxP/DEV-ΔUL54 (Revertant) at 12 h, whereas these levels increased from 60-72 h. This finding implied that UL54 regulates the translation of viral genes by inhibition in the early stage and promotion in the middle and late stages.

DEV UL54 promotes the export of UL30 mRNA. We investigated the export of UL30 mRNA via FISH.
The results indicated that UL30 mRNA was located only in the nucleus in DEV-ΔUL54-infected cells, whereas it was located in both the nucleus and the cytoplasm in wild-type- and Revertant-infected cells (Fig. 7), meaning that UL54 could promote UL30 mRNA export.
Discussion
The multi-functionality of HSV-1 ICP27 during infection has been well characterized 5,21,35 , but few reports have assessed its homologue DEV UL54. DEV UL54 is one of the immediate-early genes, which typically encode proteins that are critical for regulation during infection. Therefore, we decided to study the regulatory role of DEV UL54 in viral gene expression.
Examination of the growth kinetics, plaque size and viral DNA copy number of three DEV-derived viruses generated by employing the Red recombination system showed a smaller plaque area, a lower viral titre and a lower viral DNA copy number for DEV-ΔUL54, which could be recovered. This finding indicated that the UL54 gene is important for DEV replication. Next, the relative expression levels of UL19, UL30, UL48, gC, gD and gK, which belong to different genotypes and have different functions, were analysed by real-time PCR. The results showed that DEV UL54 could regulate viral gene expression either positively or negatively. To learn more about the effects of DEV UL54 on viral gene expression, the relative levels of mRNA transcription and translation were analysed after performing a nuclear run-off assay and RNC extraction, respectively. The results demonstrated that DEV UL54 could inhibit or augment viral gene transcription and translation. Interestingly, inhibition of UL30 by UL54 in the early stage did not cause a decrease in the total mRNA expression level, suggesting that UL54 may facilitate the export of UL30 mRNA, which was confirmed via a direct FISH assay. The shuttling property of the DEV UL54 protein may be important in this process.
The results for DEV UL54 in our study are consistent with reported findings for HSV-1 ICP27 6, 9, 10, 36 , and high conservation may be responsible for this similarity. In the present study, we analysed gene expression only on the mRNA level; we did not study protein levels due to the lack of a polyclonal antibody. Although the analysis of translation may be a proxy to a certain extent, actual protein detection would be ideal. However, an "FRT" scar remained in the recombinant virus due to application of the Red recombination system, and the effects of an "FRT" scar on viral characteristics are unclear. A new system for the construction of recombinant viruses without a scar is thus being researched.
In summary, our results demonstrate that DEV UL54 could both positively and negatively regulate viral gene expression during transcription and translation and could promote the export of UL30 mRNA. Our research opens the door to studying the function of the UL54 gene, but it does not yet characterize the detailed mechanisms of expression regulation by DEV UL54. Additionally, the domains responsible for this regulation should also be examined.

Indirect immunofluorescence assay (IFA). To perform IFA, cells infected with DEV-ΔUL54 or DEV-ΔUL54 (Revertant) were sequentially treated with 4% paraformaldehyde, 0.5% Triton X-100, anti-UL54 polyclonal antibody and FITC-conjugated goat anti-rabbit IgG. Fluorescence microscopy was then applied to image the cells 37 .
Cells and viruses.
Western blotting was performed as per standard protocols 38 . First, the cells infected with DEV-ΔUL54 or DEV-ΔUL54 (Revertant) were lysed, and the proteins were separated. Second, the separated proteins were transferred to polyvinylidene fluoride (PVDF) membranes. After blocking with BSA and incubation with anti-UL54 polyclonal antibody and goat anti-rabbit HRP-labelled IgG, the membranes were developed using a DAB kit (TIANGEN, PA110).
Plaque formation assay. The plaque assay was also performed on DEFs, as per standard protocols 39 . After inoculation with DEV-LoxP, DEV-ΔUL54 or DEV-ΔUL54 (Revertant), the DEFs were incubated on semisolid culture medium (a 2 × MEM and methylcellulose mixture) at 37 °C for several days. The cells were then fixed with paraformaldehyde and stained with 0.5% crystal violet. Plaque areas were measured with Image-Pro Plus 6.0 (IPP 6.0), and 100 plaques were randomly chosen for each virus.
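The plaque-size comparison described above (mean plaque areas measured in IPP 6.0 and compared with an unpaired Student's t test) can be sketched as follows; the area values below are illustrative placeholders, not measurements from this study:

```python
import math
import statistics

def unpaired_t(sample_a, sample_b):
    """Unpaired (Student's) t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

# Hypothetical plaque areas (arbitrary units) for wild-type vs. deletion virus
wildtype = [1.10, 1.25, 0.98, 1.30, 1.15, 1.05]
deletion = [0.60, 0.72, 0.55, 0.68, 0.65, 0.58]

t = unpaired_t(wildtype, deletion)
print(f"mean WT = {statistics.mean(wildtype):.3f}, "
      f"mean deletion = {statistics.mean(deletion):.3f}, t = {t:.2f}")
```

A large positive t here would correspond to the reported result that DEV-ΔUL54 plaques are significantly smaller than wild-type plaques.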
Real-time fluorescence quantitative PCR (RT-PCR). DEFs were infected with DEV-LoxP, DEV-ΔUL54
or DEV-ΔUL54 (Revertant). The cells were then harvested at 12, 24, 36, 48, 60, or 72 h for viral DNA replication and mRNA expression analyses. All analyses were performed independently in triplicate, and statistical significance was evaluated with the use of the unpaired Student's t test.
To analyse DNA replication, a standard curve was first constructed. The viral DNA in the samples was then extracted according to the instructions of the Viral RNA/DNA Extraction Kit (Takara, 9766), and real-time PCR was performed with the purified nucleic acids as a template. All of the analyses were performed independently in triplicate, and statistical significance was evaluated with the t test.

A nuclear run-off assay 12,40 and RNC extraction [41][42][43] were performed on the samples before RNA isolation to investigate transcription and translation, respectively. For the nuclear run-off assay, nuclei were isolated according to the procedure of the nuclear extraction kit (Solarbio, SN0020). The infected cells were washed with PBS twice and centrifuged at 800 g for 5 min, and the cell pellet was harvested, after which the cells were resuspended in 1.0 mL of pre-cooled lysis buffer and homogenized for 10 sec after adding 50 μL of Reagent A. After centrifugation at 4 °C and 700 g for 5 min, the supernatant was discarded, and the sample was resuspended in 0.5 mL of pre-cooled lysis buffer. The suspension was then added to a centrifuge tube containing 0.5 mL of medium buffer and centrifuged at 4 °C and 700 g for 5 min. After the supernatant was discarded, 0.5 mL of lysis buffer was added, and the solution was centrifuged at 1000 g for 10 min. The supernatant was again discarded, leaving the purified nuclei at the bottom of the tube. Finally, the nuclei were resuspended in 300 µL of run-off buffer (25 mM Tris-Cl (pH 8.0), 12.5 mM MgCl 2 , 750 mM KCl, and 1.25 mM NTP mix) and incubated at 37 °C for 15 min to complete transcription.

The RNC was extracted using a common approach: infected cells were pretreated with 100 mg/mL cycloheximide (CHX) for 15 min and then washed with pre-chilled PBS. The cells were subsequently incubated in cell lysis buffer on ice for 30 min and centrifuged at 4 °C and 16200 r/min for 10 min to remove the cell debris.
The supernatant was transferred onto the surface of a sucrose buffer and centrifuged at 4 °C and 185000 r/min for 5 h to obtain the RNC. Total RNA, nuclear RNA, and RNC-containing RNA were isolated using RNAiso Plus (Takara, D9108A) and were reverse transcribed to cDNA (Takara, DRR047A), which served as a template for the subsequent real-time PCR. The primers used in this study are listed in Table 1. The relative transcription levels of the DEV UL19, UL30, UL48, gC, gD and gK genes were calculated using the 2^(−ΔCt) method.
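The 2^(−ΔCt) calculation used for the relative expression levels can be sketched as follows; the Ct values are illustrative, and the reference gene is assumed to be a housekeeping control rather than any specific gene from this study:

```python
def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Relative transcript level by the 2^-dCt method,
    where dCt = Ct(target) - Ct(reference)."""
    delta_ct = ct_target - ct_reference
    return 2 ** (-delta_ct)

# Hypothetical Ct values for a viral gene and a housekeeping control
level = relative_expression(ct_target=24.0, ct_reference=21.0)
print(f"relative level = {level:.4f}")  # 2^-3 = 0.1250
```

A lower Ct for the target than for the reference yields a relative level above 1; equal Ct values yield exactly 1.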
FISH assay. To conduct FISH, a probe was designed and synthesized by Sangon Biotech to visualize DEV UL30 mRNA. The sequence was 5′-TAGAGTCCCCAACAGATGCGAAAAGTAGTAGTCGGTG-3′, tagged with FITC at the 5′ terminus. DEF cells infected with DEV-derived virus were plated onto coverslips and sequentially treated with 4% paraformaldehyde, 0.2 mol/L HCl, and 100 μg/mL proteinase K. After prehybridization, hybridization and DAPI staining, the cells were imaged with a fluorescence microscope 44 .
|
v3-fos-license
|
2019-04-27T13:08:12.815Z
|
2017-11-14T00:00:00.000
|
55910064
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://biomedres.us/pdfs/BJSTR.MS.ID.000527.pdf",
"pdf_hash": "1687909b98849a2a85eae62776f4075021477804",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2672",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "60e4d0515c21ae6c2349c68fd5777dddbca9176a",
"year": 2017
}
|
pes2o/s2orc
|
Farm-To-Fork Food Surveillance System: A Call for Public Health Education
*Corresponding author: Mario Brondani, Associate Professor, Director, Dental Public Health Graduate Program, Department of Oral Health Sciences, Division of Preventive & Community Dentistry, and Prosthodontics and Dental Geriatrics, University of British Columbia, 2199 Wesbrook Mall, Vancouver, BC, V6T 1Z3, Canada. ISSN: 2574-1241 DOI: 10.26717/BJSTR.2017.01.000527 Mario Brondani. Biomed J Sci & Tech Res
Introduction
Food borne illnesses result from consuming food or beverages contaminated with infectious agents (bacteria, viruses, parasites, and prions) or non-infectious contaminants (poisonings from fungi and their toxins, heavy metals, chemicals, and so on) at any stage of the farm-to-fork chain [1,2]. Every year in the U.S.A., food borne illnesses cost more than US$150 billion in medical expenses and economic losses due to missed days of work; they cause 5000 fatalities and more than 76 million reports of illness-related symptoms. It is believed that one in every six Americans (or 48 million people) gets sick with foodborne diseases from 31 known pathogens annually [3]. Between 1990 and 2004, there were 639 outbreak reports linked to contaminated produce in the US, including those related to tomatoes with Salmonella served in restaurants, and lettuce with E. coli O157:H7 served at the Taco Bell© fast food chain. In Canada, ready-to-eat meat products contaminated with Listeria monocytogenes resulted in 57 confirmed illnesses and 22 deaths across the country in 2011 [4]. In order to respond faster, more efficiently and more effectively to national and international foodborne outbreaks, the Food-borne Illness Outbreak Response Protocol (FIORP) was updated by the Public Health Agency of Canada in 2010 [5]. After the FIORP update, there were at least four outbreaks in 2015 and 2016: two involving Salmonella infection, one involving Listeria from packaged salad products from the Dole processing plant in Springfield, Ohio, and another involving Vibrio parahaemolyticus linked to raw shellfish [6].
Contaminated foods commonly associated with food borne illness are [7]:
a. Animal in origin (beef, poultry, eggs, milk, soft cheese, seafood, and so on);
b. Raw fruits and vegetables;
c. Canned products (canned goods, juices, cider, and so on).
Since 2009, the Government of Canada has published an annual food recall report; for the first six months of 2011, for example, this report showed seven recalls of products contaminated with E. coli O157:H7, 18 recalls of products contaminated with Listeria, and 23 recalls of products contaminated with Salmonella, just to name a few [8]. Although American and Canadian data might be alarming, food borne illnesses are still underreported locally and worldwide, and go undiagnosed because people fail to come forward about all food poisonings and do not always seek a doctor when feeling ill. Even when people seek care, the medical system may fail to issue a specific diagnosis [9]. Once the source of contamination is identified following a report, a food recall occurs. In general, public companies affected by a recalled product can experience share price volatility and see their stock price drop by 30% within the first week of a recall.
In 2009, Kellogg's© lost nearly $70 million worth of peanut butter crackers and cookies recalled because of Salmonella contamination [10]. In 2017, Thomas concluded that, on average, an initial recall involving meat and poultry products is associated with short-term reductions in shareholder wealth of up to $236 million five days after the recall announcement [11]. Pathogens found in foods come from a variety of sources: feces (the intestinal tracts of animals and humans), soil and water, plants and plant products, food equipment and utensils, animal feeds, animal hides, food handlers, processing plant air and dust, and more. With such a variety of contact points at which food can become contaminated, people are both the main cause of and the victims in food borne illnesses. Once contaminated foods are ingested, people can be highly contagious before any symptoms appear and even after symptoms disappear; indeed, about half of healthy food handlers are carriers of disease agents. Proper handling and sanitation in food preparation (in restaurants and other eateries, and at home) are critical to preventing food borne illnesses, and yet many people do not know how to do this properly. As a result, educating the public about food safety (handling, storage and preparation) is the outcome focus of the surveillance system presented in this paper for use in a public health action to reduce morbidity and mortality and to improve health [12].
Surveillance: A call for Education in Canada.
Surveillance is 'the ongoing and systematic collection, analysis, interpretation, and dissemination of data about health-related events for use in a public health action to reduce morbidity and mortality and to improve health' [13], and it is necessary to detect any significant change in the frequency (outbreaks) and distribution of cases [14]. Although food borne illnesses are underreported, there are various worldwide surveillance systems in place aimed at interrupting the transmission of food-related pathogens [9,15,16]. Unfortunately, 'the need for a Canadian food and nutrition surveillance system has been recognized for some time' despite the existence of a conceptual model on surveillance proposed by Health Canada [17] and its FIORP [5]. As a result, there is inconsistency in how and what to report, which makes it difficult to compare the different surveillance systems despite the existence of guidelines [11] and worksheets [9].
A framework for Education on Food Handling
Based on the different foodborne illness surveillance systems available [12,18], the following framework is suggested (Figure 1). The suggested food borne illness surveillance system highlights four main components: data collection (who, when, what and where: the case definition), analysis (what food and contaminants are implicated, and the need for laboratory confirmation), dissemination (to health authorities and the public), and application (the means used to prevent further spread, which can include education, food recalls, inspections and regulations). The art in surveillance lies in collecting appropriate and timely information and in interpreting it correctly, which might lead to controlling the outbreak [9]. Upon analysis of all the information gathered, a suspected foodborne illness case can be flagged and the local health authority notified (whether by the health care provider, laboratory, or another source) (Figure 1). The public is then made aware of the potential outbreak, and a food recall occurs. Despite these efforts, however, food borne illnesses remain underreported because, for each case that is identified through clinical laboratory analysis, another 29 are estimated to go unreported [19]. Moreover, limited understanding of food handling and consumption, changes in consumption patterns due to food shortages, mass food recalls and regulatory changes in food safety can only make food borne illness surveillance fallible [10,20].
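The under-reporting multiplier cited above (roughly 29 unreported cases for each laboratory-identified case [19]) implies a simple scale-up from confirmed counts to an estimated true burden; a minimal sketch with a hypothetical confirmed-case count:

```python
UNREPORTED_PER_CONFIRMED = 29  # estimate cited in the text [19]

def estimated_true_cases(confirmed: int,
                         unreported_per_confirmed: int = UNREPORTED_PER_CONFIRMED) -> int:
    """Each confirmed case stands for itself plus the estimated unreported ones."""
    return confirmed * (1 + unreported_per_confirmed)

# Hypothetical: 1,000 laboratory-confirmed cases in a reporting period
print(estimated_true_cases(1000))  # 30000
```

The multiplier is itself an estimate, so figures produced this way indicate order of magnitude rather than precise counts.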
Educating the Public: Variables and Indicators
The success of the application aspect of a surveillance system has to go beyond food recalls, government fines, and lawsuits to include public education on how to prevent the contamination and spread of food borne illnesses. This education involves the preparation and dissemination of information on food safety to the public at large. Information can be provided to advise the public on: how to keep illness-causing pathogens out of food, how to destroy illness-causing pathogens or how to control their growth once they have contaminated the food, as follows:
Keeping Pathogens Out of Food
Information should focus on building sanitary barriers between food handlers/consumers and the food they manipulate/eat and on educating them about proper food handling. The 'Food-Safe School Action Guide' in the US [21] educates school children and staff on washing produce under running water; removing and discarding the outer leaves of lettuce and cabbage; washing hands before preparing food, when switching from one type of food handling to another, and after preparing food; and regularly cleaning and disinfecting the refrigerator, freezer, and counters. The guide reinforces hand-washing as the single most important method of preventing food contamination [22,23]. In fact, the United Nations has declared the 15th of October the Global Handwashing Day to improve hygiene practices worldwide. To avoid food contamination by employees, many public eateries post a reminder in their washrooms advising employees of a step-by-step process for properly washing and drying all areas of their hands before returning to the kitchen or counter area. In New Zealand, campaigns are more direct about the consequences of poor hand-washing, mentioning diarrhoea and vomiting.
These campaigns have emphasized that there is no need to use an antibacterial soap to do a good job and that hand sanitizers should not replace plain soap and water. However, although the WHO recently suggested alcohol-based handrubs as the best option for fulfilling the highest standards of safety in the prevention of cross-infection at the point of care [24], alcohol-based handrubs have yet to be proven efficacious and safe in the handling of food and food products. At the farm level, as well as in commercial food operations and distribution plants, best practice guidelines on food production and handling should include inspection of fields and packing plants; utilization of third-party audits to monitor workers' hygiene; regular testing of dairy, meat, and food products for microbial contamination; inspection of plants after an outbreak; coordination of food recalls carried out by industry; and so on. Health agencies conduct regular food handling inspections of food establishments, and food handlers must hold a valid Food Safe BC certificate to ensure that proper food handling procedures are observed and practiced in British Columbia, Canada [25]. Despite these efforts, however, there is no guarantee that proper food handling procedures are followed accordingly.
Destroying Pathogens once they have Already Contaminated the Food
The main focus here is on eliminating pathogens that have already contaminated food during production, storage or preparation. Such destruction can take place via thermal processing (mainly cooking, heating food to the recommended temperatures for different foods); non-thermal processing (irradiation, pulsed electric fields, oscillating magnetic fields, high-pressure processing, pulsed light technology, and freezing at the commercial level); antimicrobials and sanitizers (ozone, chlorine, iodine, and organic acids at the commercial level); or hurdle technology (combined intervention methods to prevent bacterial growth at the commercial level). The Canadian Food Inspection Agency has an online document alerting the population to the proper way to cook meat and poultry products, while other online resources offer food storage guidelines for the cupboard, refrigerator and freezer [26,27] in various languages such as Dari, Cambodian and Zulu [28]. The 'Food-Safe School Action Guide' [15] reinforces cooking times and temperatures and the importance of maintaining heat in hot foods; separating raw meat from cooked foods and vegetables, including cleaning cutting boards that have contacted raw meat; chilling food by refrigerating leftovers promptly and at the right temperature; keeping purchased (refrigerated) food chilled until arriving home; and reheating leftovers properly.
Controlling the Growth of Pathogens in already Contaminated Food
Bacterial growth is a major source of food borne illness, whether from raw, uncooked, or improperly cooked food or from inappropriately stored cooked food. Although various factors affect bacterial growth (type of food, acidity of food, time and temperature, oxygen and moisture), the existing guidelines reinforce the need to refrigerate foods at 40 °F (about 4.4 °C) or lower within two hours or less after cooking and not to leave standing water in sinks. The World Health Organization has developed the 'five keys to safer food' campaign [28], which is available in more than 50 languages and reinforces the above points as well as the need to remind consumers and eateries not to thaw frozen food at room temperature, but in the fridge. In addition, information from government websites [5] and from numerous web pages, including 'Livestrong' [29] and online blogs, might further help to reinforce food safety practices. Food blogs such as the 'food buglady' [30] offer updated lists of food safety recalls in Canada, while the 'Gainesville's Lunch out Blog' [31] discusses issues of contamination in U.S. fast food chains. Despite the efforts outlined above, a foodborne outbreak can still happen, and it remains up to affected individuals to seek medical attention and share information, which can be used to identify or warn others of potential contamination. For example, in 2007, participants in a muddy BC cross-country mountain bike race commented on a race-related web forum that they were feeling ill with similar symptoms. This internet activity prompted the race organizers to contact the local public health unit, which then received 13 laboratory reports of Campylobacter jejuni infection in racers who had ingested mud [32].
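The refrigeration threshold above is a direct unit conversion, which can be checked as follows:

```python
def fahrenheit_to_celsius(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32.0) * 5.0 / 9.0

# The guideline threshold: 40 F is about 4.4 C
print(f"{fahrenheit_to_celsius(40.0):.1f} C")
```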
Limitations and Barriers
Aside from the variation in food contaminants and case definitions, the underreporting of food borne illnesses and the lack of a firm Canadian surveillance system, other barriers exist in making the public aware of food borne illness. For example, there are unanswered questions about the influence of the social, cultural and physical environments in which the social aspects of food consumption and eating behaviour occur [33], including:
a. How do advertising and the mass media affect the nutritional knowledge and perceptions of Canadians?
b. What is the relationship between socio-cultural and economic status and diet?
c. What are the interactions between the individual and collective determinants of healthy eating that are unique to older adults?
d. How are the dietary habits of Aboriginal people influenced by concerns over pollutants in their local food sources?
e. What impact do self-esteem and body image have on food selection and eating behavior?

Another barrier relates to the perception of food borne illness risk. The lay perspective on health risk is personal and differs markedly from the expert view, which tends to take a more impartial point of view on food hazards [34]. The public's perceptions are important for health agencies to consider when tailoring information on how to minimize the risks and public health burden of food borne illness and on how to promote confidence in the food supply chain.
Another limitation concerns food recalls. According to Maple Leaf Foods, many small Canadian markets do not have a food recall plan that can readily identify the immediate previous supplier and the immediate subsequent recipient of food in case of an outbreak [35]. Larger companies such as Safeway Inc., Loblaw Companies Limited, Save-on-Foods© and Costco Wholesale Corporation use personalized membership or loyalty cards to offer discounts to their loyal customers. These companies can trace purchases through the loyalty cards, which can assist in identifying purchasers of a recalled product and promptly informing those consumers.
Conclusion and Future Direction
A national food borne illness surveillance system focusing on educating the public at large is needed to monitor patterns of disease that occur within the farm-to-fork chain. Such education should include proper hand sanitation, food storage and food preparation, since the globalization of the food supply also brings the globalization of food borne illnesses. In Canada, a food borne illness surveillance system [36] could be modeled on the framework presented in Figure 1, focusing on [28]:
a. Introducing a harmonized and standardized surveillance system across the country;
b. Strengthening local and provincial capacity for implementing such surveillance and for responding to food borne illnesses through networking; and
c. Enhancing the surveillance capacity along the entire farm-to-fork chain.
In addition, careful attention must be paid to the eating habits of a multicultural and multilingual society such as that of Canada and British Columbia, while encouraging the public to engage more proactively in coming forward and reporting their food borne illness symptoms to a health provider/authority. The authors would like to caution readers that the framework in Figure 1 has not been tested or evaluated.
|
v3-fos-license
|
2019-03-17T13:06:30.967Z
|
2017-04-05T00:00:00.000
|
37441618
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://juniperpublishers.com/jojiv/pdf/JOJIV.MS.ID.555572.pdf",
"pdf_hash": "4d9827794c81fe3832ed5127c7610a19ef026b8b",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2673",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "ddd7b4ac592d18ebf07e4e6e318ca125309bd364",
"year": 2017
}
|
pes2o/s2orc
|
Autoimmunity in Hepatitis C Virus Infection – An Immunologist’s Perspective
Hepatitis C Virus (HCV) infection is commonly thought of being associated with an increased incidence of autoimmunity. However, the immunological basis behind this heightened susceptibility to autoimmunity is not well understood. It is not clear whether HCV infection itself induces autoimmunity, how this differs compared to other viral hepatitis infections, or if the presence of autoimmunity is confounded by the previously widespread use of interferon-alpha based regimens for the treatment of HCV. Therefore, in the current era of interferonfree treatments based on direct acting anti-viral (DAA), do clinicians need to remain vigilant for autoimmune complications in their patients? This review summarises current evidence to assist clinicians with the management of subjects with HCV and when to suspect autoimmunity. As the majority of evidence in regards to HCV and autoimmunity is related to B cell mediated autoimmunity, with the association of auto antibodies (AAb), this will be the focus of this review. Overall, while published evidence generally distinguishes between HCV with and without interferon treatment, there are only a few studies that include relevant healthy age and gender matched control populations.
Introduction
Juniper Online Journal of Immuno Virology
CD81 forms part of the B cell costimulatory complex that includes CD19, CD21 and the IFN-inducible protein CD225, and it can reduce the threshold for B cell activation following antigen stimulation. Following HCV E2-CD81 binding, there are reports of up-regulation of the costimulatory markers CD80 and CD86 [6] on B cells, leading to their non-specific polyclonal activation [7] and resulting in hypergammaglobulinaemia. This is further supported by the high incidence (10-70%) of cryoglobulinaemia in subjects with chronic HCV infection [8].
The viral protein NS5A also increases the activity of the protein Fyn, which is a member of the Src family of kinases, leading to an increase in cell cycling and consequent B cell proliferation [9].
All of the above mechanisms are thought to contribute to B cell dysregulation, with the presumed potential for the induction of AAbs. Furthermore, there is also evidence to suggest that liver cirrhosis itself leads to B cell abnormalities in the CD27+IgM+ memory B cell compartment [10].
AAbs in HCV-general considerations
AAbs are defined as antibodies directed against self. They can be directly pathogenic in the associated autoimmune disease, or "marker" antibodies that are not directly pathogenic [11].
Low levels of AAbs are commonly present in healthy subjects, more often in females, and their prevalence increases with age [12][13][14][15][16]. The positive predictive value for clinical disease is thought to increase with higher titres, and AAbs may precede clinical disease by more than five years [17][18][19]. Overall, the presence of AAbs in serum is not synonymous with the presence of autoimmune disease, which is relevant in the context of HCV studies, as many looked at the presence of AAbs, but not clinical disease.
Furthermore, the sensitivity and specificity of the autoantibody result depends on the assay and antigenic target that was used to measure the autoantibody; newer assays are generally more sensitive and use more specific targets [20][21][22]. Therefore, older studies using historical assays may not be easily reproduced with the newer assays.
Hepatitis C and thyroid disease
It is well established that thyroid autoimmunity, whether subacute or clinically manifest, may emerge during the course of HCV infection, even in the absence of IFN-a treatment. Between 2% and 48% of patients with chronic HCV infection manifest antithyroid antibodies, with the variability in prevalence attributed to a number of factors including geography, study population ethnicity [23,24], iodine intake [25] and HCV genotype (GTs) [26][27][28].
One of the earliest and largest studies to demonstrate thyroid autoimmunity assessed 630 chronic HCV-infected subjects, presenting to clinic prior to IFN-a treatment, for abnormal thyroid function tests (TFT), including free thyroxine (T4), free tri-iodothyronine (T3) and thyroid stimulating hormone (TSH), as well as anti-thyroid peroxidase (TPO) and anti-thyroglobulin (Tg) antibodies [29]. This study also included three control groups selected retrospectively from a general population registry, comprising 389 subjects from an iodine-deficient area, 268 subjects from an area of iodine sufficiency, and 86 subjects over the age of 40 with chronic HBV infection. There was a significantly higher incidence of hypothyroidism in the subjects with HCV infection (13% versus 3-5% of the controls). 21% of HCV-infected subjects had anti-TPO antibodies versus 10-13% of the controls, and 17% had anti-thyroglobulin antibodies versus 9-10% of controls. These findings were confirmed by further studies [30], and three meta-analyses verified a significant association between HCV infection and thyroid autoimmunity [31,32]. Overall, female gender and anti-TPO antibody positivity were associated with a raised risk of hypothyroidism [25,33] in these studies. Nevertheless, an increased risk of thyroiditis associated with HCV infection was also demonstrated in a cohort study of predominantly male subjects (97%) attending US Veterans Affairs Health Care facilities from 1997-2004 [34].
Thyroid autoimmunity has been studied in the presence of mixed cryoglobulinaemia (MC). Although few in number, these studies suggest that MC-positive HCV-infected subjects have a higher incidence of autoimmune thyroid disease compared to MC-negative HCV-infected subjects; this is most pronounced in females with TPO AAbs, leading to the recommendation that female subjects with MC-positive HCV infection should be screened for thyroid autoimmunity [30].
Importantly, several studies suggest that chronic HCV infection and thyroid autoimmunity may be associated with a higher rate of papillary thyroid cancer. Antonelli et al. [35] described a higher prevalence of papillary thyroid cancer in 139 HCV-infected subjects compared to controls [35], while another case-control study for various cancers noted an association between HCV and thyroid cancer (OR 2.8, p=0.01) [35,36]. The increased risk for thyroid cancer also holds for HCV-infected subjects with MC [37]. Significant lymphocytic infiltrates in the thyroid tissue were seen in the cases of HCV-associated thyroid cancer, supporting the notion that autoimmune thyroiditis may be a predisposing factor [38,39]. Case studies have also reported that more aggressive thyroid cancers can be seen in HCV-infected subjects [40], leading to clinical recommendations to monitor HCV-infected subjects with thyroid disease for the development of thyroid cancer.
Hepatitis C and autoimmune arthritis
Arthralgia is a common symptom in subjects with HCV infection [41], and asymptomatic inflammatory joint changes have been described in the majority of subjects in an HCV-infected cohort screened by joint ultrasonography [42]. Clinically manifest arthritis is seen in approximately 4% of HCV-infected subjects [43]. HCV-associated arthritis usually manifests as a symmetric polyarthritis (SP) or an intermittent oligoarthritis (IMO) of medium to large joints; the latter often in patients with MC.
Compared to thyroid autoimmunity, autoantibody profiling in HCV-associated arthropathy is less well studied. Anti-cyclic citrullinated peptide (CCP) antibodies are highly specific for rheumatoid arthritis (RA) and are found in 60% to 75% of patients with RA [44,45], but can also be seen in a small proportion of patients with HCV-related arthritis (4.5% to 7%) [46][47][48]. The majority of HCV-related arthritis is, however, thought to be CCP negative. This is supported by a recent study [49] investigating the prevalence of anti-mutated citrullinated vimentin (MCV) antibodies, CCP antibodies, rheumatoid factor (RF) and mixed cryoglobulinaemia (MC) in 45 subjects with HCV (GT4) infection-associated arthritis compared to 30 RA subjects. The most frequent clinical presentation was a symmetric polyarthropathy (SP), which was associated with CCP antibodies in RA subjects but not in HCV-infected subjects. Other antibodies studied, such as MCV (30% in HCV-infected subjects versus 93.3% in RA subjects), did not aid further clinical distinction between RA and HCV-associated arthritis. In this particular study the level of RF positivity was high in both cohorts (73.3% in HCV arthropathy versus 86.7% in RA subjects). A potential role for CCP antibodies in distinguishing HCV-associated SP from RA was identified in further studies (0% to 11.7% CCP positivity in HCV-associated disease, depending on the demographics of the population studied) [48,[50][51][52]. Finally, a large nationwide study from Taiwan revealed that in the Taiwanese population chronic HCV infection alone was associated with an increased risk of SP (hazard ratio 2.03). Due to the nature of the study, no distinction between HCV-associated SP and RA was made and no data on anti-CCP antibody profiles were provided [53].
Hepatitis C and Sjögren's syndrome
Sjögren's syndrome (SS) is a chronic autoimmune exocrinopathy characterised by reduced salivary and lacrimal gland function causing dry eyes and mouth ("sicca symptoms"), and by extra-glandular manifestations affecting multiple organ systems [54]. It may present by itself, as primary SS (pSS), or in association with another autoimmune disease such as SLE or RA, termed secondary SS. Hepatitis C virus is sialotrophic, with HCV RNA found in both saliva and salivary epithelial cells; it has therefore been questioned whether HCV is directly responsible for the sicca symptoms seen as a common extra-hepatic manifestation of HCV infection [55][56][57][58], or whether the sicca symptoms and sialadenitis associated with HCV identify a subset of patients with pSS (potentially with HCV as a trigger for immune dysregulation). Epidemiological data on the association of HCV with SS depend on the criteria used, which have changed over time, as well as on the background prevalence of HCV and the geographical location. The prevalence of HCV in patients with SS varies from 0-19%. Patients are now considered to have a secondary form of SS if there is concurrent HCV infection [59].
Sjögren's syndrome is associated with the presence of AAbs to SS-A/Ro and SS-B/La antigens and with inflammatory cell infiltrates within the salivary glands that appear to vary with disease course or severity [60]. Positivity for SS-A/Ro and SS-B/La antibodies in patients with HCV infection and sicca symptoms has been reported in up to 25% of HCV patients [61].
However, while some subjects with HCV have very similar features compared with non-HCV pSS [61], sialadenitis in the context of HCV generally shows sufficient aetiological, histopathological and genetic differences to allow differentiation between the two states [62,63].
From a practical perspective, a predominantly B cell infiltrate in the salivary glands is strongly associated with hypergammaglobulinaemia, autoantibody production and clinical manifestations of salivary gland swelling. Both hypocomplementaemia and glandular swelling are risk factors in SS for lymphoproliferative disease [64][65][66]. Patients who otherwise satisfy American-European Consensus Group (AECG) criteria for pSS, but who are HCV positive, also have a high frequency of parotid enlargement and vasculitis, are RF positive with MC, have higher extra-nodal involvement in organs where HCV replicates, and show a predominance of mucosa-associated lymphoid tissue lymphomas [67]. The presence of sicca symptoms in patients with HCV therefore remains a red flag to monitor for lymphoproliferative disease.
Hepatitis C and cryoglobulinaemia
Hepatitis C is frequently associated with cryoglobulinaemia. Cryoglobulins are cold-insoluble immune complexes that precipitate at temperatures below 37 °C and resolubilise on warming [8,68]. They are classified according to the type of immunoglobulin in the cryoprecipitate [68,69]: Type I cryoglobulins are typically monoclonal and associated with B cell lymphoproliferative disease; MC are Type II or III. Type II cryoglobulins are a mixture of polyclonal IgG and a commonly monoclonal IgM with RF reactivity; Type III are polyclonal immunoglobulins (IgG, IgM, IgA) or polyclonal IgG with RF reactivity. Types II and III are those associated with chronic viral infections (50-60%), including HCV. The reported frequency of cryoglobulins in chronic HCV-infected subjects ranges from 12 to 56%, with the highest prevalence reported in Mediterranean climates [69]. Although not a classical AAb-mediated immune disease, cryoglobulinaemia can present with a wide range of clinical features mimicking autoimmunity.
Symptoms associated with MC are variable, ranging from Raynaud's phenomenon, small to medium-sized vessel vasculitis, purpura, arthralgia and asthenia to severe neurological and renal involvement.
Other extra-hepatic manifestations of Hepatitis C associated with the presence of AAbs
Aside from the more common extra-hepatic manifestations, autoimmune cytopenias have been described in association with AAbs in HCV-infected subjects [70]. In chronic HCV-infected subjects, autoimmune haemolytic anaemia is associated with ANA and MC positivity, while subjects with HCV infection and autoimmune thrombocytopenia have been observed to have anti-platelet antibodies.
AAbs to oxidised LDLs have also been described and are thought to be a serological marker of the severity of atherosclerosis; in addition, an association of these AAbs with the severity of hepatic steatosis in chronic HCV-infected subjects has been described.
Systematic Autoantibody profiling in HCV-infected subjects
The prevalence of non-organ-specific AAbs (NOSAs; specifically anti-smooth muscle antibodies (ASMA), anti-nuclear antibodies (ANA) and anti-liver kidney microsomal type 1 antibodies (LKMA-1); Table 1) has been examined in several studies with varying results. Some authors have examined the prevalence of NOSAs versus age- and sex-matched HCV-negative subjects and found no significant difference between cases and controls (18% versus 10%) [71].
Others, however, have shown a higher prevalence in HCV-infected subjects with chronic liver disease (25%) than in age- and sex-matched controls, including normal healthy (6%) and HBV (HBsAg-positive) controls (7%). NOSAs seen in this study were generally of low titre and not directed against well-defined antigens (e.g. F-actin in ASMA) [72]. This is similar to findings from Granito et al. [73], in which none of the HCV-infected subjects tested by IIF had detectable ASMA VGT (vascular, glomerular and tubular) staining, a pattern usually associated with F-actin antibodies, as opposed to ASMA V or ASMA VG staining, which are less specific for autoimmune hepatitis (AIH). Of note, many studies in HCV-infected subjects do not distinguish between these patterns despite their differing clinical predictive value for AIH.
NOSA-positive individuals, however, may have higher levels of alkaline phosphatase, lower platelet counts and prothrombin activity, and an increased prevalence of significant fibrosis, suggesting a possible association between NOSA positivity and the biochemical and histological profile of HCV [74]. In a US population, a retrospective review of AAbs (as above, with the addition of RF, AMA and MC) in HCV-exposed subjects with elevated serum aminotransferases (n=117) showed a high prevalence of positivity, particularly for ASMA (66%) and RF (76%), in both men and women [75]. However, the study did not appear to include age- or sex-matched controls.
In a recent Egyptian paediatric population with chronic HCV GT4 infection (n=80), tested for the same set of NOSAs, 40% exhibited low to moderate titre ASMA without associated ANA or LKMA-1, but without the typical clinical features usually associated with the presence of ASMA. Nevertheless, HCV-infected subjects with ASMA had higher levels of total bilirubin, albumin, immunoglobulins, alkaline phosphatase and gamma-glutamyl transferase [76]. In addition, adult data suggest there may also be an association between ASMA and the degree of liver fibrosis [77].
ANA positivity (titre ≥80) appears to be an immunological epiphenomenon that does not influence the clinical, biochemical or histological features of chronic hepatitis, or predict response to anti-viral treatment [78]. A large UK study of 963 chronic HCV-positive subjects reported a significant relationship between ASMA positivity and interface hepatitis in males, and that ANA was associated with increased age, consistent with the frequencies reported in blood donors. These reports suggest that ASMA may be associated with histological features of liver damage, while the association between ANA and clinical correlates remains unclear [77][78][79][80].
Conclusion
This review reflects on the current evidence for AAbs in HCV from a clinical immunology perspective. Although there is clear evidence for certain types of autoimmune disease in HCV-infected subjects, less is known about additional predisposing risk factors. Based on the current evidence, routine screening for thyroid autoimmunity and MC is recommended; however, the role of screening for other types of autoimmunity is less clear, and it should be performed only if clinical suspicion of an autoimmune disease arises. Further studies in the field are urgently needed, as the timing of initiation of treatment with DAAs may be critical to prevent future life-long autoimmune disease in some subjects.
Achieving Electrochemical-Sustainable-Based Solutions for Monitoring and Treating Hydroxychloroquine in Real Water Matrix
Hydroxychloroquine (HCQ) has been extensively consumed due to the Coronavirus (COVID-19) pandemic. Therefore, it is increasingly found in different water matrices. For this reason, the concentration of HCQ in water should be monitored, and the treatment of water matrices contaminated with HCQ is a key issue to overcome immediately. Thus, the main objective of this study is the development of technologies and smart water solutions to reach Sustainable Development Goal 6 (SDG6). To do that, electrochemical technologies were integrated for environmental application to HCQ detection, quantification and degradation. Firstly, an electrochemical cork-graphite sensor was prepared to identify/quantify HCQ in river water matrices by the differential pulse voltammetry (DPV) method. Subsequently, an HCQ-polluted river water sample was electrochemically treated with a BDD electrode by applying 15, 30 and 45 mA cm−2. The HCQ decay and organic matter removal were monitored by DPV with the composite sensor and by chemical oxygen demand (COD) measurements, respectively. Results clearly confirmed that, on the one hand, the cork-graphite sensor exhibited a good current response for quantifying HCQ in the river water matrix, with limits of detection and quantification of 1.46 mg L−1 (≈3.36 µM) and 4.42 mg L−1 (≈10.19 µM), respectively. On the other hand, electrochemical oxidation (EO) efficiently removed HCQ from the real river water sample using BDD electrodes. Complete HCQ removal was achieved at all applied current densities, whereas in terms of COD, significant removals (68%, 71% and 84% at 15, 30 and 45 mA cm−2, respectively) were achieved. Based on these results, the offline integration of electrochemical SDG6 technologies to monitor and remove HCQ is an efficient and effective strategy.
Introduction
On 11 March 2020, the World Health Organization (WHO) declared COVID-19 a pandemic. This infectious disease is caused by a new strain of CoV, a mutation (ID-19) of its two previous forms, and is called SARS-CoV-2 or CoV-19 [1]. During the COVID-19 pandemic, national and international medical organizations around the world treated or alleviated symptoms in certain hospitalized patients by using drugs such as chloroquine, HCQ, azithromycin, ivermectin, dexamethasone, remdesivir, favipiravir and some HIV antivirals [2]. However, the possible use of some of them to treat COVID-19 is only an unproven hypothesis, as in the case of chloroquine and HCQ [3].
In some countries, the use of some of these drugs on a large scale during the pandemic was reported, for example in Italy [4] and Brazil [5]. Consequently, the high risk of water contamination due to their large production and utilization is a key issue to overcome urgently. These drugs are sometimes not completely metabolized by the body, and their active forms or metabolites can be eliminated through feces and urine [6], after which they reach the sewage system when discharged. At that point, limitations are found in their elimination by water treatment plants, which mostly rely on conventional treatment systems; this is already a challenge in industrialized countries, and developing nations are under different pressures, so these pollutants are not efficiently treated or their removal is limited, provoking environmental and health risks [7][8][9][10][11].
In Brazil, HCQ was included in COVID-19 kits together with ivermectin and azithromycin as a pre-treatment option [12]. Recent studies have demonstrated that HCQ is present in Brazilian water ecosystems, confirming its high persistence and bioaccumulation in vegetation and groundwater [13]. This has motivated investigations to develop sensors for quantifying and monitoring HCQ to determine its potential as a contaminant, as well as the search for an effective approach to remove this micropollutant from wastewaters before their discharge into water ecosystems [14].
Based on the existing literature, the treatment of different water matrices (synthetic or real) containing HCQ has been carried out by photolysis [15], adsorption [16,17], photocatalysis [18] and electrochemical technologies (electrooxidation (EO), photo-assisted EO and sono-assisted EO) using boron-doped diamond (BDD) [19]. However, no real matrices polluted with HCQ were treated in the case of the electrochemical treatments.
Several research groups evaluate the treatment effectiveness of given technologies by spiking, in the laboratory, different water matrices (e.g., river, sea, groundwater, tap water, drinking water and so on) with a well-known amount of a single target pollutant, in order to understand the experimental data and translate it to real applications [20]. This work therefore aims to electrochemically treat a real water matrix polluted with HCQ. To do that, (i) real water samples were collected and preserved, (ii) HCQ in these water samples was identified and quantified spectrophotometrically and electroanalytically and (iii) the EO approach in a batch reactor with a BDD anode was tested to decontaminate a real water matrix.
In a previous work, the use of the cork-graphite composite as an electrochemical sensor for quantifying HCQ in real water matrices was demonstrated [14], confirming HCQ pollution in lagoon water. However, the possibility of integrating technologies as an appropriate water depollution solution to eliminate HCQ from a real water matrix had not yet been proven. The development of technologies and smart water solutions to reach Sustainable Development Goal 6 (SDG6) represents a substantial opportunity to guarantee sustainability and increase competence in water management (to treat and distribute water for human use) [21]. The possibility of integrating the SDG6-based electrochemical technologies developed so far [22] for identifying, quantifying, eliminating and monitoring HCQ in/from real water samples represents a clear benefit for our society, offering a coherent vision for the future [23].
Materials and Methods
The highest quality commercially available chemicals were used. HCQ sulfate (purity 99%) and graphite powder were purchased from Sigma-Aldrich (São Paulo, Brazil). H2SO4 was purchased from Merck (São Paulo, Brazil). Specific solutions were prepared using ultra-purified water obtained from a Milli-Q system (Millipore, Natal, Brazil). The raw cork used in the experimental studies was provided by Corticeira Amorim S.G.P.S., S.A. (Porto, Portugal). The raw cork granules were washed twice with distilled water for 2 h at 60 °C to remove impurities and other water-extractable components that could interfere with the electrochemical analysis. Before use, the raw cork was dried at 60 °C in an oven for 24 h [24].
Preparation of Cork-Modified Electrodes
The raw cork granules were reduced in size; a fraction below 150 µm (designated as raw cork powder) was selected for constructing the sensor in this work, according to our previous work [24]. The cork-graphite composite sensor (working electrode) was prepared by mechanical homogenization of raw cork powder and graphite in proportions of 70:30% w/w, using 0.3 mL of paraffin oil as a binder and mixing everything in an agate mortar for about 30 min, as previously reported [14].
Electrochemical Measurements
The electrochemical tests were performed using an Autolab PGSTAT302N (Metrohm, Zurich, Switzerland) controlled with GPES software (4.0) and a three-electrode cell, using Ag/AgCl (3.0 M KCl), a Pt wire and the cork-graphite sensor (geometric area of approximately 0.45 mm², with a real area of 116 mm² estimated by the procedure in [25]) as the reference, auxiliary and working electrodes, respectively. Differential pulse voltammetry (DPV) parameters were as follows - modulation time: (≥0.002 s) 0.05 s; interval time: (≥0.10 s) 0.5 s; initial potential: 1.0 V; final potential: 1.7 V; step potential: 0.00495 V; modulation amplitude: 0.01995 V; potential scan rate: 100 mV s−1; agitation time: 30 s. The optimized parameters were used for all measurements [14]. All analyses were performed in triplicate. All electrochemical analyses were conducted without deaeration, at 25 ± 2 °C. For the identification/determination of HCQ in river water matrices, the electrochemical sensor's current response was first verified by constructing an analytical curve in river water samples. Secondly, the HCQ-polluted river water sample was electrochemically treated and the HCQ concentration was determined, at selected electrolysis times, using the standard addition method, in which the samples were spiked with a known quantity of a standard HCQ solution, as recommended for diminishing the matrix effect on the current-response sensitivity [14]. Potentiodynamic measurements (polarization curves and cyclic voltammetry) were also carried out at 25 °C in the conventional cell described above with BDD as the working electrode. The exposed apparent area of the working electrodes was 1.5 cm².
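The standard addition quantification described above can be sketched numerically. In this method, the DPV peak current is regressed against the spiked concentration, and the unknown concentration is recovered from the x-axis intercept (intercept/slope). The peak currents and spike levels below are hypothetical illustrative numbers, not data from the study:

```python
import numpy as np

def standard_addition(peak_currents, added_concs):
    """Estimate the unknown analyte concentration by the standard
    addition method: regress the peak current against the spiked
    concentration and return intercept/slope, i.e. the magnitude of
    the extrapolated x-axis intercept."""
    slope, intercept = np.polyfit(added_concs, peak_currents, 1)
    return intercept / slope

# Hypothetical peak currents for spikes of 0, 5, 10 and 15 mg/L HCQ:
i_peak = np.array([2.70, 3.20, 3.70, 4.20])   # µA, made-up values
c_add = np.array([0.0, 5.0, 10.0, 15.0])      # mg/L of HCQ added
print(standard_addition(i_peak, c_add))       # ≈ 27 mg/L
```

Because each spiked measurement is made in the same matrix as the unknown, this extrapolation compensates for matrix effects on the sensor sensitivity, which is why the method is recommended for real water samples.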
Real Samples of River
River water samples were collected from the river located in Natal (5°56′56.1″ S, 35°10′18.4″ W), Rio Grande do Norte (Brazil). After that, they were acidified to avoid decomposition and stored at 4 °C until use. The main physical-chemical characteristics of the real samples used in this work are shown in Table 1.
Electrochemical Treatment
All electrolysis experiments were conducted in an undivided reactor of 250 mL capacity equipped with magnetic stirring to ensure homogenization and mass transport towards the electrodes. Experiments were conducted under galvanostatic conditions with a MINIPA MPL-3305M triple-output DC power supply to apply current density (j) values of 15, 30 and 45 mA cm−2. Two electrodes were placed at the center of the reactor with an inter-electrode distance of ≈2 cm (BDD and Ti were used as anode and cathode, respectively, each with a geometric area of 13.5 cm²). The characteristics of the BDD are as follows: 225 sp3/sp2 ratio; 500 mg L−1 boron content; and a diamond layer of 2.68 µm thickness. All BDD electrolyses were performed for 120 min and conducted in triplicate. The EO efficiency for degrading HCQ was studied by using 250 mL of polluted river water plus 10 mL of 0.1 M H2SO4 as supporting electrolyte, which is the volume used to acidify the sample and maintain its physical-chemical conditions. Samples were withdrawn at predetermined time intervals to quantify HCQ with the cork-graphite sensor during the electrochemical treatment. Chemical oxygen demand (COD), NO3−, ammonium and free chlorine were also determined using HANNA commercial kits.
Additionally, during EO, it was possible to estimate the total current efficiency (TCE) (in %) for the treated solutions at a given electrolysis time from the COD values, using Equation (1) [26]:

TCE (%) = [∆(COD)exp × F × Vs / (8 × I × ∆t)] × 100   (1)

where ∆(COD)exp is the experimental difference between the initial COD and the COD at time t (in g O2 L−1), F is the Faraday constant (96,487 C mol−1), Vs is the solution volume (L), I is the applied current (A), 8 is the equivalent mass of oxygen (g eq−1) and ∆t is the electrolysis time interval (s). Afterwards, the energy consumption (EC) was estimated, in kWh kgCOD−1, by Equation (2) [26,27] using the average cell voltage registered during the EO (the cell voltage was reasonably constant with only minor fluctuations, so the average was used):

EC (kWh kgCOD−1) = Ecell × I × t / (∆(COD)exp × Vs)   (2)

where Ecell is the average potential difference of the cell (V) and t is the BDD-electrolysis duration in h.
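The TCE and EC figures of merit can be sketched as below. The run uses hypothetical values (COD in g O2 L−1, a 0.26 L treated volume, and 0.405 A, i.e. 30 mA cm−2 on a 13.5 cm² anode) chosen only for illustration, not results from the study:

```python
F = 96487.0  # Faraday constant, C mol^-1

def tce_percent(cod0, cod_t, volume_L, current_A, dt_s):
    """Total current efficiency (%): the fraction of the charge passed
    that was actually used to abate COD. COD values in g O2 per L;
    8 g/eq is the equivalent mass of oxygen."""
    return 100.0 * (cod0 - cod_t) * F * volume_L / (8.0 * current_A * dt_s)

def ec_kwh_per_kgCOD(e_cell_V, current_A, t_h, cod0, cod_t, volume_L):
    """Energy consumption in kWh per kg of COD removed, from the
    average cell voltage, current and electrolysis duration."""
    energy_kWh = e_cell_V * current_A * t_h / 1000.0
    cod_removed_kg = (cod0 - cod_t) * volume_L / 1000.0
    return energy_kWh / cod_removed_kg

# Hypothetical run: COD drops 0.30 -> 0.05 g/L in 0.26 L over 2 h at 0.405 A
tce = tce_percent(0.30, 0.05, 0.26, 0.405, 7200)           # ≈ 26.9 %
ec = ec_kwh_per_kgCOD(9.0, 0.405, 2.0, 0.30, 0.05, 0.26)   # ≈ 112 kWh/kgCOD
```

Note that TCE falls as the organics are depleted (mass-transport limitation), which is the usual trade-off behind choosing an applied current density.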
Identification of the Presence of HCQ in River Water Samples
Collected samples were spectrophotometrically analyzed to identify the presence of HCQ in river water according to the procedure reported in [14]. From the results obtained (see Table 1), it was possible to confirm the presence of HCQ, at approximately 26.7 mg L−1, in one river water sample, denominated "polluted". Meanwhile, no significant HCQ concentration was determined in another river water sample, which was considered "non-polluted". Thus, both samples were used to electrochemically investigate: (i) the matrix effect during the analytical curve construction with the composite sensor, using the non-polluted river water sample; (ii) the validation of the HCQ concentration in polluted river water by DPV using the cork-graphite sensor; and (iii) the possibility of integrating technologies as an appropriate water depollution solution to eliminate HCQ from real water matrices.
DPV Analytical Curve and Standard Additions Method in River Water Sample
From the results obtained in our previous work, the composite sensor exhibited an excellent response for detecting HCQ in H2SO4/ultrapure water solution. However, the use of ultrapure water with a single target pollutant could mask the applicability of the experimental data to actual uses [20] with real water matrices, which contain several inorganic and organic compounds that can affect the electrochemical sensor response for quantifying HCQ. This composite sensor was therefore used to construct an analytical curve in a non-polluted river water matrix (see Table 1). As can be observed in Figure 1, the sensor presented excellent performance, in terms of the electrochemical response (current vs. [HCQ]) as a function of HCQ concentration, when the non-polluted river water sample was used. This behavior demonstrates that no significant matrix effect occurred at the sensor surface during the analytical curve construction; in other words, the real water composition did not affect the detection of HCQ. The limits of detection and quantification were estimated as LOD = 3.3 Sy/x/b and LOQ = 10 Sy/x/b, where Sy/x is the residual standard deviation and b is the slope of the calibration plot [28,29]. Non-linearity was also evaluated from the residuals of the regression curve, as can be observed in Figure 1b; the absence of significant non-linearity was confirmed, guaranteeing reliability in real water matrices, as recommended by IUPAC [28,30] and the literature [31]. All analyses were performed in triplicate, so it was possible to obtain confidence intervals and standard deviations within 95% (red dotted lines in the analytical curve). This information was used to identify false positives and false negatives (α = β = 0.05), as already reported by experts in the field [31].

Later, to evaluate the reliability of directly using the composite sensor in real water matrices, the sensor was used to electroanalytically quantify the HCQ concentration by the standard addition method in the polluted river water sample. As can be observed in Figure 2, the HCQ signal was confirmed by the intensification of the peaks associated with the addition of different volumes of standard HCQ solution to the polluted sample [32]. The HCQ concentration determined electroanalytically by the DPV approach was about 26.87 ± 0.34 mg L−1 (61.92 ± 0.04 µM), which is similar to the spectrophotometric measurement reported in Table 1. It is important to highlight that the results were obtained with acceptable standard deviations and confidence intervals, within 95% [31].

In order to develop suitable clean water solutions, the integration of electrochemical-SDG6 technologies was achieved by the electrochemical treatment of the HCQ-polluted river water matrix with BDD-electrolysis and by the real-time determination of the residual HCQ concentration with the cork-graphite sensor.
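The LOD/LOQ estimation from the residual standard deviation (Sy/x) and the calibration slope (b) can be sketched as follows; the calibration points are made-up, and the 3.3/10 multipliers follow the usual IUPAC convention rather than values taken from this study:

```python
import numpy as np

def lod_loq(concs, signals):
    """LOD and LOQ from a linear calibration curve:
    LOD = 3.3 * S_y/x / b and LOQ = 10 * S_y/x / b, where S_y/x is the
    residual standard deviation about the regression (n - 2 degrees of
    freedom) and b is the slope."""
    b, a = np.polyfit(concs, signals, 1)
    resid = np.asarray(signals) - (b * np.asarray(concs) + a)
    s_yx = np.sqrt(np.sum(resid**2) / (len(concs) - 2))
    return 3.3 * s_yx / b, 10.0 * s_yx / b

# Hypothetical calibration of a DPV sensor in river water:
c = np.array([5.0, 10.0, 20.0, 30.0, 40.0])   # mg/L HCQ
i = np.array([0.52, 1.05, 1.98, 3.02, 3.95])  # µA, made-up currents
lod, loq = lod_loq(c, i)
```

With this convention the LOQ/LOD ratio is fixed at 10/3.3 ≈ 3.0, which is consistent with the ratio of the limits reported in the abstract (4.42/1.46).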
Degradation of HCQ by BDD-Electrolysis in Real Water Matrices
The elimination of HCQ (Figure 3a) and organic matter (Figure 3b), in terms of COD, from the polluted river water sample (≈26.87 mg L−1) by the EO approach with a BDD electrode is shown in Figure 3. The electrolysis experiments for removing HCQ using the BDD|Ti reactor were performed by applying 15, 30 and 45 mA cm−2 in order to comprehend the effect of the applied current density j. As shown in Figure 3a, normalized changes of HCQ concentration were registered and a complete HCQ decay was attained for all j values. A faster HCQ elimination was achieved when j was increased. This behavior can be attributed to the increased production of oxidants, typically •OH radicals (5), SO4−• (6) and S2O82− (7,8) [33,34], promoting a quick oxidation of HCQ in the polluted river water matrix. The BDD electrode is considered a non-active anode material that can efficiently produce free heterogeneous •OH radicals at its surface [33], as well as other oxidants [11], which contribute to quickly mineralizing/degrading organics in synthetic and real water matrices [35,36].
BDD + H2O → BDD(•OH) + H+ + e−  (5)
In fact, polarization curves (Figure 4) and cyclic voltammetry (inset in Figure 4) confirmed this information. Figure 4 shows the linear polarization curves of (a) non-polluted and (b) polluted river water matrices with the BDD anode at a scan rate of 50 mV s−1. The two curves are very different and show that the oxygen evolution potential starts at +1.85 V versus Ag/AgCl (3 M) when non-polluted water was investigated. This means that the BDD anode has a high oxygen evolution overpotential in the real water matrix and, consequently, is a poor electrocatalyst for the oxygen evolution reaction (o.e.r.) compared with other electrodes reported in the literature [37,38]. Meanwhile, the polluted river water sample presented a well-defined peak at about +1.6 V before the o.e.r., which moderately extends to +1.9 V, where the production of free heterogeneous •OH radicals [33] and other oxidants [11] is feasible. This result clearly indicates that direct and indirect oxidations can be attained, allowing a quick elimination of HCQ from the real polluted water matrix [39].
Cyclic voltammograms were also obtained with (a) non-polluted and (b) polluted river water matrices with the BDD anode, in acidic sample at 25 °C (inset in Figure 4). On the one hand, the CV curves did not show significant current signals due to the interaction between the water species composition and the BDD surface when the non-polluted sample was investigated, confirming that no significant matrix effect was attained. On the other hand, the typical behavior of a non-reversible system was observed when the polluted water matrix was studied.
The peak in the CV at +1.58 V can be understood in terms of the electroactivity of HCQ towards the diamond electrode, suggesting that the organic substrate can be directly oxidized at the BDD surface [39]. Meanwhile, when the potential was increased, a smooth current signal was observed as an extension of the HCQ peak at +1.58 V. This result indicates that other organic compounds are present in the water matrix, which could be oxidation by-products of HCQ [19]. However, these intermediates are indirectly oxidized by free heterogeneous •OH radicals [33,37] and other oxidants [11,40] that can be produced at higher potentials [32]. This behavior is in accordance with the HCQ and COD decays in Figure 3.
From the data in Figure 3a, kinetic studies were carried out under pseudo-first-order conditions [41]. Under these experimental circumstances, the concentration of •OH in solution was kept in excess with respect to the HCQ in solution, which ensures that the depollution reaction can be considered under pseudo-first-order conditions [42]. The kinetic experiments were then performed by monitoring the decay of HCQ concentration in the real water matrix as a function of time. In view of the principles of chemical kinetics, the rate expression for the decay of HCQ can be written as

−d[HCQ]/dt = kapp[HCQ]  (9)

By plotting the natural logarithm of the HCQ concentration against the reaction time (min) (inset of Figure 3), linear relationships were obtained (R2 = 0.99), which suggests that the decay of HCQ concentration followed pseudo-first-order kinetics. The slope of each line equals the apparent rate constant, giving 0.0554 min−1, 0.0855 min−1 and 0.118 min−1 at 15, 30 and 45 mA cm−2, respectively, demonstrating an improvement in the reaction rate when j was increased [32].
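The pseudo-first-order model lends itself to a quick sanity check: with C(t) = C0·exp(−kapp·t), the time to remove a fraction f of the HCQ is t = ln(1/(1−f))/kapp. The sketch below applies this to the apparent rate constants reported above.

```python
# Quick check of the pseudo-first-order model C(t) = C0*exp(-k*t), using the
# apparent rate constants reported in the text for each current density.
import math

k_app = {15: 0.0554, 30: 0.0855, 45: 0.118}   # min^-1, keyed by j (mA cm^-2)

def time_for_removal(k, fraction_removed):
    """Time (min) to remove the given fraction of HCQ: t = ln(1/(1-f))/k."""
    return math.log(1.0 / (1.0 - fraction_removed)) / k

for j, k in k_app.items():
    t90 = time_for_removal(k, 0.90)
    print(f"{j} mA cm^-2: k = {k} min^-1 -> 90% HCQ removal in ~{t90:.1f} min")
```

Consistent with Figure 3a, the highest current density reaches 90% removal in roughly a third of the time needed at the lowest one.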
As can be seen in Figure 3b, the COD remaining after 90 min of BDD-electrolysis at 15, 30 and 45 mA cm−2 was about 18, 16 and 8, respectively. Up to 68% of COD removal was obtained at 15 mA cm−2 after 90 min of BDD-electrolysis, while 71% and 84% were achieved by applying 30 and 45 mA cm−2, respectively. These results evidence that j influenced the degradation of HCQ, which is justified by the efficient production of oxidizing species, mainly free heterogeneous •OH radicals at the BDD surface [19,43,44]. Bensalah and coworkers [19] have reported significant insights regarding the electrochemical treatment of HCQ in synthetic effluents, confirming that the •OH radicals produced in large amounts by EO of water at the BDD surface immediately attack HCQ in the vicinity of the BDD surface and decompose it into small fragments. In fact, they observed that the pH value of the solutions decreased at the beginning of the treatment due to the formation of acid intermediates during EO of HCQ [19]. Under acidic conditions, two of the HCQ functional groups exist in protonated forms, which may facilitate the rupture of C-N bonds by the attack of •OH radicals and thus release the branched group. For this reason, 4-quinolamine, oxamic and oxalic acids, as well as chloride, nitrate and ammonium, were identified during HCQ mineralization by Bensalah and coworkers [19]. These assertions are in agreement with the polarization curves and cyclic voltammograms in Figure 4. Additionally, the identification of inorganic by-products (NO3− and free chlorine; see Table 1 values for the polluted water matrix) in our investigation of the electrochemical treatment of the polluted HCQ river water matrix confirmed the fragmentation of the initial HCQ chemical structure, as already reported by Bensalah et al.
[19] when synthetic solutions were electrochemically treated, identifying the main by-products and consequently proposing a degradation pathway.

Efficiency and Energy Consumption

Figure 5 shows the total current efficiency (%TCE) (Figure 5a) and the EC (Figure 5b) when the polluted HCQ river water sample was electrochemically treated at different j (15, 30 and 45 mA cm−2). The %TCE is an important parameter to evaluate the viability of EO using BDD anodes [45]. In the first 30 min of all experiments, a slow decrease of %TCE was observed as a function of time. This behavior can be associated with, on the one hand, the efficient use of current to eliminate HCQ from river water until the process becomes controlled by mass transport and, on the other hand, the existence of parallel reactions, such as oxygen evolution and the degradation of persistent by-products formed during HCQ removal (i.e., short-chain aliphatic carboxylic acids), that consume a portion of the electrical energy applied. Meanwhile, the EC increased with j and electrolysis time; thus, the most rapid and efficient process was observed at high j. Obviously, as seen in Figure 5b, j has a strong effect on the EC; however, an agitated beaker reactor may not be representative of the energetic requirements and oxidation environment under scale-up conditions, in terms of effectiveness, mass transport and oxidant production [46]. Therefore, more studies are in progress in order to decrease the EC and, consequently, the relative costs. Another strategy is based on the integration of photovoltaic (PV) and electrochemical technologies to supply the electrical energy for treating different water matrices [47][48][49]. The falling price of, and more efficient materials used in, solar photovoltaic panels are making this approach promising. Up to now, few research groups are employing solar-driven processes for water treatment and, as far as we know, there are few reports on BDD electrodes applied to different water matrices.
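As a rough illustration, the two figures of merit can be computed from the COD decay and the electrical operating conditions using the definitions commonly adopted in electro-oxidation studies. The cell volume, voltage and COD values below are invented placeholders, not this study's operating data.

```python
# Sketch of the current-efficiency and energy-consumption figures of merit
# commonly used in electro-oxidation studies. All inputs are placeholders;
# this paper's own operating values may differ.
F = 96485.0  # Faraday constant, C mol^-1

def tce_percent(cod0, cod_t, volume_L, current_A, time_s):
    """Total current efficiency (%) from COD decay.
    COD in g O2 L^-1; 8 g O2 per mole of electrons exchanged."""
    return 100.0 * F * volume_L * (cod0 - cod_t) / (8.0 * current_A * time_s)

def energy_kwh_per_m3(cell_voltage_V, current_A, time_h, volume_L):
    """Specific energy consumption in kWh per m^3 of treated solution."""
    return cell_voltage_V * current_A * time_h / volume_L  # Wh/L == kWh/m^3

# Placeholder example: 0.25 L cell, 45 mA cm^-2 over 10 cm^2 -> 0.45 A
tce = tce_percent(cod0=0.050, cod_t=0.008, volume_L=0.25,
                  current_A=0.45, time_s=90 * 60)
ec = energy_kwh_per_m3(cell_voltage_V=5.0, current_A=0.45,
                       time_h=1.5, volume_L=0.25)
print(f"TCE ≈ {tce:.1f} %, EC ≈ {ec:.1f} kWh m^-3")
```

The single-digit %TCE obtained with these placeholder numbers reflects the behavior described above: once the process becomes mass-transport limited, most of the applied current feeds parallel reactions such as oxygen evolution.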
Conclusions

From this work, the following conclusions can be drawn:
− This study highlights that it is possible to detect and quantify HCQ in a real water matrix by the DPV technique using the electrochemical cork-graphite device. The DPV method using the composite sensor showed a satisfactory current response and high sensitivity for determining HCQ in a polluted river water matrix. It is important to emphasize that, in previous work [14], the electroanalytical approach was compared with the spectrophotometric method, achieving good performance and confirming the analytical confidence of the measurements obtained with the cork-graphite sensor.
− The HCQ degradation (26.8 mg L−1) was studied under different j (15, 30 and 45 mA cm−2), demonstrating that higher current densities accelerated the elimination of organic matter from solution because of the efficient electrogeneration of oxidants with the BDD anode. Increasing j favors the electrogeneration of different oxidants at the BDD surface or via the participation of •OH radicals, favoring the elimination of HCQ from the real water matrix.
− The detection of NO3− and free chlorine after the degradation of HCQ evidenced that the electrochemical SDG6 technology was effectively applied, promoting clean water and sanitation outcomes.
Dynamic Targeting in Cancer Treatment
With the advent of personalized medicine, design and development of anti-cancer drugs that are specifically targeted to individual or sets of genes or proteins has been an active research area in both academia and industry. The underlying motivation for this approach is to interfere with several pathological crosstalk pathways in order to inhibit or at the very least control the proliferation of cancer cells. However, after initially conferring beneficial effects, if sub-lethal, these artificial perturbations in cell function pathways can inadvertently activate drug-induced up- and down-regulation of feedback loops, resulting in dynamic changes over time in the molecular network structure and potentially causing drug resistance as seen in clinics. Hence, the targets or their combined signatures should also change in accordance with the evolution of the network (reflected by changes to the structure and/or functional output of the network) over the course of treatment. This suggests the need for a “dynamic targeting” strategy aimed at optimizing tumor control by interfering with different molecular targets, at varying stages. Understanding the dynamic changes of this complex network under various perturbed conditions due to drug treatment is extremely challenging under experimental conditions let alone in clinical settings. However, mathematical modeling can facilitate studying these effects at the network level and beyond, and also accelerate comparison of the impact of different dosage regimens and therapeutic modalities prior to sizeable investment in risky and expensive clinical trials. A dynamic targeting strategy based on the use of mathematical modeling can be a new, exciting research avenue in the discovery and development of therapeutic drugs.
INTRODUCTION
Cancer is a multifactorial and remarkably heterogeneous disease. Its initiation, progression, invasion, and metastasis processes all involve multiple molecular signaling mechanisms. The diversity of molecular and cellular properties across tumors from different patients, and even across cancer cells from the same patient, makes it extremely difficult to find a "one-size-fits-all" solution for therapeutic targeting of cancer. Thus, tailored targeted therapies based on each individual tumor's characteristics are required in order to optimize treatment efficacy, minimize toxicity and drug side-effects, and ultimately lead to more cost-effective patient management by giving the most appropriate drugs at the optimum dose to every patient in need (Topol, 2014;Ryall and Tan, 2015). This is the essential concept of precision medicine.
From a systems biology perspective, cancer can be viewed as a network disease caused by dysregulation of molecular signaling pathways that determine various physiological cellular processes, such as growth, division, differentiation, and apoptosis (Creixell et al., 2012). These signaling pathways are not isolated from each other, but form a complex, interconnected network with numerous regulatory feedback loops and redundant pathways that together confer significant evolutionary robustness. Still, substantial advances have been made in development of targeted therapies based on detailed mechanistic understanding of these signaling networks, and as a result, some targeted drugs are emerging for clinical use (Yildirim et al., 2007;Hopkins, 2008). However, despite positive treatment responses in some patients, a large fraction of patients fail to benefit from these targeted therapies, even when molecular markers have been used to stratify patients into groups that are expected to respond to the therapy. Taking an approved ErbB-targeted drug (Herceptin) as an example, only about half of all patients with ErbB2-amplified metastatic breast cancer respond to the drug, and of those who do respond in the beginning, most eventually develop resistance (Garrett and Arteaga, 2011). This pattern of initial response followed by relapse is not unique to ErbB-targeted therapies, but has been seen for most molecularly targeted inhibitors (Al-Lazikani et al., 2012).
The disappointing response rate of targeted therapies is partly due to the resilience of oncogenic signaling networks that will often bypass a single hit through an abundance of the highly non-linear built-in feedback loops and alternative pathways that can compensate for therapeutic impact. To solve this "escape" problem, multiple therapies can be used together or in sequence, i.e., combination therapy, which can potentially block these parallel or alternative pathways activated in cancer cells (Fitzgerald et al., 2006). Since these therapeutic drugs may be administered at a smaller dosage for each individual drug, a combination therapy may stop oncogenic signaling or further delay resistance to treatments, while simultaneously minimizing overlapping toxicity. In theory, a combination approach would seem to have the potential to block alternative pathways, but, while there have been clinical successes, as with monotherapy they have not led to cure or long-term control for all patients (Chong and Janne, 2013;Yap et al., 2013;Sachs et al., 2016;Lopez and Banerji, 2017). One problem lies in the complexity of signaling networks, making it difficult to simply guess a priori which drug combinations are synergistically effective and which are not. Given the number of targeted drugs currently available and in clinical development, it is time-consuming and expensive to do unbiased screening of the large number of possible drug combinations at their clinically relevant dose and dosing schedules. Therefore, there is a major need for approaches that will allow us to identify effective drug combinations where two or more drugs work synergistically to suppress malfunctioning signaling.
Testing potentially clinically relevant drug combinations using mathematical models (see Box 1) offers a reasonable yet relatively simple and expeditious way to accomplish this task by computationally examining multiple targets through extensive parameter perturbation analyses (Araujo et al., 2005;Iyengar et al., 2012;Barbolosi et al., 2016). This approach allows for rapid and low-cost examination of the drug and target combination parameter space, including identification of potentially optimal drug combinations through mathematical methods, ultimately providing valuable insights which would be difficult (if not impossible) to achieve through traditional experimental and clinical trial methods and techniques. In the end, these models can help to narrow down and prioritize different target combinations prior to experimental validation.
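As a toy illustration of such an in-silico combination screen (not the method of any study cited here), candidate pairs can be scored against the Bliss independence expectation E_AB = E_A + E_B − E_A·E_B; all effect values below are invented.

```python
# Minimal sketch of one common in-silico combination screen: score each drug
# pair against the Bliss independence expectation. Effects are fractional
# inhibitions in [0, 1]; the drug names and values here are invented.

def bliss_expected(e_a, e_b):
    """Expected combined inhibition if the two drugs act independently."""
    return e_a + e_b - e_a * e_b

def bliss_excess(e_a, e_b, e_ab):
    """Observed minus expected: positive -> synergy, negative -> antagonism."""
    return e_ab - bliss_expected(e_a, e_b)

single = {"anti-EGFR": 0.40, "anti-IGF1R": 0.30, "anti-p70S6K": 0.20}
combo_observed = {
    ("anti-EGFR", "anti-IGF1R"): 0.75,   # invented observations
    ("anti-EGFR", "anti-p70S6K"): 0.48,
}

for (a, b), e_ab in combo_observed.items():
    score = bliss_excess(single[a], single[b], e_ab)
    verdict = "synergy" if score > 0 else "antagonism/additivity"
    print(f"{a} + {b}: excess over Bliss = {score:+.2f} ({verdict})")
```

In practice such scores would be computed across full dose matrices for each pair, which is exactly the kind of large parameter space that model-based prioritization aims to narrow before experimental validation.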
NETWORK REWIRING
It has been extensively reported that cancer cells or cell populations adapt or evolve in response to targeted therapies, in part by rewiring molecular mechanisms to overcome the inhibitory effects of initial treatments (Gillies et al., 2012;Logue and Morrison, 2012;Azad et al., 2015;Kolch et al., 2015;Stuhlmiller et al., 2015). This rewiring may involve alterations of signaling pathways, such as addition or deletion of edges in the network, modification of reaction rates, and changes in molecular concentrations, all of which may ultimately contribute to treatment resistance, either directly through rendering the drug ineffective or indirectly by leading to activation of alternative pro-survival or anti-apoptotic pathways. There are many other biological, biochemical, and biophysical factors [e.g., genetic alteration of individual cells, outgrowth of existing resistant subclones under selection pressure from treatment, altered effectors in DNA repair, pathway-independent acquired resistance, up-regulation of efflux pumps in cellular membranes, protein level oscillations within cells even in the absence of treatment, and physical barriers that may limit diffusive and convective drug transport (Minchinton and Tannock, 2006;Garraway and Janne, 2012;Brocato et al., 2014;Stewart et al., 2015;Cristini et al., 2017)] that may also contribute to cancer resistance to treatment, but rewiring of signaling pathways very likely plays an important role as a mechanism of acquired resistance. This implies that pharmacologically targeting the compensatory mechanisms (which have emerged due to this rewiring) should help to improve treatment efficacy and patient outcome (Solit and Rosen, 2011;Akhavan et al., 2013;Camidge et al., 2014).
BOX 1 | Mathematical modeling of cancer treatment. Mathematical modeling is not only useful in providing mechanistic explanations of the observed data and generating valuable insights into how the molecular signaling network adapts under various perturbed conditions, it can also be used to derive new experimentally and clinically testable predictions. Data-driven modeling approaches that integrate statistical analysis of large-scale cancer multi-omics (e.g., genomics, proteomics, and other omics technologies) with clinical data have been used to identify key biological processes underlying cancer pathogenesis, prognostic biomarkers, and predictive signatures for drug response (Jerby and Ruppin, 2012;Casado et al., 2013;Niepel et al., 2013). On the other hand, mechanistic modeling approaches have been used to understand the roles of individual proteins in regulating cell fate and how signaling pathways interact to influence cancer progression (Prasasya et al., 2011;Hass et al., 2017), the dynamic interactions among cancer cells and between cells and the constantly changing microenvironment (Faratian et al., 2009;Klinger et al., 2013;Almendro et al., 2014;Leder et al., 2014), biophysical drug-cell interactions, and drug transport processes across tissues (Das et al., 2013;Pascal et al., 2013a,b;Koay et al., 2014;Frieboes et al., 2015;Wang et al., 2016;Brocato et al., 2018). In addition, mechanistic models are being generated to account for pharmacokinetics and pharmacodynamics to analyze drug action, dose-response relationships, and the time-course effect resulting from a drug dose, ultimately leading to the discovery of more effective dosing schedules (Swat et al., 2011;Vandamme et al., 2014;Wang et al., 2015a;Dogra et al., 2018). Furthermore, multiscale models of cancer have been developed to predict responses to treatments (perturbations), explain therapeutic resistance, and identify potential drug combinations across multiple biological scales, including at the molecular (such as gene regulatory and signal transduction networks), the cell, as well as at the tissue and whole organism scale (Deisboeck et al., 2011;Wang et al., 2011a, 2015b;Gustafsson et al., 2014;Wolkenhauer et al., 2014;Wang and Maini, 2017). Overall, mathematical modeling paired with experimentation and clinical data analysis has led to substantial improvements in our understanding of the mechanistic basis for cancer progression and resistance development, advanced the systems-level interpretation of the pathophysiology relevant for drug discovery, and had an impact on the implementation and optimization of effective anticancer therapeutic strategies.

Even before treatment, signaling networks are rewired in cancer cells compared to normal cells. Here, we briefly discuss several recent studies working toward understanding how signaling networks are rewired in cancer cells, and discuss how identification of these alterations can enable more effective cancer treatment. Creixell et al. (2015) performed systems-based research to evaluate whether cancer mutations perturb signaling networks and, if so, by what mechanisms. Using their collected global exome sequencing and proteomic data on the same set of cancer cell lines, some mutations were found to create new phosphorylation sites or destroy existing ones within a signaling network, or shift the network structure by upstream or downstream rewiring of the mutated signaling node. A variety of rewiring modes were identified, including constitutive activation and inactivation of kinase and SH2 domains, upstream and downstream rewiring of phosphorylation-based signaling, and the extinction and genesis of phosphorylation sites. Their results indicate that signaling networks are both dynamically and structurally rewired in cancer cells. More recently, Latysheva et al. (2016) investigated the interaction properties and structural features of more than two thousand fusion-forming proteins, and provided insight into the genome-scale molecular principles upon which fusion proteins could escape cell-death regulation and rewire signaling networks in cancer. Notably, using an integrated experimental and computational approach, Halasz et al. (2016) predicted and then validated feedback inhibition of insulin receptor substrate 1 (IRS1) by the kinase p70S6K in a zebrafish (Danio rerio) xenograft model to confer resistance to EGFR inhibition, through extensive analysis of a perturbation data set targeting epidermal growth factor receptor (EGFR) and insulin-like growth factor 1 receptor (IGF1R) pathways in a panel of colorectal cancer cells. Some studies (Pandey et al., 2014) also point to transient or short-term pathway alterations resulting from one drug as causing increased sensitivity to a second drug delivered at a later time. Morton et al. (2014) designed a nanoparticle system that successfully delivered two different drugs with varying modes of action to the tumor in a sequential manner. The first drug inhibited an oncogenic pathway through rewiring that sensitized the cells to DNA damage-induced apoptosis, and the second was a genotoxic drug that took advantage of the vulnerable state of the cancer cells to kill them with enhanced efficiency.
Their results highlight how understanding the ways that signaling pathways change or rewire in response to treatment or drug exposure is essential for improving current translational and clinical research.
RE-IDENTIFICATION AND RE-TARGETING
To predict cellular behavior, one must assess temporal- and state-based network dynamics in response to perturbations such as those induced by targeted drugs. It is thus highly rational to examine the newly rewired and altered molecular network [or networks, as some studies have found evidence that the dominant network is different at different tumor sites (Pestrin et al., 2009; Bhamidipati et al., 2013; Russo et al., 2017)], which arises after the first sub-lethal, targeted drug interventions, in order to identify and then reprioritize the targets. This will likely result in a new list of prioritized targets in the order of their importance in driving cancer cell survival and proliferation. The leading network modulator(s) on this new list should be prioritized as new drug targets in place of, or more likely in addition to, the previous top targets. In fact, rebiopsy at the time of progression of disease to guide changes in treatment has already been advocated in the literature (Yu et al., 2013; Planchard et al., 2015).
This cascade of drug targeting and network rewiring, followed by target re-identification and reprioritization (potentially for multiple cycles), should, in our opinion, be repeated during the entire course of treatment. Figure 1 shows a schematic of this process (to illustrate the concept, not a specific treatment strategy, much less a prediction), where for simplicity a single molecular intervention strategy is used at the beginning. While in reality the clinical situation in terms of signaling and rewiring will undoubtedly be much more complex, we address two critical questions here. First, why not just take out the "important" molecules (e.g., A1, A2, and B1 in our schematic) at the onset of the therapeutic protocol to completely block the downstream signaling pathways that contribute to cell proliferation? The answer is two-fold: first, as discussed, we do not necessarily know a priori which "top" targets emerge as (conventional chemo- or radiotherapeutic, or advanced targeted) interventions apply selective pressure on the cancer cells' molecular network; second, this multi-target strategy will arguably be more toxic, and hence may cause more adverse side effects for the patient than necessary to achieve tumor control. Rather, the goal is to deliver optimal therapeutic efficacy at the minimum necessary level of side effects. As such, our dynamic targeting approach might just be the right answer, in that it incrementally "probes" the network's adaptive capabilities by applying a staggered amount of selective pressure. Also, effective targeting does not have to "take out" a target completely; it could instead be intended to modulate it up or down to redirect the network output. The second question is how frequently the tumor system should be re-examined in order to identify new targets or target combinations. While this is generally cancer type- and treatment-specific, it should also be patient-specific, yet remain mindful of the operational constraints and economics involved when translating this concept into a clinical setting.

FIGURE 1 | Illustration of the dynamic targeting strategy. The molecular signaling network changes or evolves with selective treatment. For instance, in this schematic, at time point 1, A1 emerges as the most critical node; hence, during the first treatment period, A1 will be targeted with anti-A1. Assuming this to be of sub-lethal impact, the network rewires due to A1 inhibition, but the cell still finds a way to upregulate proliferation, so the treatment continues. At time point 2, A2 emerges as the top target, so the therapeutic regimen will attempt to inhibit A2 (together with A1) for the second treatment period. The network again rewires due to A2 inhibition, and the cell finds yet another way to bypass the A2 route and continues to proliferate. At time point 3, B1 becomes the top target, so the next treatment cycle will target B1 (together with A1 and A2). This process will continue until growth control is optimized and relapse to rapid replication does not occur. For each target at each treatment stage, exactly how much drug (dose) and how often to apply it (frequency) will require careful evaluation and should differ across patients. That is, unlike what the schematic depicts for simplicity, the network adaptation is likely not hard-wired or rigidly dependent on external therapeutic pressure; rather, it undergoes a dynamic transition through an intrinsic optimization process. To manage side effects, a basic strategy could be to maximize the modulation effects on the top target specific to each treatment iteration, while keeping the "pressure" on prior targets at their respective "maintenance" minimum yet necessary dosing/frequency levels. Top targets are highlighted in yellow when the target identification process is performed. R: receptor; A1, A2, B1, B2: signaling molecules of the network.
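The iterative cycle described above (rank targets, treat, let the network rewire, re-rank) can be sketched in code. Everything below is a hypothetical toy: the node weights, the compensatory rewiring rule, and the output function are invented for illustration and are not a model of any real tumor network.

```python
import numpy as np

def output(weights, inhibited):
    """Proliferation output of a toy 4-node network: the sum of node
    weights not yet inhibited, squashed to (0, 1)."""
    active = [w for name, w in weights.items() if name not in inhibited]
    return float(np.tanh(sum(active)))

def rewire(weights, just_hit, rng):
    """Hypothetical rewiring rule: after a node is hit, the remaining
    nodes partially compensate by upregulating."""
    return {name: (w if name == just_hit else w * (1.1 + 0.05 * rng.random()))
            for name, w in weights.items()}

# Hypothetical initial node weights (contribution to proliferation).
weights = {"A1": 0.8, "A2": 0.5, "B1": 0.4, "B2": 0.2}
inhibited, history = set(), []
rng = np.random.default_rng(0)

for cycle in range(3):                       # three treatment cycles, as in Figure 1
    # Re-identify: the top target is the strongest uninhibited node.
    top = max((n for n in weights if n not in inhibited), key=weights.get)
    inhibited.add(top)                       # add the new drug to the regimen
    weights = rewire(weights, top, rng)      # the network adapts
    history.append((top, round(output(weights, inhibited), 3)))

print(history)   # sequence of (new target, post-treatment output)
```

With these particular toy weights the targets emerge in the order A1, A2, B1, mirroring the schematic; a different rewiring rule would yield a different sequence, which is precisely the point of re-identification.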
Still, in our opinion, every time a patient sees diminishing therapeutic yield from, let alone fails, a particular targeted treatment, the molecular network should be re-evaluated to potentially adjust the targeting strategy. We note that the timeline shown in Figure 1 is merely a schematic, and it follows that new network configurations (and thus the target hit-list) will differ in how fast they evolve, as will the drug doses and dosing schedules (determined uniquely for each drug delivered) for the individualized patient treatment plan.
In our dynamic approach, targets will emerge sequentially through "selection" imposed by targeted treatment, the perturbations it causes, and the reconfigurations to which the network stabilizes. This approach is geared toward optimizing tumor growth control and as such differs from current combinatorics approaches (Gillies et al., 2012; Logue and Morrison, 2012), where the "most impactful" target combination is assessed once and then applied a priori, which may also incur more unexpected on-target or off-target side effects. We note that, based on current reports on cell signaling (Tanay et al., 2005; Wei et al., 2016; Young et al., 2017), there are reasons to believe that there is some form of phase transition in network adaptability, or a maximum carrying capacity for the selection pressure or stress applied by a treatment, beyond which the cell simply dies. Rather than trying to kill all the cancer cells as efficaciously as possible, which is often impossible because of, e.g., detection limits and delivery challenges, our goal is to achieve maximum control over disease progression with minimal side effects; hence the sequential probing approach implemented in dynamic targeting.
Admittedly, there are many challenges in implementing this dynamic targeting strategy in current clinical practice. For example, immunotherapy is known to not always yield a tumor response within the time frame that other treatments may have shown, and some patients may experience an initial increase in the size of tumor lesions with a subsequent decrease in tumor burden [a phenomenon called pseudoprogression (Hodi et al., 2016)]. If a molecular targeted therapy is used together with immunotherapy, then we should give this type of combination treatment more time before re-evaluating the patient; otherwise, we risk prematurely eliminating treatments that were working, only more slowly. As another example, if multiple clinical tests (genetic sequencing with high-throughput techniques, biopsy, imaging, etc.) are required for evaluating the tumor, then the question is whether they can be done in a reasonable time frame and at an acceptable risk for the patient, and whether these additional assessments have a favorable cost-to-benefit ratio. Lastly, for any type of cancer, it should be kept in mind that only a subset of patients may benefit from a particular drug treatment. Hence, molecular diagnostics and imaging markers (Ransohoff and Gourlay, 2010; Reis-Filho and Pusztai, 2011; Jafari et al., 2017; Sepulveda et al., 2017) will be critical to correctly identify the patient cohorts best suited for different targeted therapies, in addition to assessing response to therapy and monitoring patients for adverse drug reactions. Many other significant challenges related to further understanding tumor heterogeneity, tumor-host interactions, immune response, etc. (Gatenby et al., 2010; Andre et al., 2013; Enriquez-Navas et al., 2016; Ibrahim-Hashim et al., 2017; Zhang et al., 2017) certainly exist in translating this strategy to clinical application.
Further discussion of those challenges is beyond the scope of this article, as we only focus on introduction of a new concept, but it is worth emphasizing that many details with respect to technology, clinical care, regulation, and reimbursement need to be addressed in order to translate this concept into a reality.
To implement the dynamic targeting strategy, it would be prohibitive to evaluate the sheer number of mathematically possible drug target combinations multiple times over the course of treatment in preclinical animal models, let alone in a clinical setting. We therefore need, and should take full advantage of, large-scale unbiased methods based on mathematical modeling to evaluate and prioritize potential drug target combinations as early as possible. Indeed, mathematical network modeling has been helpful in identifying promising targets and effective combinations of existing targets (Wang et al., 2007, 2011b, 2012; Zhang et al., 2009; Miller et al., 2013; Schoeberl et al., 2017). Once proven reliable, these models can be used to exhaustively test the efficacy of a large number of single drug and drug combinations by correlating signaling outputs with corresponding network perturbations in a dynamic fashion. Computer model simulations can be effectively integrated with quantitative wet lab studies to facilitate the process of identifying effective drug target combinations progressively over the course of treatment when treatment efficacy needs to be evaluated or a new treatment method is considered necessary; the mathematically narrowed down selection of individualized, computationally validated drug targets and combinations would then be handed over to conventional preclinical testing.
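As a toy illustration of this kind of exhaustive model-based prioritization, the sketch below scores every single- and double-node knockdown in a small saturating signaling network by how much it lowers a "proliferation" output node. The network wiring, the inhibition model, and all numbers are hypothetical assumptions, not taken from any of the cited studies.

```python
import itertools
import numpy as np

def steady_output(W, knockdown=(), n_steps=200):
    """Iterate a simple saturating signaling network x <- tanh(W x + b)
    and return the activity of the last node (the "proliferation" output)."""
    n = W.shape[0]
    x = np.zeros(n)
    b = np.ones(n) * 0.5
    mask = np.ones(n)
    for k in knockdown:
        mask[k] = 0.0          # a drug fully inhibits the targeted node
    for _ in range(n_steps):
        x = mask * np.tanh(W @ x + b)
    return x[-1]

def rank_targets(W, max_combo=2):
    """Score every single- and double-target intervention by how much it
    lowers the proliferation output, and return them best-first."""
    n = W.shape[0]
    baseline = steady_output(W)
    scores = {}
    for r in range(1, max_combo + 1):
        for combo in itertools.combinations(range(n - 1), r):  # never "target" the output node
            scores[combo] = baseline - steady_output(W, combo)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical 5-node network: nodes 0..3 signal into output node 4.
rng = np.random.default_rng(0)
W = np.abs(rng.normal(0.4, 0.1, (5, 5)))
W[np.triu_indices(5, 1)] = 0.0   # feed-forward wiring for simplicity
ranked = rank_targets(W)
print(ranked[0])                 # the most impactful single/double knockdown
```

A real modeling effort would replace the toy dynamics with a calibrated mechanistic or data-driven model, but the combinatorial scoring loop is the same; re-running it after each rewiring event is the computational core of the re-identification step.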
PILOT EXAMPLES
We here discuss two recent examples to demonstrate the importance of dynamic targeting in cancer treatment. We note that neither example represents a full implementation of the dynamic targeting process. However, they reflect the necessity of novel approaches that address network rewiring to find new, complementary drug targets or target combinations, in an effort to truly improve survival and the probability of long-term remission, if not cure, in cancer treatment. Lee et al. (2012) studied three cell lines from triple-negative breast cancer (i.e., estrogen receptor-, progesterone receptor-, and HER2 oncogene-negative) for their responses to seven genotoxic drugs and eight signaling inhibitors in various combinations and dosing schedules. They found that combination treatment with an EGFR inhibitor (erlotinib) and DNA-damaging chemotherapy (doxorubicin) led to substantial killing of cancer cells, but only when the EGFR inhibition preceded the chemotherapy by at least 4 h. This combination treatment led to the rewiring of oncogenic signaling pathways, which has the potential to make cancer cells more susceptible to death. That is, the observed response relates to the dynamic effects on the molecular interaction network, which was rewired in response to EGFR inhibition, during which the cells once again became susceptible to death triggered by DNA damage. Since it was challenging to directly examine the rewired pathways using wet lab experiments alone, they constructed a data-driven model based on partial least squares regression, which was then used to correlate cellular responses with different forms of drug treatment. This study is significant, as it provides strong evidence that the timed application of signaling inhibitors causes the rewiring of signaling pathways in tumor cells and renders them more susceptible to subsequent chemotherapy. Other studies, such as Huether et al.
(2005), also pointed to changes in apoptotic signaling pathways from a targeted therapy increasing chemotherapeutic sensitivity, with time dependence. Moreover, as also shown in other clinical research (Andre et al., 2003, 2004, 2009), this study by Lee et al. (2012) demonstrates that not only the selection of optimal drug combinations, but also the sequence and timing of the administration of the multiple therapeutic drugs, were critical to maximize treatment efficacy. Goldman et al. (2015) also reported that if a chemotherapy drug pair is administered in the right temporal sequence, the leading drug can induce a phenotypic cell-state transition, thereby making the cancer vulnerable to the partner agent. Interestingly, they even proposed the use of mathematical modeling to optimize sequential treatment with two drugs to take advantage of rewiring in response to the first drug.
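The partial least squares regression at the heart of the Lee et al. analysis can be sketched as follows. This is a minimal single-response (PLS1, NIPALS-style) implementation fitted on synthetic data; the sample count, feature count, and coefficients are invented for illustration and bear no relation to the measurements in the study.

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal NIPALS PLS1: returns regression coefficients mapping
    centered signaling measurements X to a centered response y."""
    X = X - X.mean(0)
    y = y - y.mean()
    Xr, yr = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        tt = t @ t
        p = Xr.T @ t / tt
        c = yr @ t / tt
        Xr = Xr - np.outer(t, p)      # deflate X
        yr = yr - c * t               # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q   # coefficients in original X space

# Hypothetical data: 40 "cell-state" samples, 6 signaling features; the
# apoptotic response depends mainly on features 0 and 3.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=40)
B = pls1_fit(X, y, n_components=2)
pred = (X - X.mean(0)) @ B + y.mean()
print(np.round(np.corrcoef(pred, y)[0, 1], 3))
```

The fitted coefficient vector B concentrates weight on the informative features, which is how such a model "reads off" which signals best explain drug response.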
As another example, to understand the dynamic, non-linear behavior of signaling pathways in cancer, Bernardo-Faura et al. (2014) developed an adaptive model, based on fuzzy logic (a method widely used in computation and engineering), to study and predict changes in network architecture (i.e., topology) over time in response to drug treatment. Using the model, they tested the dynamics of the mitogen-activated protein kinase (MAPK) pathway (composed of 10 signaling intermediates) against a dataset derived from a melanoma cell line that was exposed to different pharmacological kinase inhibitors over 4 days. They found that, although sorafenib (an inhibitor) was considered capable of preventing phosphorylation of MEK1/2, which should in turn suppress the activation of ERK1/2, the observed ERK1/2 profile was not consistently inhibited, suggesting a signaling rearrangement relative to the original MAPK pathway. While the rewired interaction could not be specifically identified with the model, the potential underlying biological mechanisms could range from genetic mechanisms (such as mutations) to spatiotemporal pathway regulation. This result also illustrates an interesting point: some biological mechanisms may enable the cell to enhance certain pathways, or prevent some reported interactions from happening, in order to trigger a specific response, depending on the context or cell type (Jones et al., 2008). This adaptive modeling approach can be used to characterize the dynamic signaling rearrangements that grant tumors the ability to maintain proliferation and develop resistance.
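The kind of inconsistency such adaptive models detect can be illustrated with a deliberately tiny toy: a MEK inhibitor suppresses MEK in the model, yet the "observed" ERK stays high, and only adding a bypass input reconciles model and data. All activity values, gains, and the clipped-linear relay form below are hypothetical assumptions, not the fuzzy-logic model of the paper.

```python
import numpy as np

# Toy MAPK-like chain: RAF -> MEK -> ERK, modeled as clipped linear relays.
def simulate(raf, mek_gain=1.0, erk_gain=1.0, bypass=0.0):
    """Predicted steady activities; `bypass` is a hypothetical rewired
    input that activates ERK independently of MEK."""
    mek = np.clip(mek_gain * raf, 0, 1)
    erk = np.clip(erk_gain * mek + bypass, 0, 1)
    return mek, erk

# "Observed" data: a MEK inhibitor suppresses MEK, yet ERK stays high,
# as in the sorafenib example discussed above (values are invented).
raf = 0.8
mek_obs, erk_obs = 0.1, 0.7
mek_pred, erk_pred = simulate(raf, mek_gain=0.125)   # inhibitor scales the MEK gain

# The MEK prediction fits but the ERK prediction does not: evidence that
# the MEK->ERK edge alone cannot explain the data, i.e. a rewired input.
residual_erk = abs(erk_obs - erk_pred)
print(residual_erk > 0.3)

# Refit with a bypass term: the discrepancy disappears.
_, erk_rewired = simulate(raf, mek_gain=0.125, bypass=erk_obs - erk_pred)
print(abs(erk_obs - erk_rewired) < 1e-9)
```

The design choice is the same as in the adaptive-model paper at a caricature scale: a persistent residual at one node, under an otherwise well-fitting topology, is treated as evidence for a missing or rewired edge.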
CONCLUSION
Using the same drug or drug combinations throughout the course of treatment has proven ineffective at overcoming the pathway crosstalk and redundant signaling mechanisms that are thought to be responsible (at least in part) for the modest responses observed in current trials of targeted therapies. Focusing on long-term tumor control rather than eradication, we introduce a dynamic targeting strategy, proposing that the target "signature" should change as the signaling network adapts during the course of treatment. Of course, this critically depends on being able to analyze the molecular networks readily and sufficiently, and mathematical models present an ideal platform for testing and optimizing drug combinations whenever target re-identification is needed. Ultimately, one may be able to predict the range of emerging target configurations, so that personalized, multi-tiered treatment can become proactive, as opposed to reactive to the network's intrinsic ability to adapt. Compared to current preclinical and clinical oncology practice, our concept offers a faster, more effective, and thus arguably more economic approach to exploring a large number of potential treatment strategies to identify an optimal, patient-specific therapeutic regimen.
Detection of ultra-high-energy neutrinos by IceCube: sterile neutrino scenario
The short-baseline neutrino oscillation experiments, the excess of radiation from the measurement of the cosmic microwave background radiation, the necessity of a nonbaryonic dark matter candidate, and the depletion of the neutrino flux in IceCube all seem to hint at new physics beyond the standard model. An economical way to address these issues is to invoke the existence of sterile neutrinos. We present simple extensions of the standard model with three additional sterile neutrinos and discuss the corresponding PMNS-like neutrino flavor mixing matrix. The noteworthy features of the sterile neutrino scenario advocated here are that the lightest one is almost degenerate with one of the active neutrinos, the second sterile has a mass of order eV, and the heaviest one is in the keV range. In the present scenario, the short-baseline anomaly is explained through Δm² ∼ 1 eV², the depletion of the muon neutrino flux in IceCube is explained through Δm² ∼ 4.0 × 10⁻¹⁶ eV², and the dark matter problem is addressed through Δm² ∼ 1 keV². Our proposed mixing matrix is also compatible with the observed neutrino oscillation data.
We show that the high-energy muon and tau neutrino fluxes from Gamma Ray Bursts can be depleted in IceCube by as much as 38 and 43%, respectively. These substantial depletions in both the muon and tau neutrino fluxes are due to their small but sizable mixing with the sterile neutrinos.
Introduction
In the standard picture of neutrino oscillations, the three active neutrino states are linear superpositions of three mass eigenstates. The oscillation experiments with solar, atmospheric, reactor, and accelerator neutrinos can be explained through the mass squared differences [1,2] with Δm²_ij = m²_i − m²_j. The three mixing angles in this scheme have also been measured. The solar [3] and KamLAND [4] data give sin²θ₁₂ ≈ 0.3; the atmospheric [5] and MINOS [6] data give sin²θ₂₃ ≈ 0.5. Also, recently the Double-CHOOZ [7], RENO [8], and Daya-Bay [9] experiments measured the third mixing angle, sin²2θ₁₃ ≈ 0.1. However, the completeness of the three-neutrino mixing paradigm is in question due to several anomalies observed in the appearance and disappearance of neutrinos in data pertaining to short-baseline (SBL) experiments: the Liquid Scintillator Neutrino Detector (LSND) [10], the Mini-Booster Neutrino Experiment (MiniBooNE) [11], and the reactor anomaly [12] (henceforth all combined and referred to as the SBL anomaly). The SBL anomaly cannot be accommodated with just three active neutrinos, thus suggesting the possible existence of one or more eV-scale sterile neutrinos to explain these results [13].
Author e-mails: Subhash.Rajpoot@csulb.edu (a), sarira@nucleares.unam.mx (b), Hsi-ching.Wang@cgu.edu (c).
Although the existence of dark matter (DM) in the Universe is confirmed beyond doubt, its nature is still an outstanding puzzle in both particle physics and cosmology. To be consistent with the observations, the DM candidate should be a very weakly interacting, electrically neutral particle. Sterile neutrinos with a mass of O(1) keV and lifetimes much longer than the age of the Universe are very good candidates for warm dark matter (WDM) [14,15]. These sterile neutrinos could be produced in the early Universe, and their mass is generated by a Majorana mass term which is not bound to the electroweak scale. Apart from explaining the DM problem, sterile neutrinos may also explain the large pulsar kick velocity [16], and their presence may also suppress the formation of dwarf galaxies and other small-scale structures.
Gamma-ray bursts (GRBs) and active galactic nuclei (AGN) are believed to be the prime candidates for the production of ultra-high-energy cosmic rays (UHECRs), and ultra-high-energy neutrinos are their by-products [17-20]. IceCube, the km³-scale neutrino telescope constructed at the South Pole, is meant to detect these cosmological neutrinos [21]. The IceCube collaboration recently published their analysis of data taken during the construction phase using the 40- and 59-string configurations of the detector. The combined analysis of the data does not show any neutrino signal correlated with the observed GRBs during the data taking period [22,23]. From this analysis, IceCube has set an upper bound on the neutrino flux from GRBs, which is at least a factor of 3.7 below the Waxman-Bahcall (WB) prediction [24]. This depletion in the neutrino flux gave rise to many possible explanations [25-27].
From the astrophysics point of view, it has been pointed out recently that, for the normalization of the neutrino flux, IceCube ignored the effects of the energy dependence of the charged pion production and of secondary pion/muon cooling in the GRB fireball, which caused an overestimation of the neutrino flux by a factor of 5 for typical GRB parameters [28]. Furthermore, by taking into account many other effects (pion and kaon production models, magnetic field effects, and neutrino flavor mixing) and doing a full numerical calculation, it is shown that the neutrino flux is reduced by about one order of magnitude [29]. With the revised neutrino flux calculation, a reduction in flux is also obtained by analyzing the neutrino flux from 215 GRBs during the period of the 40- and 59-string configurations of IceCube [30]. There are also alternative astrophysical models [31-33] which predict a lower neutrino flux compared to the WB models. So the tension reported by IceCube may not be that serious; nevertheless, the WB models can be challenged in the future, as the observations put stringent limits on the muon neutrino flux.
To address this issue from the particle physics point of view, the existence of pseudo-Dirac neutrinos [34-40] has been postulated. In this scenario the neutrino of each generation is composed of an almost maximally mixed active-sterile combination, separated by a tiny mass difference, so that active-sterile oscillations are possible without affecting the short-baseline oscillation results [41,42]. In a recent paper it has been postulated that, apart from the above explanation, neutrino decay can also be a viable explanation for the suppression of the muon neutrino flux [43]. Yet another very recent paper discussed the suppression of the muon neutrino flux in IceCube by assuming that all the neutrinos are pseudo-Dirac in nature and that there is a mirror world replicating the interactions of the observed world and connected to the latter gravitationally. In this scenario each active neutrino is associated with three sterile neutrinos with very tiny splittings, and oscillation from active to sterile can be responsible for the suppression of the muon neutrino flux [44]. So, if sterile neutrinos exist at all, and one or more of them are closely degenerate in mass with the active neutrinos and also mix with them, they may easily evade detection in oscillation experiments. However, due to the very long baselines involved in the oscillation process, the sterile neutrinos can, in principle, have measurable effects on the high-energy neutrino flux. The possibility of a sterile neutrino was also searched for in the atmospheric neutrino data collected by AMANDA and the partially deployed IceCube [45].
These postulated sterile neutrinos neither participate in the weak interaction nor contribute to the invisible width of the Z boson [13]. Also there is no known fundamental symmetry in nature forcing the existence of a fixed number of sterile neutrino species. Cosmological probes such as bounds on the relativistic energy density of the universe in terms of the effective number of light neutrinos [46] have been extensively used to set bounds on the number of light neutrinos in general and the number of sterile neutrinos in particular.
In this work, we extend the Standard Model to include three additional sterile neutrinos (3 + 3). All neutrinos in the model, active and sterile, have non-zero masses and mix. The flavor mixing among the neutral leptons gives rise to a 6 × 6 matrix, analogous to the PMNS scheme for the active neutrinos. We will show that the generalized 6 × 6 matrix is compatible with the observed active neutrino oscillation data. Although our main focus is to explain the depletion of the muon neutrino flux in IceCube, our model also encompasses solutions to the SBL anomaly and the dark matter problem.
In the standard treatment of neutrino oscillations in vacuum, the flavor and the mass eigenstates are denoted ν_α and ν_i, respectively. The flavor states are superpositions of mass eigenstates with non-zero mass squared differences and are given as

|ν_α⟩ = Σ_i U_αi |ν_i⟩.

The mixing matrix U is the extended Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The three lowest states, |ν₁⟩, |ν₂⟩, and |ν₃⟩, with their respective masses m₁, m₂, and m₃, account for solar and atmospheric neutrino oscillations.
We assume that the sterile neutrinos are Majorana singlets. They could either be left handed or right handed. Below we shall describe two models with Majorana steriles. The first model in which |ν a , |ν b , and |ν c are left handed will be referred to as model A, while the second model in which |ν a , |ν b , and |ν c are right handed will be referred to as model B.
We present a standard model extension in which the seesaw mechanism is invoked to generate the required spectrum of light sterile neutrinos. The standard model with three generations of quarks and leptons is extended in the leptonic sector to include three right handed neutrinos and three vector-like neutral leptons [47-50]. In total, our model has 12 neutral leptons: three left handed active neutrinos ν_L ≡ (|ν_eL⟩, |ν_μL⟩, |ν_τL⟩), their counterparts, the three right handed inert neutrinos ν_R ≡ (|ν_eR⟩, |ν_μR⟩, |ν_τR⟩), and additionally three left handed and three right handed neutrals, N_L ≡ (N₁, N₂, N₃)_L and N_R ≡ (N₁, N₂, N₃)_R. The interaction Lagrangian relevant for the neutrino masses and mixings gives rise to a 12 × 12 neutrino mass matrix M_ν in the basis of these 12 neutral leptons. All entries in M_ν are 3 × 3 matrices. Y and Y′ represent Yukawa couplings; YΦ is the Dirac mass matrix for the active neutrinos, and Y′Φ is the Dirac mass matrix connecting the active neutrinos and N_R. M is the (B − L)-breaking Majorana mass matrix of the right handed neutrinos; further Dirac mass matrices connect ν_R with N_L, and N_L with N_R. The remaining terms in M_ν are all Majorana mass matrices. The model has enough parameters to give representative values for masses and flavor mixings to address the short-baseline neutrino oscillation experiments, the excess of radiation from the measurement of the cosmic microwave background radiation, the need for nonbaryonic dark matter, and the depletion of the neutrino flux in IceCube.
Model A: In this model the light mass eigenstates are the active states ν_L ≡ (|ν_eL⟩, |ν_μL⟩, |ν_τL⟩) and the sterile neutrals N_L ≡ (N₁, N₂, N₃)_L. The lightness of the states is achieved by invoking the seesaw mechanism in two stages. The first stage is between the three active neutrinos and their counterparts, the three right handed inert neutrinos ν_R ≡ (|ν_eR⟩, |ν_μR⟩, |ν_τR⟩). The second stage is between N_L and N_R. These two stages are achieved by constraining the elements of the sub mass matrices in M_ν to satisfy the seesaw conditions, under which the Dirac masses are much smaller than the corresponding Majorana masses. The light neutrino masses for the three active states are then of the usual seesaw form, suppressed by the heavy Majorana scale, and the active states mix through the corresponding seesaw-induced mixing matrix. These mixings are responsible for the observed solar, atmospheric, and reactor neutrino oscillations. Similarly, the masses of the light steriles are seesaw-suppressed by the heavy scale M_R, and those states mix via the matrix elements of δ, built from the N_L-N_R Dirac matrix and the inverse of the heavy Majorana matrix. Further mixings between the three active states and the three sterile states arise through the off diagonal Dirac matrices. These mixings are considered in addressing the reactor anomaly, the flux depletion at IceCube, and dark matter.
Model B: In this model the light mass eigenstates are the active states ν_L ≡ (|ν_eL⟩, |ν_μL⟩, |ν_τL⟩) and their counterparts, the three right handed inert neutrinos ν_R ≡ (|ν_eR⟩, |ν_μR⟩, |ν_τR⟩). In this model also, the lightness of the states is achieved by invoking the seesaw mechanism in two stages. The first stage is between the three active neutrinos and N_R ≡ (N₁, N₂, N₃)_R. The second stage is between ν_R ≡ (|ν_eR⟩, |ν_μR⟩, |ν_τR⟩) and N_L ≡ (N₁, N₂, N₃)_L. These two stages are achieved by constraining the elements of the sub mass matrices in M_ν to satisfy the seesaw conditions. The light neutrino masses for the three active states are again seesaw-suppressed, and the active states mix through the seesaw-induced matrix involving M_R⁻¹. These mixing matrix elements are responsible for the observed solar, atmospheric, and reactor neutrino oscillations. Similarly, the masses of the light ν_R are seesaw-suppressed, and those states mix via the matrix elements of δ, involving M_L⁻¹. Further mixing between the three active states and the three sterile states is achieved through the off diagonal Dirac matrices. These mixings are responsible for addressing the reactor anomaly, the flux depletion at IceCube, and dark matter. This model also offers the possibility of constructing a pseudo-Dirac particle [34-39] by combining two almost degenerate mass eigenstates, one from the active neutrinos and another from their right handed counterparts.
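Each stage of the two-stage construction relies on the standard seesaw relation m_light ≈ −m_D M⁻¹ m_Dᵀ, valid when the heavy Majorana scale dominates the Dirac masses. The numerical sketch below illustrates a single stage; the mass values are round illustrative numbers (in GeV), not the paper's parameters.

```python
import numpy as np

def seesaw_light_masses(mD, M):
    """Leading-order seesaw: light-mass matrix m_nu ≈ -mD M^{-1} mD^T,
    valid when the eigenvalues of M dominate those of mD."""
    return -mD @ np.linalg.inv(M) @ mD.T

# Illustrative one-stage example: Dirac masses around the electroweak
# scale and a heavy Majorana scale of 1e14 GeV (diagonal for simplicity).
mD = np.diag([1.0, 10.0, 100.0])
M = np.diag([1e14, 1e14, 1e14])
m_light = np.abs(np.diag(seesaw_light_masses(mD, M))) * 1e9   # GeV -> eV
print(m_light)   # ~ [1e-5, 1e-3, 1e-1] eV
```

With off-diagonal entries in mD the same formula also generates the light-state mixing discussed in the text; running it twice, once per block of M_ν, mimics the two-stage structure of models A and B.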
High-energy neutrino oscillation
In the (3+3) model, the vacuum oscillation probability for the process ν_α → ν_β is given as

P(ν_α → ν_β) = |Σ_i U*_αi U_βi exp(−i m²_i L / 2E_ν)|²,

where i, j = 1 to 6, and we have 15 different Δm²_ij = m²_i − m²_j for non-zero and non-degenerate masses. For given Δm², the oscillation probability depends on the neutrino energy E_ν and the propagation distance (baseline) L. Because CP violation in the neutrino sector has not been observed yet, we take all the phases to be zero; this makes the U matrix real and simplifies the oscillation probability to the form

P(ν_α → ν_β) = δ_αβ − 4 Σ_{i>j} U_αi U_βi U_αj U_βj sin²(π L / L_osc),

where L_osc = 4π E_ν / Δm²_ij is the oscillation length. The maximum flavor conversion in vacuum takes place when L = L_osc/2. If L ≫ L_osc, the oscillations are very rapid and the oscillating term averages to 1/2. In this case the oscillation probability depends neither on the neutrino energy E_ν nor on the distance L from the source. On the other hand, if L ≪ L_osc, the baseline is too short for neutrinos to oscillate.
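The simplified real-matrix probability can be evaluated numerically. The sketch below uses the standard engineering form of the oscillation phase, 1.267 Δm²[eV²] L[km]/E[GeV], and checks the result against the familiar two-flavor limit; the sample Δm² and baseline are illustrative, not values from the text.

```python
import numpy as np

def osc_prob(U, m2, L_km, E_GeV, alpha, beta):
    """Vacuum oscillation probability P(nu_alpha -> nu_beta) for a real
    mixing matrix U (all CP phases set to zero, as in the text).
    m2: mass-squared values in eV^2; L in km; E in GeV."""
    P = 1.0 if alpha == beta else 0.0
    n = U.shape[0]
    for i in range(n):
        for j in range(i):
            dm2 = m2[i] - m2[j]
            phase = 1.267 * dm2 * L_km / E_GeV   # = dm2 * L / (4 E) in natural units
            P -= 4 * U[alpha, i] * U[beta, i] * U[alpha, j] * U[beta, j] * np.sin(phase) ** 2
    return P

# Two-flavor check: maximal mixing gives P_survival = 1 - sin^2(phase).
theta = np.pi / 4
U2 = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])
m2 = [0.0, 2.5e-3]
p_surv = osc_prob(U2, m2, L_km=500, E_GeV=1.0, alpha=0, beta=0)
print(round(p_surv + osc_prob(U2, m2, 500, 1.0, 0, 1), 6))  # unitarity: sums to 1
```

The same function applies unchanged to a 6 × 6 real mixing matrix, where the double loop runs over all 15 mass-squared splittings.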
In order to explain the solar and atmospheric neutrino oscillation data, we take Δm²₂₁ and Δm²₃₁ as given in Eq. (1), together with their corresponding mixing angles. To explain the SBL anomaly we adopt the (3+1) model [51]. In the (3+1) scenario, the neutrino masses consist of three active neutrinos with masses m₁, m₂, and m₃, which accommodate the observed solar and atmospheric oscillations, and a sterile state with mass m_j (j = 4 or 5), separated from the active states by Δm²_j1 ∼ 1 eV² ≫ Δm²₂₁, Δm²₃₁. The small squared-mass differences Δm²₂₁ and Δm²₃₁, which are responsible for solar and atmospheric neutrino oscillations, respectively, have negligible effects in SBL oscillations. On the other hand, due to the large Δm²_j1 and small active-sterile mixing, the effects of the sterile neutrino on the solar neutrino oscillation and conventional atmospheric neutrino oscillation (E_ν ∼ GeV) are also negligible. However, the new large mass-squared difference Δm²_j1 ∼ 1 eV² induces an active-sterile oscillation at short baselines ∼30 m for neutrinos with an energy in the range 20 MeV < E_ν < 200 MeV, which is invoked to interpret the SBL anomaly [10]. In order to explain the depletion of the high-energy neutrino flux in IceCube, we assume that the sterile neutrino |ν₄⟩ or |ν₅⟩ with mass m₄ or m₅, which does not participate in the SBL oscillation, is almost degenerate in mass with |ν₁⟩ or |ν₂⟩; in the following section we estimate the value Δm² ≈ 4.0 × 10⁻¹⁶ eV² for maximum flavor conversion on Earth. This gives many possibilities for Δm² to be considered, and we take many of them into account in our analysis, as shown in Table 1. A sterile neutrino with a mass of (1-10) keV is a viable candidate for dark matter [52], can explain the pulsar kicks [53], and can also play a role in other astrophysical phenomena.
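A splitting of order Δm² ∼ 4.0 × 10⁻¹⁶ eV² follows from the maximal-conversion condition L = L_osc/2, i.e. 1.267 Δm² L/E = π/2. The source distance (∼100 Mpc) and neutrino energy (∼1 PeV) used below are illustrative assumptions for typical GRB parameters, not values taken from the paper.

```python
import numpy as np

MPC_IN_KM = 3.086e19  # one megaparsec in km

def dm2_for_max_conversion(L_km, E_GeV):
    """Mass-squared splitting (eV^2) for which the baseline L equals half
    an oscillation length: 1.267 * dm2 * L / E = pi / 2."""
    return (np.pi / 2) / 1.267 * E_GeV / L_km

# Illustrative GRB parameters: a source at ~100 Mpc observed at ~1 PeV.
dm2 = dm2_for_max_conversion(100 * MPC_IN_KM, 1.0e6)
print(f"{dm2:.1e} eV^2")   # ~ 4.0e-16 eV^2
```

The linear scaling dm2 ∝ E/L makes clear why only an almost-degenerate active-sterile pair can oscillate maximally over cosmological baselines at these energies.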
Finally, to be able to explain the DM problem, we assume that the sixth neutrino mass eigenstate has mass m_6 ≃ 1 keV and is almost decoupled from the rest of the neutrinos, both active and sterile.
The mixing matrix
The matrix U in Eq. (2) is a unitary 6 × 6 matrix and in general can be parameterized by 15 real angles and 10 Dirac phases entering directly in the mixing matrix. The remaining five phases enter as a diagonal matrix and sit outside the mixing matrix. The only mixing angles which have been measured experimentally are θ_12, θ_23, and θ_13. In discussing physics beyond the Standard Model, one has to incorporate these measured parameters in any analysis of oscillations involving sterile neutrinos. Models involving one (3+1), two (3+2), and three (3+3) [43,51,[54][55][56][57][58][59][60]] sterile neutrinos have been proposed to explain the discussed discrepancies, and many simple parametrizations of the matrix U have been used [56,61].
In order to address the aforementioned problems and at the same time accommodate the existing data on the observed oscillations between the active neutrinos, we propose a 6 × 6 form for the extended PMNS matrix U, given in Eq. (13). As discussed in the previous section, we take all the phases to be zero, which makes the U matrix real, and we then vary all 15 mixing angles, keeping the U matrix unitary and subject to the constraints given by different observations, as discussed below. Notice that the first 3 × 3 block-diagonal entries are responsible for explaining the solar, atmospheric, and reactor neutrino data, and all the mixing matrix elements in this diagonal block are compatible with the constraints given by the experiments. The active (ν_e, ν_μ, and ν_τ) content of the three additional mass eigenstates has to be small, which is shown in the first 3 × 3 off-diagonal block in Eq. (13). Unitarity of U constrains the remaining matrix elements, which we quantify through X_i = Σ_{α=e,μ,τ} |U_αi|² for each i = 4-6 and, similarly, X_α = Σ_{i=4-6} |U_αi|² for each α = e, μ, τ. Our extended matrix gives 0.04 ≤ X_i ≤ 0.13 for i = 4-6 and 0.04 ≤ X_α ≤ 0.13 for α = e, μ, τ.
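These unitarity bookkeeping checks are easy to make concrete. Since the matrix of Eq. (13) is not reproduced here, the sketch below builds a toy real-orthogonal 6 × 6 U from three illustrative active-sterile rotations of 0.25 rad (an assumption for demonstration only) and computes the active content X_i of the extra states:

```python
import numpy as np

def sterile_active_content(U):
    """X_i = sum over alpha = e, mu, tau of |U_{alpha i}|^2 for i = 4..6,
    i.e. the active content of each mostly sterile mass eigenstate."""
    U = np.asarray(U)
    assert np.allclose(U @ U.T, np.eye(6), atol=1e-10), "U must be real orthogonal"
    return np.sum(U[:3, 3:] ** 2, axis=0)

def rot(n, i, j, theta):
    """Rotation by theta in the (i, j) plane of an n-dimensional space."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

# Toy example, NOT the matrix of Eq. (13): three commuting rotations that
# mix (nu_e, nu_4), (nu_mu, nu_5), (nu_tau, nu_6) by 0.25 rad each.
U_toy = rot(6, 0, 3, 0.25) @ rot(6, 1, 4, 0.25) @ rot(6, 2, 5, 0.25)
X = sterile_active_content(U_toy)
```

With this choice each X_i equals sin²(0.25) ≈ 0.06, inside the 0.04-0.13 window quoted in the text; any candidate U can be screened the same way.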
To further tighten the constraint on the active-sterile mixing we can use the effective neutrino mass in β-decay experiments, which is given by

m_e = (Σ_i |U_ei|² m_i²)^{1/2}.

This contribution gives the distortion of the electron energy spectrum due to the non-zero neutrino mass and mixing, and the current bound on this parameter is m_e ≤ 2.2 eV [62]. Similarly, in neutrinoless double beta-decay experiments the effective neutrino mass parameter is given by

m_ee = |Σ_i U²_ei m_i|.

The current bound on this parameter is m_ee < 0.26 eV [63,64]. In our analysis, we have three different mass scales: one scale is in the sub-eV range and can even be smaller, making m_1 degenerate with m_2; another scale is of the order of eV, corresponding to either m_4 or m_5; the third one is the keV scale, corresponding to m_6. The effective neutrino mass parameter in both experiments receives its contribution mainly from the keV and eV mass eigenstates, and to satisfy these constraints we must have |U_e6| ≤ 10⁻³. Our extended U matrix satisfies this requirement, i.e., m_e ≤ 1.05 eV and m_ee ≤ 10⁻³ eV. Also, to preserve the well-known mixing between the active neutrinos, the mixing between the active and the sterile neutrinos is required to be small. Thus the flavor mixing matrix elements of the active neutrinos in U (first diagonal block of Eq. (13)) constrain the remaining mixing elements between the active and the sterile neutrinos to be small but sizable, translating into the corresponding mixing angles being a few degrees at most (θ_ij ≤ 15° for i, j from 4 to 6).
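Both effective masses are one-line sums. The sketch below (with illustrative numbers chosen here, not the paper's fitted values) shows how a keV-scale m_6 with |U_e6| = 10⁻³ pushes m_e to about 1 eV while leaving m_ee far below the 0.26 eV bound:

```python
import numpy as np

def m_beta(Ue, m):
    """Effective mass in beta decay: sqrt( sum_i |U_ei|^2 m_i^2 )."""
    return np.sqrt(np.sum(np.abs(Ue) ** 2 * m ** 2))

def m_betabeta(Ue, m):
    """Effective mass in 0vbb decay: | sum_i U_ei^2 m_i | (real U, zero phases)."""
    return abs(np.sum(Ue ** 2 * m))

# Illustrative inputs (assumptions, not the paper's fit): sub-eV m_1..m_3,
# one ~eV sterile, one near-degenerate sterile, and a keV-scale m_6 with
# |U_e6| = 1e-3 so the keV state does not violate the bounds.
Ue = np.array([0.82, 0.55, 0.15, 0.10, 0.05, 1e-3])
m = np.array([0.01, 0.012, 0.05, 1.0, 0.012, 1000.0])  # eV
```

With these numbers the keV state contributes |U_e6|² m_6² = 1 eV² to m_e², giving m_e ≈ 1.0 eV, close to the m_e ≤ 1.05 eV quoted in the text for the extended matrix.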
High-energy astrophysical neutrinos
It is believed that GRBs, at distances of about 100 Mpc from us, are the sources of UHECRs with energies above 10¹⁸ eV [17][18][19]. In the fireball scenario of GRB emission [20,65], protons are Fermi accelerated to ultra-high energies and probably constitute part of the UHECRs that we observe on Earth.
The deep inelastic collision of these high-energy protons with the expanding shock wave as well as with the surrounding background can produce charged and neutral pions. While the decay of a neutral pion can give high-energy gamma rays, the decay of charged pions will produce high-energy neutrinos. So there is some correlation among the UHECRs, high-energy gamma rays, and high-energy neutrinos.
The conventional wisdom is that at the source the flux ratio is Φ⁰_νe : Φ⁰_νμ : Φ⁰_ντ = 1 : 2 : 0 (Φ⁰_να is the sum of the neutrino and anti-neutrino fluxes for flavor α at the source) due to the decay of charged pions. The vacuum oscillation of these neutrinos on their way to Earth would average to the observed ratio (1 : 1 : 1) [66]. For high-energy neutrinos above ∼1 PeV, the parent muon's energy is degraded in the strong magnetic field, or the muon is absorbed in the stellar medium. So the high-energy muon neutrinos from muon decay will be absent and the flux ratio at the source is modified to (0 : 1 : 0) [67][68][69]. This will be further modified to (1 : 1.8 : 1.8) at Earth after vacuum oscillation [70].
Neutron beta decay will also contribute to the neutrino flux ratio. Being neutral, neutrons cannot be accelerated directly by the GRB jet, so these neutrons have to be produced as secondaries. Around the GRB environment, high-energy neutrons can be produced through the following channels: interaction of Fermi-accelerated high-energy protons in the GRB jet with the ambient hydrogen (pp), dissociation of accelerated ions (A) by collisions with the ambient hydrogen (Ap), interaction of high-energy protons with the ambient photons (pγ), and photodissociation of accelerated ions (Aγ) [71]. These high-energy secondary neutrons will decay in flight and produce ν̄_e, which gives a flux ratio (1 : 0 : 0) [67][68][69]. However, these scenarios have at least one shortcoming: in the GRB environment, high-energy pions are produced along with these neutrons. The high-energy charged pions will decay to high-energy neutrinos whose energy will be an order of magnitude higher than that of the ν̄_e produced in neutron beta decay. Also, the neutrino flux from pion decay will be higher than the one from neutron decay. So in an astrophysical environment, a pure neutron source with the flux ratio (1 : 0 : 0) is highly unrealistic.
The GRB neutrinos travel distances of order ∼100 Mpc and neutrino fluxes from these GRBs at different redshifts will be averaged, leading to the averaging of the oscillations. So regardless of their initial flavor content, the flux ratio will be (1 : 1 : 1), which is one form of decoherence [72]. It should also be noted that quantum decoherence will give rise to the same flux ratio [73].
Based on the observed flux of UHECRs, Waxman and Bahcall estimated the neutrino flux to be E²_ν dN_ν/dE_ν ∼ 5 × 10⁻⁹ GeV cm⁻² s⁻¹ sr⁻¹ in the energy range ∼100 TeV-10 PeV [24]. For GRBs at a redshift of z ∼ 1 and L ∼ 100 Mpc, with neutrino energies 100 TeV ≤ E_ν ≤ 10 PeV, the maximum flavor conversion will take place for

Δm² = 2πE_ν/L ≃ 4.0 × 10⁻¹⁷ eV² (E_ν/100 TeV)(100 Mpc/L).

In other words, the high-energy GRB neutrinos cannot probe a mass-squared difference smaller than Δm² ≃ 4.0 × 10⁻¹⁷ eV². For our estimate of the neutrino flux we will use this result for the maximum conversion of neutrinos in the IceCube detector. The oscillation lengths for the standard neutrinos, as well as for neutrinos satisfying Δm² ∼ 1 eV² and Δm² ∼ 1 keV², are very short compared to the astrophysical distances, which corresponds to the condition L ≫ L_osc; the oscillation probability is averaged in these cases and becomes independent of the neutrino energy and the distance from the source. For our analysis here we consider the neutrino energy E_ν = 1 PeV, which corresponds to Δm² ≃ 4.0 × 10⁻¹⁶ eV² for maximum flavor conversion at Earth, and for this case we replace the oscillatory factor in Eq. (12) by unity.
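The maximum-conversion condition (half an oscillation length, so Δm² = 2πE_ν/L in natural units) can be checked numerically; this is a unit-conversion sketch, not code from the paper:

```python
import math

HBARC_EV_M = 1.973269804e-7  # hbar * c in eV * m
MPC_M = 3.0857e22            # metres per Mpc

def dm2_max_conversion(E_eV, L_m):
    """Smallest Delta m^2 [eV^2] probed at baseline L and energy E:
    maximum conversion requires L = L_osc/2 with L_osc = 4*pi*E/dm2,
    i.e. dm2 = 2*pi*E/L (hbar*c restores the units)."""
    return 2.0 * math.pi * E_eV * HBARC_EV_M / L_m

dm2_100TeV = dm2_max_conversion(1e14, 100 * MPC_M)  # ~4.0e-17 eV^2
dm2_1PeV = dm2_max_conversion(1e15, 100 * MPC_M)    # ~4.0e-16 eV^2
```

Both outputs reproduce the values quoted in the text for L = 100 Mpc at 100 TeV and 1 PeV.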
For the treatment of the neutrino oscillation in Eq. (12), we have neglected the matter effects for GRBs as well as the Earth. The reasons are twofold.
(1) The region of the GRB fireball where these high-energy neutrinos are produced has a very low matter density, which makes the matter potential contribution negligible.
(2) For Δm² ≃ 4.0 × 10⁻¹⁶ eV², the average potential experienced by a PeV neutrino inside the Earth satisfies √2 G_F n_e ≫ Δm²/2E_ν, so the effective mixing in matter is strongly suppressed and no conversion develops over an Earth-sized path. Thus the Earth's matter also has a negligible effect on the neutrino oscillation.
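The hierarchy between the matter potential and the vacuum oscillation frequency is enormous, as a rough estimate shows (the Earth-average density and electron fraction below are round numbers assumed for illustration):

```python
# Earth matter potential vs. vacuum term dm2/(2E) for a PeV neutrino.
# Standard result: V_CC = sqrt(2) G_F n_e = 7.63e-14 eV per unit of
# rho * Y_e in g/cm^3.
rho_Ye = 2.75                  # rough Earth-average rho * Y_e (assumed)
V_earth = 7.63e-14 * rho_Ye    # matter potential, eV
dm2 = 4.0e-16                  # eV^2
E_nu = 1.0e15                  # eV (1 PeV)
vac_freq = dm2 / (2.0 * E_nu)  # vacuum oscillation frequency, eV
ratio = V_earth / vac_freq     # ~1e18: matter term utterly dominates
```

With the matter term some eighteen orders of magnitude larger, the effective mixing angle in matter (suppressed roughly by vac_freq/V_earth) is vanishingly small for this tiny Δm², consistent with neglecting the Earth matter effect.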
Results and discussion
Given the lack of detailed knowledge of the region of the GRB fireball, and of the region surrounding it, where the high-energy neutrinos are produced, as discussed in the previous section we consider three different flux ratios at the source: the conventional one, (1 : 2 : 0), the muon-damped source, (0 : 1 : 0), and the beta beam, (1 : 0 : 0). First of all, it is unclear which of these source flux ratios is realized, and this directly affects the flux determination on Earth; secondly, there is also uncertainty in the elements of the U matrix and in a number of other astrophysical factors: the shape of the neutrino spectra depends on the primary cosmic-ray energy spectrum and on the target material, and at very high energies the semileptonic decay of charm quarks will give rise to extra neutrinos. In our analysis, we neglect the last two uncertainties in calculating the flux ratio on Earth.
After traveling a distance L, the neutrino flux of a given flavor at Earth is given by

Φ_να = Σ_β P(ν_β → ν_α) Φ⁰_νβ.

The condition L ≫ L_osc is satisfied for the standard neutrinos as well as for neutrinos satisfying Δm² ∼ 1 eV² and Δm² ∼ 1 keV². For all these cases the oscillatory term in Eq. (12) is replaced by a factor of 1/2. For neutrinos traveling distances beyond ∼100 Mpc, averaging the sources over redshift will give an average flux even for very small Δm², due to incoherent flavor mixing. We keep the sixth neutrino mass m_6 = 1 keV fixed throughout the calculation. Table 1 summarizes our findings. We have considered six different possibilities that give a sizable very high-energy neutrino flux depletion in IceCube. In the first four cases we have taken either m_4 or m_5 ∼ 1 eV. For the remaining cases both m_4 and m_5 are almost degenerate with either m_1 or m_2, but not both. In these last two cases we do not have a ∼1 eV neutrino mass and are thus unable to explain the SBL anomaly.
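When the oscillations are fully averaged (L ≫ L_osc), the transition probability reduces to P̄(ν_β → ν_α) = Σ_i |U_αi|² |U_βi|², so the Earth flux is a simple matrix product. The sketch below (an illustration with a real 3-ν PMNS matrix and best-fit-like angles, assumed here, not the extended 6 × 6 matrix of the paper) recovers the familiar (1 : 2 : 0) → ≈(1 : 1 : 1) result:

```python
import numpy as np

def pmns_real(th12, th13, th23):
    """Real (zero-phase) PMNS matrix in the standard parameterization."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    return np.array([
        [c12 * c13, s12 * c13, s13],
        [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13, s23 * c13],
        [s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13],
    ])

def averaged_flux(U, flux0):
    """Phi_alpha = sum_beta Pbar(beta -> alpha) Phi0_beta, with the fully
    averaged probability Pbar = sum_i |U_ai|^2 |U_bi|^2 (L >> L_osc)."""
    A = np.abs(U) ** 2
    return (A @ A.T) @ flux0

U3 = pmns_real(np.radians(33.4), np.radians(8.6), np.radians(45.0))
flux_earth = averaged_flux(U3, np.array([1.0, 2.0, 0.0]) / 3.0)
```

Swapping in any unitary 6 × 6 matrix and a six-component initial flux reproduces the averaged fluxes entering Table 1 for the active flavors.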
As shown in Table 1, for the initial neutrino flux ratio (1 : 2 : 0), we observe that Φ_νe > Φ_νμ, Φ_ντ is always satisfied. For the mass degeneracy involving m_1, the electron neutrino flux Φ_νe is always enhanced from its vacuum value, by 6-8 %. At the same time Φ_νμ decreases by 24-28 % and Φ_ντ by 26 %. On the other hand, for the mass degeneracy involving m_2, while Φ_νe is decreased by 1-6 %, Φ_νμ is decreased substantially, by 28-38 %, and Φ_ντ is decreased by 11-20 %. The substantial depletion of Φ_νμ and Φ_ντ in the (1 : 2 : 0) scenario is due to the sizable mixing of ν_μ and ν_τ with the sterile neutrinos, which can be seen from the mixing matrix given in Eq. (13). For the initial flux ratio (0 : 1 : 0), as shown in Table 1, the flux observed on Earth is almost identical for mass degeneracies involving m_1 (possibilities I and II). Similarly, it is almost identical for degeneracies involving m_2 (possibilities III and IV). In all these cases we find Φ_ντ > Φ_νμ > Φ_νe, with values still lower than the vacuum oscillation value of 1/3. In the last two cases (possibilities V and VI), the sterile neutrinos with masses m_4 and m_5 are taken to be degenerate with either m_1 or m_2. We see that the muon neutrino flux is degraded substantially, by as much as 38 %. In general, our results show that there is a marked decrease in the muon neutrino flux.
For the beta beam flux ratio (1 : 0 : 0), as shown in the fifth column of Table 1, we observe that Φ_νe > Φ_νμ > Φ_ντ is always satisfied; in all these cases, while the tau neutrino flux is heavily suppressed (36-43 %), Φ_νe is dramatically increased, by as much as 45-58 %. The Φ_νμ is depleted by 9-34 %. We observe a substantial depletion of the muon neutrino flux in all three scenarios, which should be clearly measurable by IceCube. Without coherence, the depletion in flux due to very small Δm² will be smaller.
IceCube can isolate the muon neutrino events from the rest through the track and shower events. The most probable signature of the sterile neutrinos is the depletion of the muon neutrino flux due to its mixing with them. In all three cases, (1 : 2 : 0), (1 : 0 : 0), and (0 : 1 : 0), we obtain a depletion in the muon neutrino flux as well as in the tau neutrino flux. It has also been argued that, due to active-sterile flavor mixing, there will be an excess of electron neutrinos with a particular energy and zenith-angle dependence [74]. For the conventional flux ratio (1 : 2 : 0) and the beta beam flux ratio (1 : 0 : 0) we do get an excess of the electron neutrino flux on Earth. In the conventional scenario this excess in Φ_νe is due to the mixing of neutrinos of mass m_1 with the steriles of mass m_4 and m_5, which is absent in the muon-damped scenario (0 : 1 : 0). It is to be noted, however, that in the beta beam scenario the enhancement of Φ_νe can be very high (between 45 and 58 %) for all active-sterile mixings. As argued previously, this scenario is highly unrealistic.
For SBL neutrino oscillations, no mass-squared difference other than the Δm² ∼ 1 eV² term will contribute, because sin²(πL/L_osc) in Eq. (12) is very small for the others. In the standard scenario of three active neutrinos, the ν_μ(ν̄_μ) → ν_e(ν̄_e) oscillation is almost zero, whereas in the present scenario we have a non-zero contribution coming from the mixing of the light sterile neutrinos (with mass m_4 and/or m_5) with the active neutrinos. The oscillatory term with Δm²_6i L/4E_ν ≫ 1 for i = 1-5 (L ∼ 30 m and E_ν ∼ 100 MeV) would, in principle, contribute to the SBL anomaly, but the mixing of the sixth neutrino is vanishingly small, resulting in a negligible contribution. Thus the sixth neutrino decouples from the rest and serves as the non-baryonic dark matter of the Universe [14,15]. There are also other explanations for the non-observation of these high-energy neutrinos in IceCube, where it is argued that GRBs may not be the source of high-energy cosmic rays; in that case, there will be no neutrinos. Another explanation is that the GRB fireball calculations of the neutrino flux are subject to sufficiently large astrophysical ambiguities to evade the IceCube limit [29].
Interaction-Aware Short-Term Marine Vessel Trajectory Prediction With Deep Generative Models
Navigation safety is of paramount importance in areas with heavy and complex maritime traffic. Any ship navigating such a scenario should be able to foresee the future positions of other ships and adjust its path accordingly to avoid collisions. However, predicting future trajectories is a very challenging problem, owing to the many possible future trajectories that arise from inherent uncertainty and from the complex interaction dynamics between ships. In this article, we propose a deep generative model based on the conditional variational autoencoder framework to learn marine vessel movement and predict future trajectories. The model is able to produce a multimodal probability distribution over future trajectories and to model the complex interactions between vessels. Experiments are performed on two-vessel encounter scenarios from real-world automatic identification system data. The proposed model outperforms the baseline methods, including both kinematics-based and data-driven methods. The trajectories predicted by the proposed model are also analyzed to demonstrate the effectiveness of the model.
Over the past few decades, intelligent marine transportation systems have received increasing attention from the maritime industry. The development of digital twins [1], remotely operated ships, and autonomous ships is within such a context. They are expected to increase the efficiency of maritime transport while reducing fuel consumption and extending the operating window [2]. Understanding vessel motion is a key skill of intelligent systems, which includes predicting the future trajectories of other traffic vessels. This prediction enables a range of downstream tasks, such as predictive planning, model predictive control, and collision avoidance.
The challenge of accurate trajectory prediction for marine vessels arises from the complexity of human behavior and the diversity of its internal and external stimuli [3]. The future behavior of a marine vessel may be driven by its intent, the interaction with surrounding vessels, the environment, and traffic rules. Most factors cannot be directly observed and need to be inferred from noisy perceptual cues. Traditionally, the constant velocity model is used for vessel trajectory prediction, and the future position is simply extrapolated from the current velocity and course. More advanced model-based approaches involve a Bayesian filter to estimate the acceleration or turning rate [4] and then assume these parameters remain constant. These methods have evident difficulty in modeling the intent of the vessels, as well as the other stimuli, and thus often lead to high prediction errors in the real world.
Pattern-based methods, especially machine-learning methods, are able to address the aforementioned complexities involved in trajectory prediction by learning from historical data. These methods learn motion behavior by fitting different function approximators to the data, ranging from hidden Markov models to, more recently, deep neural networks. Learning such models requires recorded historical traffic data. For marine vessels, the automatic identification system (AIS) is used, an automatic tracking system that uses transceivers on the vessel and is used by vessel traffic services. Information provided by AIS includes unique identification, position, course, speed, etc. Its main purpose is to allow ships to view marine traffic in their area and to be seen by that traffic. The historical AIS data are also saved and are usually publicly accessible from different organizations, such as coastal administrations. Thus, these AIS data form a rich dataset for analyzing the behavior and traffic of marine vessels. They can also be used for learning trajectory prediction models, and different machine-learning models have been applied to AIS data [5], [6].
However, unlike in human motion prediction [3], these models for marine vessels focus on long-term predictions (up to several hours) and thus do not take into account the inherent uncertainties of the predictions or the interactions between ships. This may be because long-term trajectory prediction mainly depends on the ship's destination and route: it is largely deterministic when the destination and the route are known. This is not the case for short-term prediction (up to several minutes), as it is heavily influenced by these two factors. For example, in the case of an encountering ship, as shown in Fig. 1, whether the ship gives way or passes directly depends on the behavior of the encountering ship. Also, an aggressive captain might only alter the course slightly in a give-way situation, while a conservative captain would alter the course much more, which results in the inherent (aleatoric) uncertainty of the predictions. Therefore, it is important to consider ship interactions and prediction uncertainty for short-term ship trajectory prediction. Note that uncertainty can be divided into aleatoric and epistemic uncertainty. Uncertainty in the predicted trajectory comes from unknown stimuli, such as the captain's intentions, and cannot be removed by collecting more data; therefore, it is regarded as aleatoric uncertainty. To model this uncertainty, predicted trajectories are treated as distributions and modeled by a variational autoencoder.
In this article, we propose a novel model based on deep neural networks for short-term trajectory prediction of marine vessels. In particular, we approximate the predicted aleatoric uncertainty with a deep generative model: the conditional variational autoencoder (CVAE). It includes a latent random vector to represent the uncertainty, and multiple future trajectories can be generated by sampling from this latent vector. The model is implemented in a sequence-to-sequence (seq2seq) manner with the use of a recurrent neural network (RNN) to better handle the sequential data. The interaction between the vessels is encoded as context for the CVAE model. The performance of the model is demonstrated on two-vessel encounter scenarios from AIS data. Although the CVAE has been applied to the trajectory prediction of people and vehicles, this is the first time it is applied to marine vessels. The main contributions of this article can be highlighted as follows.
1) A novel model is developed for short-term trajectory prediction of marine vessels, which includes the prediction uncertainty and the interaction between vessels.
2) Extensive experiments are performed to validate the model, and detailed analyses of the future trajectory patterns generated by the model are conducted.
The CVAE is a deep generative model based on autoencoders. Other generative models, such as generative adversarial networks [7], lead to min-max optimization problems that are known to be unstable to train. The main focus of vanilla autoencoders is usually to compress data for downstream tasks, as shown in [8]. The CVAE, although it uses an autoencoder architecture, focuses instead on modeling a distribution. The rest of this article is organized as follows. Section II presents the literature review on trajectory prediction. The proposed prediction model is described in Section III. In Section IV, experiments are conducted with AIS data to validate the model, and the results are shown and discussed. Finally, Section V concludes this article.
A. Human and Vehicle Trajectory Prediction
Trajectory prediction for humans and vehicles has been extensively studied in recent years, especially in the application domains of autonomous vehicles and service robots. Learning-based methods are one of the modeling approaches that have made promising progress recently. In particular, RNNs for sequence learning have become a widely popular modeling approach in this context [9], [10], [11]. Altche and de la Fortelle [10] use a long short-term memory (LSTM) network for highway trajectory prediction. Similarly, Park et al. [11] also use an LSTM network, but with an encoder-decoder structure to generate the future trajectory sequence in highway traffic scenarios. This kind of model is also called a seq2seq model. However, these methods only produce a single deterministic trajectory output, thus failing to capture the uncertainty inherent in the prediction process. Predicting multiple future trajectories, or the distribution of possible future outcomes, is critical for safety-critical systems, as it allows reasoning over many possible futures to guard against worst-case scenarios. In [9], the future position is assumed to follow a Gaussian distribution to represent the distribution of possible future outcomes. This method is simple but not able to account for multimodal future distributions. In such a context, a popular approach is to use deep generative models that represent the future trajectories implicitly through latent variables. Gupta et al. [12] leverage generative adversarial networks to capture the future distribution. The model consists of a generator and a discriminator network; the generator outputs trajectory samples, which are then evaluated by the discriminator. Rhinehart et al. [13] use a flow-based generative model, while Ivanovic et al. [14] use the CVAE framework. These generative models show promising results in generating multimodal distributions of future trajectories. The interaction between different agents also has a non-negligible effect on future trajectories. Many approaches attempt to aggregate information from neighboring agents. Alahi et al. [9] model the interaction of pedestrians by sharing the hidden state of each individual RNN using a pooling mechanism. Salzmann et al. [15] represent the scene as a graph and aggregate the information from neighboring agents by an elementwise sum. In addition to the above two factors, further stimuli for trajectory prediction, such as the target destination [16], can also be included.
B. Trajectory Prediction for Marine Vessels
In the maritime domain, the term trajectory prediction is used not only for the traffic vessel but also for the controlled vessel, as in [17]. This is due to the inaccurate dynamic model and the uncertain environmental effects [18] on the controlled vessel. The major difference is whether the control command or future plan is available. In this article, we focus on the traffic vessel, since neither the control commands nor the future plan is accessible. Unlike human and vehicle trajectory prediction, the prediction horizon for marine vessels is usually much longer (on the order of several minutes). Also, unlike for vehicles, no lanes are designated. Learning-based approaches have received increasing attention in recent years. Gao et al. [19] apply a similarity-based method to determine the destination point of the vessel from historical data and use an LSTM network to generate multiple support points toward the destination point. The predicted trajectory is the cubic interpolation of the support and destination points. Capobianco et al. [6] developed a seq2seq model, where the encoder is a bidirectional LSTM network and the decoder is a unidirectional LSTM network; an attention mechanism between the encoder and decoder is utilized. Murray and Perera [5] developed a two-step approach: the historical AIS trajectories are first clustered using a clustering method, and a local prediction model is built for each cluster. The local prediction model is similar to [6], i.e., a seq2seq LSTM network with an attention mechanism. Nguyen and Fablet [20] learn a prediction model based on the transformer architecture; instead of learning a regression model, they discretize positions into bins and learn a classification model, and multistep prediction is made by applying this model recursively. The majority of the research on ship trajectory prediction focuses on finding a suitable network architecture, either for long-term or short-term prediction. Liu et al. [21] apply a graph convolutional neural network to aggregate the information from surrounding vessels, and the future position of the vessel is assumed to follow a Gaussian distribution. As a result, all of these works except Liu et al. [21] use a deterministic approach that does not consider the inherent uncertainty of the prediction; moreover, multimodal distributions of future trajectories are not considered in [21]. Besides, most of them do not try to capture the interaction between different vessels, which can be non-negligible in short-term prediction. In this article, we propose a model that includes both the prediction uncertainty and the vessel interaction, and we emphasize their importance.
III. GENERATIVE MODEL FOR VESSEL TRAJECTORY PREDICTION
In this section, a general CVAE model and the gated recurrent unit (GRU) are described, and we apply them in the context of vessel trajectory prediction. Then, the core characteristics of the proposed CVAE trajectory prediction model are illustrated.
A. Conditional Variational Autoencoder
Given a dataset D = {(x_i, y_i)}_{i=1}^N, conditional generative modeling tries to fit a model of the conditional probability distribution p(y|x). Once fit, the model can be used to generate samples y given x, which can be used to represent uncertainty for trajectory prediction. In other words, the aleatoric uncertainty is modeled as a conditional probability distribution p(y|x). In this article, we consider p(y|x) to be defined by a fixed set of parameters, which we fit to the dataset with the objective of maximizing the likelihood of the observed data. In such a context, neural networks are often used due to their strong expressivity. Commonly used models include the CVAE [22] and the conditional generative adversarial network (CGAN) [23]. We choose the CVAE because the CGAN is harder to train and may suffer from mode collapse.
A CVAE is a latent conditional generative model. The CVAE consists of an encoder q_φ(z|y, x) parameterized by φ and a decoder p_θ(y|z, x) parameterized by θ. The encoder takes the inputs y and x and produces a distribution over the latent vector z, while the decoder uses x and samples of z to produce y. The model can be described by the marginal likelihood

p_θ(y|x) = ∫ p_θ(y|z, x) p(z|x) dz.   (1)

To efficiently perform the marginalization in (1), a proposal distribution q_φ(z|y, x) is used, and the marginal likelihood in (1) becomes

p_θ(y|x) = E_{q_φ(z|y,x)}[ p_θ(y|z, x) p(z|x) / q_φ(z|y, x) ].   (2)

By taking the log of both sides in (2) and using Jensen's inequality, the evidence lower bound (ELBO) is derived as follows:

log p_θ(y|x) ≥ E_{q_φ(z|y,x)}[ log p_θ(y|z, x) ] − D_KL( q_φ(z|y, x) || p(z|x) ),   (3)

where D_KL is the Kullback-Leibler (KL) divergence. Therefore, instead of maximizing the log-likelihood directly, the ELBO is maximized. By using the reparameterization trick [24], the ELBO is tractable and can be optimized via stochastic gradient descent. The negative ELBO is therefore minimized, and the loss for a single training example (x, y) is

L(x, y) = −E_{q_φ(z|y,x)}[ log p_θ(y|z, x) ] + D_KL( q_φ(z|y, x) || p(z|x) ).   (4)

During training, the negative log-likelihood (first term) is modeled as the mean squared error. The distribution p(z|x) is modeled as a standard multivariate Gaussian distribution, p(z|x) ∼ N(0, I). The loss is minimized to find the neural network parameters φ and θ.
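With a Gaussian encoder q_φ(z|y, x) = N(μ, diag(exp(logvar))) and a standard-normal prior, the KL term of the loss has a closed form. A minimal NumPy sketch of the loss and the reparameterization trick (an illustration of the math, not the paper's implementation) is:

```python
import numpy as np

def kl_std_normal(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions:
    0.5 * sum( exp(logvar) + mu^2 - 1 - logvar )."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def cvae_loss(y_true, y_pred, mu, logvar):
    """Negative ELBO: mean squared error (standing in for the negative
    log-likelihood, as in the text) plus the KL regularizer."""
    recon = np.mean((y_true - y_pred) ** 2)
    return recon + kl_std_normal(mu, logvar)

def reparameterize(mu, logvar, rng):
    """z = mu + sigma * eps with eps ~ N(0, I): the reparameterization
    trick that makes the sampling step differentiable in mu and logvar."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
```

Note that the loss is exactly zero only when the reconstruction is perfect and the encoder posterior collapses to the prior; any mismatch in either term increases it.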
B. Gated Recurrent Unit
The GRU [25] is an RNN with a gating mechanism to avoid the vanishing-gradient problem of plain RNNs. It is similar to the LSTM but has fewer parameters. Given sequential data x_{1:T}, the GRU processes the sequence by repeating the following function:

r_t = σ(W_ir x_t + b_ir + W_hr h_{t−1} + b_hr),
z_t = σ(W_iz x_t + b_iz + W_hz h_{t−1} + b_hz),
n_t = tanh(W_in x_t + b_in + r_t ⊙ (W_hn h_{t−1} + b_hn)),
h_t = (1 − z_t) ⊙ n_t + z_t ⊙ h_{t−1},   (5)
where x_t and h_t are the input and hidden state at time t, respectively; r_t, z_t, and n_t are the reset, update, and new gates, respectively; and W and b are the weights and biases of the network. Note that LSTMs could be used instead of GRUs in this article; the GRU was chosen because it can capture long-term dependencies while having fewer parameters than the LSTM.
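The recursion of Eq. (5) is compact enough to write out directly. The NumPy sketch below (an illustration under the common reset/update/new gate ordering, not the authors' code) implements one GRU step and the final-hidden-state encoding used later for past trajectories:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wx, Wh, b):
    """One GRU step of Eq. (5). Wx: (3H, D), Wh: (3H, H), b: (3H,);
    the three H-sized slices are the reset r, update z, and new gate n."""
    H = h.shape[0]
    gx, gh = Wx @ x, Wh @ h
    r = sigmoid(gx[:H] + gh[:H] + b[:H])
    z = sigmoid(gx[H:2 * H] + gh[H:2 * H] + b[H:2 * H])
    n = np.tanh(gx[2 * H:] + r * gh[2 * H:] + b[2 * H:])
    return (1.0 - z) * n + z * h

def gru_encode(xs, Wx, Wh, b):
    """Run the cell over a sequence and return the final hidden state,
    as the interaction history encoder does with a past trajectory."""
    h = np.zeros(Wh.shape[1])
    for x in xs:
        h = gru_cell(x, h, Wx, Wh, b)
    return h
```

Because the candidate n_t is a tanh and h_t is a convex combination of n_t and h_{t−1}, the hidden state stays bounded in (−1, 1) regardless of the input sequence length.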
C. Interaction-Aware Trajectory Prediction
In order to model the complex trajectory prediction process of marine vessels, we are interested in learning a model for future trajectory prediction that satisfies the following desires.
1) The model is history dependent, so that the intent or future trajectory can be predicted from the past trajectory.
2) The interaction between the two encountering vessels is captured.
3) The model is able to generate multiple future trajectories to account for uncertainty in the forecasting process.
The proposed seq2seq CVAE is illustrated in Fig. 2. Here, we denote the ship whose trajectory we want to predict as the target ship, while the other one is called the own ship. Three modules are included in the model: an interaction history encoder, a future trajectory encoder, and a future trajectory decoder. All three modules are parameterized with GRUs.
1) Interaction History Encoder: The interaction history encoder is designed to encode the past information of the target ship and the own ship into vectors. In particular, the trajectories of the target ship and the own ship over the past 3 min are encoded using separate GRUs. The interaction history encoder can be represented by the following function:

h_os = GRU(p_os^{t−δt_1:t}),   h_ts = GRU(p_ts^{t−δt_1:t}),   (6)

where p_os^{t−δt_1:t} and p_ts^{t−δt_1:t} are the positions of the own ship and the target ship from time t − δt_1 to t, respectively. GRU() denotes the GRU network obtained by applying (5) recursively, and thus h_os and h_ts are the final hidden states.
2) Future Trajectory Encoder: The future trajectory encoder outputs the mean and variance of the latent vector z by encoding the future trajectory, conditioned on the outputs of the interaction history encoder:

h^f_ts = GRU(p^ts_{t:t+δt_2}),  μ, σ = Linear(h^f_ts, h_ts, h_os)    (7)

where Linear() is a linear mapping and h^f_ts is the GRU encoding of the future trajectory of the target ship.
3) Future Trajectory Decoder: The future trajectory decoder recursively generates future positions, taking the hidden state and the predicted position from the previous time step. The initial hidden state and position are obtained as follows:

h_t = Linear(z, h_ts, h_os),  p̂_t = p_t    (8)

The future trajectory can then be generated by recursively applying

h_{τ+1} = gru(p̂_τ, h_τ),  p̂_{τ+1} = Linear(h_{τ+1})    (9)

where gru() is the same function as (5). In summary, the interaction history encoder is designed to address 1) and 2); that is, the future trajectory depends on the past trajectories of both the own ship and the target ship. To address 3), we learn a conditional probability distribution p(p^ts_{t:t+δt_2} | p^ts_{t−δt_1:t}, p^os_{t−δt_1:t}) based on the CVAE framework. In particular, the future trajectory encoder is used only in the training phase, not in the inference phase. At inference, we instead sample the latent vector z from the multivariate standard Gaussian distribution z ∼ N(0, I). Taking the random vector z together with the conditioning vectors from the target ship and the own ship, the future trajectory decoder generates a future trajectory. Therefore, by repeatedly sampling the latent vector z, we can generate multiple future trajectories that together form a future trajectory distribution. Algorithms 1 and 2 present the pseudocode for training the model and for generating trajectories from it, respectively.
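The CVAE sampling logic just described (a reparameterized z during training, z ∼ N(0, I) at inference, and K decoded trajectories per history) can be sketched as follows. This is an illustrative toy, not the authors' architecture: the dimensions and the linear stand-in for the GRU decoder are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
Z, H, T_fut = 16, 8, 30            # latent dim, encoder dim, future horizon (assumed sizes)

def reparameterize(mu, log_var):
    """Training phase: differentiable sample z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z, h_ts, h_os, W):
    """Toy stand-in for the GRU decoder: maps [z, h_ts, h_os] to T_fut 2-D positions."""
    cond = np.concatenate([z, h_ts, h_os])
    return (W @ cond).reshape(T_fut, 2)

# Pretend encoder outputs and a random toy decoder weight
h_ts, h_os = rng.standard_normal(H), rng.standard_normal(H)
W = rng.standard_normal((T_fut * 2, Z + 2 * H)) * 0.1

# Training phase uses the reparameterization trick on (mu, log_var) from Eq. (7)
mu, log_var = rng.standard_normal(Z), np.zeros(Z)
z_train = reparameterize(mu, log_var)

# Inference phase: draw K latent vectors -> K candidate future trajectories
K = 50
samples = np.stack([decode(rng.standard_normal(Z), h_ts, h_os, W) for _ in range(K)])
print(samples.shape)               # (50, 30, 2)
```

Each draw of z yields a different trajectory from the same history encoding, which is what produces the predictive distribution evaluated later.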
IV. EXPERIMENTS
In this section, we present experiments on AIS data in two-vessel encounter scenarios to show the effectiveness of the proposed model.
A. Dataset
The vessel trajectory data are collected from AIS data. The raw AIS data are retrieved from the database of the Norwegian Coastal Administration (Kystverket). The raw data contain essential information such as longitude and latitude coordinates, speed over ground, course over ground, and true heading, as well as static information such as the maritime mobile service identity (MMSI) and ship length. Since the raw AIS messages are broadcast at different frequencies and contain anomalous and stationary-ship records, the anomalous and stationary data are filtered out, and resampling and downsampling are performed to bring the AIS data to 0.1 Hz.
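A minimal sketch of this preprocessing step with pandas (the column names, the stationary-speed threshold, and the toy data are assumptions; real AIS parsing and anomaly rules are more involved):

```python
import numpy as np
import pandas as pd

# Toy AIS track: irregular 7 s reporting over 5 minutes
raw = pd.DataFrame({
    "timestamp": pd.to_datetime("2019-01-05 10:00:00")
                 + pd.to_timedelta(np.arange(0, 300, 7), unit="s"),
    "lon": np.linspace(10.48, 10.50, 43),
    "lat": np.linspace(59.42, 59.44, 43),
    "sog": np.full(43, 12.0),                       # speed over ground, knots
})

moving = raw[raw["sog"] > 0.5]                      # drop stationary reports
track = (moving.set_index("timestamp")[["lon", "lat"]]
               .resample("10s").mean()              # bring to 0.1 Hz
               .interpolate("linear"))              # fill gaps from irregular broadcasts
print(len(track))                                   # one fix every 10 s
```

Resampling each vessel to a common 0.1 Hz grid is what allows the encoder to consume fixed-length position sequences.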
In this study, we focus on the encounters between ferries crossing from Horten to Moss and merchant vessels navigating from the North Sea toward Oslo or Svelvik. The trajectories of the ferries and the encountered ships are extracted. The data for the whole month of January 2019 are used as the training set, while the first ten days of February 2019 are used as the test set. This results in 173 encounter cases in the training set
Fig. 3. Encountered trajectory data. The ferry travels from Horten to Moss, while the merchant ship travels from south to north. In such scenarios, the ferry is responsible to give way.
TABLE I. DESCRIPTION OF THE ENCOUNTER TYPES IN THE DATASET
and 63 encounter cases in the test set. Each encounter case is approximately 35 min long. The trajectories of the ferry and the merchant ship are shown in Fig. 3. In this article, only longitude and latitude coordinates are used. The goal is to make future trajectory predictions for the ferries only, since the dataset contains only ferries from Horten to Moss and merchant ships traveling from south to north (see Fig. 3). In this case, the ferry is responsible for deciding whether to give way or go straight through according to the Convention on the International Regulations for Preventing Collisions at Sea.
Although more ships can be involved in an encounter and relatively large datasets could easily be collected since AIS data are publicly available, we focus on this small dataset with two-ship interactions because it is labeled and reviewed by experts. This makes it more convenient to analyze whether the model can predict the trajectory under different encounter situations. Future work will include more ships and larger datasets. Note that the dataset contains three types of encounters, as defined in [26]. Table I lists a detailed description of these three encounter types. While Type 2 is similar to Type 3 in trajectory, they differ significantly in how the ships pass.
B. Implementation Details
For the interaction history encoder, future trajectory encoder, and future trajectory decoder, two-layer GRUs with a hidden size of 256 are used. The positions of the vessels are normalized with z-score normalization. Adam with decoupled weight decay regularization (AdamW) is used as the optimizer. A cosine annealing schedule with an initial learning rate of 1 × 10⁻³ is used. The model is trained for 1000 epochs.
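In PyTorch, this setup corresponds to torch.optim.AdamW combined with torch.optim.lr_scheduler.CosineAnnealingLR. The schedule itself reduces to a closed form, sketched below; η_min is assumed to be 0 here, since it is not stated.

```python
import numpy as np

# Cosine annealing over T_max epochs (assumed eta_min = 0):
# lr_t = eta_min + 0.5 * (lr_0 - eta_min) * (1 + cos(pi * t / T_max))
def cosine_lr(t, lr0=1e-3, eta_min=0.0, t_max=1000):
    return eta_min + 0.5 * (lr0 - eta_min) * (1.0 + np.cos(np.pi * t / t_max))

lrs = np.array([cosine_lr(t) for t in range(1001)])
print(lrs[0], lrs[500], lrs[1000])   # full lr at start, half at midpoint, ~0 at the end
```

The smooth decay to (near) zero by epoch 1000 matches the 1000-epoch training budget stated above.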
C. Quantitative Performance
In this part, the performance of the proposed model is evaluated quantitatively. The longitude and latitude coordinates in the dataset are converted to meters for training and testing the proposed model.
1) Baselines: The performance of our model is compared with several baselines.
1) Kinematic model (KM): The model simply extrapolates trajectories under the assumption of constant speed and course direction.
2) seq2seq model: The seq2seq model follows an encoder-decoder structure. The encoder encodes the past trajectory, while the decoder generates the future trajectory; both are parameterized by RNNs. Note that even though this model does not consider the interaction between agents, it has been used for trajectory prediction for vehicles [11] and vessels [27]. We parameterize the model using a GRU with the same hidden size and number of layers as our proposed model. In addition, Monte-Carlo dropout [28] is used to generate multiple future trajectories from the model.
3) Social LSTM (S-LSTM): S-LSTM [9] models each agent with an individual LSTM whose hidden state is shared at each time step. Since only two vessels are considered, we do not use the pooling mechanism but share the hidden states directly. Besides, the LSTM is changed to a GRU. This model is trained and run in an autoregressive manner. The future positions at each time step are assumed to follow a Gaussian distribution and the model is trained with the negative log-likelihood.
2) Evaluation Metrics: Two metrics that are widely used for trajectory prediction are considered:
1) Average distance error (ADE): the average Euclidean distance between all estimated and true points of the trajectory.
2) Final distance error (FDE): the Euclidean distance between the predicted final destination and the true final destination at the end of the forecast period.
Since the proposed model predicts a future trajectory distribution, we sample 50 trajectories from our model and compute the average trajectory to evaluate ADE and FDE. Furthermore, we evaluate the predictive distributions using the Best-of-N (BoN) ADE and FDE, which we denote as BoN-ADE and BoN-FDE, respectively. BoN-ADE and BoN-FDE were proposed in [12]. This is a way to evaluate whether multiple predicted trajectories cover the
true one. Here, we sample 50 future trajectories and compute their errors with respect to the true trajectory to obtain the best five trajectories. The errors of these five trajectories indicate how well the multiple predicted trajectories match the true one.
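The metrics can be sketched as follows (the shapes and the toy trajectories are assumptions for illustration):

```python
import numpy as np

# pred: (K, T, 2) sampled trajectories, true: (T, 2) ground truth, in metres
def ade(traj, true):
    """Average Euclidean distance over all time steps."""
    return np.linalg.norm(traj - true, axis=-1).mean()

def fde(traj, true):
    """Euclidean distance at the final time step."""
    return np.linalg.norm(traj[-1] - true[-1])

def bon_ade(pred, true, n=5):
    """Mean ADE of the n best of K sampled trajectories (Best-of-N)."""
    errs = np.sort([ade(p, true) for p in pred])
    return errs[:n].mean()

rng = np.random.default_rng(1)
true = np.cumsum(rng.standard_normal((30, 2)), axis=0)      # toy ground-truth track
pred = true[None] + rng.standard_normal((50, 30, 2)) * 5.0  # 50 noisy samples

mean_traj = pred.mean(axis=0)                               # average trajectory for ADE/FDE
print(round(ade(mean_traj, true), 2), round(bon_ade(pred, true), 2))
```

By construction, the Best-of-N error is never larger than the mean per-sample error, and averaging the samples tends to cancel noise, which is why the averaged trajectory is used for ADE/FDE.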
3) Results: Fig. 4 shows the Euclidean distance error at different prediction time horizons. The error of the KM increases dramatically with the prediction time horizon. When the prediction time exceeds 1 min, the proposed model provides the smallest error among all the models.
In Table II, we compare the performance of our model with the baseline methods. The naive KM produces high prediction errors. The pattern-based methods clearly outperform the KM. While seq2seq achieves similar errors on ADE and FDE, it does not perform well on the BoN metrics, suggesting that Monte-Carlo dropout has difficulty modeling predictive distributions. This may be because the method is often used to approximate epistemic rather than aleatoric uncertainty. The S-LSTM and our proposed model outperform the seq2seq model, demonstrating the importance of modeling vessel interaction. The proposed model provides the smallest error on all metrics, which shows its superiority in modeling the behavior of marine vessels.
D. Trajectory Prediction Analysis
The quantitative evaluation shows that the proposed model outperforms the other baseline methods. In this part, the actual behavior of the proposed model in different settings is analyzed.
1) Trajectory Prediction With Different Baselines: In Fig. 5, the prediction results of the proposed model and the baseline methods are shown for several random samples from the test set.
It can be seen that the KM produces high prediction errors, especially around nonlinear regions. The learning-based methods are able to predict nonlinear behaviors. The trajectories sampled from the proposed model match the real trajectory, and the average of these sampled trajectories yields the smallest error.
2) Trajectory Prediction on Different Time Steps: Fig. 6 presents the predictions for random scenarios at different time steps from type 1, type 2, and type 3 encounters. The results show that, across all encounter types, the predicted uncertainty is lower after the ferry passes the merchant ship, which fits our intuition that the ferry only needs to focus on its destination after passing. In addition, the predictions show that Type 2 and Type 3 have less uncertainty before passing than Type 1, possibly because the ferry can quickly recognize that it can safely pass the merchant ship without changing course.
3) Trajectory Prediction With the Change of Encounter Type:
To analyze the change of encounter type, we sample three encounter scenarios from type 1, type 2, and type 3, respectively. The type 3 scenario is linearly interpolated to the type 1 scenario, as shown in Fig. 7(a). The type 1 scenario is linearly interpolated to the type 2 scenario, as shown in Fig. 7(b). Since the ferry has almost the same trajectory for Type 2 and Type 3 encounters, the transition between these two encounters is not considered. In Fig. 7(a), it can be seen that the merchant ship barely changes, and when the ferry approaches the merchant ship, the predicted trajectory changes from passing directly to giving way. Similarly, in Fig. 7(b), the ferry changes little, and when the merchant ship moves further, the predicted trajectory changes from giving way to passing. These qualitative results demonstrate that our model is able to capture the interaction between two vessels.
E. Ablation Study
To validate the design choice of how we aggregate the interaction information, an ablation study is performed. In particular, we performed experiments on the following.
1) No interaction is considered.
2) The vectors from the two vessels are summed.
3) The vectors from the two vessels are max-pooled.
Table III shows that including the interaction improves the prediction performance. In addition, simply concatenating the vectors from the two vessels, as in the proposed model, provides the best performance compared with the other aggregation methods.
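The three aggregation choices amount to the following operations on the two encoder outputs (the vector size of 256 matches the stated hidden size; the random vectors are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h_ts, h_os = rng.standard_normal(256), rng.standard_normal(256)  # target / own ship encodings

concat = np.concatenate([h_ts, h_os])   # used by the proposed model
summed = h_ts + h_os                    # order-invariant, but loses "who is who"
maxed = np.maximum(h_ts, h_os)          # element-wise max pooling, also order-invariant

print(concat.shape, summed.shape, maxed.shape)   # (512,) (256,) (256,)
```

Concatenation is the only variant that preserves which vector came from the target ship and which from the own ship, which plausibly matters here since only the ferry is the give-way vessel.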
To evaluate the lookback window shown in Fig. 2, four different lookback windows were evaluated in addition to 3 min. As shown in Table IV, the prediction error tends to decrease as the lookback window increases. This shows that the model successfully extracts temporal information rather than fitting temporal noise. However, the improvement is not significant when the lookback window is longer than 3 min.
V. CONCLUSION
In this article, an interaction-aware short-term trajectory prediction model for marine vessels was proposed. The model follows the CVAE framework and is thus able to model the inherent uncertainty of the forecasting process. By sampling from the latent space, it can quickly generate multiple future trajectories. The interaction is encoded into a context vector for the model. The model is implemented in a seq2seq manner to handle time-series data. Experiments were performed on real-world two-ship encounter AIS data, and the proposed method outperformed the baseline methods. In addition, we qualitatively showed that our model successfully captures the uncertainty as well as the interactions.
Future work will extend our model to multivessel encounter scenarios over larger areas. More stimuli can be considered, such as ship type, semantic maps, and weather conditions. More broadly, there are also architectural considerations when larger datasets are involved, and integration with downstream planning tasks once predictive models are available.
Fig. 1. Illustration of the possible trajectories in an encounter scenario. The red lines denote the possible trajectories, which depend heavily on the captain and the interacting ship.
Fig. 2. Schematic illustration of the neural network architecture of a CVAE for vessel trajectory prediction. The solid lines denote all the processes used for inference, while the dashed lines represent the processes used only in training.
Fig. 4. Prediction performance versus prediction time horizon. Since our model produces samples of the predicted trajectories, we computed the average trajectory to calculate the Euclidean distance error.
Fig. 5. Illustration of the predicted trajectories from the different methods. We randomly draw nine samples from the test dataset. The black line indicates the ground-truth trajectory of the vessel.
Fig. 7. Ferry trajectory predictions when the encounter type changes. (a) From type 3 to type 1 encounter. (b) From type 1 to type 2 encounter.
TABLE II. QUANTITATIVE RESULTS OF ALL THE METHODS ON THE DATASET (UNIT: M)
TABLE III. EFFECT OF DIFFERENT AGGREGATION METHODS (UNIT: M)
TABLE IV. EFFECT OF DIFFERENT LOOKBACK WINDOWS (UNIT: M)
Optoelectronic oscillator controlled by photodiode-based optoelectronic chromatic dispersion and FBG integration
High optoelectronic chromatic dispersion in Ge PN-type photodetectors affects the output of the optoelectronic oscillator. This is utilized to achieve high-sensitivity wavelength monitoring and strain sensing. The sensitivity is enhanced for higher oscillating mode numbers and shorter cavity lengths.
Introduction
Commercial PN photodiodes can exhibit a very large effective chromatic dispersion, known as optoelectronic chromatic dispersion (OED). The OED sensitivity of PN photodetectors is measured with sinusoidally modulated light and is defined as the wavelength-dependent change in RF phase shift, Δφ/Δλ. It has been shown to depend on an absorption-spectrum parameter of the semiconductor, α⁻¹(dα/dλ). We have demonstrated that germanium possesses a huge value of this parameter in the C- and L-band [1]. For example, with 4 MHz sinusoidal illumination, a commercial Ge PN photodiode (GPD Optoelectronics, GM3) exhibits a wavelength-dependent RF phase change of 1 / in the telecommunication C-band. To achieve comparable dispersion in SMF28 optical fiber would require 400 of fiber [2]. As a result, PN photodiodes have potential for high-sensitivity wavelength monitoring and spectral sensing. By integrating a Ge PN photodiode with an FBG interrogation system, a wavelength-shift resolution of 1.25 /√Hz was achieved. Furthermore, the OED sensitivity of the photodetector was enhanced by an RF-interferometry-based phase-shift amplification of 4 × 10⁴, resulting in femtometer-resolution wavelength monitoring [3]. OED was also utilized for high-sensitivity spectral sensing of ethanol in water [4].
In this work we show that the high OED in the photodiode affects the operation of the optoelectronic oscillator (OEO). Besides altering the resonant frequencies of the oscillator, we demonstrate applications in wavelength sensing and FBG interrogation.
Theory
Fig. 1 is a schematic of a single-cavity OEO. The feedback loop can generate self-sustained oscillations if its overall gain is larger than the loss and the circulating waves add up constructively in phase. While the former condition can be achieved by adding an electrical/optical amplifier to the system, the latter can be achieved by controlling the phase using a long fiber delay line [5]. The Q-factor of a long fiber delay line is given as Q = ωτ, where ω is the oscillator frequency and τ is the delay, given by L/c with L the optical length of the fiber and c the speed of light. The relative spectrally dependent time delay in a fiber is given as Δτ = D_f Δλ, where D_f is the dispersion due to the fiber. We show that with the addition of a high-OED Ge PN photodetector in the OEO system, the overall time delay is modified to Δτ = (D_f + D_PD) Δλ, where D_PD is the OED dispersion of the PN photodetector.
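As a numerical sanity check on these relations, one can evaluate the oscillating mode frequencies and the dispersion-induced frequency shift. This is a hedged sketch: the effective index, the assumed mode relation f_m = m/τ, and the dispersion magnitudes below are illustrative assumptions, not the paper's calibration.

```python
C = 299_792_458.0          # speed of light, m/s
N_FIBER = 1.468            # approximate effective index of SMF28 (assumed)

def mode_frequency(m, fiber_len_m):
    """Assumed mode condition f_m = m / tau, with tau = n * L / c (fiber delay only)."""
    tau = N_FIBER * fiber_len_m / C
    return m / tau

def frequency_shift(m, fiber_len_m, d_total_s_per_m, d_lambda_m):
    """df = -(m / tau**2) * dtau, with dtau = (D_f + D_PD) * dlambda as above."""
    tau = N_FIBER * fiber_len_m / C
    return -(m / tau**2) * d_total_s_per_m * d_lambda_m

f1 = mode_frequency(1, 5.0)          # fundamental mode of a 5 m cavity
print(f1 / 1e6)                      # ≈ 40.8 (MHz)
```

The shift grows linearly with the mode number m and shrinks for longer cavities (larger τ), consistent with the trends reported below for Fig. 3.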
Experiment
We measured the m = 1 frequency for two different photodiodes. According to equation (7), a high-OED photodetector in the OEO cavity adds extra dispersion. A commercial Ge PN-type biased photodetector (Thorlabs, PDA50B-EC) operated at zero gain was used. With increasing wavelength, the primary resonant frequency in the OEO cavity decreases. However, no significant change in the resonant frequency is observed when a low-OED InGaAs PIN photodiode is integrated in the OEO cavity. These results are shown in Fig. 2. In the next experiment, the fiber cavity length was 5 m, and the fundamental oscillating mode displays a decrease in RF frequency of approximately 218 / for an increase in wavelength, as plotted in Fig. 3(a). This response is inversely proportional to the fiber length, as seen in Fig. 3(b), as predicted by theory. In a third experiment, an FBG illuminated with broadband LED light is inserted in place of the tunable laser in Fig. 1, and the reflected signal of the FBG acts as the optical source of the OEO. The axial strain applied to the FBG can be measured as the change in frequency of the oscillations generated in the OEO loop. We measured an RF frequency shift of 0.2645 / on the fundamental oscillating mode, which gave a minimum strain resolution of 38 . Higher modes and shorter cavity lengths can enhance the sensitivity to the sub-microstrain regime. This improved strain resolution can be achieved by incorporating appropriate RF filters.
Conclusion
In this work, we show for the first time that photodiode OED can significantly affect the RF oscillation frequencies in an OEO system. The huge OED of germanium photodiodes can be utilized for a variety of spectral-sensing applications. We demonstrate an application in FBG-based strain sensing.
Fig. 2. Oscillating mode m = 1 RF frequency shift dependence on wavelength for (a) InGaAs PIN and (b) Ge PN photodetectors integrated with an OEO system of fiber length 1 .
Fig. 3. (a) RF frequency versus wavelength for mode m = 1 for different cavity lengths; longer cavity lengths show a flatter response. (b) RF frequency shift versus wavelength for different modes at cavity length L = 5 m.
Early Adversity and Late Life Employment History—A Sequence Analysis Based on SHARE
ABSTRACT

Numerous studies have linked poor socioeconomic circumstances during working life with early retirement. Few studies, however, have summarized entire patterns of employment histories and tested their links to social position at earlier stages of the life course. Therefore, this article summarizes types of late life employment histories and tests their associations with adversity both during childhood and early adulthood. We use data from the Survey of Health, Ageing and Retirement in Europe (SHARE) with retrospective life history data on 5,857 older men and women across 14 countries. Employment histories are studied with annual information on the employment situation between ages 50 and 70. To summarize employment histories we apply sequence analysis and group histories into 8 clusters with similar histories. Most of these clusters are dominated by full-time employees, with retirement before, at or after age 60. Additionally, we find clusters that are dominated by self-employment and comparatively late retirement. The remaining clusters are marked by part-time work, continuous domestic work, or discontinuous histories that include unemployment before retirement. Results of multinomial regressions (accounting for country affiliation and adjusted for potential confounders) show that early adversity is linked to full-time employment ending in retirement at age 60 or earlier and to discontinuous histories (in the case of women), but not to histories of self-employment. In sum, we find that histories of employees with early retirement and discontinuous histories are part of larger trajectories of disadvantage throughout the life course, supporting the idea of cumulative disadvantage in life course research.
Demographic ageing provides major challenges to European countries and their pension schemes. It raises, in particular, the question of how the proportion of older people on the labor market can be increased. Research therefore needs to improve knowledge on employment patterns at older ages and to investigate their determinants. With regard to this, studies show that contextual factors and political regulations influence the age by which people retire (e.g., tax incentives and retirement legislations; Börsch-Supan, Brugiavini, & Croda, 2009;Gruber & Wise, 1999). Besides, a wide range of individual characteristics have been related to retirement timing (Damman, Henkens, & Kalmijn, 2011;Fisher, Chaffee, & Sonnega, 2016;Wang & Shultz, 2010). The latter studies point to at least three types of determinants: employment and working conditions, poor health, and childhood adversity.
In the case of employment and working conditions, studies from different countries show that people who work in disadvantaged occupational positions, and under adverse physical or psychosocial working conditions, are more likely to retire early (Carr et al., 2016;Hintsa et al., 2015;Lund & Villadsen, 2005;Madero-Cabib, Gauthier, & Le Goff, 2015;Radl, 2013;Visser et al., 2016), to leave the labor market due to disability (Falkstedt et al., 2014;Juvani et al., 2014;Lahelma et al., 2012), and to self-report retirement intentions (Carr et al., 2016;Elovainio et al., 2005;Wahrendorf, Dragano, & Siegrist, 2013). Second, with regard to health as another determinant of late life employment, studies across different countries have linked various measures of health to employment patterns (for a review see e.g.,: van Rijn et al., 2014), including self-perceived health (Mein, 2000), poor mental health , health functioning (McPhedran, 2012;Rice et al., 2011) and chronic disease (Majeed, Forder, & Byles, 2014;Mein, 2000;van den Berg et al., 2010). A small number of studies also show that previous stages of the life course, and especially socioeconomic disadvantage during childhood, are a third determinant of late life employment histories. For example, adversity during childhood was linked to premature retirement (Bonsdorff et al., 2015;Harkonmäki et al., 2007;Madero-Cabib et al., 2015), as well as to labor market disadvantage during adulthood (Caspi et al., 1998;Dragano & Wahrendorf, 2014;Flores, García-Gómez, & Kalwij, 2015;Wahrendorf et al., 2016). Yet, the latter studies are often based on prospective cohorts (particularly birth cohorts which have yet to reach old age). Therefore, studies on the complex interrelations between disadvantages at different stages of the life course, including childhood and adulthood, and labor market involvement at older ages are lacking (Fisher et al., 2016).
But at least two shortcomings in research exist. Aside from the above-mentioned small number of studies linking early stages of the life course with labor market participation, a second shortcoming concerns the measurement of labor market participation in later life. Most studies use a single measure only, for example, whether a person is in paid work at a specific age or not (Flores & Kalwij, 2014; Komp, van Tilburg, & van Groenou, 2010), the age at retirement (Raymo et al., 2011), or retirement intention (Wahrendorf et al., 2013). This neither considers how retirement is embedded within larger histories nor, more generally, recognizes the complexity of employment patterns in later life. To describe entire employment histories in later life, for example, not only the age of retirement is important but also the occupational situation before retirement. This includes information on whether the person was unemployed before retiring, and whether he or she worked part-time or full-time before leaving the labor market (McNair et al., 2004; Parker & Rougier, 2007). On a similar note, it is also important to consider differences between employed and self-employed workers, as the latter generally have lower pension levels (Cahill, Giandrea, & Quinn, 2013). In other words, when studying late life employment histories, a more comprehensive approach is needed, where retirement is not isolated from larger histories but entire patterns of labor market participation covering an extended time frame are considered (Aisenbrey & Fasang, 2010; George, 2014). Such an approach helps to elucidate and understand employment participation in more detail. In addition, when studying whether types of employment histories are linked to previous circumstances, we may also identify entry points for intervention measures at earlier stages of the life course.
In sum, despite an impressive number of studies on predictors of retirement and employment at older ages, only few studies investigate complete late life employment histories in the light of adversity at earlier stages of the life course. Using data from the Survey of Health and Retirement in Europe, including details on early adversity and late life employment trajectories across 14 countries, the present study aims to extend research along these lines. Our broader conceptual framework, hereby, relies on the life course perspective. The next section briefly describes some core ideas of the life course perspective that guide our study.
THE LIFE COURSE PERSPECTIVE
Researchers increasingly argue that the life course perspective is a fruitful research perspective and conceptual framework that helps to better understand the labor market involvement of older people (Madero-Cabib, 2015; Worts et al., 2016). Importantly, this does not simply mean that studies need to rely on longitudinal data. Foremost, the life course perspective draws attention to specific principles, or life course mechanisms, that shape individual lives (Elder, Johnson, & Crosnoe, 2003; George, 2013; Kuh et al., 2003; Sackmann & Wingens, 2003). One important principle is that studies interested in the individual life course need to adopt a holistic perspective, where research focuses not only on single "transitions" (e.g., retiring from paid work) but also on whole "trajectories" (Abbott, 1995; Aisenbrey & Fasang, 2010; Sackmann & Wingens, 2003). In the case of late life employment histories, this refers to the above-mentioned necessity of a comprehensive study of complete late life employment histories. A statistical method attracting growing interest in this respect is sequence analysis (Abbott, 1995; Aisenbrey & Fasang, 2010; Studer & Ritschard, 2016). This method uses whole trajectories as units of analysis and enables the identification and regrouping of types of employment histories with similar patterns (see Methods for details). The first core aim of the present study is to adopt this comprehensive perspective and to study late life employment histories based on sequence analysis.
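The core sequence-analysis step (computing pairwise dissimilarities between annual state sequences, which are then fed to a clustering algorithm) can be illustrated with a toy example. The states, costs, and histories below are invented for illustration; applied SHARE analyses typically use dedicated tools such as the TraMineR package for R.

```python
import numpy as np

# One state per year from age 50 to 70 (21 years); invented example states:
F, P, R, U = "F", "P", "R", "U"     # full-time, part-time, retired, unemployed

histories = [
    [F]*10 + [R]*11,                # full-time, retiring at 60
    [F]*9  + [R]*12,                # full-time, retiring just before 60
    [F]*15 + [R]*6,                 # full-time, late retirement
    [F]*6 + [U]*4 + [R]*11,         # discontinuous: unemployment before retirement
]

def om_distance(a, b, indel=1.0, sub=2.0):
    """Optimal-matching distance: edit distance with substitution cost 2 (assumed costs)."""
    d = np.zeros((len(a) + 1, len(b) + 1))
    d[:, 0] = np.arange(len(a) + 1) * indel
    d[0, :] = np.arange(len(b) + 1) * indel
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i-1, j] + indel,                          # deletion
                          d[i, j-1] + indel,                          # insertion
                          d[i-1, j-1] + (0 if a[i-1] == b[j-1] else sub))
    return d[len(a), len(b)]

dist = np.array([[om_distance(a, b) for b in histories] for a in histories])
print(dist[0, 1], dist[0, 3])       # 2.0 8.0 -- similar histories are close
```

A hierarchical clustering of such a distance matrix is what groups the histories into the eight clusters described in the abstract.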
Another core principle of the life course perspective is that individual histories do not unfold independently, but are related and shaped through different mechanisms linking previous stages of the life course and later outcomes (Dannefer, 2003;Elder et al., 2003;Kuh et al., 2003). One such notion refers to the concept of "cumulative advantages or disadvantages" (Dannefer, 2003). In this perspective, adversity at earlier stages of the life course results in further disadvantages throughout the life course as well as disadvantages at older ages. In other words, disadvantages tend to cluster longitudinally throughout the life course, where inequalities grow throughout the course of life. This perspective, notably, opens a large window to the study of late life employment histories, in particular because it means that employment patterns are part of larger histories of disadvantages. An alternative life course mechanism refers to the concept of "critical periods," which suggests that the impact of adversities differs depending on the period or life stage at which they occur. In this regard, the point at which disadvantages happen can be crucial when it comes to the impact it has. In this context, however, most studies (including the one named above) have used childhood conditions as a "critical" time window of interest (Viner et al., 2015), without studying links between adulthood conditions on health at older ages. Therefore, it is the second aim of the present article to study how adversity during childhood and adulthood are linked to types of late life employment histories.
Although this study focuses on types of late life employment histories and their links to early adversity, we need to keep in mind that late life employment histories in our sample are no doubt also linked to the historical and cultural contexts in which they unfold (between 1980 and the early 2000s in our case; Elder, 1999). For example, traditional gender roles and the division of paid and unpaid work within partnerships may lead to more women working part-time compared with men, or to women who focus entirely on domestic work (Han & Moen, 1999). In addition, links between early life disadvantages and employment patterns may be different for men and women. A recent study from Australia, for example, suggests that links between childhood adversity and weak ties to the labor market during working life are more pronounced for women, while no such association exists for men (Majeed et al., 2015). Another important factor is the country itself, as well as its national pension systems and regulations (Bennet & Möhring, 2015; Gruber & Wise, 1999). Therefore, our analyses will consider gender and country affiliation as important covariates, and we will discuss our findings in the light of these aspects.
All in all, this article has two aims: First, we set out to summarize complete late life employment histories and to distinguish different types of employment histories among older men and women in Europe. In doing so, we extend current knowledge, which is largely based on studies focusing on retirement timing, and give an in-depth description of late life employment patterns in our sample, including their variation by sex and country. With the second aim, we test if adversity during childhood and adulthood is related to types of late life employment history. In accordance with the above-presented life course mechanisms, we may observe that adversity during both childhood and adulthood is related to later histories, but also that the effect of childhood is partly mediated by adulthood adversity. Again, we will investigate if these latter associations vary by gender, and we will consider country affiliation in multivariable analyses.
Data Source
The present study uses the latest data (Release 5.0) from the Survey of Health, Ageing and Retirement in Europe (SHARE; Börsch-Supan et al., 2013). SHARE is a longitudinal survey collecting data on a variety of sociological, economic and health-related topics among nationally representative samples of adults aged 50 or older in different European countries. The survey started in 2004-2005 with ongoing waves of data collection at 2-year intervals. The third wave of SHARE consists of a separate retrospective survey collecting life history data (also called SHARELIFE; Börsch-Supan et al., 2011). Alongside partnership and children histories, this also includes information on socioeconomic circumstances during childhood and past employment histories among older men and women. In SHARELIFE, data is available for 14 countries (Sweden, Denmark, Ireland, Germany, the Netherlands, Belgium, France, Switzerland, Austria, Italy, Spain, Greece, Poland, and the Czech Republic). In each country, information is collected via computer assisted personal interviews (CAPI) in the household, based on a household probability sample. At the onset of the study, the household response rate was 61.6% for the total sample, ranging from 81% in France to 39% in Switzerland, with rates above 50% in 8 out of 11 countries. This is above average compared to other European surveys (Börsch-Supan & Jürges, 2005). With respect to attrition between wave 2 and wave 3, the percentage of respondents lost varied between 34% (Austria) and 14% (Switzerland), with rates below 20% in seven countries (Schröder, 2011). To address these selection processes, SHARE provides weights, which we use in our descriptive analyses (see Analytical Strategy for details).
An innovation of the retrospective data collection in SHARE is the so-called "lifegrid approach." The recall and timing of information is hereby supported by a graphical representation of the respondent's life, which is filled in during the interview. This approach was first developed as a self-completion questionnaire (Blane, 1996), and subsequently transformed into a CAPI instrument. Although recall bias is a disadvantage of collecting data retrospectively, there are also several advantages. First, it is an economic way of getting longitudinal information. Second, it guarantees comparable information referring to different time points in respondents' life histories. Third, validation studies revealed high accuracy of recalled information, in particular when the data collection is supported by a lifegrid (Belli et al., 2007) and when asking about socio-demographic conditions (Berney & Blane, 1997; Havari & Mazzonna, 2015) and employment histories (Baumgarten, Siemiatycki, & Gibbs, 1983; Bourbonnais, Meyer, & Theriault, 1988). The project website presents more details about SHARE and its methods (www.share-project.org).
Respondents
In total, 28,495 individuals participated in wave 3 in 2008-2011. For the aim of our study, the following sample restrictions are applied: First, because we are interested in employment histories from age 50 to 70, we only include men and women aged 70 or older at the time of the interview for whom we have complete employment histories (n = 7,852; 3,777 men and 4,075 women). Second, because we investigate links between respondents' occupational position during adulthood (between 25 and 49) and late life employment histories, respondents had to be in paid employment at least once during adulthood (n = 6,958; 3,707 men and 3,251 women). Finally, to prevent biased information on work histories, we additionally excluded respondents where the interviewer documented difficulties in answering the retrospective interview (n = 6,540; 3,496 men and 3,044 women). We checked for missing values on all variables under study, but the amount of missing data was very low (below 6% for each variable) and we found no indication of systematic missingness, so no imputation strategy was applied. In sum, this leads to a final sample of 3,117 men and 2,740 women (n = 5,857).
Late life employment histories
The third wave of SHARE contains an extensive employment module that collects details on each job a respondent had during his or her working career, and also on each period when the respondent was not in paid work (for 6 months or longer). Information on jobs includes the starting and ending date, whether the job was part-time or full-time, and whether the respondent was an employed or self-employed worker. In addition, if a person was not working, they provided a reason for not working, including retirement, domestic work or unemployment. By combining this information, we can describe respondents' occupational situation for each year of age between 50 and 70 years. In a few cases, however, there is information on both paid work and non-paid work for the same year (in 5% of all cases). For example, a person may have stopped and started a new job in the same year, including a 6-month gap of unemployment. In that case, we prioritize the information on non-paid work, because a break is considered more important than the continuation of a job spell. In our analyses, we distinguish between two types of non-employment: unemployment and domestic work. This distinguishes between people who actively look for a job (and thus still count toward the economically active population) and those who focus on home or family work. In sum, for the purpose of our analyses, seven situations (or "states") are distinguished: (a) "employed/full-time" (working 35 or more hours a week), (b) "employed/part-time" (working less than 35 hours a week), (c) "self-employed" (irrespective of working hours), (d) "unemployed" (and looking for a job), (e) "domestic work" (looking after home or family), and two types of retirement, depending on whether the respondent retired from paid work, (f) "retired from paid work," or not, (g) "retired not from paid work." A number of other states could have been included.
For example, we may have differentiated self-employment according to working hours, or included additional information about the occupational position. Yet, the importance of this distinction (and the prevalence of the resulting states) did not appear relevant enough to warrant the additional complexity that would have been involved in the analyses (the number of possible sequences grows rapidly with the number of states).
In sum, our approach accounts for different forms of labor market situation and describes late life employment histories, in terms of annual information for each year of age between 50 and 70.
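The year-by-year coding described above can be sketched as follows. This is an illustrative simplification, not the authors' code: the state labels, the spell format (state, first age, last age), and the helper name `encode_history` are our own assumptions. The sketch does implement the stated prioritization rule, where non-paid states overwrite paid work in an overlapping year.

```python
# Illustrative sketch of coding yearly employment states between ages 50 and 70.
# State labels and the spell format are hypothetical simplifications.

STATES = ["FT", "PT", "SELF", "UNEMP", "DOM", "RET_WORK", "RET_NOWORK"]
NON_PAID = {"UNEMP", "DOM", "RET_WORK", "RET_NOWORK"}

def encode_history(spells, start=50, end=70):
    """spells: list of (state, first_age, last_age) tuples (ages inclusive)."""
    seq = {}
    for state, a0, a1 in spells:
        for age in range(max(a0, start), min(a1, end) + 1):
            prev = seq.get(age)
            # prioritize non-paid states over paid work in the same year,
            # mirroring the rule described in the text
            if prev is None or (state in NON_PAID and prev not in NON_PAID):
                seq[age] = state
    return [seq.get(age, "MISSING") for age in range(start, end + 1)]

history = encode_history([("FT", 50, 60), ("UNEMP", 60, 60), ("RET_WORK", 61, 70)])
# age 60 is coded as unemployment, even though a full-time spell also covers it
```

The resulting list of 21 annual states (one per year of age from 50 to 70) is the kind of sequence that the analysis below compares and clusters.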
Adverse socioeconomic circumstances
We include two binary indicators of adverse socioeconomic circumstances, one referring to childhood and another to adulthood. In both cases, measures are based on the occupational skill level, either referring to the occupation of the main breadwinner at age 10 (in the case of childhood) or to respondents' main skill level between age 25 and 49 years (in the case of adulthood). The skill level represents the broad hierarchical structure of the International Standard Classification of Occupations (ISCO) that was developed by the International Labor Office. It distinguishes four levels of skill required in an occupation for a competent performance of its tasks and duties. Notably, skill levels may differ from the formal educational qualifications of the worker, because skills can also be acquired through experience and informal training. Higher skill levels are supposed to put workers in a more advantaged situation, because higher skilled occupations are expected to offer higher salaries and greater job security than occupations with lower skill levels (Bergmann & Joye, 2005). The skill level also constitutes an important dimension in more sophisticated classification schemes, for example, within the Erikson-Goldthorpe-Portocarero (EGP) class scheme (Erikson & Goldthorpe, 1992). For the analyses, a low skill level is assumed if someone belongs to the lowest level (1st skill level).
Additional variables
Besides sex, age and country affiliation, the analysis includes health during childhood and adulthood (each assessed by two indicators), partnership and parenthood history, and education, mainly as control variables in multivariable analyses.
The first measure of childhood health refers to self-rated health (less than good) when respondents were 10 years old; the second measure is whether a person reports any period of emotional, nervous, or psychiatric problems until age 16. As regards health during adulthood, we consider the number of periods (lasting longer than 1 year) respondents reported to be ill or disabled (regrouped into "none," "one," and "two or more" periods) since age 16, and whether respondents reported a period of emotional, nervous, or psychiatric problems in the same time frame. In the case of partnership history, we use life history data and assess whether respondents had a partner for most of the time (75% or more) between ages 50 and 70. Parenthood history is measured by the maximum number of children (aged between 0 and 16 years) a person had during adulthood, regrouped into "no children," "one or two children," and "three or more children." In contrast to the total number of children, this may be more appropriate for assessing child raising responsibilities during working life. Education is measured according to the International Standard Classification of Educational Degrees (ISCED-97), which we regroup into "low education" (pre-primary, primary or lower secondary education), "medium education" (secondary or post-secondary education), and "high education" (first and second stage of tertiary education). All variables are summarized in Table 1.
Analytical Strategy
Following a basic sample description in Table 1, the analyses proceed in two steps. First, we apply sequence analysis (Abbott, 1995;Aisenbrey & Fasang, 2010) and identify types of late life employment histories. Second, regression models test the associations between early life circumstances and types of late life employment histories.
More specifically, the first step starts with a general overview of late life employment histories for men and women, where we present the average years spent in the seven different occupational situations (cumulative state duration, Table 2). In addition, the mean number of spells (consecutive runs of the same occupational situation) and an indicator describing the general heterogeneity of late life employment histories (Shannon's entropy) are presented. Then, we regroup histories with similar patterns into empirically distinct clusters. Specifically, we compare each individual's employment history to all other histories observed in the data and calculate the difference of each single sequence to every other, using Optimal Matching (Halpin, 2012; Studer & Ritschard, 2016). This adequately considers duration, timing and ordering when comparing sequences to one another: three key aspects for characterizing life trajectories (Studer & Ritschard, 2016). Statistically, differences (or "distances") are calculated in terms of transformations or, more precisely, the number of operations that are necessary to make one sequence equal to the other, either by substituting states (so-called "substitution costs") or by inserting and deleting states (so-called "indel costs"). For the analysis, we follow standard practice (Abbott & Tsay, 2000), and set the substitution costs consistently to twice the indel cost, 1.0 and 0.5, respectively. Comparing each sequence to every other results in a matrix that quantifies distances for each pair of individuals in the sample (i.e., a 5,857 × 5,857 matrix in our study). Thereafter, we regroup similar sequences into typologies of late life employment history based on cluster analysis. More specifically, we use Partitioning Around Medoids (PAM) clustering, as implemented in the WeightedCluster package in R (Studer, 2013).
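The Optimal Matching distance with the costs stated above (constant substitution cost 1.0, indel cost 0.5) can be sketched with a standard edit-distance recursion. This is an illustrative re-implementation for two toy sequences, not the TraMineR code used in the analysis.

```python
# Illustrative Optimal Matching distance between two state sequences,
# with constant substitution cost 1.0 and indel cost 0.5 (as in the analysis).

def om_distance(a, b, sub=1.0, indel=0.5):
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel            # turning a prefix into the empty sequence
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + indel,       # delete a state
                          d[i][j - 1] + indel,       # insert a state
                          d[i - 1][j - 1] + cost)    # substitute (or match)
    return d[n][m]

s1 = ["FT"] * 15 + ["RET"] * 6    # full-time until about 65, then retired
s2 = ["FT"] * 10 + ["RET"] * 11   # retired around age 60
# the sequences differ in 5 positions; 5 substitutions (5 x 1.0) cost the same
# as 5 deletions plus 5 insertions (10 x 0.5), so the distance is 5.0
```

Computing this distance for every pair of respondents yields the symmetric distance matrix (5,857 × 5,857 in the study) that the PAM clustering then operates on.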
To determine the most appropriate number of clusters, we compared 6- to 10-cluster solutions based on the following measures of cluster quality, as proposed in the literature: the Average Silhouette Width (ASW), the Point Biserial Correlation (PBC) and Hubert's Gamma (HG; Studer, 2013), as well as the within/between cluster distance ratio (WB-ratio; Aisenbrey & Fasang, 2010). These measures are presented in Supplementary Table S1. In addition, we verified each cluster solution in terms of its content validity, and whether a higher cluster solution added another cluster of interest with reasonable size. On this basis, we decided to adopt an eight-cluster solution, because all solutions revealed a good structure (an ASW above 0.5 is considered a reasonable value; Studer, 2013), and because this turned out to be the most informative cluster solution with distinct clusters. An overview of the resulting clusters is presented in Figure 1 in terms of indexplots and chronograms. Indexplots draw a horizontal line for each individual, with a distinct color for each state, and chronograms show, for each age, the prevalence of each occupational situation in percent. Furthermore, we present frequencies for each cluster and their distribution by sex in Table 3, including tests of significance (χ2). Calculations and graphs are based on the SADI package (Halpin, 2014) and the SQ package (Brzinsky-Fay, Kohler, & Luniak, 2006) in Stata; in addition, we use the TraMineR package (Gabadinho et al., 2011) and the WeightedCluster package (Studer, 2013) in R for calculating dissimilarities and clusters, respectively. The second set of analyses studies associations between the two indicators of adversity and the probability of belonging to a specific cluster of late life employment histories.
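The Average Silhouette Width mentioned above can be computed directly from a precomputed distance matrix and cluster labels. The following is a minimal sketch under that assumption (the actual analysis uses the WeightedCluster package in R); the toy distance matrix is made up for illustration.

```python
# Illustrative Average Silhouette Width (ASW) from a distance matrix and labels.
# Values near 1 indicate well-separated clusters; above ~0.5 is considered
# a reasonable structure (Studer, 2013).

def average_silhouette_width(dist, labels):
    n = len(labels)
    widths = []
    for i in range(n):
        same = [dist[i][j] for j in range(n) if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same) if same else 0.0          # mean within-cluster distance
        b = min(                                            # nearest other cluster
            sum(dist[i][j] for j in range(n) if labels[j] == c)
            / sum(1 for j in range(n) if labels[j] == c)
            for c in set(labels) if c != labels[i]
        )
        widths.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(widths) / n

# Toy example: two tight, well-separated clusters of two sequences each.
dist = [[0, 1, 10, 10],
        [1, 0, 10, 10],
        [10, 10, 0, 1],
        [10, 10, 1, 0]]
labels = [0, 0, 1, 1]
asw = average_silhouette_width(dist, labels)
```

In the toy example, within-cluster distances (1) are much smaller than between-cluster distances (10), so the ASW is high, which is the pattern the cluster-quality check looks for.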
For this, we investigate how the two indicators of adversity are associated with cluster membership, estimating a series of multinomial regression models with cluster membership as the dependent variable. The findings are presented for men (Table 4A) and women (Table 4B) separately. In sum, we estimate three models both for childhood and adulthood adversity. Model 1 estimates unadjusted associations between adversity and cluster membership. Model 2 estimates associations for each indicator of adversity separately, adjusted for age (included as a continuous variable), country affiliation (included as country dummies), education, partnership history, parenthood, and health prior to and during working life. Model 3 considers all variables simultaneously. Each model has its own value for understanding the importance of life course adversity for late life employment histories: On the one hand, the first two models allow the testing of the unadjusted and adjusted effects of adversity at two different stages of the life course. On the other hand, the third model investigates the combined associations of both measures of adversity. All calculations are done with Stata 14.
To facilitate the presentation and interpretation of findings of the multinomial regression models (Table 4), we follow recent recommendations and present average marginal effects (denoted as "AME") together with levels of significance and confidence intervals (Williams, 2012). On the one hand, AMEs are more intuitive and easier to interpret than odds ratios; on the other hand, we do not need to use one cluster as a reference category and interpret results in relation to this category. Instead, we can contrast the probability of belonging to each cluster for people with and without adversity. For example, if we find an AME of −5.00 for childhood adversity, this means that the probability of being part of the cluster is on average 5 percentage points lower for people with adversity than for those without adversity.
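The logic behind an AME for a binary predictor can be illustrated with a toy multinomial logit: predict cluster-membership probabilities once with the adversity indicator set to 1 and once set to 0 for every observation, then average the difference. The model, coefficients, and sample below are entirely made up for illustration; they are not the paper's estimates (which come from Stata's margins machinery).

```python
# Illustrative average marginal effect (AME) for a binary adversity indicator.
# The coefficients and sample are hypothetical, not the paper's estimates.
import math

def predict_probs(x_adversity, x_age, coefs):
    """Toy multinomial logit: coefs maps cluster -> (intercept, b_adv, b_age);
    a baseline cluster has all coefficients fixed at zero."""
    scores = {k: math.exp(b0 + b1 * x_adversity + b2 * x_age)
              for k, (b0, b1, b2) in coefs.items()}
    scores["base"] = 1.0  # reference cluster
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def ame(sample_ages, coefs, cluster):
    # average over the sample of (P(cluster | adversity=1) - P(cluster | adversity=0))
    diffs = [predict_probs(1, age, coefs)[cluster] -
             predict_probs(0, age, coefs)[cluster] for age in sample_ages]
    return 100 * sum(diffs) / len(diffs)   # expressed in percentage points

coefs = {"cluster4": (0.2, -0.8, 0.0)}     # made-up coefficients
effect = ame([55, 60, 65], coefs, "cluster4")
# a negative value means adversity lowers the probability of cluster membership
```

Reading the result is then exactly as described in the text: a value of, say, −5 means the average membership probability is 5 percentage points lower for people with adversity.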
Finally, to summarize the core findings of the article, we predict the probability of being part of each cluster for levels of adversity separately and display results as bar charts in Figure 2 for the total sample (average adjusted prediction; Williams, 2012). In addition, we formally test if the association between early adversity and late life employment histories differs for men and women, introducing interactions between sex and adversity (presented in Supplementary Table S2).
In order to compensate for unit nonresponse, we apply calibrated cross-sectional weights in descriptive analyses. These weights are specifically defined for wave 3 and are calculated for each country separately (see SHARE Release guide 5.0.0 for details; Börsch-Supan, 2016). They help to reduce a potential selection bias due to unit nonresponse and to reproduce the size of each national target population, for example, when calculating the prevalence of clusters. In addition, to account for the dependency of cases within a household, regression models for the total sample account for clustering within households by using robust estimators (Rogers, 1993).
Descriptive Findings
As shown in Table 1, our sample includes slightly more men (n = 3,117) than women (n = 2,740), with an average age of 77 years. The majority of respondents have low education (no, primary or lower secondary education), and about 20% experienced adversity during childhood or adulthood (for details see Table 1). Table 2 shows how many years people spent on average in the seven studied states between age 50 and 70 years (observation period: 21 years). In sum, men spent more years in paid work than women. This is true both for self-employed work and, in particular, for full-time employed work (men: 7.6 years, women: 3.9 years). Men also had more years in retirement. But we see that women were part-time employed and in domestic work longer than men (men: 0.5 years, women: 6.8 years). Overall, men have a higher number of different spells and their histories are slightly more complex (as indicated by higher values for Shannon's entropy).
Types of Late Life Employment Histories
Which types of employment histories in later life, or "clusters," can be distinguished in our sample? Figure 1 examines this question, and Table 3 shows how the clusters vary by sex. We identify eight clusters: The first three clusters (clusters 1-3) are dominated by histories of full-time employed workers who retired around age 65, around age 60, or even earlier (at around 55). These three clusters are quite homogenous, and the majority of the total sample belongs to one of them, in particular men. Clusters 4 and 5, in contrast, include persons who were self-employed workers and entered retirement around either age 65 (cluster 4) or age 60 (cluster 5). Furthermore, cluster 6 captures those who were part-time employed workers before retiring, and cluster 7 is dominated by continued domestic work without retirement. The two latter clusters are clearly dominated by women. Cluster 8, finally, covers discontinuous histories that often involve a spell of paid work, which is interrupted by unemployment before ending in retirement. It is the smallest and least homogenous cluster of the analyses. As demonstrated in Table 3, the distribution of clusters differs significantly by sex (p < .001).
Associations Between Early Adversity and Late Life Employment Histories
The second aim, to examine associations between early adversity and late life employment histories, is addressed by applying multinomial logistic regression for men (Table 4A) and women (Table 4B). We present three models both for childhood and adulthood adversity, including an unadjusted model (Model 1), an adjusted model (Model 2), and a final model where the two measures of adversity are analyzed simultaneously (Model 3). Starting with men, we see that those with adversity during childhood or adulthood are less likely to be part of cluster 4 or 5, the two clusters with self-employed workers and a rather late retirement. An AME of −8.2 for childhood adversity (Model 1), for example, means that the probability that men with adversity during childhood are part of cluster 4 is, on average, 8.2 percentage points lower.
Corresponding values indicate a lower probability of 10.9 percentage points in the case of adversity during adulthood. Estimates remain significant in the adjusted model (Model 2, including education and health), and also when the combined effects of childhood and adulthood are estimated (Model 3). Thus, we find that the observed associations persist after adjustment for education and health, and also that childhood and adulthood adversity are both independently related to histories of self-employment with retirement around age 65 (cluster 4). When we turn to the first three clusters (full-time employed histories), adversity during childhood leads to comparatively early retirement (clusters 2 and 3), most consistently in the case of adversity during both childhood and adulthood and retirement before age 55. Retirement around age 65 (cluster 1), though, is not related to early adversity. Finally, although not significant, results indicate that adversity during adulthood is linked to discontinuous histories (cluster 8).
For women (Table 4B), early adversity (childhood and adulthood) is again related to a lower probability of being part of cluster 4 or 5 (self-employed with comparatively late retirement). In contrast to men, however, adversity is not significantly related to an early retirement (before age 55) following work as an employee (cluster 3). Another finding for women is that those who had adversity during adulthood are more likely to have histories of part-time or domestic work in later life (but not in the case of childhood adversity). Lastly, turning to cluster 8, we see that discontinuous histories are significantly related to early adversity for women.
Notably, as shown in Supplementary Table S2, the associations between each indicator of early adversity and employment histories are significantly different between men and women.
Finally, to summarize the main findings, we present the predicted probabilities of being part of each cluster by early adversity (based on Model 3 of the previous regressions) for the total sample (Figure 2). Compared with persons without early adversity, those who had early adversity again have a higher probability of belonging to clusters 2 or 3 (full-time employed workers retiring before age 65), and also to cluster 8 (discontinuous histories). In the case of clusters 4 and 5 (self-employment with retirement after age 60), however, those with early adversity have a lower probability of belonging to one of these two clusters. Reported associations are significant and slightly more pronounced in the case of adulthood adversity.
Discussion
This contribution relies on retrospective data from SHARE with detailed information on late life employment histories between age 50 and 70 years for 5,857 men and women in Europe. With the first aim, we summarize employment histories using sequence analysis. This asks which types of late life employment histories can be distinguished for men and women in our sample. With the second aim, we investigate if types of employment histories in later life are linked to early adversity (measured both for childhood and adulthood).
Overall, findings of the present study are in line with previous research on life course influences on later labor market participation, specifically studies investigating consequences of early life disadvantage on labor market involvements later on (Bonsdorff et al., 2015; Carr et al., 2016; Harkonmäki et al., 2007; Radl, 2013). Yet, because we studied entire employment patterns (instead of single outcomes) in conjunction with adversity during both childhood and adulthood, we add further insight to the existing literature in at least three ways.
Firstly, by studying entire employment histories on the basis of sequence analyses, we derived eight distinct types of late life employment histories out of the complexity and variety of individual histories. Importantly, in contrast to previous studies focusing on retirement timing, this broader perspective did not require that people retire in the study period or work at study onset. In doing so, we gave a more comprehensive picture of late life employment histories that, for example, also includes women who had histories of domestic work and would have been otherwise excluded (Worts et al., 2016). Furthermore, because we distinguished between different forms of labor market participation (i.e., full-time employment, part-time employment, and self-employment), it also became clear that retirement ages vary depending on previous types of labor market involvement, with self-employed workers tending to retire later than employed workers. The comparatively earlier retirement of employed people could be because they have restricted opportunities to work longer (even if they want to continue working), and self-employed workers have more freedom in deciding at what age they retire and often choose to continue working. From this perspective, findings may indicate that more flexible retirement arrangements are necessary for employed workers who want to work longer, for example, through retirement schemes that allow a reduction of working time before leaving the labor market. This argument is further supported by the fact that such a cluster (where employed people reduced their working hours before retiring) was not found in our analyses.

Figure 2. Probability of cluster membership by early adversity for the total sample. Adjusted for sex, age, country affiliation, education, partnership and parenthood history, and health prior to and during working life, n = 5,857.
The later retirement of self-employed people, though, may also be because they are likely to have lower pension levels, which force them to work longer (even if they do not want to continue working; Cahill et al., 2013). Or, we may assume that self-employed workers have comparatively better working conditions (e.g., lower levels of work stress or higher salary), and are therefore more likely to continue working because they enjoy it. In fact, the group of self-employed people is probably more heterogeneous than employed workers (Blanchflower, 2000), covering a wider spectrum of motives that may lead to extended working lives (Halvorsen & Morrow-Howell, 2016).
Secondly, by studying how both childhood and adulthood adversity are linked to types of employment histories we found evidence to support the notion that disadvantaged histories are likely to be part of larger histories of disadvantages. More specifically, we found that childhood and adulthood adversity were both independently linked to clusters of full-time employment with early retirement and to discontinuous histories. This supports the idea of cumulative disadvantages (Dannefer, 2003;DiPrete & Eirich, 2006) and extends previous research that is restricted to childhood conditions. It shows that adulthood conditions are important as well (irrespective of what happened before) and that neither childhood nor adulthood can be seen as a "critical period" in its strict sense (Kuh & Ben-Shlomo, 1997).
Thirdly, our results also revealed interesting differences between men and women, both in terms of employment histories and the way early adversity was linked to types of histories. Following our expectations, most men had a history marked by full-time work (either employed or self-employed) and retirement later on. Many women, in contrast, had histories with continuous domestic work (without retirement) or part-time work. This shows that female histories are often dominated by one state only (continued domestic work), while most male histories involve different states (work and retirement). This is in line with existing research on traditional gender divisions of paid and unpaid work and indicates that, compared to men, women have a weaker attachment to the labor market (Worts et al., 2016). Aside from these differences in employment histories, the association between early adversity and late life employment history also differed between the genders. Specifically, we found that the association between early adversity and discontinuous histories in later life was significant for women, but not for men. Possibly, as suggested in a recent study from Australia (Majeed et al., 2015), cultural expectations and traditional gender roles lead to greater difficulties for women in gaining a foothold in the labor market, specifically if they experience adversity earlier on. Future studies, however, need to investigate if this holds true for all countries, including those with less pronounced gender differences (Worts et al., 2016).
All in all, our study illustrates how the life course perspective helps to elucidate labor market involvement at older ages. Particularly, we see that men and women have different types of late life employment histories and that the complexity of these histories requires an in-depth analysis that is not limited to retirement timing only. In addition, we see that employment histories are partly related to conditions at previous stages of the life course, including adversity during childhood and adulthood.
Strengths and Limitations
Our study benefits from several strengths, including a large study sample, detailed life history data, the use of sequence analyses and the inclusion of several covariates. It is imperative, however, that we consider several limitations.
First, our study focuses on individual determinants and, thereby, we did not consider details on country-specific policies and pension schemes. Previous studies have shown, for example, that institutional differences between countries, such as pension systems and active labor market policies, are important factors in explaining labor market participation in late life (Börsch-Supan et al., 2009; Engelhardt, 2012; Fischer & Sousa-Poza, 2006). Therefore, we could have included details on pension schemes for each country in the analyses (e.g., average level of public pension or country-specific state pension ages) and investigated how these are related to cluster membership. Or, we could even have conducted clustering for each country separately. Yet, 14 countries may not be sufficient for conducting meaningful multilevel analyses on the influence of national contexts. Nevertheless, it is worth noting that, although some clusters were more likely in some countries, all clusters were represented in each country. Furthermore, when testing links between early adversity and types of late life employment histories, regression models were adjusted for country affiliation, and we also considered country-specific weights in the descriptive findings. In addition, while it is plausible that the national context affects types of employment histories, the association between adversity and employment histories in later life may be less affected. In sum, we think the existing sample size is not large enough to warrant the additional complexity that would have been involved in country-specific analyses.
Second, some may argue that clustering of histories should also have been conducted for men and women separately. However, sex-specific clustering would have complicated comparisons between both sexes, such that it would not have been possible to test if men are more likely than women to belong to the same types of history. Furthermore, we would have had to use two different cluster solutions as outcomes in multinomial regression analyses, making meaningful comparisons of links between early adversity and histories across the sexes impossible.
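The sequence-analysis step discussed above rests on pairwise dissimilarities between employment-state sequences, which are then fed to a cluster algorithm. A minimal sketch of that dissimilarity step, assuming invented state codes, uniform costs, and toy 21-year histories (the study itself used dedicated sequence-analysis tooling, not this code):

```python
# Hedged sketch (not the authors' code): pairwise dissimilarity between
# yearly employment-state sequences (ages 50-70), in the spirit of the
# optimal-matching step that precedes cluster analysis. State codes,
# costs, and example sequences are invented for illustration.

def om_distance(seq_a, seq_b, indel=1.0, sub=2.0):
    """Levenshtein-style edit distance with uniform substitution cost,
    a minimal stand-in for optimal matching of state sequences."""
    n, m = len(seq_a), len(seq_b)
    # dp[i][j]: cost of aligning seq_a[:i] with seq_b[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * indel
    for j in range(1, m + 1):
        dp[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else sub
            dp[i][j] = min(dp[i - 1][j] + indel,      # deletion
                           dp[i][j - 1] + indel,      # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[n][m]

# Three toy 21-year histories: full-time work (W), retirement (R), home/family (H)
continuous = ["W"] * 15 + ["R"] * 6          # retires at 65
early_exit = ["W"] * 8 + ["R"] * 13          # retires at 58
homemaker  = ["H"] * 21

d1 = om_distance(continuous, early_exit)
d2 = om_distance(continuous, homemaker)
```

In practice, packages such as TraMineR in R implement optimal matching with richer substitution-cost structures; the sketch only illustrates why an early-exit history sits closer to a continuous one than to a homemaker history.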
Third, the core measures of our study, namely early adversity and employment histories between ages 50 and 70 years, were collected retrospectively. As such, respondents may have remembered information inaccurately, or remembered things as rosier than they were. We thus need to consider a potential recall bias. Yet, the proportion of respondents with early adversity was quite high. Likewise, there is increasing support that retrospective data (in particular those collected via "lifegrid," as is the case in SHARE) provide reliable and valid information (e.g., Belli et al., 2007; Berney and Blane, 1997; Havari and Mazzonna, 2015).
Fourth, the measurements of adversity during childhood and adulthood were both based on a simple binary indicator referring to occupational position. Clearly, this does not adequately cover other dimensions of socioeconomic disadvantage, including material circumstances (e.g., household income or housing conditions) and educational factors (e.g., number of books or educational attainment; Galobardes, Lynch, & Davey Smith, 2004; Galobardes et al., 2006). Yet, occupational position was the only measure that was available for both childhood and adulthood in SHARE. Also, while future studies may compare and test if our findings hold true for other indicators, we nevertheless maintain that the measure used is a valid indicator of socioeconomic disadvantage, as used in various previous studies (e.g., Wahrendorf et al., 2013).
Fifth, in our study, employment sequences were measured on a yearly basis, and spells were recorded in the interview only if they were longer than 6 months. We may, therefore, have bypassed short spells (e.g., spells of short-term unemployment) and underestimated the diversity of employment sequences. Similarly, although we distinguished seven different occupational situations in our study, future studies may go even further and include or combine additional information when defining occupational states. For example, it may be interesting to include information on voluntary work or to specify our measure of retirement in terms of types and levels of pension benefit. Similarly, it would be desirable to combine our data with information on pension benefits from administrative sources. Administrative data, however, are only available for the German subsample of SHARE (Börsch-Supan, Alt, & Bucher-Koenen, 2015), and again, we need to ask if the resulting, more detailed subgroups are large enough to allow for meaningful analyses.
Finally, our results rely on a sample of men and women born between 1908 and 1939. They grew up under specific circumstances (e.g., the 1930s depression) and also had their late life employment histories during a specific historic period (mostly between 1988 and 2008). Therefore, although this is unavoidable for methodological reasons, the relevance of our results for today's workforce may differ. In fact, given that the nature of work and employment has changed significantly over the past few decades, often combined with instability and discontinuity of employment histories (Gallie, 2013; Kalleberg, 2012), we may have underestimated the present amount of discontinuous histories. Similarly, the importance of socioeconomic circumstances may be different today, and thus the impact of early adversity may be different as well.
CONCLUSIONS
In conclusion, our study shows that employment histories in late life, in particular those marked by early retirement and discontinuity, are part of larger trajectories of disadvantage throughout the life course. One implication is that policies that aim to increase the number of workers at older ages need to consider that some measures are more appropriate for specific age groups (Leisering, 2004) and should also address different stages of the life course. More specifically, pension schemes or working conditions of older workers are only one of many interrelated aspects related to the labor market involvement of older workers. In fact, our findings suggest that circumstances during childhood and adulthood are also relevant. This could, for example, include policies that reduce childhood poverty or promote workforce participation at younger ages through active labor market policies. A second, rather conceptual implication is that our study may, in a broader frame, also be instrumental in elucidating health inequalities at older ages. Specifically, since an increasing number of studies show that work and employment conditions, and in particular discontinuous histories, predict poor health at older ages, our study adds to these by showing that histories themselves are related to social conditions earlier on (Breeze et al., 2001; Wahrendorf, 2015). In other words, our study indirectly supports existing research by observing that links between early adversity and health in older ages are partly due to labor market disadvantage (Blane et al., 2012; Wahrendorf & Blane, 2015).
SUPPLEMENTARY MATERIAL
Supplementary data is available at Work, Aging, and Retirement online.

Additional funding from the German Ministry of Education and Research, the U.S. National Institute on Aging (U01_AG09740-13S2, P01_AG005842, P01_AG08291, P30_AG12815, R21_AG025169, Y1-AG-4553-01, IAG_BSR06-11, OGHA_04-064) and from various national funding sources is gratefully acknowledged (see www.share-project.org). The authors would like to thank the editor and two anonymous reviewers for their helpful suggestions and comments on an earlier version of this article.

REFERENCES

Abbott, A. (1995). Sequence analysis: New methods for old ideas. Annual
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2006-03-23T00:00:00.000
|
11087067
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/ar1927",
"pdf_hash": "7ca324e0ff80ba601a8d186d5b033002f505f8c0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2685",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7ca324e0ff80ba601a8d186d5b033002f505f8c0",
"year": 2006
}
|
pes2o/s2orc
|
IFNGR1 single nucleotide polymorphisms in rheumatoid arthritis
On the basis of their biological function, potential genetic candidates for susceptibility to rheumatoid arthritis can be postulated. IFNGR1, encoding the ligand-binding chain of the receptor for interferon gamma, IFNγR1, is one such gene because interferon gamma is involved in the pathogenesis of the disease. In the coding sequence of IFNGR1, two nucleotide positions have been described to be polymorphic in the Japanese population. We therefore investigated the association of those two IFNGR1 single nucleotide polymorphisms with rheumatoid arthritis in a case-control study in a central European population. Surprisingly, however, neither position was polymorphic in the 364 individuals examined, indicating that IFNGR1 does not contribute to susceptibility to rheumatoid arthritis, at least in Caucasians.
Introduction
Many pathologic autoimmune responses are characterized by an imbalance in the T helper type (Th) 1/Th2 ratio in favor of the former [1]. As activated Th1 cells mediate their functions via their signature cytokine, interferon gamma (IFNγ), the interferon gamma receptor (IFNγR) plays an important role in the pathogenesis of these diseases by transmitting IFNγ signaling. The IFNγR consists of the ligand-binding chain IFNγR1 and the signal-transducing chain IFNγR2. Within the coding region of the IFNGR1 gene [GenBank accession number NM_000416], two single nucleotide polymorphisms (SNPs) (40 C/T and 1,400 T/C) that result in the amino acid substitutions valine to methionine at position 14 (V14M) and leucine to proline at position 467 (L467P), respectively, have been identified in the Japanese population [2,3].
Th1 cells have been implicated in many aspects of the pathogenesis of rheumatoid arthritis (RA) [1]. Evidence suggests that both genetic and environmental factors contribute to the development of rheumatoid inflammation [4][5][6]. Elucidating the genetic basis of RA, however, is still one of the major challenges in modern rheumatology. The identification of RA susceptibility genes has been difficult because RA is a complex autoimmune disease that, unlike classic Mendelian traits causally related to highly penetrant rare mutations of single genes, appears to be caused by small individual effects of many poorly penetrant common alleles.
The association of the two IFNGR1 SNPs 40 C/T and 1,400 T/C with susceptibility to immune disorders mediated by an imbalance in the Th1/Th2 ratio has recently been demonstrated in Japanese cohorts; for example, in allergy [2] and in systemic lupus erythematosus [3,7]. Because of the potential importance of IFNGR1 SNPs in immunity in health and disease in people of all ethnic origins, these observations prompted us to perform a case-control association study to investigate the role of both IFNGR1 SNPs in susceptibility to RA, a Th1-mediated autoimmune disease, in a Caucasian population.

IFNγ = interferon gamma; IFNγR = interferon gamma receptor; PCR = polymerase chain reaction; RA = rheumatoid arthritis; SNP = single nucleotide polymorphism; Th = T helper type.
Materials and methods
One hundred and one patients with an established diagnosis of RA, according to the 1987 revised criteria of the American College of Rheumatology for the classification of the disease, were enrolled in the study. The 101 patients represented an ethnically homogeneous cohort of Caucasian RA patients. The median (range) age of the patients at the time of the analysis was 63 years (17-81 years), and 76% were female. A cohort of 171 healthy individuals matched on the basis of age, sex, and origin was used as a healthy control group. All protocols and recruitment sites were approved by the local institutional review boards, and all subjects were enrolled with informed written consent.
Results and discussion
In marked contrast to previous findings, no polymorphic alleles (neither thymine at position 40 nor cytosine at position 1,400) were detected in any of the individuals tested. This was surprising because both positions were highly polymorphic in the original publications. In those publications, heterozygosity at position 40 (V14M) was detected in four individuals (4.4%) in a small cohort of 91 healthy controls and even in 15 out of 96 (15.6%) lupus patients [7], and heterozygosity at position 1,400 (L467P) was detected in four individuals (6.7%) in a cohort of 89 allergic patients, although it was absent in healthy controls [2].
To verify our results for position 1,400, therefore, we additionally analyzed genomic DNA of 82 well-characterized atopic patients with an established clinically relevant type I allergy directed, for example, to house dust mite, birch pollen, or bee venom. However, this population was not polymorphic at either of the two positions. Our data therefore strongly suggest that the IFNGR1 gene is not polymorphic at those two positions, at least among Caucasians, and therefore does not contribute to genetic susceptibility to RA. Some ethnic variations in the frequencies of SNPs linked to RA have already been reported [8]. Analysis of RA-associated SNPs in solute carrier family 22 members 4 and 5 (SLC22A4 and SLC22A5) [9] and in protein tyrosine phosphatase (PTPN22) [10] in different ethnic groups revealed that the disease-associated polymorphic alleles usually common in Caucasians (over 8% prevalence) are absent or only extremely rarely present in the Japanese population [8]. Our data are in line with these observations and together imply that association findings should be carefully analyzed in different ethnic contexts to allow meaningful conclusions regarding whether the gene of interest is of importance in the susceptibility to a particular autoimmune disease.
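To make the null finding concrete, one can ask how improbable a complete absence of minor alleles would be if Caucasian frequencies matched the Japanese ones. A back-of-the-envelope sketch under a Hardy-Weinberg assumption, re-using the counts quoted above (the calculation itself is ours, not the authors'):

```python
# Hedged back-of-the-envelope check (not from the paper): if the 40 C/T
# minor allele were as common in Caucasians as reported in Japanese
# controls, how likely is it that 364 individuals show no carrier at all?
# The counts below re-use figures quoted in the text; the calculation
# itself is our illustration.

def prob_zero_carriers(minor_allele_freq, n_individuals):
    """P(no minor allele among 2n chromosomes) under Hardy-Weinberg."""
    chromosomes = 2 * n_individuals
    return (1.0 - minor_allele_freq) ** chromosomes

# 4 heterozygotes among 91 Japanese controls -> allele frequency 4 / (2 * 91)
maf_40ct = 4 / (2 * 91)
p_null = prob_zero_carriers(maf_40ct, 364)
print(f"MAF = {maf_40ct:.4f}, P(zero carriers in 364 individuals) = {p_null:.2e}")
```

With these inputs the probability is well below one in a million, which is consistent with the authors' conclusion that the positions are essentially monomorphic in Caucasians rather than simply undersampled.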
|
v3-fos-license
|
2021-01-21T06:16:25.223Z
|
2021-01-19T00:00:00.000
|
231652058
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://jpet.aspetjournals.org/content/jpet/376/3/358.full.pdf",
"pdf_hash": "42d3f4e8f981ab52e6adcd02018249ce17e736da",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2688",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "0a873350a57c0b0bfd5937fbc50ec69b3b6e42fe",
"year": 2021
}
|
pes2o/s2orc
|
Improved Inhibitory and Absorption, Distribution, Metabolism, Excretion, and Toxicology (ADMET) Properties of Blebbistatin Derivatives Indicate That the Blebbistatin Scaffold Is Ideal for Drug Development Targeting Myosin-2
Blebbistatin, para-nitroblebbistatin (NBleb), and para-aminoblebbistatin (AmBleb) are highly useful tool compounds, as they selectively inhibit the ATPase activity of myosin-2 family proteins. Despite the medical importance of the myosin-2 family as drug targets, chemical optimization has not yet provided a promising lead for drug development because previous structure-activity relationship studies were limited to a single myosin-2 isoform. Here we evaluated the potential of the blebbistatin scaffold for drug development and found that D-ring substitutions can fine-tune isoform specificity, absorption-distribution-metabolism-excretion, and toxicological properties. We defined the inhibitory properties of NBleb and AmBleb on seven different myosin-2 isoforms, which revealed an unexpected potential for isoform-specific inhibition. We also found that NBleb is metabolized six times more slowly than blebbistatin and AmBleb in rats, whereas AmBleb is metabolized two times more slowly than blebbistatin and NBleb in humans, and that AmBleb accumulates in muscle tissues. Moreover, mutagenicity was also greatly reduced in the case of AmBleb. These results demonstrate that small substitutions have beneficial functional and pharmacological consequences, which highlight the potential of the blebbistatin scaffold for drug development targeting myosin-2 family proteins and delineate a route for defining the chemical properties of further derivatives to be developed.
As a myosin-2-specific inhibitor (Straight et al., 2003; Limouze et al., 2004), blebbistatin (Straight et al., 2003) can serve as a starting point for the development of a clinical drug candidate (Rauscher et al., 2018; Roman et al., 2018). It has already been an excellent tool compound since its discovery in unveiling the role of the myosin-2 family in various biologic processes. Although the chemical optimization of blebbistatin was apparently fruitful in creating more photostable, less fluorescent, noncytotoxic, and more water-soluble tool compounds for research (Rauscher et al., 2018; Roman et al., 2018) through numerous chemical optimization efforts (Lucas-Lopez et al., 2005; Lawson et al., 2011; Képiró et al., 2012, 2014; Várkuti et al., 2016; Verhasselt et al., 2017a,b,c; Roman et al., 2018), no lead compound suitable for drug development has yet been published.
However, structure-activity relationship studies of blebbistatin derivatives (Lucas-Lopez et al., 2005; Rauscher et al., 2018; Roman et al., 2018) suggest that the substantial chemical space available on the D-ring can be exploited to fine-tune the biologic and physicochemical properties of these compounds. The structure-activity relationship studies are in line with the crystal structure, in which the A-B-C tricyclic core of blebbistatin fits tightly into the binding pocket and the D-ring protrudes out of the binding pocket (Allingham et al., 2005), providing substantial space for chemical alterations on this part of the molecule.
After our publications on the original synthesis, physicochemical properties, and toxicity assessment of NBleb and AmBleb (Képiró et al., 2014; Várkuti et al., 2016), NBleb has recently also been used in a drug development study focusing on the applicability of myosin-2 inhibitors in substance use relapse (patent WO2019/241469A1). In this patent, NBleb was identified as a useful compound "for practice of an embodiment of the methods of the invention." Moreover, we have recently shown that AmBleb can be successfully used in ischemic stroke interventions due to its direct relaxing effect on precapillary smooth muscle cells, which otherwise remain permanently closed after stroke, thereby hindering the restart of healthy blood circulation at the capillary level even after recanalization of large vessels (Hall et al., 2014; Hill et al., 2015). Despite these promising effects in living systems, very little information is available about the pharmacological properties of blebbistatin, NBleb, and AmBleb.
The above observations motivated us to perform in-depth characterization of these tool compounds, blebbistatin, NBleb, and AmBleb, including their pharmacokinetic and pharmacodynamic properties, in vivo distribution, genotoxicity, and myosin-2 isoform specificity, which are essential to judge the feasibility of developing a drug candidate and to delineate routes for development.
Materials
High-pressure liquid chromatography (HPLC)-grade acetonitrile, chloroform, and water were purchased from VWR (PA). Other chemicals were purchased from Sigma-Aldrich (Germany) if not otherwise stated. Blebbistatin was purchased from Selleckchem (TX), and isoflurane was purchased from Rotacher-Medical GmbH (Germany). Williams Medium E for freshly isolated hepatocytes was purchased from Thermo Fisher (MA). The Ames Microplate Format Mutagenicity Assay kit was purchased from Xenometrix.
ATPase Activity Measurements. Steady-state ATPase measurements were carried out in 50 µl volume in a flat-bottom 384-well plate (Nunc-Thermo Fisher) using an NADH-PK/LDH coupled assay described earlier (Gyimesi et al., 2008) at 25°C in the presence of 0.5 mM ATP and F-actin (25 µM for W501+ and NM2s; 11.5 µM for CM; 20 µM for SkS1; and 33 µM for SmS1) in a low ionic strength buffer (10 mM MOPS pH 7.0, 4 mM MgCl2, 2 mM β-mercaptoethanol) for 15 minutes. Blebbistatin derivatives were added to the reaction in 0.5 µl DMSO (1% of total volume), and three parallels were measured for each point. Controls containing DMSO with myosin but no inhibitor and actin controls containing actin and DMSO but no myosin were measured in all measurement sets. Applied myosin concentrations (100 nM SkS1, 200 nM W501+, 500 nM CM and SmS1, 1 µM NM2s) were used to fit the quadratic function. ATPase activity was calculated from the slope of the linear fit to the time-dependent absorbance data collected at 340 nm.
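For readers unfamiliar with the NADH-coupled assay, the conversion from an A340 slope to an ATP turnover rate is a one-line calculation. A hedged sketch, with path length and example numbers assumed for illustration (only the 6220 M^-1 cm^-1 NADH extinction coefficient is a standard constant; none of the numbers below are from the paper):

```python
# Hedged sketch of the rate calculation behind an NADH-coupled ATPase
# assay: the slope of A340 vs. time is converted to ATP turnover via the
# NADH extinction coefficient. Path length and example numbers are
# assumptions for illustration, not values from the paper.

NADH_EXT_COEFF = 6220.0   # M^-1 cm^-1 at 340 nm (standard value)

def atpase_rate(slope_a340_per_s, path_length_cm, myosin_conc_m):
    """ATP turnover (s^-1 per myosin head) from the A340 decay slope.
    One NADH is oxidized per ATP regenerated, so d[ATP]/dt = -d[NADH]/dt."""
    nadh_rate_m_per_s = abs(slope_a340_per_s) / (NADH_EXT_COEFF * path_length_cm)
    return nadh_rate_m_per_s / myosin_conc_m

# Example: slope of -0.0012 A340/s, 0.5 cm effective path, 100 nM myosin
k_cat = atpase_rate(-0.0012, 0.5, 100e-9)
```

In plate-reader formats the effective path length depends on well volume, which is why it is left as an explicit parameter here.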
In Vitro Motility. Fluorescent actin filaments were made by combining 1 µM tetramethyl-rhodamine with 1 µM actin in in vitro motility assay buffer (25 mM imidazole, pH 7.4, 25 mM KCl, 4 mM MgCl2, 1 mM EGTA, 1 mM DTT) containing 1 mM ATP. Movement of polymerized F-actin filaments over full-length myosin-coated surfaces was achieved using a modification of the method of Uyeda et al. (1990). Movies were collected on an ImageXpress XL high content imaging system at 25°C with a frame rate of 3 Hz and a 40× air objective. Compound dose responses were collected at a final concentration of 2% DMSO, with a 2× serial dilution from 40 µM. Custom analysis software was created by VigeneTech, in which images were thresholded based on pixel intensity, filaments were identified, trajectories were determined for each filament, and filament velocities for each movie were calculated. Only filaments longer than 2 µm with velocities above 25 nm/s were analyzed. Three to four movies, each with 500-2000 filament trajectories, from different surfaces were analyzed, and the median velocities of these were averaged for a single n. Each data point shown is a combination of three to four individual experiments (n = 3 to 4, each containing 4000-20,000 filament tracks per data point).
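The per-movie filtering and median statistic described above can be sketched as follows; the track data are invented, and the real analysis was done in VigeneTech's custom software:

```python
# Hedged sketch of the trajectory filtering described above: keep only
# filaments longer than 2 um that move faster than 25 nm/s, then take the
# median velocity per movie. The track data below are invented.

from statistics import median

def movie_median_velocity(tracks, min_length_um=2.0, min_velocity_nm_s=25.0):
    """tracks: list of (filament_length_um, velocity_nm_s) tuples.
    Returns the median velocity of the surviving tracks, or None."""
    kept = [v for length, v in tracks
            if length > min_length_um and v > min_velocity_nm_s]
    return median(kept) if kept else None

tracks = [(3.1, 410.0),
          (1.2, 500.0),   # too short -> dropped
          (2.8, 10.0),    # too slow -> dropped
          (4.0, 380.0),
          (2.5, 395.0)]
print(movie_median_velocity(tracks))  # median of 410, 380, 395
```

Using the median rather than the mean is the natural choice here, since filament-velocity distributions from thresholded imaging tend to carry outliers from tracking errors.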
Molecular Dynamics Simulations
Molecular Dynamics Simulations for Blebbistatin Derivatives. Molecular dynamics simulations and evaluations were carried out using AMBER16. For the derivatives, partial charges were calculated with the AM1-BCC charge model using the antechamber program. Force field parameters for the GAFF force field were appended using parmchk2 based on atomic type similarity. Molecules were explicitly solvated with three-site model (TIP3P) waters and energy-minimized in 2000 steps of steepest descent followed by 4000 steps of conjugate gradient minimization. After minimization, a 10-nanosecond NPT simulation was carried out for each molecule at 1 bar pressure and 300 K temperature, using Langevin dynamics for temperature coupling with a 5 ps^-1 collision frequency. Samples were collected for the charge distribution calculation every 10 picoseconds.
Calculation of Time-Averaged Charge Distributions. The structures of the trajectory were RMS-fitted to three atoms of the tricyclic ring: the N atoms in the tricyclic ring and the C6 atom of the A-ring. Since these atoms are part of a conjugated system, they remain fixed relative to one another during the simulations. We defined a 1-Å resolution rectangular grid with the plane of the tricyclic rings as the x, z plane of the coordinate system and the N linking the C and D rings as its origin. We then derived the time-averaged charge distribution by integrating the average density of each atom multiplied by its respective partial charge.
Simulations of the Myosin-Blebbistatin Complexes. The ff14SB force field was used in all subsequent simulations to model protein interactions. The initial structure of the Dictyostelium discoideum myosin-2 head-blebbistatin complex was based on crystal structure 3mjx.pdb. ADP.VO4 was replaced with ADP.PO4. The water molecules resolved by crystallography were retained, and the model was expanded in an 8-Å clearance dodecahedron box with three-site model (TIP3P) water molecules for explicit solvation. The complex was minimized and heated in three 100-K, 20-picosecond steps to 300 K under NVT conditions, then subsequently equilibrated to 1 bar in 20 picoseconds under NPT conditions. The system was relaxed for 500 picoseconds under NVT conditions at 300 K, then further equilibrated for 60 nanoseconds under NPT conditions, by which time the structure's root-mean-square deviation (RMSD) compared with the initial structure had converged. Blebbistatin was replaced with the nitro and amino derivatives at this point to obtain the initial structures for their relaxation. All three variants were re-equilibrated using the same equilibration protocol as described above up to the 500-picosecond, 300-K NVT step. The complexes were then further equilibrated for 6 nanoseconds under NPT conditions to obtain the final structures. The trajectories were sampled every 10 picoseconds during the last 1 nanosecond of the simulation. The time-averaged charge distribution within the protein was calculated using the coordinates of blebbistatin atoms in these 100 frames with the method described for the water solvent-only simulations. Side chain binding enthalpy contributions were calculated with the molecular mechanics energies combined with the generalized Born and surface area continuum solvation (MM-GBSA) method using the mmpbsa programs of AMBER.
In Vitro Metabolic Stability of Blebbistatin, NBleb, and AmBleb

Pharmacokinetic Studies. After 2 days of accommodation, the rodents were randomly divided into three groups. Compounds were administered intraperitoneally to each group 2.5, 5, 10, 15, 25, and 45 minutes before tissue sampling. Control animals were treated with 0.9% NaCl solution. Blood samples were collected from the heart under isoflurane inhalation anesthesia into test tubes containing 200 µl (500 IU/ml) heparin. Tissue samples from brain, heart, liver, kidney, spleen, muscle, and urine were collected from different animals after a 2-minute perfusion with Krebs-Henseleit solution via the aorta. Tissue samples were stored in 1 ml chloroform at −80°C.
Sample Preparation for HPLC-Mass Spectrometry Analysis. Tissue samples mixed with chloroform were minced, vortexed, and sonicated for 30 minutes. Samples were centrifuged (60,000g, 20 minutes, 4°C). The organic phase was collected and dried under a fume hood. An appropriate amount of acetonitrile-water mixture (50:50, v/v%) was added to the dried tissue samples; samples were sonicated for 30 minutes and ultracentrifuged (84,000g, 45 minutes, 10°C). After ultracentrifugation, the supernatant was collected into pre-weighed Eppendorf tubes, and the net weight of the samples prepared for HPLC-mass spectrometry (MS) was determined. Ten microliters of supernatant was injected for HPLC-MS analysis.
HPLC-MS Conditions. Chromatographic separation of the compounds and metabolites was carried out using an HP Agilent 1100 series HPLC system consisting of a G1312A binary pump, G1365B multiwavelength detector, G1322A degasser, G1313A autosampler, and a Waters SQ mass spectral detector (Waters Corporation, Milford, MA). Chromatographic separation was achieved on an analytical C18 Merck Purospher STAR RP-18 endcapped (250 mm × 4.6 mm, 5 µm) column maintained at room temperature. Isocratic separation was carried out with acetonitrile:water (50:50 v/v%) as the mobile phase at a flow rate of 0.5 ml/min. The injection volume was 10 µl. Mass detection of samples was conducted using an electrospray source in positive ion mode. Blebbistatin, NBleb, and AmBleb and their metabolites were quantified based on the peak areas of their respective m/z values in extracted ion chromatograms from the single quadrupole scan measurements. For blebbistatin, NBleb, and AmBleb, reference materials were readily available, and their limit of quantification was at least 300 pg [30 ng/ml sample concentration (equal to 0.1 µM) with the 10 µl injection volume applied]. The MassLynx 4.1 software was used for instrument control, data acquisition, and evaluation.
High-Resolution Mass Spectrometry. High-resolution mass spectrometric measurements were run on a Sciex TripleTOF 5600+ hybrid quadrupole time-of-flight mass spectrometer (Sciex, MA) equipped with a TurboV ion source. Samples were measured under electrospray conditions in positive ion detection mode. The resolution of the instrument was 35,000. Source conditions were curtain gas: 45 arbitrary units (AU), spray voltage: 5500 V, nebulizer gas: 40 AU, drying gas: 45 AU, source temperature: 450°C, collision energy in MS/MS experiments: 35 eV, scan time: 1 second. A Perkin Elmer Series 200 micro HPLC system with binary pumps and an autosampler was used for online HPLC-HRMS measurements. A Merck Purospher Star C18 column (55 × 2 mm, 3 µm) was used for the separation. The mobile phases were water containing 0.1 v/v% formic acid (eluent A) and acetonitrile containing 0.1 v/v% formic acid (eluent B). The flow rate was 0.5 ml/min; a linear gradient was used starting with 20% B and increasing to 90% B by 8 minutes. This was followed by a 1-minute washing period with 90% B and a return to the initial conditions for 5 minutes to equilibrate the system. The HPLC-MS system was controlled by Analyst TF (Sciex, MA) software. Data were processed by PeakView and MasterView software (Sciex).
Isolation of Primary Hepatocytes. Primary hepatocytes were isolated from male Wistar rats (Toxi-Coop Toxicological Research Center, Budapest, Hungary) and human tissue donors (Department of Transplantation and Surgery, Semmelweis University, Budapest, Hungary) using the collagenase perfusion method of Bayliss and Skett (1996). Briefly, the liver tissues were perfused through the portal vein with Ca2+-free medium (Earle's balanced salt solution) containing EGTA (0.5 mM), then with the same medium without EGTA, and finally with perfusate containing collagenase (Type IV, 0.25 mg/ml) and Ca2+ at physiologic concentration (2 mM). The perfusion was carried out at pH 7.4 and at 37°C. Softened liver tissue was gently minced and suspended in ice-cold hepatocyte dispersal buffer. Hepatocytes were filtered, isolated by low-speed centrifugation (50g), and washed three times. The yield and percent cell viability according to the trypan blue exclusion test were determined (Berry et al., 1997). For pharmacokinetic studies, the hepatocytes were suspended at 2 × 10^6 cells/ml in culture medium (Ferrini et al., 1998). Incubations with rat hepatocytes isolated from four animals were performed individually, whereas human hepatocytes pooled from three tissue donors were applied.
In Vitro Pharmacokinetics of Blebbistatin, NBleb, and AmBleb. Time courses of the unchanged pharmacons (blebbistatin, NBleb, and AmBleb) in primary hepatocytes were obtained. Each compound was incubated with cell suspension (2 × 10^6 cells/ml) at 37°C in a humid atmosphere containing 5% CO2. The parent compounds dissolved in DMSO were added directly to the cell culture medium at a final concentration of 30 µM. The final concentration of DMSO was 0.1%. At various time points (0, 5, 10, 20, 30, 45, 60, 90, 120, 180, and 240 minutes), the incubation mixtures were sampled (aliquots: 0.25 ml), and the reactions were terminated by the addition of 0.25 ml ice-cold dichloromethane containing the internal standard, carbamazepine (0.13 mM). Blebbistatin and its derivatives were also incubated in cell-free medium and sampled at 0 and 240 minutes. The liquid-liquid extraction step was repeated two times, and the organic phases were collected and evaporated. The extract was dissolved in 100 µl of acetonitrile-water (50:50, v/v) and analyzed by liquid chromatography-tandem MS for quantitation of the parent compound.
Estimation of Pharmacokinetic Parameters. The intrinsic clearance (Cl_int) for hepatocytes [ml/(min × 2 × 10^6 cells)] was calculated from the decrease in the concentration of the parent compound as follows (Obach, 1999):

Cl_int = D / AUC = (b × D) / B

where the dose (D) was the target 30 nmol (in 1 ml), the concentration at 0 minutes (B) was 30 µM (30 nmol/ml), and b was determined by fitting exponentials, C(t) = B × e^(−bt), to the measured drug candidate disappearance. As in our case D was numerically equal to B, Cl_int was equal to b per hepatocyte concentration (2 × 10^6 cells/ml):

Cl_int = b / (2 × 10^6 cells/ml)

For scaling up the Cl_int value to obtain Cl_int per whole liver (g)/bw (kg), the cell concentration in the liver (cell number in rat liver: 1.17 × 10^8 cells/g liver; in human liver: 1.39 × 10^8 cells/g liver) and the ratio of the average liver weight and average body weight parameters (for rat: 40 g/kg; for human: 23.7 g/kg) were used. The value for predicted hepatic clearance (Cl_H) was calculated according to the well-stirred model as follows (Houston, 1994; Sohlenius-Sternbeck, 2006):

Cl_H = (HPF × fu × Cl_int) / (HPF + fu × Cl_int)

where the hepatic plasma flow rate (HPF) is

HPF = Q_H × (plasma/blood ratio)

To calculate Cl_H, the hepatic flow rate (Q_H; for rat: 55.2 ml/min per kilogram; for human: 20.7 ml/min per kilogram), plasma/blood ratio (for rat: 0.63; for human: 0.57), and the unbound fraction of the compound (fu) were used (Davies and Morris, 1993; Szakács et al., 2001). For hepatocyte binding, the unbound fraction was calculated as:

fu = (compound/internal standard ratio in cell-free medium) / (compound/internal standard ratio in hepatocyte suspension)

The fu values for blebbistatin, AmBleb, and NBleb were 0.805, 0.916, and 1.0, respectively. The bioavailability (%) was determined by using the equation

F = (1 − E) × 100

where E is the hepatic extraction ratio:

E = Cl_H / HPF
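The well-stirred hepatic clearance calculation outlined above can be reproduced in a few lines. In this sketch the rat physiological constants are taken from the text, while the whole-liver-scaled Cl_int value and fu are assumed placeholders, not results from the paper:

```python
# Hedged sketch of the well-stirred hepatic clearance calculation
# (Houston, 1994) referenced above. The equations are the standard ones
# from the cited references; q_h and plasma_blood_ratio below use the rat
# constants quoted in the text, while cl_int_per_kg and fu are assumed.

def hepatic_clearance(cl_int_per_kg, q_h, plasma_blood_ratio, fu):
    """Well-stirred model: Cl_H = HPF * fu * Cl_int / (HPF + fu * Cl_int).
    Returns (Cl_H in ml/min/kg, hepatic first-pass bioavailability in %)."""
    hpf = q_h * plasma_blood_ratio                      # hepatic plasma flow
    cl_h = (hpf * fu * cl_int_per_kg) / (hpf + fu * cl_int_per_kg)
    extraction = cl_h / hpf                             # extraction ratio E
    bioavailability = (1.0 - extraction) * 100.0        # F = (1 - E) * 100
    return cl_h, bioavailability

# Rat constants from the text; Cl_int (ml/min/kg, whole-liver scaled) and
# fu are illustrative placeholders.
cl_h, f_pct = hepatic_clearance(cl_int_per_kg=50.0,
                                q_h=55.2,
                                plasma_blood_ratio=0.63,
                                fu=0.8)
```

Note the saturating form of the model: as fu × Cl_int grows large relative to HPF, Cl_H approaches the hepatic plasma flow and bioavailability approaches zero.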
In Vivo Pharmacokinetics of Blebbistatin, NBleb, and AmBleb
Animals. Male Wistar rats (220-250 g) were obtained from Toxi-Coop (Hungary). Animals were maintained under standard conditions (air-conditioned animal house at 25-28°C, relative humidity of 50%, and a 12:12 hour light/dark cycle). The animals were provided with water and diet pellets ad libitum. All experiments were conducted in compliance with the Guide for the Care and Use of Laboratory Animals. Compound Stability Tests. Thirty micromolar compounds (diluted from 1 mM DMSO stock) in 0.5-ml blood samples were incubated for 0, 20, 40, 60, 120, and 240 minutes at room temperature in an Eppendorf tube supplemented with 30 IU/ml heparin. At each time point, 50 µl chloroform was added to 50-µl sample aliquots. Samples were vortexed, sonicated, and centrifuged (60,000g, 20 minutes), and the organic layer containing the compound was collected. The organic layer of each sample was dried under a fume hood, and the dried samples were then redissolved in acetonitrile-water mixture (50:50, v/v). Samples were then sonicated and ultracentrifuged (84,000g, 45 minutes, 10°C), and the supernatant was collected. The net weight of the supernatant of each sample was determined. Stability of the compounds was determined based on their peak areas in HPLC chromatograms using the HPLC-MS protocol described under the pharmacokinetic studies in the following text.
In Vivo Pharmacokinetics: Time-Dependent Distribution of Blebbistatin, NBleb, and AmBleb in Rats. All applied animals were littermates and weighed between 220 and 230 g upon arrival at the test facility. After 1 week of acclimatization, rodents were randomly divided into three groups of seven, intraperitoneally receiving 1 mg of blebbistatin, NBleb, or AmBleb dissolved in 100 µl DMSO (34 mM for blebbistatin, 30 mM for NBleb, and 33 mM for AmBleb); this DMSO volume is in the safe range for a single intraperitoneal dose (Bartsch et al., 1976; Gad et al., 2006). Due to homogeneous weights upon arrival and identical housing conditions, all animals were 280 ± 5 g on the day of the experiments. Note that 3.6 mg/kg is below the adverse cardiovascular and respiratory effect level determined for AmBleb in a separate study (Gyimesi et al., 2020). Control animals were treated with DMSO. Blood samples were collected from the heart under isoflurane inhalation anesthesia into test tubes containing 200 µl (500 IU/ml) heparin. Tissue samples from brain, heart, liver, kidney, spleen, muscle, and urine were collected from different animals after a 2-minute perfusion with Krebs-Henseleit solution via the aorta. Tissue samples were stored in 1 ml chloroform at −80°C. We note that adverse effects were not observed at the site of DMSO injection in any organ, and no signs of inhibitor precipitation were observed in the intraperitoneum after necropsy.
Sample Processing for HPLC-MS. Tissue samples were minced, vortexed, and sonicated in chloroform for 30 minutes. Samples were centrifuged (3000g, 20 minutes, 4°C). The organic phase was collected and dried under a laminar box. Two hundred microliters acetonitrile-water mixture (50:50, v/v) was added to the dried material and ultracentrifuged (45,000g, 45 minutes, 10°C). Ten microliters supernatant was injected for HPLC-MS analysis.
Mutagenicity Test of Blebbistatin, NBleb, and AmBleb
Ames Reverse Mutagenicity Test. The Ames microplate format (Xenometrix) reverse mutagenicity assay was performed according to the manufacturer's guide on the TA98 and TA100 bacterial strains in the absence and presence of phenobarbital/β-naphthoflavone-induced rat liver S9 fraction. We used phenobarbital/β-naphthoflavone-induced liver S9 fractions due to the higher structural similarity of these two compounds to blebbistatin compared with Aroclor 1254, another possibly applicable substance. Solubility-dictated maximal applied concentrations of the inhibitors were 100 µM for blebbistatin and NBleb (29 and 34 µg/ml, respectively) and 400 µM (123 µg/ml) for AmBleb in 4% DMSO solutions. Note that slight precipitation occurred in the 100 µM NBleb-containing wells in the case of the TA100 strain and in the 400 µM AmBleb-containing wells for both the TA98 and TA100 strains; therefore, those data points were not included in the analysis.
Results
ATPase Inhibition of Different Myosin-2 Isoforms by Blebbistatin Derivatives. Actin-activated ATPase activities of seven different myosin-2 isoforms in the presence of different concentrations of blebbistatin, NBleb, and AmBleb were measured (Fig. 1; Table 1), and their solubility under assay conditions was confirmed (Supplemental Methods; Supplemental Table 1). As expected based on earlier results (Képiró et al., 2014; Várkuti et al., 2016), both derivatives inhibited myosin-2 isoforms similarly to blebbistatin (Straight et al., 2003; Limouze et al., 2004; Wang et al., 2008b; Heissler and Manstein, 2011; Zhang et al., 2017). However, on skeletal muscle myosin-2, NBleb showed a reduced IC50 value compared with those of blebbistatin and AmBleb. These results indicate that the electron-withdrawing group in the para position of the D-ring positively influences the inhibitory properties of the molecule. The electron-donating amino group in AmBleb resulted in significantly lower maximal ATPase inhibition with similar IC50 values for the NM2A and NM2B isoforms (Fig. 1; Table 1), further confirming that the electron distribution in blebbistatin's D-ring is an important determinant of the inhibitory mechanism. We also analyzed the inhibitory efficiency, defined as the maximal extent of inhibition divided by the inhibitory constant (k_inh = I_max/IC50) (Table 1), which corresponds to the initial slope of the fitted hyperbola. The ratio of the inhibitory efficiencies on skeletal and cardiac muscle myosin-2s showed drastic differences among the three inhibitors. The ratio is 56 for NBleb, whereas it is only 9.7 and 5.8 for blebbistatin and AmBleb, respectively. This finding suggests that skeletal muscle myosin specificity may be achieved by substituting the D-ring of blebbistatin with electron-withdrawing groups in the para position. Moreover, these results suggest that D-ring substitutions in the para position provide opportunities to fine tune inhibition and enhance isoform specificity among myosin-2s.
Fig. 1. Inhibition of the actin-activated ATPase activity of seven myosin-2 isoforms. We measured the inhibitory effect of blebbistatin (A), NBleb (B), and AmBleb (C) on the F-actin-activated ATPase activities of seven myosin-2 isoforms, as indicated. Hyperbolic functions were fitted to the relative ATPase activity data points to determine IC50 and maximal inhibition (I_max) values for each myosin-2 isoform (Table 1). IC50 values for skeletal muscle myosin-2 were lower than the applied protein concentration; thus, a quadratic function was used to fit the ATPase data to determine the IC50 and I_max parameters. Data points represent averages ± S.D. (n = 3-12) on (A-C).
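The inhibitory-efficiency comparison can be expressed in a few lines of Python. The IC50 and I_max inputs below are hypothetical placeholders rather than Table 1 data, so only the formula, not the numerical output, reflects the paper.

```python
# Inhibitory efficiency k_inh = I_max / IC50 (the initial slope of the fitted
# hyperbola) and a skeletal/cardiac selectivity ratio built from it.
# The numeric inputs below are hypothetical placeholders, NOT Table 1 values.

def k_inh(i_max, ic50_um):
    """Maximal extent of inhibition divided by the IC50 (per µM)."""
    return i_max / ic50_um

skeletal = k_inh(i_max=0.95, ic50_um=0.5)   # placeholder skeletal-muscle values
cardiac = k_inh(i_max=0.90, ic50_um=2.5)    # placeholder cardiac values
selectivity = skeletal / cardiac            # >1 indicates skeletal selectivity
```

Computed this way from the Table 1 parameters, the skeletal/cardiac selectivity ratio is the quantity reported as 56 for NBleb versus 9.7 and 5.8 for blebbistatin and AmBleb.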
AmBleb Inhibits Force Generation. We characterized how AmBleb inhibits force generation in in vivo muscle preparations using motoneuronal stimulation of muscle fibers. Briefly, neuromuscular preparations from Drosophila larval bodywall muscle were tested for force-generating ability during incubation with AmBleb. As a control, force-generating ability was tested prior to incubation and after washout (Fig. 2A), and compound excitatory junction potentials were recorded before (pre-), during, and after (post-) application (Fig. 2B). AmBleb inhibited force generation, whereas it did not cause a statistically significant change in the amplitude of the synaptic voltages recorded at the neuromuscular junction (pre- 29.7 ± 1.8 mV and post- 30.1 ± 0.8 mV; ANOVA P = 0.55). Blebbistatin and NBleb effects were not characterized due to possible solubility-related precipitation of these inhibitors onto the surface of the fibers, hindering appropriate measurements. This effect was not observed with AmBleb due to the sixfold higher solubility of AmBleb in HL-3.1 saline buffer compared with blebbistatin and NBleb (Supplemental Table 1). The concentration dependence of the relative isometric force gives an IC50 of 9 ± 2 µM for AmBleb (Fig. 2C).
Blebbistatin Derivatives Inhibit In Vitro Motility. We next characterized the motion-generating capability of myosin in an in vitro motility assay by measuring the movement of individual rhodamine-phalloidin-labeled actin filaments over a myosin-coated surface (Uyeda et al., 1990). Actin filament velocity movies were automatically and objectively analyzed by custom commercial software created by VigeneTech to measure two velocity parameters, MVEL and TOP5%: MVEL is the mean velocity of all moving filaments, and TOP5% is the mean of the top 5% of the velocity distribution across different actin filament lengths. The mean values from many actin velocity measurements using bovine β-cardiac full-length myosin at 25°C were as follows: MVEL: 350 ± 50 nm/s; TOP5%: 800 ± 70 nm/s (n = 9 experiments, three preparations). We next measured the motility of myosin in the presence of different concentrations of blebbistatin, NBleb, and AmBleb up to the solubility limit of the inhibitors (Fig. 2D; Supplemental Table 1). Both derivatives inhibited the velocity of actin gliding similarly to blebbistatin in a dose-dependent fashion. The IC50 of inhibition for blebbistatin, NBleb, and AmBleb was measured to be 0.7 ± 0.1, 4 ± 1, and 3 ± 1 µM, respectively.
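The two velocity statistics described above can be sketched directly from a list of per-filament velocities. The velocities below are synthetic; the actual analysis used the VigeneTech software mentioned in the text.

```python
# MVEL (mean of all moving filaments) and TOP5% (mean of the top 5% of the
# velocity distribution), computed from per-filament velocities in nm/s.
# The velocity list below is synthetic, for illustration only.

def mvel(velocities):
    """Mean velocity of all moving filaments (velocity > 0)."""
    moving = [v for v in velocities if v > 0]
    return sum(moving) / len(moving)

def top5(velocities):
    """Mean of the top 5% of the velocity distribution."""
    ranked = sorted(velocities, reverse=True)
    n = max(1, round(0.05 * len(ranked)))
    return sum(ranked[:n]) / n

velocities = [100 + 5 * i for i in range(100)]   # synthetic 100-595 nm/s
```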
Charge Distribution of Blebbistatin Derivatives in the Blebbistatin Binding Pocket Influences the Conformation of Key Functional Residues in Myosin-2. Previously it was shown that the blebbistatin binding pocket is large enough to easily accommodate D-ring substitutions (Képiró et al., 2014; Verhasselt et al., 2017a). Here we calculated how D-ring substitutions with different electron profiles influence the interaction of the inhibitor with the protein. We calculated the time-averaged charge distribution around the three inhibitors in an explicit water box and in simulated myosin-bound structures. In the water box, AmBleb showed a drastically different charge density profile around the para position of the D-ring compared with the other two inhibitors, whereas the profile was very similar for blebbistatin and NBleb (Fig. 3B). However, the time-averaged charge density of NBleb within the blebbistatin binding pocket of Dictyostelium myosin-2 became more similar to that of AmBleb (Fig. 3B). More importantly, the different charge densities around the D-ring of the inhibitors resulted in significant differences in the conformation of key functional residues of myosin-2 (Fig. 3C). The major difference was observed in the inhibitor binding energy contribution of Lys587, which was practically negligible in blebbistatin and AmBleb (ΔG_bleb = 0.006 ± 0.009 kcal/mol and ΔG_AmBleb = 0.03 ± 0.03 kcal/mol, calculated from two independent runs for each inhibitor), whereas it was significant in NBleb (ΔG_NBleb = −0.5 ± 0.3 kcal/mol). This residue plays an important role in phosphate (Pi) release during the chemomechanical cycle of the myosin ATPase by blocking the Pi release route during the power-stroke of myosin (Gyimesi et al., 2008; Cecchini et al., 2010).
The other significant difference appeared in the orientation of Phe466, which has been described as a residue interacting with the D-ring of blebbistatin (Allingham et al., 2005). In the NBleb structure, the phenyl ring of Phe466 makes more extensive contact with the inhibitor, whereas it folds outwards in the blebbistatin and AmBleb structures (Fig. 3C). These differences may be of importance because Phe466 is in a key position between the switch-2 loop and the relay helix of myosin, structural elements that are responsible for the initiation of the power-stroke during myosin's ATPase cycle (Málnási-Csizmadia and Kovács, 2010).
Metabolism of Blebbistatin, NBleb, and AmBleb in Primary Hepatocytes. We characterized the pharmacokinetic properties in primary rat and human hepatocytes (Fig. 4, A-C) to determine the metabolic stability and to identify the major metabolites (Fig. 4, D-I) for all three tool compounds.
In the case of rat hepatocytes, the pharmacokinetics of blebbistatin could be fitted with a single exponential function with a half-life (t1/2) of t1/2,bleb = 20.2 minutes (Fig. 4A; Table 2), and the relative concentrations of its metabolites are shown in Fig. 4D. Importantly, we detected the metabolite m/z = 309 [M + H]+ at two different retention times (5.35 and 6.35 minutes), which suggests that two different species with the same molecular mass were formed in these experiments. All three detectable metabolites of blebbistatin were formed quickly (t1/2 between 3 and 10 minutes) and decomposed at rates similar to that of the original blebbistatin. We determined only the relative concentrations of these metabolites due to possible differences in molar extinction coefficients between blebbistatin and the metabolites.
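The single-exponential fits behind these half-lives can be sketched with a log-linear least-squares estimate. The data below are synthetic, generated from the reported blebbistatin half-life of 20.2 minutes, so the fit simply recovers that value; it is an illustration of the fitting step, not the authors' code.

```python
import math

# Estimate the elimination rate b and half-life from sampled parent-compound
# concentrations, assuming single-exponential decay C(t) = B * exp(-b * t).

def fit_half_life(times, concs):
    """Least-squares slope of ln(C) versus t; returns (b, t_half) in 1/min, min."""
    logs = [math.log(c) for c in concs]
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(logs) / n
    slope = sum((t - t_bar) * (y - y_bar) for t, y in zip(times, logs)) \
            / sum((t - t_bar) ** 2 for t in times)
    b = -slope
    return b, math.log(2) / b

# Synthetic data with t1/2 = 20.2 min (the measured blebbistatin value)
times = [0, 5, 10, 20, 30, 45, 60]
concs = [30 * math.exp(-math.log(2) / 20.2 * t) for t in times]
```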
Compared with blebbistatin, NBleb showed markedly slower elimination kinetics in rat hepatocytes, with t1/2,NBleb = 114 minutes (Fig. 4B; Table 2), which clearly suggests that the D-ring substitution affects not only the inhibitory functions but also the pharmacokinetic properties of blebbistatin. Similarly to blebbistatin, metabolites could be detected with m/z = 340 [M + H]+ and m/z = 350 [M + H]+ (Fig. 4E). In agreement with the slower elimination kinetics of NBleb compared with blebbistatin, the formation half-lives of these metabolites were also significantly longer. Although only relative concentrations were determined for these metabolites, we suspect that the metabolite with m/z = 350 [M + H]+ may be the major metabolite in rat hepatocytes, as the rate of formation of this molecule was similar to the elimination kinetics of NBleb (t1/2,NBleb = 114 minutes, t1/2,mz350 = 70 minutes). In contrast to those of blebbistatin, neither metabolite of NBleb decomposed during the 240-minute experiment, which suggests that these metabolites are not substrates for the enzymes transforming the blebbistatin metabolites. Interestingly, AmBleb decomposition by rat hepatocytes showed a markedly higher rate than that of NBleb but very similar kinetics to that of blebbistatin (t1/2,AmBleb = 19.0 minutes) (Fig. 4C; Table 2), confirming that D-ring substitution, at least in the para position, has functional and pharmacological consequences by modulating the pharmacokinetic properties of blebbistatin derivatives. Three metabolites, with m/z = 310 [M + H]+, m/z = 350 [M + H]+, and m/z = 352 [M + H]+, could be detected during AmBleb biotransformation (Fig. 4F). Moreover, similarly to NBleb, the metabolite with m/z = 350 [M + H]+ could be the major metabolite of AmBleb because the half-life of its formation (t1/2,mz350 = 24 minutes) was almost identical to that of AmBleb elimination.
To investigate differences between the metabolic properties of the three inhibitors in rat and human hepatocytes, we performed the same experiments with primary human hepatocyte samples (Fig. 4, A-C). Similar decomposition kinetics were observed for blebbistatin and AmBleb as with the rat hepatocytes, although AmBleb elimination was slightly slower in the human hepatocytes (Table 2). However, the more than sixfold slower elimination of NBleb seen in rat was not observed with human hepatocytes; rather, a rate very similar to that of blebbistatin was observed. Moreover, characteristic differences between the two species could be observed in the metabolite profiles of the three inhibitors (Fig. 4, G-I).
Although the metabolite of blebbistatin with m/z = 295 [M + H]+ could be detected, its rate of formation was 10-fold slower (t1/2,mz295,rat = 3 minutes; t1/2,mz295,human = 28 minutes), and it did not disappear within 300 minutes. Moreover, only one species of the metabolite with m/z = 309 [M + H]+ could be detected, whose elimination was concomitant with the formation of a new metabolite (not detected with rat hepatocytes) with m/z = 311 [M + H]+; this metabolite did not disappear within 300 minutes either.
In contrast to the experiments with rat hepatocytes, only one metabolite was formed from NBleb, with m/z = 340 [M + H]+, and one from AmBleb, with m/z = 310 [M + H]+. Interestingly, the metabolite with m/z = 350 [M + H]+ could not be detected for either NBleb or AmBleb, which indicates that the same metabolite is formed from NBleb and AmBleb with rat hepatocytes and further suggests the absence of that reaction pathway in human hepatocytes.
Pharmacokinetic analyses were performed on results from both rat and human primary hepatocytes, and pharmacokinetic parameters were calculated for all three inhibitors (Table 2). As expected from the similar kinetics of blebbistatin and AmBleb in rat samples, the hepatic extraction ratio (E_H) was almost identical for these two inhibitors, whereas NBleb had somewhat lower E_H levels. Consequently, the bioavailability of blebbistatin and AmBleb was around 60%, whereas that of NBleb was more than 80%. In human hepatocytes, the pharmacokinetic behavior of blebbistatin was similar to that in rat hepatocytes (t1/2 values in rat and human hepatocytes were similar), whereas substantial differences were observed in the rates of elimination of NBleb between rat and human. On the other hand, the lower hepatic clearance (Cl_H) of all three compounds in human hepatocytes than in rat cells was considered to be associated with the lower hepatic flow rate in humans (Q_H for rat: 55.2 ml/min per kilogram; for human: 20.7 ml/min per kilogram). Regarding the bioavailability (F), it should be noted that prediction from hepatic clearance may result in an overestimation. Bioavailability is related to the total clearance (the sum of all clearances in the organs that participate in the elimination of drugs, e.g., intestine and kidney); therefore, intestinal metabolism can be expected to decrease the bioavailability of orally administered drugs, and renal clearance can also modify the bioavailability in case of extensive renal elimination.
Fig. 4 legend (partial): ... (Table 2). Relative concentrations of the major metabolites detected in rat (D-F) or human (G-I) hepatocytes are shown. OH-bleb (m/z = 309) is a mixture of two identified isomers (cf. Fig. 5; Supplemental Fig. 2) in the case of rat hepatocytes, whereas only one isomer could be detected with human hepatocytes. (D) A double exponential function was fit to bleb_mz295, which showed quick formation (t1/2 = 3.7 minutes) and elimination (t1/2 = 87 minutes) of this metabolite. Both metabolites with m/z = 309 showed similar kinetics, indicating that all major metabolites could be fully eliminated by rat hepatocytes in 300 minutes. (E) Single exponential functions were fit to the data points, showing that NBleb_mz350 formed more slowly (t1/2 = 70 minutes) than NBleb_mz340 (t1/2 = 28 minutes). (F) A single exponential function fit to AmBleb_mz350 (t1/2 = 24 minutes) indicates that this metabolite could not be completely eliminated by rat metabolic enzymes, and the formation of AmBleb_mz352 is much slower than that of the other two metabolites. (G) A single exponential function fit to bleb_mz295 indicates that, contrary to rat hepatocytes, human enzymes could not eliminate this metabolite in 300 minutes. The formation of bleb_mz309 followed similar kinetics as with rat hepatocytes, and the rate of formation of bleb_mz311 followed inverse kinetics to that of bleb_mz309, indicating that bleb_mz311 is formed from bleb_mz309 and could not be further metabolized by human hepatocytes. (H and I) The major difference between rat and human hepatocytes was the absence of the metabolite with m/z = 350 from both the NBleb and AmBleb samples. Both NBleb_mz340 and AmBleb_mz310 formed with kinetics similar to those with rat hepatocytes, but the elimination phase of AmBleb_mz310 was not observed within 300 minutes. Data points represent averages ± S.D. (n = 4-12).
Although the assessment of total clearance would be optimal, much information can be obtained from hepatic clearance as well. In conclusion, all three compounds should still be considered drugs with high extraction ratios, where elimination is determined mainly by the hepatic blood flow rather than by the activities of the metabolizing enzymes. This also indicates that further modifications of blebbistatin should be performed to obtain more favorable pharmacokinetic properties and to achieve higher bioavailability values.
Table 2. Pharmacokinetic parameters of blebbistatin, NBleb, and AmBleb from experiments using rat and human hepatocytes. Cl_int = (ln2/t1/2) × ((cell number/tissue weight)/(cell number/incubation volume)) × (liver weight/body weight), where t1/2 is the time required to reach 50% of the initial (t = 0 minutes) concentration. Cl_H = (Cl_int × fu × HPF)/(Cl_int × fu + HPF), where HPF = Q_H × plasma/blood ratio (Q_H = 55.2 ml/min per kilogram for rat and 20.7 ml/min per kilogram for human; plasma/blood ratio = 0.63 for rat and 0.57 for human). E_H = Cl_H/Q_H. F (bioavailability) = 100 × (1 − E_H).
Fig. 5 legend (partial): ... Fig. 4, the hydroxylation of blebbistatin on the C-ring (1e) occurs before the reduction of the B-ring keto-group (1b). N-Ac-AmBleb (3a) could be formed from NBleb by transient formation of AmBleb and further acetylation of the amine group, as in the case of AmBleb (cf. Fig. 4). N-Ac-AmBleb (3a) could be further reduced at the keto-group of the B-ring (3e); this metabolite might also be formed by acetylation of 3b. The latter scenario is supported by the elimination kinetics of 4-OH-AmBleb = AmBleb_mz310 (3b) and the formation kinetics of 4-OH-N-Ac-AmBleb = AmBleb_mz352 (3e) shown in Fig. 4F.
Identification of Major Metabolites. Pharmacokinetic analysis of blebbistatin, NBleb, and AmBleb in freshly isolated hepatocytes showed that all three compounds were extensively metabolized, forming several metabolites (Fig. 4).
To identify these metabolites, the rat hepatocyte-incubated samples were analyzed by high-resolution MS and MS/MS (Fig. 5).
The keto-group on the B-ring could be reduced to a hydroxyl group in all three inhibitors, resulting in two enantiomeric forms with different chromatographic retention properties. This reduced metabolite of all three inhibitors could be detected in the HPLC-MS analysis above. These results indicate that the metabolites detected from the hepatocytes with the same m/z values are equivalent to these metabolites. The reduced derivatives are further hydroxylated, either on the C-ring as in the case of blebbistatin (1b) and NBleb (2b), or on the A-ring as in the case of AmBleb (3c). Moreover, the C-ring-hydroxylated reduced blebbistatin (1b) is further hydroxylated on the A-ring too, thereby forming a reduced dihydroxy-blebbistatin metabolite (1c). Dihydroxyl derivatives could not be identified among the metabolites of NBleb and AmBleb.
Direct hydroxylation of blebbistatin, NBleb, and AmBleb was also detected, but with characteristic differences in the hydroxylation patterns (Fig. 5). Blebbistatin is hydroxylated on the A-, C-, and D-rings (1d, 1e, 1f), of which two species could be detected in the HPLC-MS analysis above with m/z = 309 [M + H]+. Direct hydroxylation of the C- and D-rings of NBleb and AmBleb did not occur (Fig. 5), indicating that nitro- or amino-substitution on the D-ring not only hinders D-ring hydroxylation but also affects the electron distribution of the C-ring, thereby preventing it from hydroxylation. However, the reduced NBleb derivative, that is, 4-OH-NBleb (2a), is hydroxylated on the C-ring as well (2b), which does not occur with the reduced AmBleb form. Instead, reduced AmBleb is hydroxylated on the A-ring (3c), similarly to the direct hydroxylation of AmBleb.
We could also identify a metabolite of blebbistatin with m/z = 311 [M + H]+, which is C-OH-blebbistatin with a reduced keto-group on its B-ring (1b). From the formation and elimination kinetics of these metabolites with human hepatocytes (cf. Fig. 4G), we can assume that hydroxylation occurs first and that the product is then reduced on the B-ring.
The major metabolite formed from AmBleb in the pharmacokinetic assays, with m/z = 350 [M + H]+, was identified as the N-acetylated metabolite of AmBleb (N-Ac-AmBleb) (3a). Furthermore, this metabolite was also identified as the metabolite produced from NBleb with m/z = 350 [M + H]+ (cf. Fig. 4, E and F). This suggests that NBleb underwent reduction, forming AmBleb, which was further acetylated. The formation of AmBleb from NBleb could not be detected, indicating that N-acetylation is a much faster reaction than the reduction of the nitro-group to an amine-group. This was confirmed in the pharmacokinetic assays with AmBleb, which demonstrated the extensive formation of N-Ac-AmBleb. We also identified the metabolite with m/z = 352 [M + H]+ as N-Ac-AmBleb with a reduced keto-group on its B-ring (3e). From the formation and elimination kinetics of these metabolites with rat hepatocytes (cf. Fig. 4F), we can assume that B-ring keto-group reduction occurs first, followed by acetylation of the amine-group.
We note that neither acetylated form could be detected in human hepatocytes, indicating that this enzymatic route is missing from the human samples.
Time-Dependent Distribution of Blebbistatin, NBleb, and AmBleb in Rats. To follow the time-dependent distribution of the three compounds and their metabolites in living rats, we injected 1 mg each of blebbistatin, NBleb, and AmBleb intraperitoneally into 280 ± 5 g animals, resulting in a 3.6 mg/kg dose level with 2% accuracy due to weight differences among the rats. Assuming an even distribution, the applied 1-mg inhibitor dose should result in a concentration of ∼15 µM per animal. At different time points after drug administration, samples were collected from heart, blood, skeletal muscle (m. quadriceps femoris), brain, kidney, lung, spleen, and liver tissues (Fig. 6).
Inhibitor concentrations and metabolites were analyzed by HPLC-MS. The stability of all three compounds in blood (Supplemental Fig. 1) was similar, and the metabolism of all three inhibitors was rapid. Their concentrations were well below 5 µM at all time points in all tissue types except for muscle. Interestingly, blebbistatin and AmBleb concentrations increased over time in muscle samples, suggesting an accumulation of these two inhibitors in skeletal muscle tissue (Fig. 6). This phenomenon was not observed for heart muscle tissue. The substantially lower tissue concentration of NBleb compared with that of blebbistatin and AmBleb may indicate insufficient NBleb solubility at the point of administration. Even though NBleb solubility in general is only slightly worse than that of blebbistatin (Supplemental Fig. 1), NBleb can precipitate into needle-like crystals (Várkuti et al., 2016), which may negatively affect its effective concentration in vivo.
From the MS spectra, the three OH-bleb metabolites (1d-1f) could be identified with characteristically different retention times (RT = 4.0, 5.3, and 6.3 minutes) (Supplemental Fig. 2). The kinetics and tissue distribution of the three metabolites showed characteristically different properties (Fig. 6). OH-bleb_RT4.0 appeared at the highest concentration in the kidney, whereas OH-bleb_RT6.3 was observed at the highest concentration in the liver. The OH-bleb_RT5.3 metabolite was quickly formed but also quickly eliminated from all tissues; however, a slight accumulation of this metabolite was observed in the muscle tissue samples, similar to the original blebbistatin.
The 4-OH-NBleb (2a) metabolite of NBleb appeared predominantly in the liver samples, remained high for 60 minutes, and showed a tendency to increase in the kidney 10 and 60 minutes after NBleb injection (Fig. 6). In agreement with the results from the hepatocyte experiments, AmBleb could not be detected in tissue samples after NBleb injections; however, the N-Ac-AmBleb metabolite (3a) was observed at high concentration in the liver. The presence of N-Ac-AmBleb in the liver samples and the lack of a detectable amount of N-Ac-AmBleb in any other tissue suggest that AmBleb is quickly formed in the liver and is further acetylated in place.
N-Ac-AmBleb (3a) could also be detected in all tissue samples after AmBleb injection, although at drastically higher concentrations in the liver. Similarly to the N-Ac-AmBleb metabolite formed from NBleb, N-Ac-AmBleb formed from AmBleb did not accumulate in any tissue type, and its concentration was significantly higher in the liver. In both cases, N-Ac-AmBleb was almost completely eliminated by 60 minutes, suggesting quick elimination of this metabolite from the body.
Reverse Mutagenicity (Ames) Test of Blebbistatin, NBleb, and AmBleb. We tested the mutagenicity of all three compounds in a reverse mutagenicity Ames test using two Salmonella strains sensitive to frameshift (TA98 strain) and base-pair substitution (TA100 strain) mutations in the Ames microplate format mutagenicity assay (Flückiger-Isler et al., 2004; Flückiger-Isler and Kamber, 2012) in the absence and presence of liver S9 fraction (Fig. 7). A molecule is indicated as Ames positive (i.e., probably mutagenic) if it shows drug concentration-dependent mutagenicity or if any measured data point falls above the mutagenicity threshold, defined as twice the average plus S.D. of the solvent control (Fig. 7).
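The positivity rule can be sketched as follows. The revertant counts are synthetic, and the threshold is read here as 2 × (mean + S.D.) of the solvent control, which is one plausible reading of the rule quoted above; the assay manual's exact baseline definition may differ.

```python
import statistics

# Ames positivity sketch: a compound is flagged if any well's revertant count
# exceeds the threshold. Threshold read as 2 * (mean + S.D.) of the solvent
# control; this interpretation and all counts below are illustrative assumptions.

def mutagenicity_threshold(solvent_control_counts):
    m = statistics.mean(solvent_control_counts)
    s = statistics.stdev(solvent_control_counts)   # sample S.D.
    return 2 * (m + s)

def is_ames_positive(sample_counts, threshold):
    """True if any well's revertant count exceeds the threshold."""
    return any(c > threshold for c in sample_counts)

control = [4, 5, 6, 5, 5, 5]        # synthetic solvent-control wells
threshold = mutagenicity_threshold(control)
```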
Importantly, whereas blebbistatin and NBleb were mutagenic, AmBleb was not mutagenic in the absence of S9 fractions. Under these conditions, the slope of the linear fit to the concentration-dependent mutagenicity profiles of AmBleb was negative or close to zero for the TA98 and TA100 strains, respectively, indicating that the compound does not reach the mutagenicity threshold (Fig. 7D). However, in the presence of liver S9 fraction, all three compounds showed concentration-dependent mutagenicity, and some data points fell above the mutagenicity threshold for all three compounds (Fig. 7, B-D). The same metabolites were produced during incubation with the S9 liver fraction as in the experiments with the isolated hepatocytes (cf. Fig. 4), as confirmed by high-resolution MS and MS/MS. Among all investigated conditions, NBleb was the most mutagenic in the presence of S9 liver fraction in the TA100 strain. Whereas the nitro-group increased mutagenicity, especially base-pair substitution mutations, the amino-group substitution significantly decreased blebbistatin's mutagenicity, supporting our previous findings that the nitro-group has an inverse effect on key biologic and pharmacological processes (such as ATPase inhibition, cell permeability, and pharmacokinetic profile) compared with the amino substitution (cf. Figs. 1-6). This finding implies that substitution of the D-ring of blebbistatin at the para position with electron-donating groups can be used to significantly decrease its mutagenic properties.
Discussion
Besides its effects on NM2 isoforms, blebbistatin has been a focus of interest in developing myosin-2-specific drugs due to the involvement of myosin-2 motor proteins in several essential life processes. Myosin-2 isoforms are responsible for the voluntary contraction of skeletal muscles (Szent-Gyorgyi, 1951; Geeves and Holmes, 1999) and of the diaphragm (Johnson et al., 1994); cardiac myosin-2s drive the pumping of the heart (Szent-Gyorgyi, 1952; Tang et al., 2017); and smooth muscle myosin-2 is a key component of tension maintenance in blood vessels and of the unconscious movements of our organs (Brozovich et al., 2016). Blebbistatin is a highly potent but nonselective myosin-2 inhibitor, indicating a general mechanism of inhibition on all myosin-2 isoforms (Kovács et al., 2004; Limouze et al., 2004; Wilson et al., 2014; Tang et al., 2017; Zhang et al., 2017). NBleb and AmBleb have also been shown to be potent inhibitors of Dictyostelium and fast skeletal myosin-2s (Képiró et al., 2014; Várkuti et al., 2016; Verhasselt et al., 2017a), but their inhibitory properties had not been studied on other myosin-2 isoforms.
Thus, the development of drugs that could regulate myosin-2 functions, especially in a selective manner, is an important approach, particularly because myosins are the most downstream effectors of signaling pathways, so side effects related to upstream regulators can be avoided. For drug development purposes, isoform selectivity is an important feature; therefore, we investigated whether the inhibitory efficiency and myosin-2 isoform specificity profile can be tuned by D-ring substitutions and how the D-ring substitutions affect the inhibition of in vivo force generation and in vitro motility.
In the present study, we show that essential inhibitory and absorption, distribution, metabolism, excretion, and toxicology (ADMET) properties can be improved by chemical optimization.
Actin-activated ATPase measurements on seven myosin-2 isoforms, in vitro actin gliding measurements on cardiac myosin, and in vivo fiber studies on Drosophila skeletal muscle confirmed that D-ring-modified blebbistatin derivatives retain their inhibitory potential on the myosin-2 family, while it is possible to tune their selectivity toward the different myosin-2 isoforms. For skeletal muscle myosin, NBleb showed a reduced IC50 value in the actin-activated ATPase assay and in the in vivo fiber studies (Fig. 1; Table 1); for the cardiac system, both NBleb and AmBleb have higher IC50 values than blebbistatin, as verified both by actin-activated ATPase and in vitro actin gliding measurements (Figs. 1 and 2A). Comparing the inhibitory efficiency of the three inhibitors on skeletal and cardiac muscle myosin-2s, we found that the skeletal-to-cardiac selectivity ratio is 6 times and 10 times higher for NBleb than for blebbistatin and AmBleb, respectively (Table 1). This feature is a very important indication for further drug development toward a skeletal-muscle-myosin-specific compound. This aspect is especially important because skeletal muscle myosin inhibition would be a novel approach for the treatment of spasticity in poststroke conditions or in patients with multiple sclerosis, cerebral palsy, or medication-induced spasms, where there is currently a huge unmet medical need for more efficient drugs without neurologic side effects (Chou et al., 2004; Chang et al., 2013).
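The selectivity comparison in the paragraph above reduces to simple ratios of IC50 values. The sketch below uses hypothetical IC50 numbers (the measured values are in Table 1 of the paper), chosen only to reproduce a 6-fold difference in skeletal-to-cardiac selectivity between NBleb and blebbistatin.

```python
def selectivity(ic50_cardiac_uM, ic50_skeletal_uM):
    """Fold-selectivity for skeletal over cardiac myosin-2:
    a higher value means a stronger preference for inhibiting skeletal muscle."""
    return ic50_cardiac_uM / ic50_skeletal_uM

# Hypothetical IC50 values (uM), for illustration only.
ic50 = {
    "blebbistatin": {"skeletal": 0.5, "cardiac": 1.0},
    "NBleb":        {"skeletal": 0.2, "cardiac": 2.4},
}
sel = {name: selectivity(v["cardiac"], v["skeletal"]) for name, v in ic50.items()}
# With these numbers, NBleb is ~6x more skeletal-selective than blebbistatin.
print(round(sel["NBleb"] / sel["blebbistatin"], 2))
```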
The pharmacodynamics and metabolic stability studies indicate that although several properties of blebbistatin need improvement for drug development purposes, D-ring modifications are very promising for absorption-distribution-metabolism-excretion optimization. An important pharmacological parameter that needs to be improved is the ∼20-minute metabolic half-life of blebbistatin, as this rapid elimination could result in insufficient plasma concentrations, hindering the medical application of the compound. Although rapid elimination of AmBleb, similar to that of blebbistatin, was demonstrated in rat hepatocytes, much slower elimination and a lower Clint were found in human cells. The rate of AmBleb metabolism would be close to sufficient for human use with a multiple-doses-per-day prescription. In contrast, although NBleb was known to have increased chemical stability (Verhasselt et al., 2017a) and showed increased biologic stability in rat hepatocytes, with elimination kinetics fivefold slower than that of blebbistatin (Fig. 4; Table 2), human liver cells were as active in the metabolism of NBleb as in that of blebbistatin. Moreover, given that a change as small as an amino-to-nitro substitution can have this remarkable effect on elimination time, we may expect to find substituents with further improved pharmacokinetic properties. These findings, however, emphasize the importance of species differences in drug metabolism and of careful extrapolation from laboratory animals to humans.

Fig. 7. Ames microplate-format test plate triplicates with the TA98 Salmonella strain in the presence of AmBleb (concentrations are indicated in micromolar). DMSO and 2-nitrofluorene were used as negative and positive controls, respectively, according to the manufacturer's protocol. Dark wells represent conditions with no bacterial growth, whereas white wells contain growing bacteria assumed to carry a reverse mutation in their genome. Relative mutagenicity of blebbistatin (B), NBleb (C), and AmBleb (D) tested on TA98 (lighter circles) and TA100 (darker circles) Salmonella strains in the absence (open circles) and presence (solid circles) of rat liver S9 fraction. The dashed red line indicates the mutagenicity threshold, defined as twice the average plus S.D. of the solvent control. All three inhibitors showed a concentration-dependent increase in mutagenicity, and all three compounds produced values (individual data points) over the mutagenicity threshold. *Linear fits excluded measurements where precipitation was observed. Importantly, AmBleb is not mutagenic in the absence of rat liver S9 fraction; however, in its presence even AmBleb shows concentration-dependent mutagenicity in both strains. FDA, US Food and Drug Administration; OECD, Organization for Economic Cooperation and Development.
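The half-life and elimination-rate comparisons above follow from first-order depletion kinetics, t1/2 = ln 2 / k. A minimal sketch, assuming log-linear elimination and using hypothetical concentration measurements shaped to match blebbistatin's reported ~20-minute metabolic half-life:

```python
import math

def halflife_from_timepoints(t1, c1, t2, c2):
    """Assuming first-order (log-linear) elimination, recover the rate
    constant k from two concentration measurements and return t1/2 = ln2/k."""
    k = (math.log(c1) - math.log(c2)) / (t2 - t1)
    return math.log(2) / k

# Hypothetical depletion data (min, uM): 30 uM start, dropping to 7.5 uM
# after 40 min, i.e. exactly two half-lives of 20 min each.
t_half = halflife_from_timepoints(0, 30.0, 40.0, 7.5)
print(t_half)
# A compound eliminated fivefold more slowly has a fivefold longer half-life:
print(5 * t_half)
```

The same relation underlies the statement that NBleb's fivefold slower elimination kinetics in rat hepatocytes corresponds to a fivefold longer half-life there.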
Furthermore, parallel metabolic routes and variations in the activities of drug-metabolizing enzymes can also contribute to interindividual differences in pharmacological efficacy or adverse effects. Similarly to the metabolic pathways of NBleb, nitro-group reduction to an amino derivative by CYP3A4, followed by acetylation by N-acetyltransferase 2 (NAT2), are well-described steps in clonazepam metabolism (Peng et al., 1984). However, the hydroxylation pathway is considered negligible in the case of clonazepam. Clonazepam plasma concentrations seem to be influenced by CYP3A4 activity, whereas 7-amino-clonazepam concentrations depend on the activities of both CYP3A4 and NAT2 (Tóth et al., 2016). Interindividual variations in the activities of CYP3A4 and NAT2 can result in clinical consequences. The metabolite 7-amino-clonazepam is pharmacologically inactive; however, it can competitively modify the effect of clonazepam, primarily when the concentration decreases after discontinuation of clonazepam therapy. Therefore, clinical consequences are anticipated for patients with high levels of 7-amino-clonazepam during clonazepam withdrawal. In contrast, hydroxylation at various positions was the major route of blebbistatin metabolism; N-acetylation of the amino group was the primary reaction of AmBleb metabolism; and in NBleb metabolism, both nitro reduction to the amino derivative (with further acetylation) and hydroxylation at various positions were observed. The various metabolic pathways and the activities of the phase I (mainly cytochrome P450) and phase II (N-acetyltransferase) enzymes are assumed to be associated with the different pharmacokinetic behavior of blebbistatin and two of its derivatives.
One of the most important tests at the early stages of drug development is the mutagenicity assay. Due to the unambiguous correlation between mutagenicity and carcinogenicity (McCann et al., 1975), the Organization for Economic Cooperation and Development, US Food and Drug Administration, and European Medicines Agency guidelines all require clear demonstration that a lead compound is not mutagenic. The minimal requirement is to test mutagenicity in a reverse-mutation Ames test using two Salmonella strains sensitive to frameshift (TA98) and base-pair substitution (TA100) mutations. The guidelines also suggest repeating the test in the presence of induced rat liver S9 fraction to elucidate the mutagenicity of the potential metabolites produced in the liver during metabolism of the compound in the living organism. We found that blebbistatin and NBleb are clearly mutagenic, whereas AmBleb is mutagenic only in the presence of S9 liver fraction, indicating that AmBleb metabolites, but not AmBleb itself, are mutagenic. Although the genotoxicity of blebbistatin derivatives must still be carefully addressed, given that the small difference between the amino- and nitro-substituted derivatives has a drastic effect on mutagenicity, we may manage to design and synthesize future compounds with fully eliminated mutagenicity.
Conclusions
Blebbistatin derivatives are promising candidates for the selective inhibition of myosin-2 isoforms. However, the recently developed molecules do not meet the safety criteria to enter preclinical studies due to the harmful cytotoxic and genotoxic properties described in this paper. Despite these current limitations, our results provide useful and promising indications for the further development of drug candidates targeting myosin's blebbistatin-binding site. Through different para substitutions on the D-ring, we could increase the inhibitory efficiency, improve skeletal muscle selectivity, modify biologic stability, or drastically reduce mutagenicity, all properties that are necessary for developing a potentially useful lead compound to advance to clinical trials for severe medical indications. Based on these observations, we believe that it is feasible to develop clinically applicable drug compounds.

The measured solubilities of all three inhibitors were higher than the concentrations applied in the ATPase assays, confirming that the results demonstrated in Figure 1 and Table 1 are reliable.
We also measured the solubility of the inhibitors in the Ames mutagenicity assay buffer in the absence and presence of liver S9 fraction. We note that the determined maximal solubility of blebbistatin and NBleb is lower than the concentrations used for analysis. However, we did not observe precipitation during the mutagenicity test except for the data points labeled with an asterisk in Figure 6. We note that at higher concentrations precipitation is obvious, as it forms a film layer at the edge of the solution on the sidewall of the wells of the 24-well plate. We assume that the assay conditions containing Salmonella sp. cells improve the solubility of the inhibitors, as the cells take up inhibitors from the solution and bind them nonspecifically on the cell membrane. Moreover, the relative mutagenicity values follow a linear dependence on inhibitor concentration up to 100 μM for blebbistatin and NBleb, which further suggests that the solubility measured under cell-free conditions underestimates the real, effective solubility of the inhibitors under assay conditions. This is further supported by the lack of precipitation in the pharmacokinetic experiments containing 30 μM starting inhibitor concentrations in the presence of both rat and human hepatocytes.

Figure S2 - MS chromatograms of blebbistatin, NBleb and AmBleb, and the major metabolites.
Representative MS chromatograms of blebbistatin (m/z = 293) with retention time 6.96 min from a rat hepatocyte sample (dark blue); NBleb (m/z = 338) with retention time 13.16 min from a rat hepatocyte sample (green); AmBleb (m/z = 308) with retention time 6.79 min from a rat hepatocyte sample (dark red); 3 species of OH-blebbistatin (m/z = 309) with retention times of 4.02, 5.27, and 6.27 min from an 8-minute kidney sample (upper light blue); 2 species of OH-blebbistatin (m/z = 309) with retention times of 5.35 and 6.35 min from a rat hepatocyte sample (lower light blue); 4-OH-NBleb (m/z = 340) with retention time 6.45 min from a rat hepatocyte sample (dark green); N-Ac-AmBleb (m/z = 350) with retention time of 5.92 min (light green) or 6.01 min (brown) from rat hepatocyte samples from AmBleb or NBleb incubations, respectively; and 4-OH-N-Ac-AmBleb with retention time 5.36 min from a rat hepatocyte sample after AmBleb injection (pink).
|
v3-fos-license
|
2018-12-14T03:27:04.476Z
|
2004-12-01T00:00:00.000
|
55829782
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.scielo.br/j/jbchs/a/hWLJkTC4HG7KhJbzy8N4h6p/?format=pdf&lang=en",
"pdf_hash": "364a21c058737a256e56ee577a5b6dff46abd275",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2689",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"sha1": "364a21c058737a256e56ee577a5b6dff46abd275",
"year": 2004
}
|
pes2o/s2orc
|
Palladium-Catalyzed Double Cross-Coupling of E-vinylic Dibromides with PhZnCl and the Synthesis of Tamoxifen
(E)-1,2-Dibromovinyl compounds 11a-f were prepared stereoselectively via bromination of alkynes with pyridinium tribromide in MeOH/CCl4 at low temperature and used in the Pd(0)-catalyzed double cross-coupling with PhZnCl according to Negishi's protocol, affording the corresponding tri- and tetrasubstituted olefins 14a-e. Tamoxifen, an antiestrogenic agent in clinical use for breast cancer therapy, was prepared as a 2.3:1 Z:E mixture in 7 steps and 30% overall yield from 4-iodophenol (3).
Introduction
The stereoselective synthesis of tri- and tetrasubstituted olefins is of great interest in the domain of biologically active compounds, where the E and Z isomers may have completely different biological properties, as represented by tamoxifen, a tetrasubstituted stilbene which acts as a selective estrogen receptor modulator (SERM): while (Z)-tamoxifen (1) displays antiestrogenic activity and is prescribed as the corresponding citrate as an adjuvant for breast cancer therapy, the (E)-isomer 2 has estrogenic activity and stimulates the proliferation of hormone-responsive breast cancer cells (Figure 1).1 Methods are available for the separation of the (E)- and (Z)-isomers as well as for the isomerization of estrogenic (E)-tamoxifen (2)2 to antiestrogenic (Z)-tamoxifen (1),3-5 including large-scale operations. Among the routes already described in the literature, the method of Knochel and coworkers6 is by far the most efficient one: starting from 1-phenyl-1-butyne, (Z)-tamoxifen (1) is prepared in three steps and 71% yield via Ni(II)-catalyzed syn carbozincation, followed by Negishi coupling of the corresponding vinylic iodide with 4-triisopropylsilyloxyphenylzinc bromide.
1,2-Dihaloalkenes are potentially useful starting materials for the preparation of tri- and tetrasubstituted olefins via transition-metal-catalyzed cross-coupling with organometallic species.14 Accordingly, Rossi and coworkers15 described the use of tetrasubstituted (E)-2,3-dibromopropenoates in palladium-catalyzed cross-coupling with aryl- and alkynylzinc species. The stereoselectivity of these reactions was found to depend on the substituent present at C-3 of the propenoates.
Results and Discussion
Due to the availability of (E)-1,2-dibromoalkenes from the bromination of the corresponding acetylenes, we were attracted to the possibility of preparing (Z)-tamoxifen (1) from (E)-9 via palladium-catalyzed tandem cross-coupling with phenylzinc chloride (Scheme 1). 4-Iodophenol (3) was straightforwardly protected as the corresponding chloroethyl ether 4 before the Sonogashira coupling with trimethylsilylacetylene, which afforded acetylene 5 in 99% yield. TMS deprotection followed by alkylation of the terminal acetylene 6 with ethyl iodide provided the disubstituted acetylene 7 (83% yield, 2 steps).
At this juncture, the stereoselective conversion of 7 to the corresponding (E)-dibromoalkene 8 was needed. While our preliminary attempts with molecular bromine in either CHCl3 or CCl4 afforded mixtures of (E)- and (Z)-1,2-dibromoalkenes as well as the corresponding tribromo derivatives when alkylacetylenes were employed, the use of pyridinium tribromide16 in a 1:1 mixture of CCl4-MeOH at -10 °C provided (E)-1,2-dibromoalkene 8 in 86% yield, without formation of the corresponding 1,1-dibromo-2,2-dimethoxyalkane previously reported when the reaction was carried out in methanol.17 The protocol above proved equally efficient for the bromination of either alkyl- or aryl-substituted alkynes 10a-f (Table 1). The (E)-configuration of the 1,2-dibromoalkenes 11a-e was assigned based on literature data,18 while the stereochemistry of 11f was assumed by analogy.
With a stereoselective route to (E)-9 secured, we explored its Pd(0)-catalyzed double cross-coupling with PhZnCl (8 equiv.) generated in situ by transmetallation of a 0.5 mol L-1 solution of phenyllithium in THF with ZnCl2. The cross-coupling was carried out in refluxing toluene with 10 mol% of Pd(PPh3)4 and afforded a 2.3:1 mixture of (Z)-tamoxifen (1) and (E)-tamoxifen (2) in 52% yield.19 The diastereoisomeric ratio was determined by capillary GC analysis, and the configuration of the major diastereoisomer was established by comparison of the NMR data of the synthetic mixture with an authentic sample of (Z)-tamoxifen (1). The loss of stereochemical integrity during the Pd(0)-catalyzed cross-coupling is reminiscent of the observations of Rossi and coworkers,15 who described similar behavior during the cross-coupling of tetrasubstituted (E)-2,3-dibromopropenoates with arylzinc chlorides. However, Rathore and coworkers described that the Pd(0)-catalyzed coupling of aryl Grignard reagents bearing ortho methyl groups with (E)-1,2-dibromoalkenes efficiently provides (Z)-tetrasubstituted alkenes.20 While (Z)-tamoxifen (1) is formed through a double Negishi coupling, the competitive formation of (E)-tamoxifen (2) seems to involve syn-carbopalladation of alkyne 13, which is observed to form from (E)-9 (GC analyses) under the reaction conditions employed (Scheme 2). In fact, although attempts to carry out the Pd(0)-catalyzed coupling with alkyne 13 under the reaction conditions employed failed, we observed exclusive formation of (E)-tamoxifen (2) when 13 was treated with Pd(PPh3)4/PhBr, followed by addition of PhZnCl (Scheme 2).
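Converting the GC-determined 2.3:1 Z:E ratio into percent composition is a one-line normalization, sketched here:

```python
def isomer_fractions(ratio_z, ratio_e):
    """Convert a Z:E area ratio (e.g. from capillary GC) into
    area fractions of each diastereoisomer."""
    total = ratio_z + ratio_e
    return ratio_z / total, ratio_e / total

# The 2.3:1 mixture reported for tamoxifen:
z, e = isomer_fractions(2.3, 1.0)
print(round(100 * z, 1), round(100 * e, 1))  # ~69.7% Z, ~30.3% E
```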
We have also examined the formation of the tri- and tetrasubstituted olefins 14a-e from the corresponding (E)-11a-e. As depicted in Table 2, good yields of the corresponding tri- and tetrasubstituted olefins 14a-e were obtained in all cases, with retention of the double-bond configuration observed when the alkyl-substituted 1,2-dibromoalkenes (E)-11a,b were employed.
The low level of diastereoselection observed in the double Negishi coupling of vinylic dibromide (E)-9 with phenylzinc chloride calls for a more efficient catalytic system for the coupling reaction. The most stereoselective methodologies described so far in the literature for the total synthesis of (Z)-tamoxifen (1) rely on syn carbometallation,6,12 hydroxymethyl-directed anti carbometallation,11 or anti stannylcupration.8 However, when compared with the routes based on dehydration4 or McMurry coupling,5 the results described herein display about the same level of diastereoselection. Considering the availability of methods for the separation of (E)- and (Z)-tamoxifen and for the interconversion of (E)- to (Z)-tamoxifen, our results are a useful addition to those already known, as they allow the preparation of tamoxifen as a 2.3:1 mixture of Z- and E-isomers in 7 steps and 30% overall yield from commercially available 4-iodophenol (3).
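The quoted overall yield follows from multiplying the individual step yields of the linear sequence. The sketch below uses the yields stated in the text (99% Sonogashira, 83% over the two deprotection/alkylation steps, 86% bromination, 52% double coupling); the protection-step yield is not reported, so 0.85 is a purely hypothetical filler used only to show that the product lands near the reported ~30%:

```python
from math import prod

def overall_yield(step_yields):
    """Overall yield of a linear sequence is the product of the step yields."""
    return prod(step_yields)

# 0.85 = hypothetical protection-step yield (not stated in the text);
# 0.83 covers two steps (deprotection + alkylation) as quoted.
y = overall_yield([0.85, 0.99, 0.83, 0.86, 0.52])
print(round(100 * y, 1))  # lands near the reported ~30% overall yield
```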
Experimental
General

All reactions of air- and water-sensitive materials were performed in flame-dried glassware under an atmosphere of argon. Triethylamine was distilled from CaH2; tetrahydrofuran was pretreated with CaH2 and distilled from sodium; toluene was distilled from sodium. DMPU and bromobenzene were pretreated with CaH2, distilled from CaH2, and stored over molecular sieves. 4-Iodophenol (3), 1,2-dichloroethane, trimethylsilylacetylene, dichloro(triphenylphosphine)palladium(II), ethyl iodide, pyridinium tribromide, tetrakis(triphenylphosphine)palladium(0), and alkynes 10a-e were commercially available. The compounds were purified by column chromatography on silica gel (70-230 mesh). The 1H NMR and 13C NMR spectra were recorded on Varian Gemini (7.05 T) and Varian Inova (11.7 T) spectrometers. Chemical shifts (δ) are reported in ppm with the solvent resonance as the internal standard, and coupling constants (J) are reported in Hz. Infrared spectra were recorded as films in KBr cells on a Nicolet Impact 410 (FTIR). High-resolution mass spectra (HRMS) were recorded on a VG Autospec-Micromass-EBE. Melting points were measured on an Electrothermal 9100 apparatus. Gas chromatography analyses (FID detector) were performed on a Hewlett Packard 5890-II instrument. Gas chromatography-mass spectrometry (GC-MS) analyses were performed on a Hewlett Packard 5890/Hewlett Packard 5970 MSD.
General procedure for the bromination of alkynes 10a-f with PyHBr3 (Table 1)
To a solution of alkyne 10a-f (1.00 mmol) in CCl4 (5.0 mL) at -10 °C was added pyridinium tribromide (1.20 mmol), followed by MeOH (5.0 mL). The reaction mixture was kept at -10 °C for 30-60 min and quenched with 10% aqueous sodium thiosulfate. After extraction with CH2Cl2, the combined organic phase was washed with brine and dried over anhydrous MgSO4. The crude mixture was chromatographed on silica gel to afford the dibromoalkenes 11a-f.
General procedure for the palladium-catalysed double cross-coupling of (E)-vinylic dibromides 11a-e with PhZnCl (Table 2)
To a solution of bromobenzene (8.0 equiv.) in dry THF at -78 °C was added butyllithium (8.2 equiv.). After 15 min, a solution of ZnCl2 (9.0 equiv.) in dry THF was added, and the mixture was allowed to warm to room temperature. After 30 min, a solution of the vinylic dibromide 11a-e (1.0 equiv.) and tetrakis(triphenylphosphine)palladium(0) (0.1 equiv.) in dry THF was added, and the conditions described in Table 2 were followed. The reaction was periodically monitored by GC analysis of samples previously hydrolyzed with aqueous NH4Cl solution and extracted with Et2O. After completion of the reaction, the mixture was treated at room temperature with aqueous NH4Cl solution and extracted with Et2O. The organic phase was washed with brine and dried over anhydrous MgSO4.
|
v3-fos-license
|
2019-03-26T17:29:11.752Z
|
2012-12-26T00:00:00.000
|
85557757
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://www.banglajol.info/index.php/JSR/article/download/10519/9467",
"pdf_hash": "3f8727220349aeedef91e7928c0fe6860acc575f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2690",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "3f8727220349aeedef91e7928c0fe6860acc575f",
"year": 2012
}
|
pes2o/s2orc
|
Effect of Fiber Orientation on the Tensile Properties of Jute Epoxy Laminated Composite
Jute, the pride of Bangladesh, has gained interest in the composite field due to its superior specific properties compared to artificial man-made fibers like glass, kevlar, etc. In this study, jute composites made with the vacuum assisted resin infiltration (VARI) technique were investigated. Jute fiber preform stacking sequences were (0/0/0/0), 0/+45°/-45°/0 and 0/90°/90°/0. In all cases, a total of 25% volume fraction of jute fiber was incorporated. The developed composites were characterized by tensile tests, and the experimental results thus obtained were compared with the theoretical values. After the tensile tests, fracture surfaces were cut and observed under a high-resolution FEG SEM. In the case of the 0/0/0/0 and 0/+45°/-45°/0 lamina composites, the longitudinal tensile strength was found to be higher than that in the transverse direction. However, for the 0/90°/90°/0 lamina composites, the tensile strengths in both directions were very close to each other. For all developed composites, the experimental results revealed that the tensile properties strongly depend on the tensile strength of the jute fiber and that the tensile properties of jute fiber are very defect-sensitive. Finally, a discussion of the tensile behavior of the composites is initiated in terms of the fracture morphologies observed under the SEM.
Introduction
Jute, a growing sector in Bangladesh, occupied a place in the composite field quite a decade ago. Its low cost, versatility in the textile field, eco-friendly nature, and moderate mechanical properties have favored it over some artificial fibers like glass, kevlar, etc. in many composite applications. However, the biodegradability and environment-friendly behavior of jute are compromised by its hydrophilic nature, which in turn affects the composite mechanical properties as well as the applications of jute fiber reinforced composites [1,2].
Jute, like other natural fibers, has good specific mechanical properties, although its tensile strength is extremely defect- and span-sensitive. One of the most sensitive defects affecting the tensile strength of jute fiber is its lumen, the hollow space within it. The lumen present in BWB jute fiber can act as a source of defects in the composite and initiate failure. The severity of these effects on tensile strength depends on the geometry and volume fraction of the lumen. At the same time, the volume fraction of lumen, or the availability of lumen of critical size and shape, also depends on the span size of the jute fibers. As a result, tensile properties are usually corrected to obtain their average values [3-6]. Jute fiber bundles have many entanglements, so it is very difficult to make a unidirectional (UD) preform of jute fiber manually with bare hands under dry conditions [7]. On the other hand, hackling under dry or wet conditions introduces more defects in the fiber. At the same time, the jute fiber gradually becomes thinner [8]. For this reason, woven jute fabric is usually preferred. However, in this case, anisotropic properties may also arise [8,9]. Due to the natural twist and entanglement in jute-like natural fibers, they are stuffed with linseed oil. These stuffed jute fibers are then hackled by a special type of machine, and yarns are made prior to woven fabric preparation [8]. But the hydrophilic nature of jute is interfered with in the presence of oil. Moreover, the presence of oil gives very inferior interfaces during the reinforcement of both thermoplastic and thermoset polymers, so additional washing and drying steps become essential before composite preparation [10,11]. As a result, UD jute preform or roving preparation has become a valuable step, which is gaining great importance nowadays.
To achieve multidirectional isotropic behavior, proper fiber orientation at different angles is necessary, which can only be achieved through multiply laminate preparation [12]. Stacking the UD plies at different angles gives composites with anisotropic physical and mechanical properties [13]. However, multiply composites of superior and moderately superior mechanical properties, with up to 50% volume fraction of fiber reinforcement, can be fabricated through conventional procedures like compression molding and hand lay-up for jute-like natural fibers [14].
Prepregging, resin transfer molding (RTM), and vacuum assisted resin infiltration (VARI, similar to RTM but differing in infiltration pressure) are established routes for making thermoset-polymer-based composites [14,15]. Although these processes are quite a decade old for artificial fiber reinforced composites, their versatility still attracts natural fiber composite researchers [16]. Therefore, a combination of techniques for making UD jute fiber preforms along with suitable composite fabrication is necessary for making continuous jute-thermoset prepregs or finished products for various applications.
Materials and methods
In this research work, retted, water-washed, and sun-dried Bangla White Grade B (BWB) jute was collected from the Bangladesh Jute Research Institute (BJRI). From the bunch of the collected jute, single jute fibers were separated and tensile tests were carried out. The strength values obtained from the single jute fiber tensile tests are not identical from fiber to fiber; as a result, the scatter band is very wide. To avoid this problem, many researchers in this field have corrected the experimental values with mathematical relationships [6]. In this research work, the single fiber tensile test results were also corrected following them. For the fabrication of the jute fiber reinforced composites, four-layer laminate preforms of size 400 mm × 400 mm were made with jute fiber bunches and stacked in the following sequences: 0/0/0/0, 0/+45°/-45°/0, and 0/90°/90°/0, as shown in Fig. 1. It is to be mentioned here that the jute fibers were wetted with water to make the preforms. After making the preforms, they were dried at 60°C overnight prior to composite fabrication. It is to be noted that VARI is a well-accepted technique for composite fabrication. In this research work, the preform was put inside a vacuum bag that was kept fixed to the mold surface. Then vacuum was applied to remove the inside air along with free moisture. In order to accelerate the vacuum process, the preform was heated to 40°C and the process was run for half an hour. Then resin was infiltrated under vacuum. As soon as the infiltration was completed, both sides of the vacuum bag were clamped and the temperature was increased to 135°C at a heating rate of 5-10°C/min for the necessary curing. At this temperature, the composite was fully cured. Following this technique, composites with 25% volume fraction of BWB jute fiber were made for the different fiber orientations. Tensile specimens were prepared following the ASTM D3039 standard (specimen dimensions: length 250 mm, thickness 4±0.5 mm, width 15±1.5 mm, gage length 100 mm). The specimens were cut
using a small-toothed table saw, finished with 1200-grade emery paper, and stored overnight in an oven at 50°C prior to testing. Fig. 3 shows the loading direction of the tensile test specimens. All tensile tests were carried out with an Instron universal testing machine (model 4467) fitted with a 30 kN load cell and an extensometer of 50 mm gage length. It is to be mentioned that all tensile tests were performed at a cross-head speed of 0.85 mm/min. For all cases, at least 5 specimens were tested.
After the tensile tests, the composite fracture surfaces were cut off and observed under a very high-resolution FEG (field emission gun) SEM, model PHILIPS XL30 FEG.
Results and Discussion
Tensile tests of the BWB jute fiber epoxy composites were carried out in the computer-controlled (Instron data acquisition software) universal testing machine. The stress-strain curves generated during the tensile tests are presented in Fig. 4.
Table 1 shows the summary of the longitudinal tensile test results. A common remark is that the strength and strain to failure in the principal (0°) loading direction show a decreasing trend with increasing lamina angle. The tensile properties of the developed composites in the transverse direction are presented in Table 2. The common remark from Table 2 is that the strength values in the transverse direction show an increasing trend with increasing jute fiber angle.
Mechanical properties of UD and 0-90 composites
Before going into an in-depth discussion of the UD and 0-90 composites, we must know the tensile properties of the BWB jute fiber and the epoxy matrix separately, which are shown in Table 3. The strength of jute fiber depends on the fiber structure, its flaw density, the gripping pressure and slippage during the tension test, and the strain rate. As a result, jute fiber shows a wide scatter band in tensile strength, like other natural fibers. In order to obtain the maximum possible tensile strength value of the fiber, the average tensile strength values for various fiber spans were plotted first. From this plot, the maximum possible tensile strength value was obtained by extrapolation onto the Y-axis. Consequently, during the tests a range of stiffness values for BWB jute fiber was obtained. But for a single material the stiffness should be one unique value rather than a range. In order to eradicate the effect of these flaws and additional factors on the stiffness values of jute fiber, a correction procedure developed by others [5,6] was followed.
In a composite, two or more materials of different properties are combined to obtain required properties that are usually different from those of the constituent materials. For estimating the mechanical properties of composites, the rule of mixtures provides a very useful first approximation. One of the important mathematical relations for this is

σc = σf Vf + σm Vm

where the subscripts c, f, and m stand for the composite, fiber, and matrix, respectively, and V is the volume fraction.
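As a minimal numeric sketch of the rule of mixtures, the snippet below uses the extrapolated fiber strength discussed later in the text (844.72 MPa) together with an assumed epoxy matrix strength of about 81.7 MPa (an illustrative value back-calculated so that the theoretical composite strength matches the 272.47 MPa quoted in the discussion; it is not taken from Table 3):

```python
def rule_of_mixtures(sigma_f, sigma_m, v_f):
    """Theoretical composite strength: sigma_c = sigma_f*V_f + sigma_m*V_m."""
    return sigma_f * v_f + sigma_m * (1.0 - v_f)

# Extrapolated BWB jute fiber strength (MPa) from the span-length plot.
SIGMA_FIBER = 844.72
# Assumed epoxy matrix strength (MPa); illustrative value only.
SIGMA_MATRIX = 81.7

theoretical = rule_of_mixtures(SIGMA_FIBER, SIGMA_MATRIX, 0.25)
experimental = 112.69                    # measured UD longitudinal strength (MPa)
efficiency = experimental / theoretical  # fraction of the theoretical strength
```

With these inputs the theoretical strength comes out near 272.5 MPa and the efficiency near 41%, matching the discussion that follows.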
As per the rule of mixtures, the calculated strength of the UD composite should be 272.47 MPa for 25 volume percent BWB jute reinforced epoxy. The experimental value, however, is 112.69 MPa, which is only 41% of the theoretical value. This type of low fiber strengthening efficiency has also been reported by others [17]. The reason behind the reduced tensile strength of the composite is the presence of defects of various concentrations and geometries in both the matrix and the fiber. Fig. 5 shows the types of defects that were observed in BWB jute fiber during this work. Fig. 6 presents the strength of BWB jute fiber as a function of span length; it is clear that the tensile strength of the jute fiber decreases with increasing span length. Defoirdt et al. [6] observed the same effect for different natural fibers. Another observation from this figure is that the scatter for each span is relatively larger at shorter span lengths than at longer ones. In Fig. 6, the range of strength values was plotted for each span length. From the trend line (and the trend equation given in Fig. 6), the average strength for the 5 mm span is around 800 MPa, but the extrapolated maximum value is 844.72 MPa (when the span size approaches zero).
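The zero-span extrapolation described above can be sketched with an ordinary least-squares line fitted to mean strength versus span length; the (span, strength) pairs below are illustrative values only, chosen to mimic the decreasing trend of Fig. 6, not the measured data:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

# Illustrative span lengths (mm) and mean fiber strengths (MPa).
spans = [5, 10, 15, 20, 25]
strengths = [800, 760, 720, 690, 655]

slope, intercept = fit_line(spans, strengths)
# The intercept is the extrapolated strength at zero span, i.e. the
# "maximum possible" fiber strength in the sense used in the text.
```

The negative slope reflects the size effect (longer spans sample more flaws), and the Y-axis intercept plays the role of the 844.72 MPa value quoted above.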
Here, it is important to mention that it is very difficult, and in many cases impossible, to produce an engineering product that is defect free, and the probability of larger or more numerous defects is higher in long-span test specimens. Moreover, the surface conditions of jute fibers are not always identical. As a result, the compatibility and adhesion between the jute fiber and the matrix vary, which also contributes to the lower tensile strength of the developed composites.
Similar to the longitudinal case, the tensile strengths in the transverse direction are also lower than the theoretical values. From Table 2, it is clear that the tensile strength in the transverse direction is significantly lower than in the longitudinal direction. The reason is that the fiber-matrix interfaces and the defects inside the jute fiber largely control the tensile strength of the composites. In this type of composite, inhomogeneous fiber content, lack of bonding at the matrix/fiber interfaces, voids, inherent defects of the jute fiber, etc. seriously degrade the tensile strength [18,19]. These defects are mainly generated during the fabrication process and accumulate mostly around the fiber-matrix interface [20,21]. As a result of these combined degrading effects, the experimental tensile strength of the composites in the transverse direction becomes significantly lower than in the longitudinal direction; in no direction has the maximum fiber strength efficiency been achieved [22,23].

The higher tensile strengths in the longitudinal direction can also be explained by the fracture morphologies. For 0/0 lamina composites, a two-step fracture morphology was observed: first, debonding took place at the matrix/fiber interfaces; then the matrix broke because of its relatively lower tensile strength; finally, the jute fiber, with its relatively higher tensile strength, broke. This sequence is shown in Fig. 7. Because the jute fiber has a high tensile strength, the composite showed higher tensile strength in the longitudinal direction. In the transverse tensile failure of 0/0 lamina jute fiber composites, fiber slicing (indicated by a circle in Fig. 8a) and debonding at the fiber-matrix interfaces (indicated by an arrow in Fig. 8b) were found to be the dominant fracture modes (Fig. 8).
Here, a significant proportion of the load-bearing section is occupied by the weak fiber/matrix interface. As a result, a drastic decrease in tensile strength was observed for the 0/0 lamina jute fiber composites. The tensile failure steps are summarized schematically in Fig. 9.
Mechanical properties of 0/+45/-45/0 composite
In the case of the 0/+45/-45/0 composites, the longitudinal tensile strength is inferior to that of the UD composite. The reason is that in the UD composite both the relatively strong jute fiber and the weaker epoxy matrix control the tensile strength, whereas in the 0/+45/-45/0 composite the fiber-matrix interfaces, where the concentration of defects is higher, mainly dominate the composite strength. As a result, the longitudinal tensile strength of the 0/+45/-45/0 composite is poor.
On the other hand, the transverse strength of the 0/+45/-45/0 composite is higher than that of the UD composite. In the UD composite, most of the defects lie at the fiber-matrix interface, but in the 0/+45/-45/0 composite the +45/-45 plies act as a source of resistance in the +45° and -45° directions. As a result, the transverse tensile strength of the 0/+45/-45/0 composite is slightly higher than that of the UD composite.
For laminates, the simple rule of mixtures is not applicable. In the case of the 0/+45/-45/0 composite, there are two interior layers (+45° and -45°, respectively), which behave differently under stress; their responses are also different and complex. To avoid this complex behavior of the reinforcing fibers, the experimental results have been explained in terms of the physical morphologies of the fibers and the fracture surfaces of the composites. Fig. 10 shows a typical fracture surface of the 0/+45/-45/0 composite. From this figure, it is clear that failure is dominated by fiber-matrix and matrix-matrix shearing, matrix and fiber failure, and fiber-matrix interface failure. Fiber-matrix interface failure is indicated by a shear-lip type wavy fracture surface (indicated by a triangle) [24]. Some fiber pullout in the ±45° direction is also observed (marked by arrows). Since there is fiber-matrix shearing, fiber debris is also seen on the fracture surface (indicated by a square). Fig. 10 also shows a large island of matrix (indicated by a circle), which indicates that the fiber-matrix distribution is non-uniform. This non-uniform distribution is also responsible for the lower tensile properties of the jute epoxy composite.
A spherulitic type of matrix failure around a fiber indicates the presence of compressive force, as shown in Fig. 11a (indicated by a circle). This compressive zone is more brittle than the surrounding matrix; when tensile stress is applied, it shows a tendency toward matrix cracking around the fiber (indicated by a black arrow in Fig. 11a). The presence of compressive force is confirmed by the crazing zone around the fiber (indicated by a rectangle in Fig. 11b). In addition, some brittle fiber failure was also observed, as indicated by a triangle in Fig. 11c.
Conclusions
In this research work, jute fiber reinforced epoxy matrix composites were developed by the vacuum assisted resin infiltration (VARI) technique with preformed stacking sequences 0/0/0/0, 0/+45°/-45°/0, and 0/90°/90°/0. These composites were characterized by tensile tests and by observation of the fracture surfaces under a high-resolution FEGSEM. From this research work, the following conclusions are drawn.

a. For the 0/0/0/0 and 0/+45°/-45°/0 lamina composites, the longitudinal tensile strength was found to be higher than the transverse strength, whereas for the 0/90°/90°/0 lamina composites no directional difference in tensile strength was observed.

b. For all developed composites, the experimental results revealed that the tensile properties strongly depend on the tensile strength of the fiber, and that the tensile properties of jute fiber are very defect sensitive.

c. The theoretical tensile properties obtained from the rule of mixtures deviate from the experimental values, and this deviation is more significant in the transverse direction.

d. The compressive fracture mode is evidenced by the spherulitic appearance and crazing around the jute fibers.

e. For the UD jute epoxy composite, the sequence of failure was matrix cracking, matrix crazing at the fiber-matrix interface, partial fiber breaking, and fiber slicing and pullout from the matrix. In the transverse direction, failure consisted of fiber slicing and the formation of fiber debris.
Fig. 2. VARI setup and resin front during the jute epoxy composite fabrication process; a) VARI setup and b) resin front (indicated by arrow).
Table 1. Longitudinal tensile behavior of laminates.
Table 2. Transverse tensile behavior of laminates.
A Single Amino Acid Change in Subunit 6 of the Yeast Mitochondrial ATPase Suppresses a Null Mutation in ATP10*
In an earlier study, the ATP10 gene of Saccharomyces cerevisiae was shown to code for an inner membrane protein required for assembly of the F0 sector of the mitochondrial ATPase complex (Ackerman, S., and Tzagoloff, A. (1990) J. Biol. Chem. 265, 9952–9959). To gain additional insights into the function of Atp10p, we have analyzed a revertant of an atp10 null mutant that displays partial recovery of oligomycin-sensitive ATPase and of respiratory competence. The suppressor mutation in the revertant has been mapped to the OLI2 locus in mitochondrial DNA and shown to be a single base change in the C-terminal coding region of the gene. The mutation results in the substitution of a valine for an alanine at residue 249 of subunit 6 of the ATPase. The ability of the subunit 6 mutation to compensate for the absence of Atp10p implies a functional interaction between the two proteins. Such an interaction is consistent with evidence indicating that the C-terminal region with the site of the mutation and the extramembrane domain of Atp10p are both on the matrix side of the inner membrane. Subunit 6 has been purified from the parental wild type strain, from the atp10 null mutant, and from the revertant. The N-terminal sequences of the three proteins indicated that they all start at Ser11, the normal processing site of the subunit 6 precursor. Mass spectral analysis of the wild type and mutant subunit 6 failed to reveal any substantive difference between the wild type and mutant proteins when the mass of the latter was corrected for the Ala → Val mutation. These data argue against a role of Atp10p in post-translational modification of subunit 6. Although post-translational modification of another ATPase subunit interacting with subunit 6 cannot be excluded, a more likely function for Atp10p is that it acts as a subunit 6 chaperone during F0 assembly.
The F0 component of the proton-translocating ATPase consists of a set of hydrophobic proteins that are embedded in the mitochondrial inner membrane. This important constituent of the larger F1-F0 complex catalyzes vectorial transfer of protons across the inner membrane, the direction being dependent on whether the enzyme is functioning in an ATP synthetic or hydrolytic mode (1). In bakers' yeast, three subunits of F0 are encoded in mitochondrial DNA (2). The other F0 subunits are all products of nuclear genes. Most of the F0 subunits are required for binding and conferral of oligomycin sensitivity on the F1-ATPase (3,4). The exceptions are three recently described subunits (5) that have been proposed to be involved in dimerization of the F1-F0 complex in the membrane. Mutations in these subunits do not appear to influence the basic ATPase activity of the complex (5).
Maintenance of functional ATPase depends not only on the expression of mitochondrially and nuclearly encoded subunits of the enzyme but also on nuclear gene products that promote essential events during ATPase assembly. Some factors such as Atp11p and Atp12p have been shown to interact with the α- and β-subunits and to render them competent to oligomerize into the F1-ATPase (6-8). Other factors are required for transcription/translation of subunit 9 of the complex (9,10). In an earlier study we reported that the product of the ATP10 gene does not affect assembly of F1 or synthesis of subunit 9 but is essential for expression of functional F0 (11). Mutations in ATP10 resulted in a loss of oligomycin sensitivity and a more labile interaction of F1 with the membrane. Both of these properties are hallmarks of a defect in F0. Atp10p is localized in the mitochondrial inner membrane but is not a constituent of the ATPase complex. As with so many factors that have been implicated in assembly of ATPase and of respiratory chain complexes, its precise function has remained obscure.
To learn more about the role of Atp10p in F0 assembly, we have extended the analysis of the atp10 null mutant and have studied an extragenic suppressor that rescues the respiratory defect of the mutant. The suppressor has been mapped to mitochondrial DNA and identified as a single amino acid substitution in the OLI2 gene for subunit 6 of F0. These data suggest a functional interaction of Atp10p with subunit 6. The location of the suppressor mutation near the C-terminal region of subunit 6 argues against a role of Atp10p in processing of the subunit 6 precursor. This is also supported by the presence of mature subunit 6 in the atp10 mutant. Mass spectrometric analysis of subunits 6 and 9 purified from wild type and mutant strains has also excluded a role of Atp10p in post-translational chemical modification of these ATPase constituents. Atp10p, therefore, is more likely to be a chaperone for subunit 6.
MATERIALS AND METHODS
Yeast Strains and Growth Media-The genotypes and sources of the wild type, pet, and mit⁻ strains of Saccharomyces cerevisiae used in this study are listed in Table I. The compositions of the media for growth of yeast have been described elsewhere (13).
Preparation of Yeast Mitochondria and ATPase Assays-Mitochondria were prepared by the method of Faye et al. (14) except that Zymolyase 20,000 instead of Glusulase was used to convert cells to spheroplasts. ATPase activity was assayed by measuring release of inorganic phosphate from ATP at 37°C in the presence and absence of oligomycin (15).
Cloning and Sequencing of the oli2 Gene-Mitochondrial DNAs purified from W303-1A, aW303ΔATP10, and three independent revertants 10R1, 10R2, and 10R3 (16) were used as templates for polymerase chain reaction amplification of the OLI2 gene. One of the two synthetic primers had the sequence matching the sense strand from nucleotides −65 to −42 (17), except for one base change that was introduced to create the BglII site. The second primer was complementary to the sense strand from nucleotides +867 to +893 of the sequence, except for two base changes to form a HindIII site. The products obtained from the synthesis were digested with BamHI and HindIII and were ligated to YEp352 (18) linearized with the same restriction enzymes.
Purification of Subunits 6 and 9 of ATPase-Proteolipids were extracted as described by Michon et al. (19). Mitochondria (80-250 mg) were suspended at a protein concentration of 12-18 mg/ml and extracted in 10 volumes of chloroform/methanol (1:1) by stirring the mixture at room temperature for 18 h. The organic extract was clarified by centrifugation and was washed by addition of water and chloroform (final proportion, chloroform/methanol/water, 8:4:3 v/v/v). The organic phase was dried down in a rotary evaporator and dissolved in 2 ml of chloroform/methanol (1:1), and proteins were precipitated by addition of 4 volumes of ether at −80°C for 10 min. Two different methods were used to purify subunits 6 and 9. In the first method, the ether precipitate was suspended in 1 ml of chloroform/methanol (2:1) and chromatographed on a Primesphere 5 C4 high pressure liquid chromatography column (Phenomenex). The column was equilibrated in a solution containing 0.1% trifluoroacetic acid in methanol/water (3:1). The column was developed over 30 min at a flow rate of 1 ml/min with a linear gradient from 0 to 100% chloroform/methanol (2:1) containing 0.1% trifluoroacetic acid. Subunit 6, which eluted at 23 min, was collected and dried down under vacuum. This preparation of pure subunit 6 was used for protein sequencing. In the second method, the ether precipitate was dissolved in 2 ml of chloroform/methanol (1:1) and chromatographed on a 1.5 × 45-cm column of Sephadex LH60 equilibrated with chloroform/methanol/0.1 M HCl (1:1:0.05) (20). Fractions of 2 ml were collected and checked for protein by SDS-PAGE on a 15% polyacrylamide gel. Fractions enriched for subunit 6 were precipitated with ether as above. Fractions containing subunit 9 were extracted with 0.5 volumes of chloroform and 0.37 volumes of water. The organic phase was used for mass determinations.
The samples were used either directly or concentrated under vacuum before mixing with the matrix (1% sinapinic acid in acetonitrile containing 1% trifluoroacetic acid). Spectra were obtained with a Voyager DE-PRO (PerSeptive Biosystems).
Miscellaneous Procedures-Standard methods were used for the preparation and ligation of DNA fragments and for transformation and recovery of plasmid DNA from Escherichia coli (21). The method of Maxam and Gilbert was used to sequence 5′-end labeled single-stranded DNA fragments (22). Proteins were separated on SDS-PAGE in the buffer system of Laemmli (23). Immunodetection of proteins on Western blots was carried out with ¹²⁵I-labeled protein A (24). Protein concentrations were determined by the method of Lowry et al. (25).
RESULTS AND DISCUSSION
Isolation and Genetic Characterization of atp10 Revertants-aW303ΔATP10, abbreviated as ΔATP10 in this text, is a haploid strain of yeast with an atp10 null allele (11). This mutant grows very poorly on nonfermentable carbon sources such as ethanol and/or glycerol. The compromised respiratory activity of mitochondria in the null mutant, as well as in atp10 point mutants, was previously attributed to a defect in the F0 component of the ATPase (11). To learn more about the biochemical lesion responsible for the F0 assembly defect, spontaneous revertants of ΔATP10 were isolated. Such revertants appear frequently (10⁻⁴-10⁻⁵ reversion frequency) on medium containing ethanol and glycerol as carbon sources. Three independent revertants (10R1, 10R2, and 10R3) were chosen for further study. The generation time of the revertants in liquid medium containing glycerol was estimated to be about two times longer than that of the parental wild type (Table II). The revertant phenotype was found to be transmitted in a stable manner after propagation of the cells on glucose or galactose.
The revertants were further characterized by crosses to E103, an atp10 mutant obtained by mutagenesis of the respiratory competent strain D273-10B/A1 with ethylmethane sulfonate (26). Diploid cells issued from the cross grew on respiratory substrates with approximately the same generation time as the haploid revertant, indicating that the suppressor(s) were either nuclear dominant or mitochondrial mutations. To distinguish between these two possibilities, spontaneous ρ⁻ derivatives were isolated from each revertant and were crossed to E103. Diploid cells formed in these crosses failed to grow on nonfermentable carbon sources, indicating extragenic mutations in mitochondrial DNA. This was confirmed by segregation tests. The revertants were crossed to E103 in glucose-containing medium for 6 h. Diploid cells were prototrophically selected on minimal glucose. Following 20-30 generations, they were spread for single colonies on rich glucose medium and, after 2 days of growth at 30°C, were replicated on rich medium containing glycerol. Two distinct growth phenotypes were noted on the glycerol medium. In all cases 30-50% of the colonies displayed the revertant phenotype, whereas the remaining cells showed the very slow growth characteristic of the mutant. Several respiratory competent diploid cells from the first segregation were grown on glucose and tested a second time for mitotic segregation as described above. In every instance all of the segregants displayed revertant properties. The possibility that the revertant harbored a second nuclear suppressor that, together with the mitochondrial mutation, was responsible for the respiratory competent phenotype was excluded by the results of a cross of revertant 10R3 to the atp10 null mutant. Respiratory competent diploid cells produced from this cross were sporulated, and the meiotic spore progeny were analyzed by tetrad dissection. In nine complete tetrads all the spores exhibited the revertant phenotype.
These data, together with the results of the crosses of the ρ⁻ derivatives of the revertants to the atp10 point mutant, indicated that the suppressor is inherited as a mutation in the mitochondrial genome. The mitotic and meiotic segregation results also exclude the suppressor from being a rearrangement of mitochondrial DNA that coexists as an independently replicating ρ⁻ genome in an otherwise ρ⁺ background (27,28).
The mitochondrial suppressor was transferred to a wild type nuclear background by crossing 10R3 to a ρ⁰ derivative of W303-10B. The diploid cells were sporulated, and Leu⁻ meiotic progeny carrying the ATP10 gene were obtained (10R3/ATP10). These cells grew on glycerol as well as the wild type strain at 30°C but were partially temperature sensitive at 37°C (Fig. 1). The temperature-sensitive phenotype was also detected in the atp10 mutant and revertant. The normal growth of 10R3/ATP10 on glycerol at 30°C indicates that the suppressor does not affect the ATPase in cells expressing Atp10p.
Properties of the Mitochondrial ATPase in the Revertant Strains-In earlier studies, atp10 mutants were found to have normal F1-ATPase (11). The larger F1-F0 complex, however, had altered properties, one of which was decreased oligomycin sensitivity. Assays of mitochondrial ATPase activity from the different strains indicated that sensitivity to oligomycin is partially restored in the revertants (Table III). The ATPase activities of the three revertants 10R1, 10R2, and 10R3 were inhibited 25-33% by oligomycin. In the same assay, the mitochondrial ATPase of the wild type was inhibited by 75%, whereas in ΔATP10 the ATPase was completely insensitive to the antibiotic. The partial restoration of oligomycin-sensitive ATPase in the revertants is consistent with their ability to grow on respiratory substrates.
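The oligomycin sensitivity figures quoted above follow from a simple percent-inhibition calculation on paired assays with and without the antibiotic; the specific activities in the example are hypothetical, chosen only to reproduce the 75% wild-type inhibition and the fully insensitive null mutant:

```python
def percent_inhibition(activity_without, activity_with):
    """Percent inhibition of ATPase activity by oligomycin."""
    return 100.0 * (1.0 - activity_with / activity_without)

# Hypothetical specific activities (e.g. umol Pi released/min/mg protein).
wild_type = percent_inhibition(4.0, 1.0)    # 75% inhibited
null_mutant = percent_inhibition(4.0, 4.0)  # 0%, i.e. fully insensitive
```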
The absence of oligomycin sensitivity in the atp10 mutant has previously been ascribed to the failure of F1 to interact correctly with F0 (11). The oligomycin sensitivity observed in the revertants therefore indicated that the suppressor permits some F1 to be assembled with F0. This was confirmed by sucrose gradient sedimentation analysis of detergent extracts of wild type and mutant mitochondria. In agreement with previous results (29), all the α- and β-subunits of F1 in the wild type extract co-sedimented as part of the larger F1-F0 complex (Fig. 2). This was also true of the extract from 10R3/ATP10, which contains the suppressor in the context of the wild type ATP10 gene. Even though the F1 subunits also co-sedimented in the extract from the atp10 mutant, their slower sedimentation indicated that they were part of the F1 oligomer but not of the F1-F0 complex (29). In the case of the revertant extract, two separate peaks were observed: approximately 30% of the α- and β-subunits sedimented as the F1-F0 complex, whereas the remainder sedimented as the F1 oligomer (Fig. 2). Similar results were obtained when the sedimentation analysis was extended to subunits 4 and d of F0 (data not shown); in this case also, only a fraction of the F0 subunits in the revertant extract co-sedimented with the F1-F0 complex.

F0 Proteolipids in atp10 Mutants and Revertants-Subunits 6, 8, and 9 of the F0 sector are encoded in mitochondrial DNA (2). These hydrophobic constituents are products of OLI2, AAP1, and OLI1, respectively. Two different approaches were used to estimate the levels of these proteins in the mutants and revertants. The chloroform/methanol extraction conditions of Michon and Velours (19) were used to isolate subunits 6 and 9 from mitochondria of the atp10 null mutant ΔATP10, from the three revertants, and from the parental wild type strain.
The extracts were analyzed by SDS-PAGE, and the proteolipids were visualized by silver staining. Quantitation of the stained gel revealed about 16 times less subunit 6 in the mutant than in the wild type extract (Fig. 3). The amount of subunit 6 in the revertant extracts was significantly increased relative to the mutant, although it was still lower than in the wild type. It is interesting that the oli2 point mutant, which is able to grow slowly on glycerol, also has a low level of chloroform/methanol-extractable subunit 6. The decreased steady-state concentration of subunit 6 in the mutant could be due to an effect of the mutation either on synthesis or on turnover of the protein.
The synthesis of the ATPase proteolipids in the different strains was estimated by in vivo labeling of the mitochondrial translation products with ³⁵SO₄²⁻ in the presence of cycloheximide. Subunit 6 was found to be synthesized in all the atp10 mutants (Fig. 4), indicating that the low level of this protein in the chloroform/methanol extract of ΔATP10 mitochondria was not a consequence of a translational defect but rather of an increased turnover of the protein in the mutant. Similar results were obtained when the mitochondrial translation products were synthesized in isolated mitochondria (data not shown). The lability of subunit 6 is not unique to atp10 mutants and has also been reported in other strains that are blocked in F0 assembly because of mutations in the structural genes (31-33).
Significantly, subunit 6 detected in the 10R3 revertant had an altered electrophoretic mobility (Fig. 4). The slightly faster migration of subunit 6 was also discerned in the other two revertants, 10R1 and 10R2 (data not shown). The faster migration of subunit 6 from the revertant is probably due to an increased capacity of the protein to bind sodium dodecyl sulfate as a result of the C-terminal mutation (see below).

FIG. 1. Growth of wild type and mutant cells at 30 and 37°C. The respiratory competent parental strain W303-1A (ATP10/OLI2), the atp10 null mutant ΔATP10 (atp10/OLI2), the revertant 10R3 (atp10/oli2), and the wild type strain with the mitochondrial genome of the revertant (ATP10/oli2) were diluted serially and spotted, starting from 10⁵ cells, on two YPD (rich glucose) and two YEPG (rich glycerol plus ethanol) plates that were incubated for 3 days at 30 and 37°C. No differences in growth at the two temperatures were found on the YPD medium (not shown). Only the YEPG plates are shown.

Localization of the Suppressor and Sequencing of OLI2 from the atp10 Mutant and Revertants-To map the mitochondrial suppressor, the three revertants were treated with ethidium bromide, and the resultant ρ⁻ derivatives were collected. The ρ⁻ clones of each library were crossed to the atp10 mutant, and the diploid cells that formed in the crosses were tested for appearance of the suppressed phenotype. The regions of mitochondrial DNA conserved in several ρ⁻ clones that were able to suppress the respiratory defect of the atp10 mutant were determined by physical analysis of their ρ⁻ genomes. In each case the ρ⁻ genomes were ascertained to contain the OLI2 gene for subunit 6 of the ATPase (17).
The mitochondrial OLI2 gene was amplified by polymerase chain reaction from the mitochondrial DNA of W303-1A, ΔATP10, and the three revertants 10R1, 10R2, and 10R3, and the products were analyzed. The sequences of the genes cloned from the wild type strain and from the ΔATP10 null mutant were identical to the sequence of OLI2 previously reported for the respiratory competent strain D273-10B/A1 (17). The sequences of the genes obtained from the three revertants, however, showed a single identical C → T base change at nucleotide 746. The C → T transition replaces the alanine at residue 249, near the C terminus of the protein, with a valine. In view of the identical mutation in the three revertants, all subsequent experiments on the revertant made use of 10R3.
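As a quick sanity check of the numbers above, the snippet below locates nucleotide 746 within the coding sequence and applies the C → T change. The wobble base of the Ala249 codon is not given in the text, so GCA is assumed for illustration; any of the four Ala codons (GCN) becomes a Val codon (GTN) when the second position mutates from C to T:

```python
def codon_position(nt):
    """Map a 1-based nucleotide number to (1-based codon number, position in codon)."""
    return (nt - 1) // 3 + 1, (nt - 1) % 3 + 1

# A tiny lookup covering only the codons relevant here (DNA sense strand).
AMINO_ACID = {"GCA": "Ala", "GCT": "Ala", "GCC": "Ala", "GCG": "Ala",
              "GTA": "Val", "GTT": "Val", "GTC": "Val", "GTG": "Val"}

codon_no, pos = codon_position(746)      # codon 249, second codon position
wild_type_codon = "GCA"                  # assumed; actual wobble base unknown
mutant_codon = wild_type_codon[:pos - 1] + "T" + wild_type_codon[pos:]
# AMINO_ACID[wild_type_codon] is 'Ala'; AMINO_ACID[mutant_codon] is 'Val'
```

Nucleotide 746 falls at the second position of codon 249, which is why a single C → T transition converts Ala249 to Val.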
An alignment of the C-terminal 16 residues of subunit 6 from fungal, plant, and animal sources shows that Ala249 is not a conserved amino acid (Table IV). It is also interesting that some fungi lack the C-terminal sequence corresponding to the region of the yeast protein containing the mutation.
N-terminal Sequence of Subunit 6 -Subunit 6 of yeast ATPase is synthesized as a precursor with a 10-amino acid extension at the N terminus (19). The mature protein starts with the serine at residue 11 of the primary translation product (19). To determine whether subunit 6 is correctly processed in the mutant and the revertants, the protein was purified from the different strains by reverse-phase chromatography of chloroform/methanol extracts of mitochondria. No significant difference was noted in the elution times of the protein obtained from the wild type and the ⌬ATP10 mutant or revertant. The sequences of the first 10 residues indicated that the proteins purified from the mutant and revertant strains begin with Ser 11 as did the wild type protein. This result indicates that Atp10p is not involved in processing of the precusor. Mitochondria were prepared from the wild type haploid strain W303-1A, from the atp10 null mutant ⌬ATP10 (⌬ATP10), from the revertant 10R3(10R3), and from 10R3/ATP10, a respiratory competent strain with the mitochondrial DNA of 10R3. Mitochondria, adjusted to a protein concentration of 8 mg/ml in 5 mM Tris acetate, pH 7.5, were extracted by addition of 10% Triton X-100 to a final concentration of 0.25%. After centrifugation at 105,000 ϫ g av for 20 min, 0.5 ml of the supernatant was layered on top of 5 ml of a 5-20% linear sucrose gradient prepared in the presence of 5 mM Tris acetate, pH 7.5, and 0.1% Triton X-100. The gradient was centrifuged in a Beckman SW65 rotor at 65,000 rpm for 3 h. Eleven fractions were collected from the bottom of the gradients by gravity flow. The fractions (20 l) were separated on a 12% SDS-PAGE gel. Following transfer to nitrocellulose, the blots were first treated with a mixture of antisera against the ␣ and  subunits of F1 and then visualized by a second reaction with 125 Iprotein A. The migration of the ␣ and  subunits are marked in the left-hand margin. (19). 
[Figure legend] A sample of the extract corresponding to 0.2 mg of starting mitochondria was dissolved in depolymerization buffer (23) and separated on a 12.5% SDS-PAGE gel prepared in the presence of 6 M urea and glycerol (23). The locations of subunits 6 and 9 in the silver-stained gel are indicated in the right-hand margin.
FIG. 4. Mitochondrial translation products in wild type and mutants. The parental wild type (W303-1A), the atp10 null mutant ΔATP10, the revertant 10R3, and the oli2 mutant M28-82 were grown in YPGal and labeled with ³⁵SO₄²⁻ in the presence of cycloheximide (30). Mitochondria were isolated, and 40 μg of mitochondrial protein was separated on a 12.5% polyacrylamide gel containing 6 M urea and 6% glycerol. The gel was dried prior to autoradiography. The following mitochondrial translation products are identified in the margin: ribosomal protein (Var1); subunit 1 (Cox1), subunit 2 (Cox2), and subunit 3 (Cox3) of cytochrome oxidase; cytochrome b (Cytb); and subunit 6 of ATPase (Atp6).
Localization and Topology of Atp10p-Atp10p was previously found to be associated with the mitochondrial inner membrane. It was solubilized with NaBr, suggesting that it may be an extrinsic membrane protein (11). This could not be confirmed in the present study. When submitochondrial vesicles were extracted with carbonate, the alkaline conditions failed to release Atp10p from the membrane, indicating that it is an intrinsic membrane constituent (Fig. 5A).
The ability of a single amino acid substitution in the C-terminal region of subunit 6 to partially rescue the atp10 null mutant could indicate that Atp10p interacts with the C-terminal region of subunit 6. Subunit a, the E. coli homolog of mitochondrial subunit 6, has been proposed to have five transmembrane domains, with its N terminus on the periplasmic and its C terminus on the cytoplasmic side of the plasma membrane (34,35). An alignment of the E. coli and yeast subunit 6 sequences suggests a similar number of transmembrane domains in the latter protein. Moreover, based on the E. coli model (35), the 17 C-terminal residues of the yeast protein, including the site of the mutation, are predicted to lie outside of the phospholipid bilayer in the matrix compartment.
The topology of Atp10p was probed by testing its sensitivity to proteinase K digestion. Mitochondria and mitoplasts prepared by hypotonic swelling of mitochondria were treated with proteinase K under conditions that digest proteins exposed to the intermembrane space. Western blots disclosed that Atp10p is completely protected against proteinase K in both intact and hypotonically treated mitochondria (Fig. 5B). Antiserum against cytochrome b2, an intermembrane space marker, was used as a control. As expected, cytochrome b2 is detected in proteinase K-treated and untreated mitochondria but is severely reduced in mitoplasts (Fig. 5B). Thus, the location of the C-terminal region of subunit 6 is consistent with the topology of Atp10p, which, based on its resistance to proteinase K digestion, faces the matrix side of the inner membrane.
FIG. 5. Location and topology of Atp10p. A, mitochondria from the respiratory competent strain W303-1A were converted to submitochondrial particles by sonic irradiation. The submitochondrial particles were extracted in the presence of 0.1 M sodium carbonate at a final protein concentration of 10 mg/ml. After incubation on ice for 10 min, the extrinsic membrane proteins were separated from the membranes by centrifugation at 400,000 × g_av for 30 min. Equivalent volumes of mitochondria, submitochondrial particles (SMP), sodium carbonate extract, and pellet were separated on a 12% polyacrylamide gel and transferred to nitrocellulose paper. The Western blot was reacted with antiserum against Atp10p using the Super Signal detection system (Pierce). B, mitochondria were prepared by the method of Glick (36) from W303-1A. The mitochondria, at a protein concentration of 8 mg/ml, were diluted with 8 volumes of 20 mM Hepes, pH 7.5, containing 0.6 M sorbitol (Mit). Mitoplasts (Mpl) were prepared by dilution of mitochondria in 20 mM Hepes, pH 7.5, without the sorbitol. One half of each sample was treated with proteinase K (prot K) at a final concentration of 100 μg/ml and incubated for 60 min on ice. Phenylmethylsulfonyl fluoride was added to a final concentration of 2 mM to stop the proteolysis, and the mitochondria and mitoplasts were recovered by centrifugation at 20,000 × g_av. The pellets were suspended in 20 mM Hepes, pH 7.5, 0.6 M sorbitol, and proteins were precipitated by addition of 0.1 volume of 50% trichloroacetic acid. The samples were heated at 65°C for 10 min and centrifuged, and the pellets were dissolved in Laemmli depolymerization buffer (23). Total mitochondrial and mitoplast proteins (25 μg) were separated on a 12% polyacrylamide gel and transferred to nitrocellulose, and the Western blots were treated either with antiserum to cytochrome b2 or to Atp10p. The migration of molecular mass standards is marked in the left-hand margin. Cytochrome b2 (B2) and Atp10p are identified in the right-hand margin.

[Table V footnote] The experimentally determined masses of subunit 6 from the two strains containing the Ala → Val mutation were normalized to that of the wild type and atp10 null mutant by subtracting 28 daltons, the mass difference between alanine and valine. The values reported are averages with the ranges next to them.

Molecular Masses of Subunits 6 and 9 of the ATPase -There are two ways in which Atp10p could interact with subunit 6 during F0 assembly. The more obvious function is that Atp10p modifies subunit 6 post-translationally. As indicated above, Atp10p is not involved in proteolytic removal of the presequence from the subunit 6 precursor. Other types of modifications were also excluded on the basis of mass measurements of subunit 6 isolated from the wild type, the mutant, and the revertant. The apparent masses of subunit 6 from the atp10 mutant and revertant (corrected for the Ala → Val mutation) differed by less than 11 daltons from the wild type protein (Table V). This difference, which lies within the accuracy of the instrument, is too small to be due to a chemical modification. The mass of subunit 9 obtained from the same strains agreed well with the known sequence of the protein (data not shown), thereby excluding a role of Atp10p in chemical modification of subunit 9. Rather, these data suggest the alternative explanation that
Atp10p acts as a subunit 6-specific chaperone that may confer an assembly-competent conformation on subunit 6 or facilitate its insertion into the inner membrane. Attempts to detect a complex of Atp10p and subunit 6 by cross-linking experiments have so far failed.
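The 28-dalton correction applied to the Table V masses can be reproduced from standard average residue masses. The sketch below is illustrative only; the residue mass values are standard reference numbers, and the function names are ours, not the authors':

```python
# Average in-chain residue masses in daltons (standard reference values).
RESIDUE_MASS = {"Ala": 71.08, "Val": 99.13}

def mutation_mass_shift(wild: str, mutant: str) -> float:
    """Mass change introduced by a single amino acid substitution."""
    return RESIDUE_MASS[mutant] - RESIDUE_MASS[wild]

def normalize_to_wild_type(measured: float, wild: str, mutant: str) -> float:
    """Subtract the substitution's mass shift so a mutant protein's
    measured mass can be compared directly with the wild-type mass,
    as done for the Ala -> Val strains in Table V."""
    return measured - mutation_mass_shift(wild, mutant)

shift = mutation_mass_shift("Ala", "Val")  # ~28 Da
```

Any residual difference surviving this normalization (here less than 11 Da, within instrument accuracy) argues against a covalent modification of subunit 6 by Atp10p.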
Atp10p could also be involved in modification of a neighboring subunit. In the absence of the modification, interaction with subunit 6 would be weakened, causing a defect in F0 assembly. The presence of a bulkier and more hydrophobic residue in the C-terminal region of subunit 6 might act to stabilize the protein-protein interface. Such an interaction would need to occur outside the phospholipid bilayer, on the matrix side of the inner membrane; this follows from the site of the mutation in subunit 6. There is evidence for a contact of the N-terminal regions of subunits 4 and 6 on the intermembrane space side (37). An interaction of the first transmembrane α-helix of subunit 6 with the transmembrane helix of subunit i has also been described (38). At present, however, information concerning possible interactions of the C-terminal tail of subunit 6 with other F0 or stalk constituents is lacking.
|
v3-fos-license
|
2018-05-23T01:48:14.439Z
|
2018-05-21T00:00:00.000
|
29157313
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmccardiovascdisord.biomedcentral.com/track/pdf/10.1186/s12872-018-0832-2",
"pdf_hash": "7ebd07f04696b8f6eb9bc5ef18efa4628c47cdaf",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2694",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"sha1": "7ebd07f04696b8f6eb9bc5ef18efa4628c47cdaf",
"year": 2018
}
|
pes2o/s2orc
|
Effect of the patient education - Learning and Coping strategies - in cardiac rehabilitation on return to work at one year: a randomised controlled trial (LC-REHAB)
Background Personal resources are identified as important for the ability to return to work (RTW) for patients with ischaemic heart disease (IHD) or heart failure (HF) undergoing cardiac rehabilitation (CR). The patient education ‘Learning and Coping’ (LC) addresses personal resources through a pedagogical approach. This trial aimed to assess the effect of adding LC strategies in CR compared to standard CR, measured on RTW status at one-year follow-up after CR. Methods In an open parallel randomised controlled trial, patients with IHD or HF were block-randomised in a 1:1 ratio to the LC arm (LC plus CR) or the control arm (CR alone) across three Danish hospital units. Eligible patients were aged 18 to ≤60 and had not left the labour market. The intervention was developed from an inductive pedagogical approach consisting of individual interviews and group based teaching by health professionals with experienced patients as co-educators. The control arm consisted of deductive teaching (standard CR). RTW status was derived from the Danish Register for Evaluation of Marginalisation (DREAM). Blinding was not possible. The effect was evaluated by logistic regression analysis and reported as crude and adjusted odds ratios (OR) with 95% confidence interval (CI). Results The population for the present analysis was N = 244 (LC arm: n = 119 versus control arm: n = 125). No difference in RTW status was found at one year across arms (LC arm: 64.7% versus control arm: 68.8%, adjusted odds ratio (OR): 0.76, 95% CI: 0.43-1.31). Conclusion Addition of LC strategies in CR showed no improvement in RTW at one year follow-up. Trial registration www.clinicaltrials.gov identifier NCT01668394. First Posted: August 20, 2012.
Background
During the last decades, mortality from ischaemic heart disease (IHD) and heart failure (HF) has decreased due to improved primary and secondary prevention [1][2][3]. Thus, along with the changed age composition, many people worldwide are living with these conditions, which cause disability on several levels [4,5]. Health promotion and risk factor reduction are typically managed in cardiac rehabilitation (CR), and CR is known to improve clinical outcomes [2]. Integrated patient education in CR programmes may also reduce fatal and/or non-fatal cardiovascular events and improve health related quality of life (HRQoL) [3,6]. Patient education is recommended to focus on the individual's personal resources rather than only on increasing knowledge of disease management [7]. However, CR interventions have primarily been evaluated on clinical outcomes and less often on their ability to promote level of function, including return to work (RTW).
Work plays an important role in psychological and social wellbeing, and loss of productivity has economic costs for society [4,8]. Clinical guidelines across nations therefore intend to cover vocational counselling in CR; however, RTW internationally still seems to remain suboptimal [9,10]. In Denmark it has been estimated that 21 and 25% of people with IHD and HF, respectively, do not RTW 1 year after engaging in CR [11,12]. Moreover, some patients struggle to balance workplace demands with their individual resources and health status and therefore experience recurrent sick leave episodes after RTW [13][14][15]. Problems in the work reintegration process have recently been emphasised, since a study showed that detachment from employment was threefold higher among post-myocardial infarction patients 1 year after return to work compared with a matched population [16]. Personal resources such as coping and self-care are important aspects of a successful RTW process [17]. CR interventions aiming at the ability to cope with and engage in everyday life, evaluated on RTW, have provided inconsistent results [18,19]. Thus, which pedagogical approaches and methods promote the RTW process remains unknown.
Learning and coping strategies (LC) is a patient education method that aims to facilitate personal resources through inductive teaching with a high level of patient involvement, supplemented by individual clarifying interviews. Health professionals and experienced patients jointly perform the group based CR sessions [20]. The LC-REHAB trial was conducted in a hospital setting in Denmark and aimed to assess the effect of LC strategies on various outcomes; the strategies have been shown to promote patient adherence to CR [20,21]. In the present trial it was hypothesised that the LC strategies would also promote RTW compared with usual CR by enabling patients to use acquired LC skills in the RTW process, and furthermore that those receiving LC strategies would have fewer sick leave relapses owing to gained insight into their health condition and how to cope with it.
The primary aim was to assess the effect of adding LC strategies in CR on RTW 1 year after inclusion of patients diagnosed with IHD or HF. The secondary aim was to assess whether the addition of LC strategies in CR reduced sick leave relapses during the one-year follow-up.
Design
The trial was conducted on a subpopulation from the open randomised parallel group controlled trial, LC-REHAB [20]. Patients were randomly allocated to the intervention arm (LC strategies in addition to standard CR) or to the control arm (standard CR) in a 1:1 ratio, stratified for hospital unit, gender and diagnosis (IHD or HF), in blocks of two to four using a web-based system [20]. The allocation sequence was generated independently by the research team. Additional eligibility criteria in the present trial were applied after randomisation to exclude patients assessed with permanent work disability at inclusion. Information on inclusion dates was retrieved from the LC-REHAB trial [20]. Follow-up was defined as the week in which the date equivalent to 12 months after inclusion appeared. The trial was conducted and reported according to the CONSORT standards extension for randomised trials of non-pharmacological treatment [22].
Patients and recruitment
Patients were recruited between 30th November 2010 and 20th December 2012. Trial information was sent by postal mail to eligible patients referred to CR. Information about the trial was provided by telephone by the last author of this paper. Written informed consent, enrolment and randomisation were performed by health professionals at the CR units [20]. A total of 827 patients hospitalised for either IHD or HF were included in the LC-REHAB trial (Fig. 1). Two patients were excluded due to an error in the randomisation procedure. Of the remaining 825, 413 were randomised to the LC arm and 412 to the control arm.
Patients were enrolled at the CR unit and were eligible for the LC-REHAB trial if they were aged above 18 and were referred to, and motivated for, CR after hospitalisation for IHD or HF. If patients were diagnosed with both IHD and HF, they were classified as having HF. For the specific ICD-10 codes included, see the initial protocol for the LC-REHAB trial and the previous study publication [20,21]. The exclusion criteria were: acute coronary syndrome within the last 5 days before inclusion; active peri-, myo-, or endocarditis; symptomatic and/or untreated heart valve disease; severe hypertension with blood pressure > 200/110 mmHg; other severe cardiac or extra-cardiac disease; planned revascularisation; senile dementia; assessment as having poor compliance for participation in and completion of the trial; or previous participation in the trial.
Eligibility for the subpopulation was assessed after trial completion by pre-defined criteria independent of allocation arm. Inclusion criteria for the present trial were: aged > 18 to ≤60 years and being self-supported or receiving either State Educational grants, labour market-related benefits or health-related benefits that did not indicate a permanent job incapability, except for patients in jobs on modified conditions (flexi jobs). Exclusion criteria were: aged above 60, or receiving disability pension or passive social assistance indicating pre-existing, long-term work disability. Patients were assessed as eligible based on public transfer payments in the week of inclusion.
Interventions
Patients in both arms received a phase II CR programme lasting 8 weeks based on the national Danish guidelines on CR, starting the first workday after inclusion [20,23]. The programme was delivered in a hospital setting and consisted of group based sessions, all lasting 1.5 h, with a weekly education session and three exercise sessions per week. The content of the education sessions in both arms was split into eight topics, which were chosen in collaboration with experienced health professionals in CR and experienced patients who had previously undergone CR [20]. The topics were: function and symptoms of the heart, lifestyle effects on the development of IHD and HF, emotional reactions, medication, tiredness, the importance of relatives or other networks, importance and types of exercise, and future life with a chronic disease. The education sessions were primarily managed by a nurse. Exercise sessions consisted of aerobic exercise and muscle strength training managed by a physical therapist.

[Table footnotes] 1 [37]. 2 Attending at least 75% of scheduled sessions, corresponding to 18 exercise sessions and 6 educational sessions. The limit of 75% was set in accordance with recommendations for reducing mortality [38].
Both arms received CR by the same pair of a nurse and a physiotherapist throughout the CR programme; the pairs were designated to either the intervention or control arm throughout the trial. Due to the nature of the intervention, blinding of health professionals or patients was not possible. Sessions in the two different arms were performed at different times of the day [20].
Intervention arm (LC arm)
In addition to the described CR intervention, the LC arm took a situated, reflexive, and inductive approach to education and exercise. The rationale behind the pedagogical approach was based on theories behind LC strategies and was described in the initial protocol for LC-REHAB [20]. The rationale was applied through practical implications consisting of: two individual clarifying interviews (before and after CR), experienced patients as co-educators, and material developed for each topic, including background literature and questions to facilitate discussions. The approach was ensured by health professionals completing an 8-day competence education in LC strategies, with experienced patients participating in the last 4 days, and 1-h evaluation meetings between the pair of health professionals and the experienced patient once a week [20].
Control arm (standard CR)
The CR programme in the control arm was the one formerly used in the hospital units (standard CR) [23]. No explicit pedagogical rationale was described, and education and exercise consisted of structured deductive teaching. Identical pre-written educational slide-shows were used as material for the education sessions.
Outcomes
The primary outcome was RTW status at 1-year follow-up. After trial completion, information on the RTW outcome was retrieved from DREAM, which is administered by The Danish Ministry of Employment [24]. The register includes all Danish citizens who at some point since 1991 have received public benefits. Patients were identified in DREAM by their social security number. Each person is registered once a week with a code representing the type of transfer payment received that particular week [25]. For patients to be categorised as having an RTW status (yes) at 1-year follow-up, the four consecutive weeks prior to the week of one-year follow-up had to be either codeless (self-support) or carry codes representing State Education Fund grants or flexi jobs. For the secondary outcome, each patient was categorised as having an event of RTW during the one-year follow-up (yes/no). The first occurrence of four consecutive weeks of either self-support or codes representing State Education Fund grants or flexi jobs was categorised as RTW (yes). Patients who experienced the event of RTW during follow-up but were not registered as RTW at 1-year follow-up were identified and referred to as "relapsed patients".
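The week-by-week classification described above can be sketched as follows. The code values here are illustrative placeholders, not actual DREAM codes, and the function names are ours:

```python
# Illustrative stand-ins for DREAM weekly codes: None marks a codeless
# (self-supported) week; "SU" a State Education Fund grant; "FLEX" a flexi job.
# Any other code counts as not working (e.g. sickness benefit).
RTW_COMPATIBLE = {None, "SU", "FLEX"}

def rtw_status_at_follow_up(weekly_codes, follow_up_week):
    """RTW status (yes/no) at follow-up: the four consecutive weeks
    prior to the follow-up week must all be RTW-compatible."""
    window = weekly_codes[follow_up_week - 4:follow_up_week]
    return len(window) == 4 and all(c in RTW_COMPATIBLE for c in window)

def first_rtw_event(weekly_codes):
    """Index of the first week starting a run of four consecutive
    RTW-compatible weeks, or None if no such run exists."""
    run = 0
    for week, code in enumerate(weekly_codes):
        run = run + 1 if code in RTW_COMPATIBLE else 0
        if run == 4:
            return week - 3
    return None

def relapsed(weekly_codes, follow_up_week):
    """Relapse: an RTW event occurred during follow-up, but the patient
    is not classified as RTW at the follow-up week."""
    event = first_rtw_event(weekly_codes[:follow_up_week])
    return event is not None and not rtw_status_at_follow_up(weekly_codes, follow_up_week)
```

A patient with a four-week run of self-support mid-year who then returns to sickness benefit would count as an RTW event but also as a relapse, matching the trial's definition of "relapsed patients".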
Baseline characteristics
Baseline variables concerning: age, gender, height, weight, diagnosis (IHD or HF), presence of diabetes, smoking, civil status and former participation in CR were reported to dedicated databases in the LC-REHAB trial by the nurses at the CR units [20].
At the first CR session, the health professionals handed out self-reported questionnaires assessing: presence of depression, level of education, and annual household income [20]. Depression (yes/no) was assessed by the Major Depression Inventory (MDI) [26]. Level of education and household income were each classified as low, medium, or high. For the elaborated classification of level of education and household income, see the initial protocol [20].
Since work status prior to CR was assumed to be associated with RTW, additional information on employment was retrieved from DREAM after allocation to the trial arms [14]. 'No self-support prior to CR' was dichotomised (yes/no) according to whether the patient had been self-supported for at least 1 week within the 6 months prior to inclusion.
Statistical methods
Descriptive statistics were used to compare the baseline characteristics: chi-squared tests for the binary and categorical variables, and Student's t-test for the continuous ones. RTW status at 1 year was compared between the two arms using a logistic regression model. The result was presented both as unadjusted and adjusted odds ratios with 95% confidence intervals (CI). Adjustments were carried out for the stratification variables: gender, cardiac diagnosis and hospital unit. Additional adjustment for age was performed, as age was expected to be associated with RTW [11]. To address the secondary aim of the trial, frequencies and percentages described the number of patients who experienced the event of RTW during follow-up across trial arms. A comparison of the relapsed patients across arms was performed by chi-square test. All analyses were performed based on the intention-to-treat principle. Analyses were performed using Stata 14 software [27]. Data management was performed blinded to allocation.
Power calculation
Assuming a 14 percentage point difference in RTW proportions between arms, estimated from Kruse et al. [11], the given sample size of 244 patients provided this trial with a power of 89% when testing at a 5% level of significance.
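A normal-approximation version of this power calculation can be sketched as follows. The paper does not state the assumed baseline RTW proportion or the exact method used, so the example values below are our assumptions and the sketch need not reproduce the reported 89% exactly:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n_per_arm, z_alpha=1.96):
    """Approximate power of a two-sided two-sample test of proportions
    at alpha = 0.05 (z_alpha = 1.96); normal approximation, equal arms."""
    pbar = (p1 + p2) / 2.0
    se_null = math.sqrt(2.0 * pbar * (1.0 - pbar) / n_per_arm)
    se_alt = math.sqrt(p1 * (1.0 - p1) / n_per_arm + p2 * (1.0 - p2) / n_per_arm)
    z = (abs(p2 - p1) - z_alpha * se_null) / se_alt
    return norm_cdf(z)

# Assumed example: control RTW 69%, a 14 percentage point improvement,
# 122 patients per arm (~244 in total).
power = power_two_proportions(0.69, 0.83, 122)
```

Power is sensitive to the assumed baseline proportion and to whether a continuity correction is applied, which is why such calculations should report all their inputs.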
Baseline data
A total of 526 patients were excluded due to the age criteria, and 55 due to receiving disability pension (n = 47) or passive social assistance (n = 8) on the date of randomisation. Two hundred and forty-four patients were included in the present trial on RTW; 119 and 125 in the LC arm and control arm, respectively. Mean age was 51.8 years and the majority of the patients were men (77.0%). A small fraction (16.8%) had no self-support prior to CR (Table 1). Baseline variables were balanced across arms, with no statistically significant differences except for gender (males accounting for 83.2% in the LC arm vs. 71.2% in the control arm, p = 0.03) (Table 1). Non-responders were balanced across arms in baseline variables containing missing values (results not shown).
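The reported gender imbalance can be roughly checked from counts back-calculated from the percentages (83.2% of 119 ≈ 99 males in the LC arm; 71.2% of 125 = 89 in the control arm), so the counts, and hence the p-value, are approximate:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table, no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def p_value_chi2_df1(x):
    """P(X > x) for chi-squared with 1 df, using P = 2 * (1 - Phi(sqrt(x)))."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(math.sqrt(x) / math.sqrt(2.0))))

# Rows: LC arm, control arm; columns: males, females (back-calculated counts).
stat = chi2_2x2(99, 20, 89, 36)
p = p_value_chi2_df1(stat)  # consistent with the reported p = 0.03
```

Reproducing summary statistics from back-calculated counts is a useful sanity check when reading trial reports, but rounding in the published percentages limits its precision.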
Comparison of RTW status at one year and RTW during follow up
RTW status at 1-year follow-up did not differ between arms (LC arm: 64.7% versus control arm: 68.8%; adjusted OR: 0.76, 95% CI: 0.43-1.31). Registering RTW during the 1-year follow-up resulted in a slightly higher proportion of patients who experienced the event of RTW: 80% (95% CI: 72-87) in the LC arm and 83% (95% CI: 76-89) in the control arm. Regarding the secondary outcome, thirty-six patients were identified as relapsed patients during follow-up, and they were equally distributed across the LC arm and control arm (p = 0.87, Table 2).
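As a rough cross-check, the crude odds ratio can be computed from the arm sizes and the RTW proportions reported in the abstract. The event counts below are back-calculated from the percentages and are therefore approximate, and the crude OR will differ from the adjusted OR of 0.76, which additionally accounts for the stratification variables:

```python
import math

def odds_ratio_with_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Crude odds ratio (arm A vs arm B) with a Wald 95% confidence interval."""
    a, b = events_a, total_a - events_a
    c, d = events_b, total_b - events_b
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# LC arm: 64.7% of 119 ~ 77 RTW; control arm: 68.8% of 125 = 86 RTW.
or_, lo, hi = odds_ratio_with_ci(77, 119, 86, 125)  # crude OR ~ 0.83
```

The wide confidence interval straddling 1 mirrors the trial's null result.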
Discussion
The present trial showed that addition of LC strategies in hospital based phase II CR did not improve RTW status at one-year follow-up compared to standard CR. Nor did addition of LC strategies seem to reduce sick leave relapses during one-year follow-up.
Prior to this trial, comparable CR interventions aiming to facilitate personal resources to improve RTW had consisted of patient-involving educational sessions, making an individual worksheet plan, including the role of the spouse, shared decision-making and a partnership-based approach [18,19]. The evidence for these approaches and methods is inconsistent and rests on a sparse basis [18,19]. In cancer rehabilitation, no comparable educational interventions have established an effect regarding RTW either [28]. Comparing the effect of patient educational approaches across studies is, however, complicated by methodological differences as well as heterogeneity in patients, compared interventions and CR delivery. The trial benefitted from being able to measure the effect of a patient education method developed from a specific pedagogical approach to patient education. Despite the lack of effect of the intervention on RTW, the intervention provides knowledge for further research in CR patient education. According to the first guidance for evaluating complex interventions by the Medical Research Council (MRC), lack of effect may reflect implementation failure rather than genuine ineffectiveness of the intervention; such failures have been identified as problems concerning development, implementation, and evaluation [29].
The intervention in the LC arm was developed to promote the individual's personal resources rather than to target the multiple factors affecting level of function [30]. A more comprehensive approach, with beneficial effects on RTW, has however been suggested both by a review on CR and by a Cochrane review on cancer rehabilitation [10,28]. The LC strategies in this trial lacked involvement of contextual factors in general and workplaces in particular. The relapsed patients had jobs in health care, service jobs and manual labour. Job type and support from the employer have been found important for CR patients, and recently contextual factors at the workplace and organisational practices have been identified as constraining the margin of manoeuvre in work reintegration [15,31]. This may imply that physically demanding jobs, which among the relapsed patients were also job types requiring lower educational levels, are more difficult to reintegrate into after sick leave due to IHD or HF. Thus, the findings in this trial, together with emerging evidence, suggest developing interventions that foster accommodation and support through involvement of contextual factors such as workplaces in integrated CR programmes [31].
The implementation of the theoretical understanding in the LC strategies may be questioned, since it was not described to what extent the illness perspective on work resumption was addressed by the health professionals in the CR sessions. In the present study, low household income, low educational level, no self-support prior to CR and higher age at baseline were all statistically significant risk factors for not adhering to the CR exercise sessions (results not shown). These socioeconomic factors are in line with frequently reported predictors not only of poor adherence to CR but also of detachment from employment [16,32]. It is reasonable to assume that the poorer adherence among high-risk patients contributed to the absent effect of the LC strategies. Implementation of future interventions to improve RTW should therefore ensure adequate RTW-aimed interventions, include practical implications that specifically target the process of RTW, and optimise adherence to CR for patients at high risk of detachment from employment. Evaluation in this trial used an outcome that only accommodated a paid job or education and neglected possibly enhanced participation in e.g. volunteer work or social relations. This standardised outcome may have conflicted with the individualised approach of the LC strategies. Alternative evaluation of CR that measures more participation-related outcomes might be relevant to reflect the aim of rehabilitation and, furthermore, to address the important, well-known participation restrictions in patients with chronic IHD [5,33].
Study limitations and strengths
Information bias was considered minimal; DREAM has been validated against workplace-registered job attendance and long-term sick listing and was found to have high sensitivity and specificity [34]. Classification of RTW based on transfer payments from DREAM has elsewhere been defined based on various numbers of weeks (ranging from 1 to 5 weeks) [12,13,16]. The chosen definition of four consecutive weeks in this trial might have affected the frequencies of RTW but was not expected to differ between arms. Selection bias was furthermore not considered a concern, as follow-up was complete; therefore, threats to the internal validity of the trial were not assumed.
The trial was carried out in western Denmark, where the population in general has a lower educational level than the total population of Denmark, and the trial enrolled patients with HF [35]. Both a lower level of education and living with HF are associated with an increased risk of not returning to work and may have caused the overall lower RTW proportion (61-73%) in this trial compared to other studies [11,12,16].
According to the power calculation, a 14 percentage point difference in RTW was expected between the two trial arms; however, the trial detected a 4 percentage point difference. Lacking practical implications in the LC strategies for improving RTW, rather than a low sample size, was considered the reason for the absence of a difference.
The estimate may have been biased towards the null hypothesis, as mutual interaction between the arms was plausible due to the lack of blinding of the health professionals. Moreover, patient education was delivered in both arms, and the effect of the patient education in the control arm may have contributed to evening out the effect of the LC strategies.
Approximately 50% of the population with IHD and HF who were asked to participate declined [21]. However, no information about the patients who declined was accessible, and it is thus unknown whether selection was present at enrolment. This limits the generalisability of the results, and the trial was not able to provide answers about the effect on the total population of people with IHD or HF.
Temporal and contextual factors affect the ability to RTW and influence the external validity of the outcome measure. This also means that comparing results in an international context should be done carefully, due to heterogeneity in RTW definitions and occupational systems.
Conclusion
Addition of LC strategies in CR showed no improvement in RTW compared to CR alone after 1 year. Implications for further development and research of patient education methods in CR to improve RTW are: involvement of contextual factors in the development of the intervention, and implementation that ensures practical implications targeting RTW, such as workplace involvement and attention to job type. Lastly, evaluation should address the interventions' ability to improve participation among patients living with IHD and HF.

[Table 2 footnotes] 1 Frequencies and percentages analysed using logistic regression; crude and adjusted odds ratios (OR). 2 Adjusted for stratification variables: gender, diagnosis and hospital unit. 3 Adjusted for stratification variables (gender, diagnosis and hospital unit) and age. 4 RTW during one-year follow-up by frequencies and percentages. 5 Comparison of relapsed patients across LC arm and control arm using chi-square test. * p = 0.50, ** p = 0.37, *** p = 0.32.
The Born-Oppenheimer approximation in an effective field theory language
The Born--Oppenheimer approximation is the standard tool for the study of molecular systems. It is founded on the observation that the energy scale of the electron dynamics in a molecule is larger than that of the nuclei. A very similar physical picture can be used to describe QCD states containing heavy quarks as well as light-quarks or gluonic excitations. In this work, we derive the Born--Oppenheimer approximation for QED molecular systems in an effective field theory framework by sequentially integrating out degrees of freedom living at energies above the typical energy scale where the dynamics of the heavy degrees of freedom occurs. In particular, we compute the matching coefficients of the effective field theory for the case of the $H^+_2$ diatomic molecule that are relevant to compute its spectrum up to ${\cal O}(m\alpha^5)$. Ultrasoft photon loops contribute at this order, being ultimately responsible for the molecular Lamb shift. In the effective field theory the scaling of all the operators is homogeneous, which facilitates the determination of all the relevant contributions, an observation that may become useful for high-precision calculations. Using the above case as a guidance, we construct under some conditions an effective field theory for QCD states formed by a color-octet heavy quark-antiquark pair bound with a color-octet light-quark pair or excited gluonic state, highlighting the similarities and differences between the QED and QCD systems. Assuming that the multipole expansion is applicable, we construct the heavy-quark potential up to next-to-leading order in the multipole expansion in terms of nonperturbative matching coefficients to be obtained from lattice QCD.
I. INTRODUCTION AND MOTIVATION
The discovery in the last decade of the XYZ mesons has brought into QCD challenges enduring since the early days of molecular physics in QED; for a recent overview, see Ref. [1]. A great variety of possible models have been introduced to explain the observed pattern of new mesons. A recent proposal [2,3] (see also [4]) advocates the use of the Born-Oppenheimer (BO) approximation [5][6][7][8], familiar from QED molecular physics, as a starting point for a coherent description of the new QCD structures. The rationale for this is that many of the new mesons contain a heavy quark-antiquark pair, and the time scale for the evolution of the gluon and light-quark fields is small compared to that for the motion of the heavy quarks. Although the BO approximation has been used in the past to study heavy hybrids by means of quenched lattice data for gluonic static potentials [9][10][11] (models have also been used for determinations of the gluonic static potentials and heavy hybrids in a BO framework; see, for example, Refs. [12,13]), the new aspect of the proposal in Refs. [2,3] is the recognition that the BO approximation can also be applied to mesons with light quark and antiquark flavors when input from lattice simulations becomes available.
In the present paper we go one step further in this proposal and develop an effective field theory (EFT) that allows one to calculate in a systematic and controlled manner corrections to the BO approximation for QED and QCD molecular systems. An EFT is built by sequentially integrating out degrees of freedom induced by energy scales higher than the energy scale of interest. For QED molecules, such a sequential process proceeds as follows: (A) integrating out hard modes associated with the masses of the charged particles, leading to nonrelativistic QED (NRQED) [14,15]; (B) integrating out soft modes associated with the relative momenta between electrons and nuclei in NRQED, leading to potential NRQED (pNRQED) [16,17]; and (C) exploiting the fact that the nuclei move much slower than the electrons due to their heavier masses, integrating out modes associated with the electron and photon dynamics at the electron binding energy scale, the ultrasoft scale, leading to an EFT for the motion of the nuclei only. In QED these steps can be done in perturbation theory.
In the present paper we compute this ultimate EFT in the simple case of a QED molecule formed by two heavy nuclei and one electron, like the $H_2^+$ molecular ion. Because the BO approximation emerges as the leading-order approximation in this EFT, we call it the Born-Oppenheimer EFT (BOEFT). Furthermore, we show how the EFT allows one to systematically improve on the leading-order approximation by calculating corrections in the inverse of the mass of the nuclei as well as electromagnetic corrections. We give explicit analytical expressions, regularized in dimensional regularization when needed, for the different contributions to the binding energy of the two-nuclei plus one-electron molecule up to ${\cal O}(m\alpha^5)$. It is at this order that the Lamb shift is generated.
The BOEFT that we construct is new, although NRQED has been applied in atomic and molecular physics for nearly two decades [15,18]. In particular, NRQED has been used for computing the leading relativistic, recoil and radiative corrections to the energy levels of the $H_2^+$ molecule in Ref. [19] and for computing higher-order corrections in Refs. [20][21][22][23][24]. The new and distinctive aspect of our approach is that we carry out the full EFT program for the diatomic molecule, integrating out not only the hard scale, as in NRQED, but also the soft and ultrasoft scales. The advantage is that each term in the Lagrangian has a unique size and the scaling of Feynman diagrams is homogeneous. This greatly facilitates the determination of all the relevant contributions to a given observable up to a given precision, a feature that is particularly useful for higher-order calculations.
An analog EFT for QCD states containing a heavy quark-antiquark pair in a color-octet state bound with light quarks or a gluonic color-octet state can be built following a similar path. However, unlike QED molecules, the QCD states are determined by nonperturbative interactions. The hard scale set by the heavy-quark mass can always be integrated out perturbatively, leading to nonrelativistic QCD (NRQCD) [14,25]. At short enough distances the relative momentum of the heavy quarks can also be integrated out perturbatively, resulting in potential nonrelativistic QCD (pNRQCD) [16,[26][27][28]]. Similarly to the diatomic molecule case, the heavy quarks move slower than the light degrees of freedom, whose spectrum is assumed to appear at the scale $\Lambda_{\rm QCD}$. Thus, one can construct an EFT for these QCD "molecular" states by integrating out the scale $\Lambda_{\rm QCD}$. Since this is the scale of nonperturbative physics, the matching coefficients will be nonperturbative quantities to be determined, for instance, by lattice calculations. When light quarks are neglected, one regains in this way the EFT recently constructed for quarkonium hybrids [32].
The paper is organized as follows. In Sec. II we construct the pNRQED Lagrangian for two nuclei and one electron. In Sec. III we proceed with integrating out the ultrasoft scale and constructing the molecular EFT, the BOEFT. Section IV is devoted to the power counting of the BOEFT, which we use to assess the importance of the nonadiabatic coupling and other corrections to the molecular energy levels. The EFT for the QCD analog of the diatomic molecule, quarkonium hybrids and tetraquark mesons built out of a heavy quark and antiquark, is developed in Sec. V. Section VI contains the conclusions and an outlook for future developments. The Appendix presents a detailed calculation of the Lamb shift for the $H_2^+$ molecule.
II. pNRQED
We aim at building an EFT for a molecular system containing heavy and light particles: the heavy particles (nuclei) have electric charge $+Ze$ and mass $M$, and the light particles (electrons) have electric charge $-e$ and mass $m$, with $M \gg m$. Both kinds of particles are nonrelativistic. Such a molecular system has several well-separated energy scales, as we will see in more detail in the following. From the highest to the lowest, the relevant energy scales are the masses of the heavy and light constituents (hard scales), the typical relative momentum $p = |\mathbf{p}| \sim mv$ between heavy and light particles (soft scale), and the binding energy of the light particles $E \sim mv^2$ (ultrasoft scale). For a Coulomb-type interaction it holds that $v \sim \alpha$, with $\alpha = e^2/(4\pi) \sim 1/137$ the fine structure constant. Finally, specific to molecules, an extra low-energy scale appears: the binding energy of the heavy nuclei.
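As a rough numerical illustration (ours, not from the paper), the hierarchy of scales just listed can be evaluated for an $H_2^+$-like system, using the nuclei-binding scaling $m\alpha^2\sqrt{m/M}$ derived later in Sec. IV. All values are parametric order-of-magnitude estimates, not spectroscopic predictions.

```python
# Parametric sizes of the molecular energy scales for an H2+-like system.
# Order-of-magnitude estimates only: m * alpha^n (times sqrt(m/M) for the
# nuclei-binding scale), expressed in eV.

M_E_EV = 510_998.95      # electron rest energy m c^2 in eV (CODATA)
ALPHA = 1 / 137.035999   # fine structure constant
M_RATIO = 1 / 1836.15    # m/M for a proton "nucleus"

hard = M_E_EV                           # mass scale m
soft = M_E_EV * ALPHA                   # relative momentum ~ m alpha
ultrasoft = M_E_EV * ALPHA**2           # electron binding ~ m alpha^2
vibrational = ultrasoft * M_RATIO**0.5  # nuclei binding ~ m alpha^2 sqrt(m/M)

for name, val in [("hard", hard), ("soft", soft),
                  ("ultrasoft", ultrasoft), ("nuclei", vibrational)]:
    print(f"{name:>10s}: {val:12.3f} eV")
```

The printed values make the separation of scales concrete: each scale is suppressed by at least an order of magnitude with respect to the one above it.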
The EFT suitable for describing QED bound states at the ultrasoft scale is pNRQED. In Ref. [17] it was worked out for the hydrogen atom; in this section we extend pNRQED to describe systems with two nuclei and one electron. In Sec. III we will integrate out the ultrasoft modes and build the EFT suitable to describe the molecular states.
The Lagrangian of pNRQED can be written in terms of the light and heavy fermion fields, $\psi(t,\mathbf{x})$ and $N(t,\mathbf{x})$ respectively, and the ultrasoft-photon field, $A^\mu(t,\mathbf{x})$. The meaning of $A^\mu(t,\mathbf{x})$ being ultrasoft is that it must be multipole expanded (e.g., about the position of the center of mass (c.m.) of the constituents). The operators of the pNRQED Lagrangian can be organized in an expansion in $\alpha$ and $m/M$. In order to homogenize the counting in these two expansion parameters, we will use that $m/M$ is numerically similar to $\alpha^{3/2}$. Then the pNRQED Lagrangian relevant to compute the spectrum up to ${\cal O}(m\alpha^5)$ reads as in Eq. (1), where $F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$ and all photons are ultrasoft. Moreover, $D_q$ is the covariant derivative, with $q = -e$ for the electron and $q = +Ze$ for the nuclei. In the electron-nucleus potential $V_{Ze}(\mathbf{x},\sigma)$, LO (leading order) and NLO (next-to-leading order) refer to the contributions of order $m\alpha^2$ and $m\alpha^4$ to the spectrum, respectively. The LO potential is the Coulomb potential, while the NLO one is the sum of a contact and a spin-orbit interaction, with matching coefficients $c_D$, $c_S$ and $d_2$ known up to order $\alpha$. The coefficient $c_D$ has been renormalized in the $\overline{\rm MS}$ scheme. The scale $\mu$ is the dimensional regularization scale, which in the case of $c_D$ acts as an infrared factorization scale. Finally, the $V_{ZZ}$ potential in Eq. (1) contains the LO nucleus-nucleus Coulomb potential. Further contributions to (5) and (11), which can be found in Ref. [33], are beyond our accuracy. Next, we project the Lagrangian in Eq. (1) on the subspace of one electron and two nuclei. This is similar to the pNRQED bound-state calculations for the hydrogen atom [16,17], but since the projection for one light and two heavy particles with different charges has not been done so far in the literature, we present the procedure in some detail.
The subspace of one electron and two nuclei is spanned by Fock-space states of the form given in Eq. (12), where $\varphi(t,\mathbf{x},\mathbf{y}_1,\mathbf{y}_2)$ is the wave function of the system and $|{\rm US}\rangle$ is the Fock-space state containing no hard particles (electrons or nuclei) and an arbitrary number of ultrasoft ones (photons). The corresponding projected Lagrangian, adequate for calculating the spectrum up to ${\cal O}(m\alpha^5)$, is obtained by promoting $\varphi(t,\mathbf{x},\mathbf{y}_1,\mathbf{y}_2)$ to a tri-local field.
To ensure that the photon fields $A^\mu$ are ultrasoft, one may multipole expand them about the c.m. of the system. The task is facilitated by defining appropriate c.m. and relative coordinates: the c.m. coordinate $\mathbf{R}$ of the system, the coordinate describing the motion of the electron relative to the positions $\mathbf{y}_1$ and $\mathbf{y}_2$ of the nuclei, and the relative coordinate of the nuclei. The multipole expansion spoils manifest gauge invariance. It is important, however, to recall that we have an EFT for ultrasoft gauge fields; hence gauge transformations must not introduce into the EFT gauge fields with large-momentum components. That is, the allowed gauge transformations are those that produce fields that still are within the EFT. One can recover manifest (ultrasoft) gauge invariance, at least for charge-neutral systems, by introducing a field redefinition in terms of Wilson lines $U_q$. Under a gauge transformation $A_0(t,\mathbf{R}) \to A_0(t,\mathbf{R}) - \partial_t \theta(t,\mathbf{R})$ and $\mathbf{A}(t,\mathbf{R}) \to \mathbf{A}(t,\mathbf{R}) + \nabla_R\,\theta(t,\mathbf{R})$, the field $S(t,\mathbf{R},\mathbf{r},\mathbf{z})$ transforms with a phase proportional to the total charge $e_{\rm tot}$. For a charge-neutral system, $e_{\rm tot} = 0$, and the field $S(t,\mathbf{R},\mathbf{r},\mathbf{z})$ is gauge invariant.
The Lagrangian in terms of the field $S$ is given in Eq. (21), where $M_{\rm tot}$ is the total mass, $\mathbf{E}$ is the electric field, and $e_{\rm eff}$ is the effective charge. The sizes of the different terms that appear in the Lagrangian (21) are as follows.
1. The relative electron-nuclei momentum $-i\nabla_z$ and the inverse relative distance $1/z$ have size $m\alpha$.
2. Photon fields, derivatives acting on photon fields, the time derivative, and the c.m. momentum $-i\nabla_R$ acting on $S$ have size $m\alpha^2$.
3. As we shall discuss in Sec. IV, the inverse relative nuclei-nuclei distance is $1/r \sim m\alpha$, whereas the radial part of the derivative scales as $\nabla_r \sim (M/m)^{1/4}\,m\alpha \sim m\alpha^{5/8}$ when acting on the nuclei, but $\nabla_r \sim m\alpha$ when acting on the electron cloud. This implies that the kinetic energy associated with the relative motion of the nuclei is $-\nabla_r^2/M \sim m\alpha^2\sqrt{m/M} \sim m\alpha^{11/4}$.
Using this counting, and disregarding operators that produce emission or absorption of photons that contribute only in loops, the leading-order operators in Eq. (21) are $h_0(r,z) + V_{ZZ}^{\rm LO}(r)$, which are of ${\cal O}(m\alpha^2)$. Since the kinetic energy associated with the relative motion of the two nuclei, $-\nabla_r^2/M$, is of ${\cal O}(m\alpha^{11/4})$, at leading order the nuclei are static and $V_{ZZ}^{\rm LO}(r)$ is just a constant. Therefore, at leading order, the Euler-Lagrange equation from the Lagrangian (21) is nothing else than a Schrödinger equation for the electronic energy levels with Hamiltonian $h_0(r,z)$. Corrections to these energy levels can be obtained in perturbation theory. Parametrically, the first such correction is given by the recoil term, $\nabla_z^2/(4M)$, which is ${\cal O}(m\alpha^{7/2})$, and the second one by $\nabla_z^4/(8m^3) + V_{Ze}^{\rm NLO}$, which starts at ${\cal O}(m\alpha^4)$. The ${\cal O}(m\alpha^5)$ corrections include the Lamb shift, and originate from ultrasoft photon loops and subleading contributions to the NLO potentials.
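The hierarchy of corrections listed above can be cross-checked mechanically. The following sketch (ours; the exponents are those quoted in the text, using $m/M \sim \alpha^{3/2}$) verifies that the successive contributions $m\alpha^p$ are ordered by size for $\alpha < 1$:

```python
# Check the ordering of the parametric sizes m * alpha^p of the successive
# contributions to the spectrum; a larger power p means a smaller term.
from fractions import Fraction as F

powers = {
    "leading (h0 + V_ZZ^LO)":       F(2),      # m alpha^2
    "nuclei kinetic -grad_r^2/M":   F(11, 4),  # m alpha^(11/4)
    "recoil grad_z^2/(4M)":         F(7, 2),   # m alpha^(7/2)
    "grad_z^4/(8m^3) + V_Ze^NLO":   F(4),      # m alpha^4
    "ultrasoft loops (Lamb shift)": F(5),      # m alpha^5
}

ps = list(powers.values())
assert ps == sorted(ps), "corrections are not ordered by size"

alpha = 1 / 137.036
for name, p in powers.items():
    print(f"{name:30s} ~ m * alpha^{str(p):5s} = m * {alpha ** float(p):.2e}")
```

Each listed term is suppressed relative to the previous one, which is what makes the perturbative treatment of the corrections consistent.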
To obtain the molecular energy levels we need to solve the dynamics of the $\mathbf{r}$ coordinate. In principle we could do this by adding subleading terms to the Hamiltonian and solving the corresponding Schrödinger equation. However, in this paper, following the logic of EFTs, we will integrate out from pNRQED the ultrasoft degrees of freedom to obtain an EFT at the energy scale of the two-nuclei dynamics. The Euler-Lagrange equation of this EFT provides a Schrödinger equation for the molecular energy levels. We will develop this EFT, which we call the BOEFT, in the following section.
Since the c.m. motion does not affect the internal dynamics of the molecule, we can simply work in the c.m. frame and ignore the dependence on R of the field S. We also use the notation A 0 (t, 0) and E(t, 0) to indicate quantities defined at the origin of the coordinate system, i.e., R = 0.
III. BORN-OPPENHEIMER EFT FOR DIATOMIC MOLECULES
Our purpose is to build the BOEFT, an EFT for the diatomic molecule at the energy scale of the two-nuclei dynamics. This EFT is obtained by integrating out the ultrasoft scale, $m\alpha^2$, from the pNRQED for two nuclei and one electron given in Sec. II. We will include effects that contribute to the binding energy of the molecule up to ${\cal O}(m\alpha^5)$.
Since the electron dynamics occurs at the ultrasoft scale, integrating out this scale entails that all the electronic degrees of freedom are integrated out. Moreover, ultrasoft photons are also integrated out. Therefore, the degrees of freedom of the BOEFT are nuclei and photons with energies of ${\cal O}(m\alpha^{11/4})$ or smaller.
The tree-level matching contributions can be easily obtained by expanding the field $S(t,\mathbf{r},\mathbf{z})$ in the pNRQED Lagrangian of Eq. (21) in eigenfunctions of the leading-order Hamiltonian $h_0(r,z)$ of Eq. (22). This corresponds to expanding the field $S(t,\mathbf{r},\mathbf{z})$ as in Eq. (25), where the $\varphi_\kappa(\mathbf{r};\mathbf{z}) = \langle \mathbf{z}|\mathbf{r},\kappa\rangle$ satisfy the electronic eigenvalue equation (26). The eigenvalues $V^{\rm light}_\kappa(r)$ are the static energies, with $\kappa$ representing the set of quantum numbers specifying the electronic state for a fixed separation $r$ of the nuclei. The $\mathbf{r}$ in the state vector $|\mathbf{r},\kappa\rangle$ emphasizes that eigenvalues labeled by $\kappa$ refer to a given nuclei separation $\mathbf{r}$. The eigenfunctions $\varphi_\kappa(\mathbf{r};\mathbf{z})$ are orthonormal. The static electronic energies $V^{\rm light}_\kappa(r)$ scale like $m\alpha^2$.
The set of quantum numbers $\kappa$ is familiar from molecular physics and corresponds to representations of the symmetry group of a diatomic molecule [34]: the eigenvalue $\lambda = 0, \pm 1, \dots$ of the projection of the electron angular momentum on the axis $\hat{\mathbf{r}}$ joining the two nuclei, traditionally denoted by $\Lambda = |\lambda|$ and conventionally labeled by $\Sigma, \Pi, \Delta, \dots$ for $\Lambda = 0, 1, 2, \dots$; the total electronic spin $S$, with the number of states (multiplicity) for a given $S$ being $2S+1$, indicated with an index, as in $^{2S+1}\Sigma$; additionally, for the $\Sigma$ states, there is a symmetry under reflection in any plane passing through the axis $\hat{\mathbf{r}}$, the eigenvalues of the corresponding symmetry operator being $\pm 1$ and indicated as $\Sigma^\pm$; and, in the situation of identical heavy nuclei, the eigenvalues $\pm 1$ of the parity operator of reflections through the midpoint between the two nuclei, denoted by $g = +1$ and $u = -1$. In this way, a possible ground state is denoted by $\kappa = {}^1\Sigma_g^+$. The tree-level matching is sufficient up to terms in the Lagrangian of ${\cal O}(m\alpha^4)$. Ultrasoft photon loops start contributing at ${\cal O}(m\alpha^5)$ and are responsible for the Lamb shift of the diatomic molecule. We detail the calculation of the leading ultrasoft loop in Appendix A.
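The labeling convention just described can be made concrete with a small helper (ours, not from the paper) that assembles the conventional term symbol $^{2S+1}\Lambda^{(\pm)}_{(g/u)}$ from the quantum numbers in $\kappa$:

```python
# Illustrative helper: build a molecular term symbol from the quantum
# numbers kappa described in the text. Names and interface are ours.
LAMBDA_LABEL = {0: "Σ", 1: "Π", 2: "Δ"}

def term_symbol(S, Lambda, reflection="", parity=""):
    """S: total electronic spin (0, 1/2, 1, ...); Lambda: |λ| in {0, 1, 2};
    reflection: '+' or '-' (meaningful for Σ states only);
    parity: 'g' or 'u' (identical nuclei only)."""
    multiplicity = int(round(2 * S + 1))
    label = f"{multiplicity}{LAMBDA_LABEL[Lambda]}"
    if Lambda == 0:
        label += reflection  # the ± reflection label applies to Σ states
    return label + parity

# The possible ground state quoted in the text, kappa = 1Σ+g:
print(term_symbol(S=0, Lambda=0, reflection="+", parity="g"))  # 1Σ+g
```

For example, a spin-1/2 Π state of a homonuclear molecule would be rendered as `term_symbol(S=0.5, Lambda=1, parity="u")`, giving `2Πu`.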
The BOEFT Lagrangian up to ${\cal O}(m\alpha^5)$ reads as follows. The photon fields carry energies and momenta of ${\cal O}(m\alpha^{11/4})$ or smaller. The operator $H^{(0)}_\kappa$ is the leading-order nuclei-nuclei Hamiltonian, and $\delta E_\kappa(r)$ is the sum of the tree-level and second-order recoil and Breit-Pauli corrections as well as the one-loop ultrasoft one. The counting of $H^{(0)}_\kappa$ will be justified in the next section, but we have already anticipated the size of its eigenvalues. The different contributions to $\delta E_\kappa(r)$, Eqs. (31)-(34), read as follows: the recoil correction (31), which is of order $m\alpha^2(m/M) \sim m\alpha^{7/2}$; the correction (32), which is of order $m\alpha^2(m/M)^2 \sim m\alpha^5$; the NLO potential contribution (33), which starts at order $m\alpha^4$; and the ultrasoft contribution (34), in which $\rho_\kappa(\mathbf{r})$ is the electron density at the positions of the nuclei. The ultrasoft contribution is of order $m\alpha^5\log(\alpha)$ and $m\alpha^5$. Note that the ultrasoft contribution has been renormalized in the $\overline{\rm MS}$ scheme and its $\mu$ dependence cancels against that of the matching coefficient $c_D$ [see Eq. (10)] in the NLO potential of Eq. (33). Finally, $C^{\rm nad}_{\kappa\kappa'}(r)$ is the nonadiabatic coupling [8,35]. The first integral in its second line is the matrix element of the kinetic energy operator of the relative motion of the nuclei; it is of order $m\alpha^2(m/M) \sim m\alpha^{7/2}$. The second integral involves the momentum of their relative motion; it is of order $m\alpha^2(m/M)^{3/4} \sim m\alpha^{25/8}$. When the $\varphi_\kappa$'s are real and $\kappa = \kappa'$, the second integral vanishes. We conclude by commenting on some general features of the BOEFT. First, we note that no extra approximation is made in writing $S(t,\mathbf{r},\mathbf{z})$ as in Eq. (25), since the eigenfunctions $\varphi_\kappa(\mathbf{r};\mathbf{z})$ form a complete set and the $\Psi_\kappa(t,\mathbf{r})$ play the role of time-dependent expansion coefficients.
However, as is well known in treatments employing the Born-Oppenheimer approximation, this is useful in practice only when the dynamics of the heavy degrees of freedom (with mass $M$) is much slower than the dynamics of the light degrees of freedom (with mass $m$), a feature that permits one to define an adiabatic dynamics for the heavy particles and to treat departures from adiabaticity using perturbation theory in the small parameter $m/M \ll 1$, as we have done above. Otherwise, when $M \simeq m$, the concept of adiabatic motion for one of the particles loses sense and an expansion like Eq. (25) would be useless. A way to see this is by noticing that mixing terms in the energy levels of the BOEFT would count like $m\alpha^2$, a fact that would prevent the separation of the electron dynamics from the nuclei dynamics.
Under the adiabatic assumption the molecular energy levels are distributed as sketched in Fig. 1. Electronic excitations define for each nuclei separation a potential $V^{\rm light}_\kappa(r)$. These potentials are separated by large gaps of order $m\alpha^2$. For each electronic excitation, the nuclei motion induces smaller excitations of order $m\alpha^2\sqrt{m/M}$. We can compute these smaller excitations in the BOEFT for each electronic potential $V^{\rm light}_\kappa(r)$; at leading order they are the eigenvalues of $H^{(0)}_\kappa$. It is remarkable that the wave functions of these nuclear vibrational modes can not only be computed but also be directly visualized experimentally; for the $H_2^+$ ground-state potential $V^{\rm light}_0(r)$, see Ref. [36].
IV. POWER COUNTING IN THE BOEFT
In this section we examine in detail the power counting of the BOEFT that we have just developed. The main aim is to substantiate the starting assumption in the construction of the BOEFT, namely that the kinetic term satisfies $-\nabla_r^2/M \ll m\alpha^2$. Also of interest is the size of the nonadiabatic coupling.
The derivative $\nabla_r$ can act on the nuclei fields $\Psi_\kappa(t,\mathbf{r})$ as well as on the electronic wave functions $\varphi_\kappa(\mathbf{r};\mathbf{z})$. The size of the derivative turns out to be different for nuclei and electrons. When $\nabla_r$ acts on $\varphi_\kappa(\mathbf{r};\mathbf{z})$, it scales like $mv$. Since the electron is bound to the nuclei through Coulomb interactions, we have $v \sim \alpha$. When the derivative acts on $\Psi_\kappa(t,\mathbf{r})$, it scales like $Mw$, where $w$ is the relative velocity of the nuclei. Therefore, our goal is to assess the size of $w$.
Since the system is bound, the nuclei have a stable equilibrium arrangement and oscillate around an average separation $r_0$. Without the electron the two nuclei would not form a bound state; hence $r_0$ is an emergent scale, whose size needs to be determined. Let us consider the ground-state electron energy ($\kappa = 0$) and expand the total potential $V(r) = V^{\rm LO}_{ZZ}(r) + V^{\rm light}_0(r)$ around the equilibrium position $r_0$ (we have adjusted the potential so that its minimum is zero). The Hamiltonian of the relative motion is then that of a harmonic oscillator with ground-state energy $E_0$. The equilibrium position $r_0$ of the nuclei is determined from Eq. (40). Because $V^{\rm light}_0(r_0)$ is the ground-state energy of Eq. (26), it is of order $m\alpha^2$ (with ${\cal O}(Z^2) \sim 1$). Hence Eq. (40) implies $r_0 \sim 1/(m\alpha)$: the average nuclei separation is of the same order as the electron-nucleus separation. Clearly, this is a particular feature of the Coulomb interaction between the nuclei; for a different $r$ dependence of the nucleus-nucleus interaction, $r_0$ may not be of the order of the Bohr radius. From the above result it follows that $V''(r_0) \sim m^3\alpha^4$, and that the ground-state vibrational energy is $E_0 \sim m\alpha^2\sqrt{m/M}$. Transitions between low-lying vibrational states are also of order $m\alpha^2\sqrt{m/M}$. We note that the scaling behavior of $E_0$ implies a large cancellation between $V^{\rm LO}_{ZZ}(r)$ and $V^{\rm light}_0(r)$ near the equilibrium position, since each of these two potentials scales like $m\alpha^2$.
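The scaling chain in this paragraph can be written out explicitly (our summary of the estimates above, under the Coulombic assumptions stated in the text; these are not the paper's numbered equations):

```latex
% Harmonic expansion around the minimum r_0, with V(r_0)=0:
V(r) \simeq \tfrac12\, V''(r_0)\,(r - r_0)^2 ,
\qquad
V''(r_0) \sim \frac{m\alpha^2}{r_0^{\,2}} \sim m^3\alpha^4
\quad \text{for} \quad r_0 \sim \frac{1}{m\alpha} ,
% so the ground-state vibrational energy is parametrically
E_0 \sim \sqrt{\frac{V''(r_0)}{M}}
    \sim m\alpha^2 \sqrt{\frac{m}{M}} \;\ll\; m\alpha^2 .
```

The suppression factor $\sqrt{m/M}$ relative to the electronic scale $m\alpha^2$ is exactly what makes the adiabatic separation consistent.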
The virial theorem for the harmonic oscillator relates the expectation value of the kinetic energy to the total energy, from which the size of the kinetic-energy operator acting on $\Psi$ follows [Eq. (45)]: $-\nabla_r^2/M \sim m\alpha^2\sqrt{m/M}$. Our initial assumption was that the kinetic energy associated with the relative motion of the nuclei is small compared to the ultrasoft scale; from there we integrated out the latter and matched pNRQED to the BOEFT. The above analysis shows that the energy scale associated with the relative motion of the nuclei is indeed largely suppressed, by a factor $\sqrt{m/M} \sim \alpha^{3/4} \approx 0.025$, with respect to the ultrasoft scale, which justifies the initial assumption. The size of $\nabla_r$ acting on $\Psi$ and the relative velocity of the nuclei follow from Eq. (45): $w \sim \alpha\,(m/M)^{3/4}$.
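The numerical coincidences used in the counting can be checked directly (our check, for an $H_2^+$-like system): since $m/M$ is numerically similar to $\alpha^{3/2}$, one expects $\sqrt{m/M} \approx \alpha^{3/4} \approx 0.025$ and $(m/M)^{1/4} \approx \alpha^{3/8} \approx 0.15$.

```python
# Numerical check of the counting relations m/M ~ alpha^(3/2) for an
# electron bound to proton-mass "nuclei".
ALPHA = 1 / 137.035999
M_RATIO = 1 / 1836.15  # electron-to-proton mass ratio m/M

# sqrt(m/M): suppression of the nuclei kinetic energy vs the ultrasoft scale
suppression_kinetic = M_RATIO ** 0.5
# (m/M)^(1/4): suppression of nonadiabatic mixing vs the zeroth-order energy
suppression_nonadiabatic = M_RATIO ** 0.25

print(f"sqrt(m/M)   = {suppression_kinetic:.4f}   vs alpha^(3/4) = {ALPHA ** 0.75:.4f}")
print(f"(m/M)^(1/4) = {suppression_nonadiabatic:.4f}   vs alpha^(3/8) = {ALPHA ** 0.375:.4f}")
```

Both pairs agree at the level one expects of a power-counting identification, i.e. to within a few tens of percent rather than exactly.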
A more detailed look reveals, however, that the counting of Eq. (46) applies only to the radial component of $\nabla_r$. Indeed, in spherical coordinates we have $\nabla_r = (\partial_r,\ \partial_\theta/r,\ \partial_\varphi/(r\sin\theta))$, and since the angles are dimensionless variables, the size of the last two components is determined by $r \sim r_0 \sim 1/(m\alpha)$. This implies also that the counting (45) is appropriate for the radial part of the kinetic energy, whereas $-2/(Mr)\,\partial/\partial r \sim m\alpha^2(m/M)^{3/4}$ and the angular part $L^2/(Mr^2)$ scales like $m\alpha^2\,(m/M)$.
The size of the kinetic term in Eq. (45) sets the energy scale for the BOEFT. Hence it determines the scaling of photon fields and of derivatives acting on them. The last ingredient to complete the counting rules for the BOEFT is the scaling $\nabla_z \sim 1/z \sim m\alpha$, which is inherited from the pNRQED of Sec. II. The molecular energy scales are summarized in Fig. 2. We now apply the counting rules to the nonadiabatic coupling $C^{\rm nad}(r)$ defined in (37). The largest contribution comes from the radial piece of the second term, which is of ${\cal O}(m\alpha^2(m/M)^{3/4})$, while the first term and the angular piece of the second one are ${\cal O}(m\alpha^2(m/M))$. Therefore, at leading order the nonadiabatic coupling can be neglected, and the equation of motion for the field $\Psi_\kappa(t,\mathbf{r})$ reduces to Eq. (48), which is nothing else than the Schrödinger equation that describes the motion of the heavy particles in the Born-Oppenheimer approximation [5][6][7]. Equation (48) produces the leading-order energy eigenvalues for the diatomic molecule, but it does not describe well the angular wave functions [8]. This is a consequence of the angular piece of the kinetic term being of the same size as the angular parts of $C^{\rm nad}_{\kappa\kappa}$. The adiabatic approximation [8,35] corresponds to including in the above Schrödinger equation the diagonal term $C^{\rm nad}_{\kappa\kappa}(r)$. One can use an iterative procedure to solve the problem: starting from the zeroth-order solution in which the nonadiabatic coupling $C^{\rm nad}$ is neglected, one can treat $C^{\rm nad}$ as a perturbation [37], since its contribution to the energy is suppressed by an amount $(m/M)^{1/4} \approx 0.15$ with respect to the zeroth-order energy. We emphasize again that this relies on the Coulomb nature of the nucleus-nucleus interaction and on the smallness of the ratio $m/M$. Let $\Psi^{(0)}_{\kappa n}$ denote the zeroth-order solutions. The leading-order correction $E_{\kappa n}$ comes from the diagonal nonadiabatic coupling; it is of order $m\alpha^2(m/M)^{3/4} \sim m\alpha^{25/8}$. The nondiagonal nonadiabatic coupling provides mixing with different electronic excitations.
The first contribution appears at order $m\alpha^2(m/M)^{3/2} \sim m\alpha^{17/4}$. More important than the mixing with states belonging to different electronic excitations is the mixing with states within the same one; this mixing is suppressed by a mere factor $(m/M)^{1/4} \sim \alpha^{3/8}$. We will not display explicitly this kind of contribution, which follows straightforwardly from time-independent quantum-mechanical perturbation theory. We add that the recoil corrections to the electronic levels, Eqs. (31) and (32), contribute first at order $m\alpha^2(m/M) \sim m\alpha^{7/2}$ and $m\alpha^2(m/M)^2 \sim m\alpha^5$, respectively. Finally, the NLO corrections to the electronic levels, Eq. (33), contribute first at order $m\alpha^4$, while the ultrasoft corrections, Eq. (34), contribute first at order $m\alpha^5\log(\alpha)$ and $m\alpha^5$. Let us now summarize the steps necessary for a numerical evaluation of the molecular energy levels using the BOEFT. First, the electronic static energies $V^{\rm light}_\kappa$ and wave functions $\varphi_\kappa$ are obtained by solving the eigenvalue equation (26) (see, for example, Ref. [38]). The BOEFT matching coefficients in Eqs. (31)-(34) and (37) can then be evaluated. The nuclei wave functions $\Psi^{(0)}_{\kappa n}$ are finally obtained by solving the corresponding Schrödinger equation.
V. THE BOEFT FOR QCD: HEAVY HYBRIDS AND ADJOINT TETRAQUARK MESONS
In the context of QCD, there exists a system analogous to the QED diatomic molecule: the system formed by a heavy quark-antiquark pair and some light degrees of freedom that can be either gluonic or light-quark in nature. Similarly to the QED bound state, the QCD system develops three well-separated energy scales: the heavy-quark mass $M$ (hard scale), the relative momentum $Mw$ (soft scale), where $w$ is the heavy-quark relative velocity, and the binding energy $Mw^2$. Furthermore, there is the scale associated with nonperturbative physics, $\Lambda_{\rm QCD}$, which plays the role of the ultrasoft scale in the hadronic case. Restricting ourselves to the case $Mw \gg \Lambda_{\rm QCD}$, we can use weakly coupled pNRQCD [16,27] to describe the heavy quark-antiquark pair, which is called quarkonium if bound, much in the same way as the pNRQED described in Sec. II can be used to describe electromagnetic bound states. However, in a feature that has no analog in pNRQED, the heavy quark-antiquark fields can appear in pNRQCD either in a color-octet or in a color-singlet configuration.
At energies of the order of $\Lambda_{\rm QCD}$, the spectrum of QCD is formed by color-singlet hadronic states that are nonperturbative in nature. An interesting case is that of exotic hadrons made of a color-octet heavy quark-antiquark pair bound with light degrees of freedom. Such a system can be studied similarly to the QED diatomic molecules: the heavy quarks play the role of the nuclei, and the gluons and light quarks play the role of the electrons.
In a diatomic molecule the electrons are nonrelativistic, with energies of the order of the ultrasoft scale, $m\alpha^2$, whereas, as we have seen, the nuclei have a smaller energy due to their heavier mass. In a hadron made of a color-octet heavy quark-antiquark pair, the light degrees of freedom are relativistic, with a typical energy and momentum of order $\Lambda_{\rm QCD}$. This implies that the typical size of the hadron is of the order of $1/\Lambda_{\rm QCD}$. If the mass of the heavy quarks is much larger than $\Lambda_{\rm QCD}$, there may be cases where the typical momentum $Mw$ of the heavy quarks in the hadron is also larger than $\Lambda_{\rm QCD}$. The scaling of the typical distance of the heavy quark-antiquark pair depends on the details of the full interquark potential, which has a long-range nonperturbative part and a short-range Coulomb interaction. It may therefore happen that the heavy quark and antiquark are more closely bound than the light degrees of freedom. This situation is interesting because the hadron would present a hierarchy between the distance of the quark-antiquark pair and the typical size of the light degrees of freedom that does not exist in the diatomic molecular case, where the electron cloud and the two nuclei have the same size. A consequence of this is that while the molecule is characterized by a cylindrical symmetry, the symmetry group of the hadron would be, at leading order in a (multipole) expansion in the distance of the heavy quark-antiquark pair, the much stronger spherical symmetry. This modifies significantly the power counting of the hadronic BOEFT with respect to the molecular one, leading to new effects. In order to emphasize the difference between the hadronic and molecular cases, we will assume in the following that the typical distance between the heavy quark and antiquark is of order $1/(Mw)$.
The kinetic energy associated with the relative motion of the quark-antiquark pair scales like $Mw^2$. If we look only at hadrons that are in the ground state or in the first excited states, we may require that $Mw^2 \ll \Lambda_{\rm QCD}$. As we have seen when discussing the diatomic molecule, in order for a Born-Oppenheimer picture to emerge and for the BOEFT to provide a valuable theory, it is crucial that the excitations of the heavy particles happen at an energy scale smaller than the energy scale of the light degrees of freedom. In summary, we will require the following hierarchy of energy scales to hold: $Mw \gg \Lambda_{\rm QCD} \gg Mw^2$ [27]. The different energy scales are shown in Fig. 3. After integrating out the hard and soft scales from QCD and projecting on quarkonium states, one arrives at the pNRQCD Lagrangian in the weakly coupled regime, which at leading order in $1/M$ and at ${\cal O}(r)$ in the multipole expansion is given in Eq. (54) (we neglect the light-quark masses and higher-order radiative corrections to the dipole operators), where $S$ and $O$ are the heavy quark-antiquark color-singlet and color-octet fields, respectively, normalized with respect to color. They depend on $t$, the relative coordinate $\mathbf{r}$, and the c.m. position $\mathbf{R}$ of the heavy quark-antiquark pair. All the fields of the light degrees of freedom in Eq. (54) are evaluated at $\mathbf{R}$ and $t$; in particular, $G^{\mu\nu}_a = G^{\mu\nu}_a(\mathbf{R},t)$ and $q_i = q_i(\mathbf{R},t)$. The field $\mathbf{E}$ is the chromoelectric field, $G^{\mu\nu}_a$ the gluonic field strength tensor, and the $q_i$ are light-quark fields appearing in $n_f$ flavors. The singlet and octet Hamiltonians read, in the c.m. frame, in terms of the color-singlet and color-octet potentials $V_s(r) = -4\alpha_s/(3r) + \dots$ and $V_o(r) = \alpha_s/(6r) + \dots$, respectively; $\alpha_s$ is the strong coupling. The Lagrangian (54) is the analog of the Lagrangian (21) for diatomic molecules. The difference is that in the Lagrangian (54) the number of gluons and light quarks is not fixed, as the number of electrons is in (21).
This stems from the fact that the electrons are nonrelativistic, which implies that their number is conserved at the low energies described by pNRQED, while gluons and light quarks are massless relativistic particles, so that their creation and annihilation are still allowed in the Lagrangian (54).
The Hamiltonian density corresponding to the light degrees of freedom at leading order in 1/M and in the multipole expansion is given in Eq. (57). It plays the same role as the Hamiltonian density of Eq. (22) does for the diatomic molecule. As anticipated, the symmetry groups of the two Hamiltonians are, nevertheless, different: the Hamiltonian density in Eq. (22) has a cylindrical symmetry, while Eq. (57) has a spherical symmetry. The color-octet operators G^{ia}_κ(R) that generate the eigenstates of h_0(R) form a basis of octet light-degrees-of-freedom operators, labeled by the light-flavor f and J^{PC} quantum numbers, and an extra label i for states belonging to the same J^{PC} representation. Note that the energy eigenvalue Λ_κ is in general a complex number, whose imaginary part accounts for the possible decay of the state. Introducing the states of Eq. (60), which are eigenstates of the octet sector of the pNRQCD Hamiltonian at leading order in the multipole expansion with eigenvalues h_o + Λ_κ, we can now project the Lagrangian (54) onto the Fock subspace they span. This step is the equivalent for the hadronic system of the projection on the state of Eq. (12) and the expansion (25) for the diatomic molecule. Using Eq. (61) and integrating out light degrees of freedom with energy of order Λ_QCD, we derive the BOEFT Lagrangian that describes the heavy quark-antiquark pair physics at the scale Mw². Since we are interested in bound states, we will not consider sectors of the Lagrangian that describe transitions between states with different κ and decays into singlet states. Up to next-to-leading order in the multipole expansion the Lagrangian reads as in Eq. (62), where the P^i_{κλ} are projection operators along the heavy-quark axis of the light-degrees-of-freedom operator (an implicit sum is understood over repeated i, j indices). There is one projection operator for each −j ≤ λ ≤ j. These operators select different polarizations of the wave function Ψ_{iκ}.
For example, in the case of J = 1 the operators are given by Eqs. (63) and (64), written in terms of the unit vectors r̂ = (sin θ cos φ, sin θ sin φ, cos θ)^T and θ̂ = (cos θ cos φ, cos θ sin φ, −sin θ)^T. For higher J the projection operators can be built by multiplying j powers of (63) and (64) with appropriate symmetrization of the indices (see also [39]). The projection operators are necessary to organize the states in Eq. (60) according to the quantum numbers of the exotic hadron. In particular, they project the light-degrees-of-freedom operator onto the heavy quark-antiquark axis. The quantum numbers of the exotic hadron are the same as the ones of the diatomic molecule presented in Sec. III plus charge conjugation: as we discussed, at leading order in the multipole expansion the symmetry of the hadron is spherical, hence the projectors commute with the eigenstates of h_0 (the equivalent statement is not true in the molecular case), but higher-order terms break this symmetry down to the original cylindrical one. In Eq. (62), the next-to-leading-order term in the multipole expansion is P^i_{κλ} b_{κλ} r² P^{j†}_{κλ}, whereas the dots stand for higher-order terms.
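As a concrete check of the J = 1 construction, the frame vectors and projectors can be realized numerically. The sketch below is illustrative and uses one common phase convention, P_0 = r̂ and P_{±1} = (θ̂ ± iφ̂)/√2, where φ̂ is the remaining azimuthal unit vector completing the orthonormal frame; it verifies orthonormality and the completeness relation Σ_λ P^i_λ P^{j*}_λ = δ^{ij}:

```python
import numpy as np

def frame_vectors(theta, phi):
    """Orthonormal frame attached to the heavy quark-antiquark axis:
    r_hat (radial), theta_hat and phi_hat (transverse)."""
    r_hat = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
    theta_hat = np.array([np.cos(theta) * np.cos(phi),
                          np.cos(theta) * np.sin(phi),
                          -np.sin(theta)])
    phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return r_hat, theta_hat, phi_hat

def projectors_j1(theta, phi):
    """J = 1 projection operators P^i_lambda, lambda = 0, +1, -1.
    Phase convention is illustrative: P_0 = r_hat,
    P_{+-1} = (theta_hat +- i phi_hat)/sqrt(2)."""
    r_hat, th, ph = frame_vectors(theta, phi)
    return {
        0: r_hat.astype(complex),
        +1: (th + 1j * ph) / np.sqrt(2),
        -1: (th - 1j * ph) / np.sqrt(2),
    }
```

The completeness check Σ_λ P^i_λ P^{j*}_λ = δ^{ij} holds independently of the phase convention chosen for the λ = ±1 components.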
The specific value of the next-to-leading-order term, P^i_{κλ} b_{κλ} r² P^{j†}_{κλ}, depends on nonperturbative physics and is unknown; however, some of its characteristics can be determined on general grounds. This term has its origin in the chromoelectric dipole interactions of Eq. (54), which couple the light-degrees-of-freedom operator G^{ia}_κ to the octet field, giving corrections to the (static) energy of the system. That this kind of correction shows up for the static energy is a specific feature of QCD [26,27]; however, for nonstatic nuclei, dipole interactions are also responsible for the Lamb shift of the diatomic molecule, as we have seen. The r² dependence arises from the necessity of having at least two chromoelectric dipoles in order to conserve the J^{PC} quantum numbers of G^{ia}_κ. Cylindrical symmetry and charge conjugation also imply b_{κλ} = b_{κ−λ} = b_{κΛ}. In Fig. 4 we show static potentials for the case of quarkonium hybrids, that is, for the case in which the considered light degrees of freedom are purely gluonic. The potentials correspond to κ = 1^{+−} and are compared to the static energies computed on the lattice in the quenched approximation. The values of b_{κλ} are fitted to the lattice data for r ≲ 0.5 fm.

Figure 4. Comparison of the hybrid quarkonium static energies generated by the lowest-mass gluelump (κ = 1^{+−}) computed on the lattice in Refs. [40] (red squares) and [41] (green dots) with the BOEFT static potential up to next-to-leading order (solid black line), V_{κλ} = V_o(r) + Λ_κ + b_{κλ} r². The octet potential is taken in the Renormalon Subtracted (RS) scheme and up to order α_s³. The mass of the lowest-lying gluelump is computed also in the RS scheme, Λ^{RS}_{1+−} = 0.87 GeV [40]. The b_{κλ} coefficients are fitted to the lattice data for r ≲ 0.5 fm, yielding the values b_{10} = 1.112 GeV/fm² and b_{1±1} = 0.110 GeV/fm². For lattice determinations of higher-lying gluelump masses and static energies see Refs. [9,10,41-47].
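To make the fitted numbers concrete, the next-to-leading-order potential V_{κλ}(r) = V_o(r) + Λ_κ + b_{κλ} r² can be evaluated directly. The sketch below uses only the leading-order octet potential with an illustrative fixed value of α_s (the figure uses the RS scheme up to order α_s³), together with the quoted values Λ^{RS}_{1+−} = 0.87 GeV, b_{10} = 1.112 GeV/fm², and b_{1±1} = 0.110 GeV/fm²:

```python
HBARC = 0.1973  # GeV*fm, converts 1/r from 1/fm to GeV

def hybrid_static_potential(r_fm, b, lam_gluelump=0.87, alpha_s=0.3):
    """V_{kappa,lambda}(r) = V_o(r) + Lambda_kappa + b_{kappa,lambda} r^2, in GeV,
    with r in fm and b in GeV/fm^2. V_o is taken at leading order,
    +alpha_s/(6 r); alpha_s = 0.3 is an illustrative value, not the RS-scheme
    running coupling used in the paper's figure."""
    v_octet = alpha_s * HBARC / (6.0 * r_fm)
    return v_octet + lam_gluelump + b * r_fm ** 2

# b coefficients fitted to lattice data for r <~ 0.5 fm (values from the text):
b_lambda0 = 1.112   # GeV/fm^2, lambda = 0 branch
b_lambda1 = 0.110   # GeV/fm^2, |lambda| = 1 branch
```

As expected from the multipole expansion, the two branches become degenerate as r → 0 (their difference is (b_{10} − b_{1±1}) r²), both approaching V_o(r) + Λ_κ.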
After defining the projected wave function, Eq. (62) can be rewritten so that the last term splits into a kinetic operator acting on the heavy quark-antiquark field and a nonadiabatic coupling, C^{nad}_{κλλ′}, analogous to Eq. (37) for the diatomic molecule. At this point it is important to review the sizes of the different terms appearing in Eq. (68). All dimensional quantities that arose from integrating out Λ_QCD are of order Λ_QCD to their dimension. Hence Λ_κ is of order Λ_QCD and b_{κλ} is of order Λ_QCD³. The temporal derivative, the kinetic term, and the potential up to the constant shift Λ_κ are of order Mw². Unlike in the diatomic molecule case, ∇_r has the same size for radial and angular pieces, because the momentum of the heavy quark is taken to scale like the inverse of the distance, r, between the quark and the antiquark. For the nonadiabatic coupling C^{nad}_{κλλ′}, the radial piece of the derivative ∇_r acting on the projection operators P^i_{κλ′} vanishes, since they do not depend on r. According to our counting, the size of the angular piece, L²/(Mr²) acting on P^i_{κλ′}, is Mw², i.e., of the same order as the kinetic operator of the heavy quarks. This is different from the diatomic molecular case.
The equations of motion for the fields Ψ_{κλ}(t, r, R) that follow from the Euler-Lagrange equations at leading order are nothing else than a set of coupled Schrödinger equations, Eq. (71). By solving them we obtain the eigenvalues E_N that give the masses M_N of the states. In summary, the spectrum of exotic hadrons that are sufficiently tightly bound that our hierarchy of scales, and in particular the multipole expansion, applies is similar to that of the diatomic molecules illustrated in Fig. 1. The quantum number κ identifies, through the different shifts Λ_κ, different excitations of the light degrees of freedom.
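For a single channel, i.e., neglecting the nonadiabatic mixing between the λ components in Eq. (71), the resulting radial Schrödinger problem can be solved with a simple finite-difference diagonalization. The sketch below is generic and illustrative: the grid parameters and the harmonic test potential are not from the paper, and the units are such that energies and 1/r carry the same dimension (e.g., GeV with r in GeV⁻¹):

```python
import numpy as np

def radial_levels(potential, mu, l=0, r_max=10.0, n=1500, n_levels=3):
    """Lowest eigenvalues E_N of the single-channel radial equation
        -u''/(2 mu) + [V(r) + l(l+1)/(2 mu r^2)] u = E u
    with u(0) = u(r_max) = 0, discretized by second-order finite
    differences on a uniform grid. The nonadiabatic mixing of the
    coupled equations (71) is neglected here. Hybrid masses would then
    follow as M_N ~ 2 m_Q + E_N, details of the mass scheme aside."""
    r = np.linspace(r_max / n, r_max, n)   # grid starts at h, avoiding r = 0
    h = r[1] - r[0]
    diag = 1.0 / (mu * h * h) + potential(r) + l * (l + 1) / (2.0 * mu * r * r)
    off = -0.5 / (mu * h * h) * np.ones(n - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]
```

A quick sanity check is the 3D harmonic oscillator, V(r) = μω²r²/2, whose l = 0 levels are (2n_r + 3/2)ω.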
The gap between different excitations is (at least for the lower states) of order Λ_QCD. In the case of the diatomic molecule the different electronic excitations are separated by a gap of order mα². For each BO potential the vibrational modes of the heavy quark-antiquark pair generate a fine structure of levels, E_N, separated for fixed κ by small gaps of order Mw². Similarly, in the molecular case the vibrational modes of the nuclei induce small splittings of order mα²(m/M)^{1/2}. There are, however, also noteworthy differences. In the hadronic case, if the size of the hadron is much larger than the distance between the heavy quark and antiquark, then κ labels spherically symmetric states. Because the symmetry of the hadron is cylindrical, this means that at short distances some excitations of the light degrees of freedom turn out to be degenerate. As a consequence, the equations of motion are the coupled Schrödinger equations of Eq. (71), which mix different excitations, labeled by λ, λ′, with the same κ. The mixing happens through the nonadiabatic coupling, which under our assumptions counts like the quark-antiquark kinetic energy. A physical consequence of the mixing is the so-called Λ-doubling, i.e., a lifting of the degeneracy between states with the same parity [32]. In the molecular case, the size of the molecule and the typical distance between the nuclei are of the same order. Because there is no special hierarchy between these two lengths, there is neither a special symmetry at short distance nor a corresponding degeneracy pattern. The equation of motion for the molecular case is the simple Schrödinger equation (48) [or (49) in the adiabatic approximation]. In this case, different electronic excitations do not mix at leading order. Moreover, the nonadiabatic coupling is subleading with respect to the relative kinetic energy of the nuclei.
The masses for heavy hybrid states have been obtained in Ref. [32] following the method just described. There, the light-quark part of h_0 was omitted. In Fig. 5 we reproduce the results of Ref. [32] compared with an updated list of possible experimental candidates. Tetraquarks were discussed in Ref. [3] in the context of the BO approximation (see also [39]). In [3], preliminary estimates for their masses were given, assuming that the tetraquark static energies have the same shape as the hybrid ones and using values for Λ_κ from Ref. [48]. One major difficulty is the lack of knowledge of the static energies carrying light-quark flavor quantum numbers. One expects that lattice QCD will soon provide results on these and other crucial nonperturbative matrix elements to be used in the BOEFT developed here.
VI. CONCLUSIONS AND PERSPECTIVES
The Born-Oppenheimer approximation is the usual tool for solving the Schrödinger equation of molecules. It relies on the movement of the nuclei being much slower than that of the electrons, a circumstance that allows one to study the electronic eigenstates and energy levels for fixed positions of the nuclei, the so-called static energies. The wave functions of the molecule can then be expanded in terms of these electronic eigenfunctions, resulting in a Schrödinger equation describing the molecular energy levels. We have used this hierarchy of scales to build an EFT that systematically describes the energy levels of the simplest diatomic molecule, H₂⁺. Our starting point has been an EFT of QED for the ultrasoft scale, pNRQED, adapted to the case of two nuclei and one electron. Since pNRQED for two heavy and one light particle has not been presented in the literature before, we have worked out its derivation in some detail. Particular care has been put in including all the relevant operators suppressed in powers of m/M, where m and M are the electron and nuclei masses, respectively. Counting m/M ∼ α^{3/2}, we have derived the pNRQED Lagrangian relevant to compute the spectrum up to O(mα⁵).
The assumption that the nuclei move more slowly than the electrons, which is at the basis of the Born-Oppenheimer approximation, is equivalent to taking the kinetic term of the nuclei to be of a smaller size than the energy scale of the electron dynamics, the ultrasoft scale. Since these two scales are well separated, it is natural in an EFT framework to integrate out the ultrasoft degrees of freedom in order to obtain an EFT that describes the molecular degrees of freedom only. We have carried out this integration, obtaining a molecular EFT that we have named Born-Oppenheimer EFT (BOEFT). Up to O(mα⁴) it is sufficient to match pNRQED and BOEFT at tree level or, equivalently, to expand the matter field in the pNRQED Lagrangian in eigenfunctions of the leading-order Hamiltonian for the electron, as is done in the Born-Oppenheimer approximation of the Schrödinger equation. Loop diagrams involving ultrasoft photons start contributing at O(mα⁵), the first of such contributions being responsible for the H₂⁺ molecular Lamb shift. We have computed the leading ultrasoft loop and obtained the BOEFT Lagrangian relevant to compute the spectrum up to O(mα⁵).
The precise size of the nuclei kinetic operator has been obtained using the virial theorem to relate it to the potential acting on the nuclei. At leading order this potential is formed by the repulsive Coulomb potential between the nuclei and the attractive electronic static energies. Since the system is bound, the nuclei do not move over the whole size of the molecule, but oscillate around the minimum of the potential. The size of the kinetic operator of the nuclei is of the order of mα²(m/M)^{1/2}, which is smaller than the ultrasoft scale mα². This is consistent with the original statement that the dynamics of the two nuclei occurs at a lower energy scale than the electronic one. The size of the nonadiabatic coupling could also be assessed, with the conclusion that for diatomic molecules its contribution to the energy levels is suppressed by a factor (m/M)^{1/4}.
In the present paper we have derived the BOEFT Lagrangian for the H₂⁺ molecule up to the operators relevant for the spectrum up to O(mα⁵). This can be systematically improved by including higher-order operators in the power counting detailed in Sec. IV and computing their corresponding matching coefficients. Similarly, all the relevant contributions up to a certain precision to a specific observable can be determined with the help of the power counting, which may be of crucial importance to handle high-precision calculations.
Having set the general framework for constructing the BOEFT in QED, we have analyzed systems in QCD analogous to the diatomic molecule. These are systems made of a heavy quark-antiquark pair, which plays the role of the heavy degrees of freedom, bound with light quarks or excited gluonic states, which play the role of the light degrees of freedom. In particular, we have studied the case in which the quark-antiquark pair appears in a color-octet state. In the short-distance regime, r ≪ 1/Λ_QCD, the multipole expansion is applicable and the system can be described using weakly-coupled pNRQCD.
The energy scale of the leading-order dynamics of the light degrees of freedom is Λ_QCD, while, as in the molecular case, the dynamics of the heavy degrees of freedom, in this case the heavy quark-antiquark pair, takes place at the lower energy scale Mw². We have identified the leading-order Hamiltonian in the multipole and 1/M expansions for the light degrees of freedom, h_0, and defined a basis of color-octet light-degrees-of-freedom operators, which, together with the heavy quark-antiquark octet field, generate hadronic (color-singlet) eigenstates of the pNRQCD Hamiltonian. The Λ_QCD scale has been integrated out and pNRQCD matched onto a QCD version of the BOEFT. At LO in the multipole expansion the matching can be done by just projecting the octet sector of the pNRQCD Lagrangian on the basis of eigenstates of h_0. At NLO the matching requires a full nonperturbative computation; nevertheless, some constraints on the form of the NLO term can be obtained from the multipole expansion itself and from the cylindrical symmetry that the system possesses at finite separation between the heavy quarks. As in the diatomic molecular case, a nonadiabatic coupling between the heavy quarks and the light degrees of freedom arises from the matching procedure; however, unlike in the molecular case, this need not be suppressed with respect to the kinetic operator. Furthermore, the nonadiabatic coupling mixes states that in the short-distance limit have degenerate potentials; therefore the mixing has to be taken into account when solving the set of Schrödinger equations that result from the Euler-Lagrange equations of the BOEFT. As a result, the phenomenon known as Λ-doubling in molecular physics [34] is more prominent in the QCD case [32].
The BOEFT has been used to obtain the masses of the quarkonium hybrids in Ref. [32] (see also [49]). Preliminary studies on quarkonium tetraquarks using a similar framework based on the BO approximation were carried out in Ref. [3]. A further analysis is in preparation [39]. The EFT presented here could be straightforwardly extended to describe any system made of two heavy quarks bound adiabatically to some light degrees of freedom. An example is doubly heavy baryons, i.e., states with two heavy quarks and one light quark. Experimentally, doubly heavy baryons were first observed at LHCb [50]. For a study of this system in the framework of pNRQCD, we refer to [51]. Another example is pentaquark states made of two heavy quarks and three light quarks. Candidates have been observed at LHCb [52], but a pNRQCD-based study of these systems is still to be done.
Impact of cardiac history and myocardial scar on increase of myocardial perfusion after revascularization
Purpose We sought to assess the impact of coronary revascularization on myocardial perfusion and fractional flow reserve (FFR) in patients without a cardiac history, with prior myocardial infarction (MI), or with prior non-MI percutaneous coronary intervention (PCI). Furthermore, we studied the impact of scar tissue. Methods Symptomatic patients underwent [15O]H2O positron emission tomography (PET) and FFR before and after revascularization. Patients with prior CAD, defined as prior MI or PCI, underwent scar quantification by magnetic resonance imaging with late gadolinium enhancement. Results Among 137 patients (87% male, age 62.2 ± 9.5 years), 84 (61%) had a prior MI or PCI. The increase in FFR and hyperemic myocardial blood flow (hMBF) was smaller in patients with prior MI or non-MI PCI than in those without a cardiac history (FFR: 0.23 ± 0.14 vs. 0.20 ± 0.12 vs. 0.31 ± 0.18, p = 0.02; hMBF: 0.54 ± 0.75 vs. 0.62 ± 0.97 vs. 0.91 ± 0.96 ml/min/g, p = 0.04). Post-revascularization FFR and hMBF were similar across patients without a cardiac history or with prior MI or non-MI PCI. An increase in FFR was strongly associated with an increase in hMBF both in patients without a cardiac history and in those with prior MI/non-MI PCI (r = 0.60 and r = 0.60, p < 0.01 for both). Similar results were found for coronary flow reserve. In patients with prior MI, scar was negatively correlated with hMBF increase and independently predictive of an attenuated CFR increase. Conclusions Post-revascularization FFR and perfusion were similar among patients without a cardiac history, with prior MI, or with non-MI PCI. In patients with prior MI, scar burden was associated with an attenuated perfusion increase. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-023-06356-4.
Introduction
Current revascularization strategies for patients with chronic coronary syndrome are aimed at detecting ischemia-causing coronary stenoses, restoring myocardial perfusion by percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG) and, subsequently, relieving symptoms [1]. To maximize the effect of revascularization therapy, guidelines recommend assessing the presence of inducible ischemia beforehand [1]. Fractional flow reserve (FFR) is an important vessel-specific tool to assess myocardial ischaemia, and since post-procedural FFR and myocardial blood flow (MBF) have been linked to superior outcomes, the use of FFR and MBF has been extended to a post-revascularization tool [2,3]. A previous study by Driessen et al. included patients without prior myocardial infarction (MI) or PCI and showed that an increase in FFR was paralleled by improvement of hyperemic MBF, with a strong correlation between these two indices (r = 0.74) [4]. However, the relationship between FFR and absolute myocardial perfusion is not only determined by epicardial coronary stenoses, but also by microvascular resistance and the subtended myocardial mass. As such, a given increase in FFR is not necessarily commensurate with an equivalent increase in hyperemic blood flow (hMBF). Microvascular disease is encountered in up to 50% of patients with prior infarction, and the revascularization benefit in terms of perfusion increase and, as such, symptom reduction has been questioned in this group of patients [5]. Remarkably, data on the restoration of myocardial blood flow after revascularization therapy in patients with prior MI or PCI are lacking. As such, we investigated the influence of revascularization on FFR and absolute myocardial perfusion in patients without a cardiac history, with prior MI, or with prior non-MI PCI using serial FFR and positron emission tomography (PET) MBF measurements. Furthermore, we investigated whether the restoration of myocardial perfusion is attenuated by scar tissue burden or conventional risk factors.

Ruurt Jukema and Ruben de Winter shared first authorship.
Patient selection
This is a substudy of the Comparison of Coronary CT Angiography, SPECT, PET, and Hybrid Imaging for Diagnosis of Ischemic Heart Disease Determined by Fractional Flow Reserve (PACIFIC 1) study and the Functional stress imaging to predict abnormal coronary fractional flow reserve (PACIFIC 2) study, which were prospective, single-centre, head-to-head comparative clinical studies conducted from 2012 to 2020 at the Amsterdam UMC, VU University Medical Centers, Amsterdam, the Netherlands [6,7]. All patients were suspected of having stable obstructive coronary artery disease (CAD), were referred for a clinically indicated diagnostic ICA, and underwent a 2-week protocol in which patients underwent [15O]H2O PET prior to invasive coronary angiography (ICA) with routine 3-vessel invasive FFR interrogation. Patients suspected of acute coronary syndrome were not included. Additionally, patients with a cardiac history, defined as prior MI or PCI, underwent cardiac magnetic resonance (CMR) imaging with late gadolinium enhancement (LGE) prior to ICA. This PACIFIC post-hoc analysis included patients in whom FFR interrogation or [15O]H2O PET perfusion imaging was repeated after coronary revascularization (PCI or CABG). Patients with events between revascularization and follow-up PET (n = 1) were excluded. The study complied with the Declaration of Helsinki. The study protocol was approved by the VUmc Medical Ethics Review Committee and all patients provided written informed consent.
PET
The PET scans were performed on a hybrid PET/CT device (Philips Gemini TF 64 or Ingenuity TF 128, Philips Healthcare, Best, The Netherlands). A 6-min dynamic scan was acquired, commencing simultaneously with an injection of 370 MBq [15O]H2O, during resting and adenosine-induced (140 µg/kg/min) hyperemic conditions. The dynamic scan sequence was followed by a low-dose CT scan for attenuation correction. Parametric images of quantitative hyperemic MBF in ml/min/g were generated by in-house developed software (CardiacVUer, Amsterdam UMC, Vrije Universiteit Amsterdam, The Netherlands) [8] for each of the 17 left ventricular segments according to the standard American Heart Association model, with standardized allocation of segments to the three vascular territories [9]. Patients were instructed to refrain from the intake of xanthine or caffeine for 24 h prior to the PET. Parametric MBF images were analyzed. Regional hMBF was defined as the mean hMBF of the entire vascular territory in the absence of a perfusion defect, or as the mean hMBF of the perfusion defect (≥ 2 adjacent segments with an hMBF ≤ 2.3 ml/min/g) when present [10]. In the presence of an inferior perfusion defect, the invasive angiography was used to determine coronary dominance and the perfusion defect was allocated accordingly. Regional hMBF of the predefined vascular territories was used for analysis.
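The territory-level perfusion rule described above (defect mean when at least two adjacent segments fall at or below 2.3 ml/min/g, territory mean otherwise) can be sketched as follows; the linear adjacency handling is a simplification of the AHA 17-segment geometry, and the function name is ours:

```python
def regional_hmbf(segment_hmbf, threshold=2.3):
    """Regional hyperemic MBF for one vascular territory: the mean over a
    perfusion defect (>= 2 adjacent segments with hMBF <= threshold ml/min/g)
    when present, otherwise the mean over the whole territory.
    `segment_hmbf` lists the territory's segment values in adjacent order --
    an illustrative simplification of the AHA 17-segment layout."""
    vals = list(segment_hmbf)
    defect, run = [], []
    for v in vals:
        if v <= threshold:
            run.append(v)
        else:
            if len(run) >= 2:          # a run of >= 2 low segments is a defect
                defect.extend(run)
            run = []
    if len(run) >= 2:
        defect.extend(run)
    pool = defect if defect else vals
    return sum(pool) / len(pool)
```

For example, a territory with values [2.8, 2.0, 1.8, 3.0] contains a two-segment defect and yields the defect mean (1.9), whereas [2.8, 2.0, 2.9, 3.0] has only an isolated low segment and yields the territory mean.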
CMR
Images were acquired on a 1.5-T whole-body MR scanner (Magnetom Avanto, Siemens Healthineers). Left ventricular (LV) function was assessed between the stress and rest perfusion acquisitions with steady-state free-precession cine imaging in the 2-, 3-, and 4-chamber long-axis views and multiple short-axis views covering the LV from base to apex. Late gadolinium enhancement (LGE) imaging was performed using a 2-dimensional segmented inversion-recovery gradient-echo pulse sequence. If LGE was considered visually present, total and segmental infarct size (in grams) was calculated from the LGE images using the full width at half maximum method [11]. Infarct mass was also expressed as a percentage of myocardial mass per segment according to the AHA 17-segment model, excluding the apex [9]. Additionally, in accordance with the AHA model, infarct size and percentage were calculated for each vascular territory. The segments used for PET were also used for scar analysis. LGE analysis was performed using Circle CVI42 (version 5.13, Circle Cardiovascular Imaging, Inc, Calgary, Canada) by a researcher blinded to clinical characteristics.
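The full width at half maximum rule amounts to thresholding the myocardium at 50% of the maximal LGE signal intensity. A minimal sketch is shown below; this is illustrative only — the study used Circle CVI42, and real pipelines add remote-myocardium checks and manual correction:

```python
import numpy as np

def fwhm_infarct_mask(lge_intensity, myocardium_mask):
    """Full-width-at-half-maximum scar segmentation sketch: within the
    myocardium, voxels with signal >= 50% of the maximum LGE intensity are
    labeled infarct. Segmental infarct mass in grams would then follow from
    voxel volume and myocardial density; all names here are ours."""
    myo_values = lge_intensity[myocardium_mask]
    threshold = 0.5 * myo_values.max()
    return myocardium_mask & (lge_intensity >= threshold)
```

Dividing the infarct voxel count per segment by the segment's total myocardial voxel count gives the percentage scar per segment used in the regional analyses.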
ICA and FFR
ICA was performed according to standard clinical protocols [6]. Patients were instructed to refrain from the intake of xanthine or caffeine for 24 h prior to the ICA. All major coronary arteries were routinely interrogated by FFR irrespective of stenosis severity and imaging results, except for occluded vessels or subtotal lesions with a diameter stenosis (DS) ≥ 90%. To induce maximal coronary hyperemia, adenosine was administered intracoronarily as a 150 μg bolus. FFR was calculated as the ratio of mean distal intracoronary to aortic guiding pressure during hyperemia. The type of treatment (PCI, CABG, or conservative) was left to the discretion of the operator and the heart team after consideration of symptoms, FFR, and angiographic results. In case of PCI, post-procedural FFR was measured at the same location as the pre-PCI FFR. To evaluate the extent and diffuseness of atherosclerotic disease, segment involvement scores were calculated according to Min et al., i.e., the sum of the number of segments with plaque irrespective of the degree of luminal stenosis [12].
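The FFR definition in this paragraph reduces to a one-line computation over the hyperemic pressure samples; the pressure traces below are hypothetical:

```python
import statistics

def ffr(pd_mmhg, pa_mmhg):
    """Fractional flow reserve: mean distal intracoronary pressure divided by
    mean aortic guiding pressure, both sampled during maximal hyperemia.
    Inputs are illustrative pressure samples in mmHg."""
    return statistics.fmean(pd_mmhg) / statistics.fmean(pa_mmhg)
```

For example, a distal trace averaging 65 mmHg against an aortic trace averaging 100 mmHg yields an FFR of 0.65, matching the mean pre-revascularization value reported in the Results.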
Statistical analysis
Continuous variables are expressed as mean ± SD or median (interquartile range) where appropriate. Categorical variables are presented as frequencies with percentages. Baseline characteristics between two groups were compared by the independent-samples t-test for continuous variables and the chi-square test for categorical variables. The correlation between two variables (FFR, hMBF, CFR, or scar) was analyzed using Pearson's correlation analysis. Paired FFR and regional perfusion measurements (before and after revascularization) were compared by the paired-samples t-test. Regional perfusion and FFR analyses were stratified for patients without prior CAD, with prior MI, or with a non-MI PCI. Patients with both a prior MI and a non-MI PCI were grouped as prior MI. The change in FFR and perfusion was compared between these groups using a one-way analysis of variance (ANOVA). If the overall F-test of the ANOVA was significant, post-hoc pairwise comparisons between patient categories were performed with Bonferroni correction for multiple comparisons. To analyze the importance of baseline FFR, a sensitivity analysis stratified for vessels with FFR greater or less than 0.75 was performed. To identify predictors of regional perfusion improvement, an analysis with regionally matched scar was performed using a mixed model with a random effect for subjects. Significant (p < 0.15) variables in the univariable analysis were included in the multivariable model. A two-sided p value < 0.05 was considered statistically significant. All statistical analyses were performed using the IBM SPSS software package version 26 (IBM SPSS Statistics, IBM Corporation, Armonk, NY).
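The descriptive workflow above (paired t-tests, Pearson correlations, one-way ANOVA with Bonferroni-corrected post-hoc pairwise tests) can be reproduced with standard SciPy calls. This is a sketch of the described procedure, not the SPSS implementation used in the study; the mixed-model step is omitted and the function names are ours:

```python
from scipy import stats

def compare_pre_post(pre, post):
    """Paired-samples t-test, e.g., FFR or regional perfusion before vs.
    after revascularization."""
    return stats.ttest_rel(pre, post)

def correlate(x, y):
    """Pearson correlation between two indices, e.g., delta FFR vs. delta hMBF."""
    return stats.pearsonr(x, y)

def anova_with_bonferroni(groups, alpha=0.05):
    """One-way ANOVA across patient categories; if the overall F-test is
    significant, pairwise independent t-tests with Bonferroni-corrected
    p-values (capped at 1.0)."""
    f, p_overall = stats.f_oneway(*groups)
    pairwise = {}
    if p_overall < alpha:
        m = len(groups)
        n_comparisons = m * (m - 1) // 2
        for i in range(m):
            for j in range(i + 1, m):
                _, p_ij = stats.ttest_ind(groups[i], groups[j])
                pairwise[(i, j)] = min(1.0, p_ij * n_comparisons)
    return f, p_overall, pairwise
```

Applied per stratum (no prior CAD, prior MI, prior non-MI PCI), this mirrors the between-group comparison of ΔFFR and Δperfusion described in the text.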
Results
The study population consisted of 137 patients with 200 revascularized vessels. The mean age was 62.2 ± 9.5 years and 119 patients were male (86.9%). A total of 84 patients (61.3%) had a history of myocardial infarction or PCI. Patients were revascularized by PCI (n = 116, 84.7%) or CABG (n = 21, 15.3%). Further patient baseline characteristics are shown in Table 1. Vessel-specific characteristics are shown in Table 2. In general, patients with prior MI or non-MI PCI had a more extensive cardiovascular risk profile. The median interval between revascularization and post-revascularization PET was 34 days (interquartile range 21 to 58 days). Detailed flow charts describing PET and FFR availability are shown in supplemental Figs. 1, 2, and 3. Serial FFR measurements were available in 94 (47%) of the revascularized vessels, whereas paired hMBF and CFR measurements were available in 189 (95%) and 184 (92%) of the revascularized myocardial territories (supplemental Fig. 3). Baseline and post-revascularization regional perfusion indices and FFR (including rest MBF) are depicted in Table 3 and supplemental Table 1.
Change of perfusion and FFR after revascularization
Figure 1 exemplifies the study protocol by showing two case examples and their respective serial PET perfusion scans, ICA, and CMR images. After revascularization, mean FFR increased from 0.65 ± 0.15 to 0.90 ± 0.08 (p < 0.01, Fig. 2). An increase in FFR was observed for patients without a cardiac history, with prior MI, or with non-MI PCI. However, it may be appreciated from Fig. 3 that FFR increased to a lesser degree in patients with a prior non-MI PCI compared to those without a prior history (∆FFR 0.20 ± 0.12 vs. 0.31 ± 0.18, p = 0.02). Regional hyperemic MBF and CFR improved after revascularization (hMBF: 1.73 ± 0.75 to 2.47 ± 0.88 ml/min/g; CFR: 2.08 ± 0.80 to 2.85 ± 0.96, p < 0.01 for both, Fig. 2). Similar to FFR, Fig. 3 shows a trend toward an attenuated regional hMBF increase in patients with a prior MI or non-MI PCI (p = 0.04, p = ns for subgroup differences). The CFR increase after revascularization did not significantly differ between patients with or without a prior cardiac history (p = 0.23). Baseline FFR and hMBF were significantly lower in patients without prior CAD (p < 0.01, Table 3). Post-revascularization FFR, hMBF, and CFR were similar across all subgroups (p = 0.39, p = 0.68, and p = 0.49). A perfusion decrease after revascularization was seen in 37 territories (ΔhMBF: 0.36 ± 0.39 ml/min/g). Those territories were predominantly characterized by a relatively preserved hMBF at baseline (2.27 ± 0.95 vs. 1.60 ± 0.64, p < 0.01) and were more prevalent in patients with prior MI or non-MI PCI (27.3% vs. 11.1% of revascularized territories, p < 0.01). In a sensitivity analysis, the absolute improvements in FFR and perfusion indices stratified for baseline FFR (lower or greater than 0.75) are shown in supplemental Table 1. Patients with an FFR below 0.75 at baseline had a numerically greater FFR and perfusion improvement across all subgroups.
Relation between FFR and perfusion
At baseline, a moderate correlation was observed in patients without a cardiac history between the perfusion indices (hMBF and CFR) and FFR (r = 0.59 and r = 0.56, p < 0.01 for both). A weak baseline correlation was observed between FFR and perfusion in patients with prior MI or non-MI PCI (hMBF: r = 0.31, CFR: r = 0.35 for patients with prior MI; hMBF: r = 0.30, CFR: r = 0.22 for patients with prior non-MI PCI; p < 0.01 for all, supplemental Fig. 4). Hyperemic MBF and FFR were concordant at baseline in 72% of patients without a cardiac history, whereas 58% and 57% of the measurements were concordant in patients with prior MI or non-MI PCI, respectively. Figure 4 shows the relation between % perfusion change and FFR following revascularization. The relationship between FFR increase and hMBF increase was strong both in patients with and without a cardiac history (r = 0.60 and r = 0.60, p < 0.01 for both). The relationship between FFR and CFR was strong in patients without a cardiac history (r = 0.70, p < 0.01) and moderate (r = 0.57, p < 0.01) in patients with a cardiac history. A substantial overlap was observed between the segment involvement scores among patients without prior CAD, with prior MI, or with prior non-MI PCI, with more segments affected by atherosclerotic disease in patients without prior CAD than in patients with prior MI or non-MI PCI (supplemental Table 2).
The influence of scar on change of FFR and perfusion
Table 2 depicts the amount of LGE per revascularized territory. Figure 5 shows the impact of regional scar tissue on the change in FFR and perfusion.
Predictors of perfusion improvement
In a univariable analysis (supplemental Table 3) including baseline characteristics, cardiovascular risk factors, prior PCI for stable CAD in the revascularized territory, left ventricular ejection fraction, and scar (as % of the revascularized territory), only prior PCI for stable CAD in the revascularized territory and scar had p < 0.15 for hMBF increase. Scar and a history of smoking were negative predictors of CFR increase. In the multivariable analysis assessing hMBF increase, none of the predictors with a univariable p-value < 0.15 were significantly associated with hMBF increase. Scar and a history of smoking were independent negative predictors of CFR increase.
Discussion
This substudy of the PACIFIC 1 and 2 trials assessed the potential of coronary revascularization to restore myocardial perfusion as assessed by quantitative [15O]H2O PET and is, to the best of our knowledge, the first to evaluate the effect of prior coronary revascularizations and myocardial infarctions on the improvement of absolute myocardial perfusion after revascularization. Moreover, this study provides insight into the relationship between FFR and absolute myocardial perfusion before and after revascularization. The main findings can be summarized as follows: 1) successful coronary revascularization improved FFR and perfusion in patients without a cardiac history, with prior MI, or with prior non-MI PCI; 2) post-revascularization FFR and perfusion were similar in patients without a cardiac history, with prior MI, or with non-MI PCI; 3) changes in FFR and absolute perfusion were strongly associated; and 4) regional scar and a history of smoking were independently and negatively associated with CFR increase. Other cardiovascular risk factors were not independently predictive of an attenuated recovery of myocardial perfusion.
Functional treatment of CAD
Physiology-guided coronary interventions have been shown to confer prognostic benefit over a merely anatomically driven approach [13]. Germane to this, post-PCI FFR contains prognostic information, and a suboptimal restoration of intracoronary pressures (i.e. FFR < 0.90) has been associated with an increased risk of adverse cardiovascular events and higher rates of target vessel revascularization [14,15]. Therefore, FFR has been considered the reference standard for discerning the functional significance of epicardial lesions [1]. However, the question remains whether FFR and myocardial perfusion metrics are interchangeable and provide us with similar or complementary information [16]. Several studies confirmed that FFR provides an invasive measure of myocardial perfusion, although the relationship is governed by diffuse epicardial disease and microvascular function.
In contrast to the validation study of de Bruyne et al., we included symptomatic patients and did not exclude diffusely diseased patients, which is exemplified by median segment involvement scores ≥ 4. In this regard, we found a moderate correlation between baseline FFR, hyperemic MBF, and CFR in patients without a prior cardiac history, which is consistent with earlier results [17]. On the other hand, only a weak correlation was seen between baseline FFR and perfusion metrics in patients with prior MI or non-MI PCI. Patients with prior CAD had lower segment involvement scores, suggesting more diffuse CAD in patients without prior CAD. The weaker baseline correlation between FFR and perfusion in patients with prior CAD may be ascribed to a higher occurrence of microvascular disease in this high-risk category. Also, the influence of plaque characteristics on FFR and perfusion cannot be neglected [18,19]. Patients with prior CAD (i.e. prior MI or non-MI PCI) had a numerically lower FFR and perfusion increase. It should be noted that baseline perfusion and FFR of patients without prior CAD were lower, but post-revascularization FFR and perfusion were similar, suggesting a similar post-revascularization potential. Therefore, non-invasive evaluation of myocardial perfusion by PET may play a role in identifying patients who might experience symptom reduction from revascularization therapy. Quantitative myocardial blood flow imaging is nowadays not limited to PET MPI; there are promising results using CZT-SPECT devices. Although there are important differences in tracer kinetics, it is promising that Acampa and colleagues have demonstrated the feasibility of quantifying myocardial perfusion using CZT-SPECT [20]. Furthermore, Mannarino et al.
showed that vessels with > 50% diameter stenosis on invasive angiography and regional perfusion deficits showed a trend towards more vessel-specific adverse events, underlining the potential of functional imaging to guide coronary interventions and to predict revascularization-related outcomes [21].
Integrating anatomical and functional information offers several advantages, whereas using them independently may lead to disregarding valuable information on important aspects of the complex interplay between anatomy and ischemia. Notably, multiple perfusion metrics can be derived from PET. A study by Bom and colleagues demonstrated longitudinal perfusion gradients to reflect diffuse CAD, while Johnson and colleagues suggested integrating hMBF and CFR into coronary flow capacity to more accurately identify ischemia [22]. A serial PET study by de Winter et al. revealed that coronary flow at baseline predicts improvement of perfusion after revascularization therapy [23,24]. Most probably, the abovementioned perfusion metrics are complementary and should all be evaluated in symptomatic patients [25]. Further studies need to exploit these perfusion metrics to advance our understanding of the atherosclerotic spectrum and of the potential future applications of quantitative PET beyond its current role as a mere gatekeeper for the "cathlab".
The influence of scar and risk factors on restoration of perfusion
Restoration of myocardial perfusion following revascularization depends on the alleviation of coronary diameter stenosis. However, myocardial blood flow is regulated by both epicardial coronary flow and microvascular resistance. Traditional risk factors have been shown to induce microvascular disease, a condition that is associated with abnormal myocardial perfusion and requires specific conservative (i.e. non-invasive) treatment [26][27][28]. Although microvascular disease represents a separate entity with unique therapeutic implications, there is a large overlap between epicardial atherosclerosis and microvascular dysfunction [29]. Residual ischemia after successful coronary revascularization could be caused by residual CAD or microvascular dysfunction. We assessed the diffuseness of atherosclerosis by calculating visual segment involvement scores. Prior studies suggested that diffuse, potentially flow-limiting, CAD could be missed by visual assessment [30]. These patients are at risk of being incorrectly classified as patients with microvascular disease, potentially contributing to a significant proportion of contemporary patients diagnosed with microvascular disease [29].
The amount of ischemia that is accounted for by diffuse CAD or microvascular disease is difficult to determine upfront and has been associated with persisting angina and even adverse outcomes. In other words, a successful stent placement is not necessarily commensurate with improved perfusion. Additionally, in a study from our institution, lower stress perfusion was seen in elderly and obese patients, even though focal obstructive CAD had been excluded [31].
In the present study, of the conventional risk factors only a history of smoking was predictive of perfusion increase. This may be attributed to a population with a relatively large atherosclerotic burden, an end-stage disease wherein the impact of traditional risk factors may be nullified: the vascular damage caused by traditional cardiovascular risk factors attenuates the relative contribution of these risk factors. Indeed, studies have shown that prediction models perform better in middle-aged populations than in elderly patients [32]. In addition, risk scores appear to have only modest discriminatory power in high-risk patients with typical angina [33]. The presence and extent of regional scar in patients with a prior MI was associated with an attenuated perfusion increase and independently predictive of an attenuated CFR increase following revascularization. We found slightly stronger correlations between scar and CFR than between scar and hMBF. One explanation is that CFR incorporates rest perfusion; an increased rest perfusion is associated with MACE and related to microvascular dysfunction and epicardial coronary stenosis [34]. It bears mentioning that the scar burden in our population was rather small and that FFR measurements account for scarred myocardium: territories with extensive scar will by definition have a higher FFR and will be less likely to be revascularized [35]. In addition, the use of the [15O]H2O perfusion tracer is less suitable for visualizing non-viable tissue, since MBF is only measured in viable tissue, and this may have affected the present findings. Nevertheless, scar tissue is often heterogeneous and contains islands of viable tissue, which is reflected by [15O]H2O PET as areas of abnormal perfusion. Interestingly, a recent animal study by Grönman and colleagues demonstrated resting MBF to be suitable for the assessment of viability, with a diagnostic value similar to that of measures of perfusable tissue fraction and index [36]. All in all, epicardial atherosclerosis and
coronary microvascular dysfunction are two different entities of the atherosclerotic spectrum with large overlap. Non-invasive quantitative perfusion imaging combined with FFR measurements may provide a comprehensive understanding of the complex interplay between FFR, microvascular function, and their impact on myocardial perfusion. We studied the influence of revascularization on perfusion and FFR restoration but had no information on the post-revascularization symptomatic status; as such, we could not correlate our findings with (post-)revascularization symptoms. Also, the numbers do not allow a prognostic analysis. However, since ischaemia testing and FFR are the backbone of the revascularization strategy, we think these derivatives are of sufficient quality to provide useful information. Further studies are warranted to determine whether a combined approach of pressure and flow measures may further distinguish between patients who may benefit from revascularization and patients in whom pharmacotherapy targeting endothelial and microvascular atherosclerosis is required, in terms of perfusion increase and symptom reduction.
Limitations
This is a post hoc sub-analysis of two prospective studies including a modest number of patients, and some limitations should be addressed. First, although coronary dominance was checked, regional perfusion as assessed by PET was matched with FFR measurements based on standardized coronary anatomy and does not account for individual variations. Second, [15O]H2O was used as the tracer. This tracer has the unique ability to be linearly related to myocardial blood flow, primarily in viable myocardium [37]; therefore, the influence of non-viable tissue (i.e. scar) could be underestimated. Third, although patients with prior MI were included, extensive regional scar was scarce, and results regarding the influence of scar should be considered hypothesis-generating. This might be attributed to the excellent STEMI care in the densely populated Netherlands [38]. Fourth, segment involvement scores were calculated to analyze the diffuseness of atherosclerosis; pressure pull-back curves were not routinely performed during ICA. Finally, patients underwent follow-up PET after a median of 34 days. It has been reported that myocardial perfusion increases further beyond 1 month post-revascularization; therefore, myocardial perfusion might have increased further in our patients [39,40].
Conclusion
Successful coronary revascularization improved FFR and absolute myocardial perfusion in patients without a cardiac history, with prior MI, or with prior non-MI PCI. Post-revascularization FFR and perfusion were similar in patients without a cardiac history, with prior MI, or with non-MI PCI. An increase in FFR was paralleled by improvements in absolute perfusion. In patients with prior infarction, scar burden was associated with an attenuated perfusion increase.
Fig. 1
Fig. 1 Case examples showing the effect of revascularization on FFR and perfusion in patients with and without a prior cardiac history. Abbreviations: FFR, fractional flow reserve; LAD, left anterior
Fig. 2
Fig. 2 Change of regional perfusion and FFR after revascularization. Mean ± SD are displayed. Only revascularized vessels with measurements before and after revascularization were included in this sub-analysis. Abbreviations: CAD, coronary artery disease; CFR, coro-
Fig. 4
Fig. 4 Relationship between changes in FFR and perfusion. Only revascularized vessels with measurements before and after revascularization were included in this sub-analysis. A cardiac history was defined as a history of PCI and/or MI. Analyses for patients with a
Fig. 5
Fig. 5 Relationship between scar and change in FFR and perfusion. The relation between scar and delta FFR (n = 57), delta hMBF (n = 94), and CFR (n = 92), split for patients with prior MI and prior non-MI PCI. Scar is depicted as a percentage of the revascularized
Table 1
Baseline characteristics. Values are mean ± SD, median (interquartile range), or N (%). AP, angina pectoris; ARB, angiotensin II receptor blocker; BMI, body mass index; CAD, coronary artery disease; LVEF, left ventricular ejection fraction; MI, myocardial infarction; PCI, percutaneous coronary intervention
Table 2
Depicts mean ± SD or median (interquartile range) for vessel-specific characteristics. Median (interquartile range) infarct size (LGE) is only given for revascularized territories. CABG, coronary artery bypass grafting; CAD, coronary artery disease; LGE, late gadolinium enhancement; LV, left ventricle
Table 3
Perfusion indices and FFR
Combined Analyses Procedure of Failure Modes and Risk Phenomena Using the Concept of Normal State Conditions
A typical failure or risk analysis procedure contains the following steps: 1) a modeling process, and 2) an assessment process of the hazardous extent. The modeling process establishes the sequence of failures or risks; the assessment process expresses their hazardous extent quantitatively or qualitatively (probability, seriousness of injury, etc.). According to ISO 14120 (risk assessment process), procedures for ranking risks are established. However, there is no logical procedure for the modeling process, and this step still depends heavily on the designer's knowledge or experience of failures and accidents. A logical guideline for the failure modeling process is needed so that novice designers can effectively conduct failure and risk analyses with acceptable workloads. This study aims at proposing a logical failure modeling process based on the SSM (stress-strength model) and normal-state conditions. First, designers build a stress-strength model of the components under consideration and define its "normal condition". Introducing "deviations" into the normal conditions of the stress-strength state and the surrounding environmental conditions allows designers to easily predict the failure modes caused by the proposed deviations. Similar steps are applied when considering the risk phenomena caused by failure modes. A case study assessing the safety of a micro windmill demonstrated the effectiveness of the proposed procedures.
Introduction
Typical failure mode and risk analyses comprise two steps: 1) a modeling process, which sets the hazard and failure scenario, and 2) an assessment process, which quantifies the probability of failures or risks and their degree of damage to systems or humans [1,2]. ISO safety code 14120 and other research have proposed assessment processes with both quantitative and qualitative categorization methods. However, the modeling process, which is critical for a valid assessment, has widely been based on practitioners' experience. If the practitioner has insufficient knowledge, the quality of the failure analysis suffers, because he or she can predict only a few failure modes. Veterans, in turn, tend to focus on their own experience, so other failure modes are sometimes omitted. Neither case produces a sufficient failure analysis. Therefore, logical modeling guidelines for predicting a sufficient set of failure modes are indispensable. This paper proposes a logical modeling procedure for failure modes that combines the SSM (stress-strength model) with patterns of deviations from the normal states of the SSM in order to predict the failure modes caused by those deviations [3][4][5][6][7]. The proposed process is applied to assess the failure modes of a windmill system. The authors call the proposed model the combined failure-risk prediction model.
Combined Failure-Risk Prediction Model from Normal States of SSM
Figure 1 shows the entire flowchart of the FMEA and risk assessment. The process is composed of the following steps.
Specifying Target Systems
1) Decide components: the mechanical or electrical components of the target system should be specified first. A summary of components and structural links is expressed as shown in Figure 2. 2) Decide functions and characteristics: the functions of each element should be specified, such as function names, dimensions of parameters, the definition range of parameters, and connecting conditions to neighboring elements, in order to set the normal conditions of the element. The design characteristics necessary to illustrate the normal functional states of the elements are then identified. The design characteristics involve two components: internal ones, such as material properties or dimensions of the components, and outer ones, defined by environmental conditions such as wind speed.
3) Make a functional deployment diagram (Figure 3): the individual components defined in the previous step should be interconnected to construct structures. Joining links and components are simplified as shown in Figure 3.
4) Make mechanism models (Figure 3): every system has an energy flow from input sources to the output target in order to perform its designed work. The example of a windmill is shown in Figure 3: to transform the kinetic energy of the wind into electricity, the components of the windmill connect with each other in sequence.
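The bookkeeping of steps 1)-4) can be sketched as a small data structure: components with their characteristics, linked along the energy flow from input to output. All component names, characteristics, and link values below are hypothetical illustrations, not taken from the paper's case study.

```python
# Minimal sketch of a functional deployment diagram and mechanism model.
# Component names, characteristics, and links are hypothetical examples.
components = {
    "blade":     {"function": "convert wind to torque", "material": "ABS"},
    "shaft":     {"function": "transmit torque",        "material": "steel"},
    "generator": {"function": "convert torque to electricity"},
}

# Directed links modelling the energy flow of the mechanism model
# (step 4): kinetic energy of wind -> ... -> electricity.
energy_flow = [("wind", "blade"), ("blade", "shaft"),
               ("shaft", "generator"), ("generator", "electricity")]

def flow_path(links, source, sink):
    """Follow the energy-flow links from the input source to the output."""
    path, node = [source], source
    while node != sink:
        node = next(dst for src, dst in links if src == node)
        path.append(node)
    return path

print(" -> ".join(flow_path(energy_flow, "wind", "electricity")))
```

Representing the diagram explicitly like this makes the later deviation analysis mechanical: each component's normal conditions become the entries to which deviation patterns are applied.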
Failure Modes and Effects Analysis
1) Choice of failure modes: in the conventional method, setting failure modes depends heavily on the practitioner's experience [4,5]. Figure 4 shows the logical procedure for identifying failure modes from the normal state of the functions shown in Figure 3. Introducing deviation patterns (plus, minus, inserting other conditions, or deteriorating necessary conditions) into the normal state determines the abnormal modes of the functional model. In this process, a guide-word list such as Table 1 is helpful for setting a sufficient set of deviation patterns. Table 1 includes many failure modes corresponding to excessive loading (plus conditions) or insufficient (minus) conditions. For instance, in the column "tensile", which relates to plastic deformation or cracking of structural components, an unusually high value (+) of tensile loading can lead to a "crack", and unusually continuous loading "∫+" can result in a "creep" damage mode. In the case of the column "heat", + conditions and - conditions can lead to different unusual states, such as "melting" or "solidification". To specify abnormal modes, +/- conditions are first imposed on the normal state (Figure 3). By looking up the same deviations in Table 1, the abnormal mode associated with the introduced deviation can easily be determined. Failure modes are finally named by combining abnormal modes and functional failures, such as "impossible to generate lift force due to deformation".
2) Choice of mechanisms or causes: in order to determine solutions for failure modes, the mechanism of each failure mode and its cause should be analyzed. In the current study, the mechanisms of failure modes in the stress-strength model (SSM) are summarized in two forms: excessive loading (+) or insufficient strength (-). These mechanisms can be prevented by treatment during the design, manufacturing, or maintenance stages; causes of the mechanisms are therefore design, manufacturing, or operating failures, etc. In analyzing failure modes, fault tree analysis (FTA) is very effective.
3) Risk assessment: for the hazard modeling process, a similar procedure is applicable. First, the normal state of service is determined. Using the deviation patterns, the predictable error scenarios and the hazardous conditions caused by failure modes are then determined.
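The guide-word lookup of step 1) can be sketched as follows. The guide-word table fragment and the blade example below are hypothetical stand-ins for the paper's Table 1 and case study, used only to show how deviation patterns on a normal state enumerate candidate failure modes.

```python
# Sketch of deviation-based failure-mode identification: deviation
# patterns (guide words) are applied to each normal-state condition to
# enumerate candidate abnormal modes. The table below is a hypothetical
# fragment standing in for the paper's Table 1.
GUIDE_WORDS = {
    ("tensile", "+"):  "crack",
    ("tensile", "∫+"): "creep",
    ("heat", "+"):     "melting",
    ("heat", "-"):     "solidification",
}

def failure_modes(normal_state):
    """Combine each condition's deviations with the guide-word table and
    name failure modes as 'functional failure due to abnormal mode'."""
    modes = []
    for condition, function in normal_state:
        for (cond, deviation), abnormal in GUIDE_WORDS.items():
            if cond == condition:
                modes.append(
                    f"impossible in {function} due to {abnormal} ({deviation})"
                )
    return modes

# Normal state of a blade: (loading condition, function it supports).
blade_normal = [("tensile", "generating lift force")]
for mode in failure_modes(blade_normal):
    print(mode)
```

Because the enumeration is exhaustive over the guide-word table rather than over a practitioner's memory, a novice analyst obtains the same candidate set as a veteran, which is the point of the proposed procedure.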
Table 2 shows the result of an FMEA using the proposed method. The proposed model is intended to be applied in the conceptual design and manufacturing design stages, i.e., the earlier periods of product development.
Case Study in Failure Modes Modeling for Micro Windmill
The proposed process was applied to the analysis of a micro windmill. The structure and the functional diagram of the micro windmill are shown in Figures 2 and 3, respectively. An undergraduate student of mechanical engineering conducted the failure mode analysis and risk modeling. He had almost no knowledge of FMEA or risk assessment before participating in this study, and he received no financial compensation for participating. First, he performed the failure analysis after taking classes on FMEA and risk analysis and reading a traditional FMEA textbook. Subsequently, he conducted the analysis using the proposed process. Each process took about 5 days to complete the FMEA worksheet. Figures 5 and 6 show the failure modes predicted by both processes. The proposed process yielded more failure modes and causes of failure. Furthermore, the proposed process could determine more specific risk scenarios associated with the failure modes.
The results strongly indicate that the proposed process can effectively support practitioners with little knowledge or experience in conducting FMEA and risk assessments. In product design, shortening the design period is critical for competing in a severe global business environment. However, the shorter the design review becomes, the more latent failures are missed, which yields huge recalls or losses. The current achievement will be helpful for training fresh engineers or improving the quality of design review processes as a proactive prevention technique. The procedure in Figure 1 is to be improved toward a more logical identification of failure modes from various types of abnormal modes in stress, strength, and environmental factors. Even for veterans, the proposed process can be helpful by directing their attention to inexperienced failure modes and risk scenarios, which can reveal additional failure modes in advance.
The proposed model will be most effective within a management system that includes a database of failure modes, design drawings, design review results, etc.; such a database is necessary in order to apply the model. The authors have tested the effectiveness of a web-based database system for design review management [8]. If a product-specific failure mode list is constructed, the proposed model can be applied not only to the case-study product but also to more common products such as automobiles, manufacturing machines, and robots. The specific failure mode list should be prepared so that failure modes can easily be selected for the target products [5]. The authors have also been compiling an effective list of failure modes.
Conclusion
The authors proposed a logical process for failure mode modeling based on patterning deviations from the normal states of the SSM (stress-strength model). The case study certified the effectiveness of the proposed method in predicting more failure modes and risk scenarios with less experience compared with a conventional process.
Figure 1 .
Figure 1. Flow chart of FMEA and risk assessment.
Figure 2 .
Figure 2. Function deployment diagram of a micro windmill. The link model is defined to show the connections among elements.
Figure 3 .
Figure 3. Functional block diagram of the windmill.
Figure 4 .
Figure 4. Concept of failure mode analyses by patterning deviations from normal states of SSM.
Figure 6 .
Figure 6. Comparison of risk assessment results.
Prunus armeniaca gum exudates: An overview on purification, structure, physicochemical properties, and applications
Abstract Prunus armeniaca gum exudate (PAGE) is obtained from the trunk and branches of apricot trees. PAGE is a high-molecular-weight polysaccharide with an arabinogalactan structure. The physicochemical and rheological characteristics of this gum have been investigated in various studies. PAGE offers good potential for use as an emulsifying, binding, and stabilizing agent in the food and pharmaceutical industries. It can also be used as an organic additive in tissue culture media, in the synthesis of metallic nanoparticles, as a binder in tablets, as an antioxidant agent, and as a corrosion inhibitor. Owing to its desirable emulsifying, stabilizing, shelf-life-enhancing, and antioxidant properties, PAGE can be used as an additive in many foods. We present here a comprehensive review of the existing literature on the characterization of this source of polysaccharide, to explore its potential applications in various systems.
| PURIFICATION
Purification is commonly performed as the primary step in characterizing and exploring the potential applications of polysaccharides in food and pharmaceutical systems (Cui, 2005). The elimination of proteins from the structure of gums is important, since it leads to improved thickening ability (Burkus & Temelli, 1998).
Furthermore, it has been reported that the presence of protein in the polysaccharides structure can induce an inflammatory response in tissues that may limit the biological utilization of the polysaccharides (Tučková et al., 2002). Figure 1 presents the scheme for purification of water-soluble polysaccharides.
It is often difficult to extract polysaccharides when oils, fats, and proteins are present; therefore, lipid-soluble substances should be removed first (Nielsen, 2010). Plant materials are usually defatted by dispersing in hot aqueous ethanol (80%-90% (v/v)) and washing the residue with absolute ethanol and acetone or ether (Andersson et al., 2006). Proteins can be removed using the Sevag method (Staub, 1965) or by enzyme-catalyzed hydrolysis. Any solubilized polysaccharides are precipitated by the addition of four volumes of absolute ethanol (to give an ethanol concentration of 75%) to the cooled dispersion (Cui, 2005). The mixture is centrifuged, and the precipitate of water-soluble polysaccharides is dialyzed and freeze-dried (Stephen & Phillips, 2016).
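The dilution arithmetic behind the precipitation step can be sketched as follows. This is a minimal sketch assuming additive volumes (an idealisation; real ethanol-water mixing is slightly non-additive); note that under this assumption, three volumes of absolute ethanol per volume of dispersion give 75% v/v, while four volumes give 80%.

```python
# Sketch of the ethanol-precipitation arithmetic: adding n volumes of
# absolute ethanol to one volume of aqueous dispersion gives a final
# ethanol concentration of n / (n + 1) (v/v), assuming volumes are
# additive (an idealisation).
def final_ethanol_fraction(ethanol_volumes, aqueous_volumes=1.0):
    """Final ethanol concentration (v/v) after mixing."""
    return ethanol_volumes / (ethanol_volumes + aqueous_volumes)

def volumes_for_target(target_fraction):
    """Volumes of absolute ethanol per volume of dispersion needed to
    reach a target v/v ethanol concentration."""
    return target_fraction / (1.0 - target_fraction)

print(f"{final_ethanol_fraction(3):.0%}")  # three volumes of ethanol
print(f"{volumes_for_target(0.80):.1f}")   # volumes needed for 80% v/v
```

The target concentration matters because lower ethanol fractions precipitate only the highest-molecular-weight fractions, so the ratio of ethanol to dispersion effectively sets the cut-off of the precipitation.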
Exudate gums are commonly obtained from the trunks and branches of trees at sites of mechanical and microbial injury (Phillips & Williams, 2000). The selection of an efficient solvent to purify exudate gums differs from species to species. In a recent study by Fathi et al. (2016b), the effects of solvent type (acetone, methanol, and ethanol) and of the centrifugation process on the yield, consistency coefficient (as a measure of total carbohydrate content), and lightness of PAGE were evaluated (Table 1). These authors reported that precipitation of PAGE with acetone after centrifugation produced the gum with the maximum yield (51.3%), consistency coefficient (1.51 Pa s^n), and lightness (74.35). However, the obtained yield of PAGE was less than that reported for Prunus amygdalus gum (58.2%), extracted using 4% H2O2 and 2 N NaOH (Bouaziz et al., 2015). The increase in lightness after purification is probably due to the elimination of coloring agents naturally present in the crude hydrocolloid.
| CHEMICAL AND STRUCTURAL PROPERTIES
Chemical characterization, such as carbohydrate composition and content, is commonly used to evaluate the purity of gums (Chaplin & Kennedy, 1994; Cui, 2005). The carbohydrate content reported by Hamdani et al. (2017) was considerably greater than that reported by Fathi et al. (2016b); the method of gum purification was not reported in the Hamdani et al. (2017) study. The observed differences are most likely due to different environmental growing conditions, time of collection, source, age of the tree, and contamination of the exudate gums (Fathi et al., 2016a). An increased carbohydrate content indicates a high level of purity. Therefore, it seems that PAGE purified by centrifugation followed by acetone precipitation could not sufficiently improve the purity of this gum. However, further analyses, such as gel permeation chromatography, should be conducted to confirm this.
The monosaccharide composition of hydrocolloids is very important because it can affect their rheological and functional properties (Cui et al., 1996). The monosaccharide constituents of PAGE are summarized in Table 2. According to Zitko et al. (1965), PAGE is a polysaccharide containing xylose (Xyl), L-arabinose (L-Ara), and D-galactose. In another study, Fathi et al. (2016b) found that this gum is composed of Ara (41.52%), Gal (23.72%), Xyl (17.82%), Man (14.40%), and rhamnose (Rha; 2.54%). Since arabinose and galactose are the most abundant monosaccharides in the PAGE composition, it is suggested that this gum has a galactoarabinan-like structure. In a recent study, Babken et al. (2018) used 1H and 13C NMR analyses to elucidate the structure of PAGE. They found that this gum is a complex polysaccharide composed of α- and β-L-Arap, α- and β-D-Galp, and α- and β-D-glucopyranoses (β-D-Glcp).
These authors used chemical methods, such as partial acid hydrolysis and periodate oxidation, and proposed a backbone structure for PAGE. This structure was confirmed by Stephen and Shirley (1986), who used the Smith degradation technique and methylation analysis to elucidate the PAGE structure. They showed that PAGE is composed of glucuronic acid, mannose, galactose, and erythronic acid linked through glycolaldehyde.
Uronic acids are anionic components that are both carbonyl and carboxylic acid groups (Linhardt et al., 1991). The presence of these components in polysaccharide composition is an indicator of their acidic nature (Ueno et al., 2019). Acidic polysaccharides tend to interact with oppositely charged macromolecules, and at pHs below their dissociation constant, the carboxyl groups will be dissociated, subsequently making it negatively charged (Sherahi et al., 2018). The negatively charged polysaccharides can be used as a carrier for encapsulation of ingredients (Gbassi & Vandamme, 2012;Liu et al., 2008;Solghi et al., 2020). For instance, in coacervation technique, the electrostatic interaction between negatively charged polymers, for example, PAGE and positively charged polymers/ions, leads to the formation of a coacervate that can entrap the ingredients. Moreover, it has been reported that the charged polymers have greater solubility than neutral ones (Hu & Goff, 2018), demonstrating that the solubility Elements ( Not determined.
of PAGE is greater than that of neutral polymers. However, further experiments should be carried out to confirm this.
Literature reviews (Table 2) show that PAGE has a protein content of 1%-3%, evaluated by the Kjeldahl method. The protein fraction of PAGE should be purified by hydrophobic interaction chromatography, as described previously by Renard et al. (2006), to show that this protein is part of the polysaccharide, as in gum Arabic. The presence of protein in the PAGE composition can also be confirmed by FT-IR analysis (Fathi et al., 2016b). The existence of proteins in the polysaccharide structure has a determinant influence on its physical and functional properties (Choi et al., 2010; Lan et al., 2020). For instance, gums containing proteins, such as gum Arabic, can be used as emulsifiers in food formulations (Salehi et al., 2019). Accordingly, PAGE could be employed as an emulsifier in the food and pharmaceutical industries. The emulsifying capacity of PAGE is discussed in the following sections.
As presented in Table 2, PAGE has 0.5%-4% ash. This value is in the range of ash content of hydrocolloids reported in the literature.
According to the available literature, exudate gums contain a considerable amount of metal ions and neutralized cations (Jamila et al., 2020; Pachuau et al., 2012). These ions can change the physical and functional properties of the gums. For instance, the gelling and viscosifying capacities of gums depend on their mineral composition (Sherahi et al., 2017). The mineral constituents of PAGE were determined by some authors (Fathi et al., 2016b; Jamila et al., 2020), who found inconsistencies in the reported contents (Table 2), which may be due to geographical variation. However, the nutrient values of PAGE were higher than those of most commercial gums (Fadavi et al., 2014; Jahanbin et al., 2012; Mahfoudhi et al., 2012; Mohammad Amini & Razavi, 2012; Yebeyen et al., 2009), and thus this gum can be added to food products to enrich their nutrient value.
The molecular weight of polymers is a key factor in predicting their functional properties (Phillips & Williams, 2000). For example, the gelation- and viscosity-enhancing abilities of hydrocolloids mainly depend on their molecular weight. In general, the viscosity of hydrocolloids increases with their molecular weight (Nakayama, 1999; Liang et al., 2006). Furthermore, it has been reported that polysaccharides with high molecular weight have little tendency to adsorb at the water-air interface, and thus can be employed as stabilizers for protein foams (Martinez et al., 2005). Fathi et al. (2016b) reported that the weight-average molecular weight of PAGE was 5.69 × 10^5 g/mol. This value is higher than those obtained previously by Zitko et al. (1965) (1.92 × 10^5 g/mol), Rosik et al. (1968) (0.92-1.37 × 10^5 g/mol), and Stephen and Shirley (1986) (1.50 × 10^5 g/mol), which probably results from the different determination methods.
| Dilute solution behavior
Evaluation of dilute solution behavior of the hydrocolloids provides data on their fundamental properties (Mays & Hadjichristidis, 1991).
Various factors may affect the dilute solution behavior of biopolymers and consequently change their conformation and molecular properties (Rochefort & Middleman, 1987). The intrinsic viscosity [η] can be computed from the extrapolation of ln η_rel/C to infinite dilution (Kraemer, 1938): ln η_rel/C = [η] + k_k[η]²C. In the Huggins equation (Huggins, 1942), [η] can be quantified from the extrapolation of η_sp/C to zero concentration: η_sp/C = [η] + k_H[η]²C. In the above equations, k_k, k_H, and C are the Kraemer constant, the Huggins constant, and the concentration of the gum, respectively.
In several studies, it has been demonstrated that equations in which [η] is computed from the slopes of plots are more efficient for intrinsic viscosity determination than those obtained from the intercepts of plots (Behrouzian et al., 2014; Hesarinejad et al., 2015; Mirabolhassani et al., 2016; Razavi et al., 2012; Yousefi et al., 2014). Accordingly, the values of relative or specific viscosity were plotted against polymer concentration, and the slope of the plot was used to calculate [η]: Tanglertpaibul-Rao's equation (Tanglertpaibul & Rao, 1987), η_rel = 1 + [η]C, and Higiro's equations (Higiro et al., 2006), η_rel = e^([η]C) (Higiro 1) and η_rel = 1/(1 − [η]C) (Higiro 2). The results revealed that the values of the determination coefficient (R²) for all the models used were more than .89, exhibiting the appropriateness of these equations for determining the intrinsic viscosity at various temperatures. However, Tanglertpaibul-Rao's model had the highest R² and the lowest root mean square error (RMSE), and thus this equation was introduced as the best model to describe the dilute solution properties of PAGE at the tested temperatures (Fathi et al., 2016b). The literature review showed that the intrinsic viscosity decreased as the solution temperature increased. This trend is probably due to the reduced stability of the hydrogen bonding between PAGE macromolecules and water molecules, and to the reinforced interactions between polymer chains. The temperature dependency of viscosity can be described by an Arrhenius-type equation (Draget, 2006): η = A exp(E_a/RT), where A, η, R, E_a, and T are a constant, the dynamic viscosity, the universal gas constant (8.314 kJ/kg mol K), the activation energy (kJ/kg mol), and the absolute temperature (K), respectively.
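The slope-based models above are simple enough to sketch in code. The snippet below fits [η] with each of the three models on synthetic data (the function name and the data are illustrative, not taken from the PAGE studies):

```python
import numpy as np

def intrinsic_viscosity_models(conc, eta_rel):
    """Estimate intrinsic viscosity [eta] from the slope-based models:
    Tanglertpaibul-Rao: eta_rel       = 1 + [eta]*C
    Higiro 1:           ln(eta_rel)   = [eta]*C
    Higiro 2:           1 - 1/eta_rel = [eta]*C
    Each slope is fitted through the origin: b = sum(x*y) / sum(x*x)."""
    conc = np.asarray(conc, dtype=float)
    eta_rel = np.asarray(eta_rel, dtype=float)

    def slope(y):
        return float(np.sum(conc * y) / np.sum(conc * conc))

    return {
        "tanglertpaibul_rao": slope(eta_rel - 1.0),
        "higiro_1": slope(np.log(eta_rel)),
        "higiro_2": slope(1.0 - 1.0 / eta_rel),
    }

# synthetic dilute-solution data generated to follow the
# Tanglertpaibul-Rao model with [eta] = 5 dl/g
C = np.array([0.05, 0.10, 0.15, 0.20])   # g/dl
eta_rel = 1.0 + 5.0 * C
print(intrinsic_viscosity_models(C, eta_rel))
```

In practice, the model with the highest R² and lowest RMSE over replicated measurements would be selected, as Fathi et al. did.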
When the intrinsic viscosity is used instead of the dynamic viscosity, the slope of the resulting plot can be used to determine the chain flexibility of biopolymer macromolecules in solution. The value of E_a/R, known as the chain flexibility factor, and the activation energy of PAGE were 997.3 (1/K) and 0.83 × 10^7 (J/kg mol), respectively. The chain flexibility factor of PAGE is close to that obtained for Balangu seed gum (1,156.53 1/K), exhibiting a similar chain flexibility of their macromolecules. Furthermore, because the activation energy obtained for PAGE is lower than that of most hydrocolloids, such as Balangu seed gum (1 × 10^7 J/kg mol), chitosan (1.5 × 10^7 J/kg mol; Wang & Xu, 1994), and sage seed gum (2.53 × 10^7 J/kg mol; Yousefi et al., 2014), it can be concluded that PAGE has low temperature dependency. Overall, PAGE can be used as a food additive in food processing applications that require high-temperature stability.
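As a minimal sketch of how the chain flexibility factor is obtained, the Arrhenius relation ln[η] = ln A + (E_a/R)(1/T) can be fitted as a straight line in 1/T; the synthetic data below are generated to reproduce the E_a/R of 997.3 (1/K) reported for PAGE (the function and the value of A are assumptions for illustration):

```python
import numpy as np

def chain_flexibility_factor(temps_K, intrinsic_visc):
    """Fit ln[eta] = ln(A) + (Ea/R) * (1/T); return (Ea/R, Ea).
    R is taken as 8314 J/(kg mol K), matching the kJ/kg mol units in the text."""
    R = 8314.0
    slope, _intercept = np.polyfit(1.0 / np.asarray(temps_K, float),
                                   np.log(np.asarray(intrinsic_visc, float)), 1)
    return float(slope), float(slope * R)

# hypothetical intrinsic viscosities constructed with Ea/R = 997.3 1/K
T = np.array([283.15, 293.15, 303.15, 313.15])   # K
eta_int = 2.0 * np.exp(997.3 / T)
ea_over_r, ea = chain_flexibility_factor(T, eta_int)
print(f"Ea/R = {ea_over_r:.1f} 1/K, Ea = {ea:.2e} J/kg mol")
```

With R = 8314 J/(kg mol K), an E_a/R of 997.3 1/K corresponds to E_a ≈ 0.83 × 10^7 J/kg mol, consistent with the value quoted above.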
| Effect of ion type and ion concentration
As mentioned above, PAGE has a polyelectrolyte nature, and thus its macromolecular conformation is expected to change in the presence of added ions.
| Effect of sugar type and sugar concentration
Dilute solution properties of gums, such as viscosity, also depend on sugar concentration (Cui & Wang, 2005). Fathi et al. (2017) evaluated the effect of sugar type (lactose and sucrose) and sugar concentration on the dilute solution properties of PAGE and found that the best model for determining the intrinsic viscosity for all sugar types and concentrations was Higiro 2. When the sugar concentration increased, the intrinsic viscosity showed a decreasing trend, and this effect was more pronounced for lactose than for sucrose. This decreasing effect has been associated with the increased competition between PAGE and sugar for interaction with water as the sugar concentration rises, which reduces the availability of water molecules for interaction with the biopolymer (Durchschlag, 1989).
| Steady-state properties
Steady shear measurements are broadly carried out to evaluate the potential application of gums as thickeners and stabilizers (Williams & Phillips, 2004). Various factors, such as gum concentration, solution temperature, ion and sugar addition, irradiation, and ultrasonic treatment, can affect the viscosifying ability of hydrocolloids (Zendeboodi et al., 2019).
| Effect of gum concentration
The influence of gum concentration on the steady shear behavior of PAGE was evaluated by Fathi et al. (2016b). They recorded the values of shear stress against shear rate in the range of 14-300 s⁻¹. The tests were carried out using a rotational viscometer equipped with a heating circulator and a C25 spindle.
Steady shear evaluation at low shear rates provides useful information on the consistency of products in the mouth (Morris, 1990). On the other hand, the data obtained from steady shear measurements at high shear rates are useful for predicting the behavior of gums in operations such as the pumping of fluids (Anvari et al., 2016). Since the steady shear behavior of PAGE has only been investigated at high shear rates, the resulting data can be used for predicting the behavior of the gum in such food processing operations.
| Effect of solution temperature
Due to the extensive range of temperatures encountered in food operations, the temperature sensitivity of gums should be evaluated (Wu et al., 2015). As reported by Fathi et al. (2016b), the gum solutions at various temperatures (10, 20, 30, and 40°C) showed non-Newtonian shear-thinning behavior. As expected, the authors reported that the flow behavior index increased with solution temperature, indicating a tendency toward Newtonian behavior at high temperatures. In contrast, an increase in solution temperature resulted in a decrease in the consistency coefficient. This decrease can be considered an advantage when the gum is used in high-shear processing operations such as pumping (Fathi et al., 2016b). The decrease in the consistency coefficient (an indication of viscosity) is related to the increase in molecular mobility and, in turn, the decrease in flow resistance (Karazhiyan et al., 2009). With increasing concentration of the PAGE solution, the values of E_a declined, revealing a lower temperature sensitivity at higher solution concentrations.
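The shear-thinning behavior described here is usually quantified with the Ostwald-de Waele (power-law) model, σ = K·γ̇ⁿ, where K is the consistency coefficient and n the flow behavior index. A minimal sketch of fitting it by linear regression in log-log space (synthetic data, not the measured PAGE values):

```python
import numpy as np

def power_law_fit(shear_rate, shear_stress):
    """Fit sigma = K * gamma_dot**n in log-log space; return (K, n)."""
    n, ln_k = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return float(np.exp(ln_k)), float(n)

# synthetic shear-thinning data over the 14-300 1/s range used by Fathi et al.,
# generated with hypothetical K = 2.5 Pa s^n and n = 0.6 (n < 1: shear thinning)
gamma_dot = np.linspace(14.0, 300.0, 20)
sigma = 2.5 * gamma_dot ** 0.6
K, n = power_law_fit(gamma_dot, sigma)
print(f"K = {K:.2f} Pa s^n, n = {n:.2f}")
```

A rise in temperature would show up in such fits as a larger n (closer to Newtonian) and a smaller K, the trend reported above.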
| Dynamic rheological properties
Dynamic rheometry has been employed by many researchers to obtain valuable data on the viscoelastic properties of biopolymers without disrupting their structural elements (Gunasekaran & Ak, 2000). This experiment permits researchers to relate dynamic rheological parameters to the molecular structure of gum solution systems (Choi et al., 2006). Before carrying out a frequency sweep experiment, a strain/stress sweep test must first be done to determine the critical strain.
| Strain and frequency sweep tests
Critical strain is defined as the maximum deformation that a system can withstand without structural collapse. Hamdani et al. (2017) demonstrated that the linear viscoelastic range of the sample extended up to 1 Pa, and hence the frequency sweep test was conducted in this region.
It was found that, over the range of evaluated frequencies, the loss modulus (G″) was always greater than the storage modulus (G′), showing the viscous nature of this gum.
| Effect of gamma irradiation
The effect of various doses of gamma irradiation on the viscoelastic properties of PAGE solution was evaluated by Hamdani et al. (2017). They reported that when the irradiation dose increased from 0 to 2.5 kGy, no change was observed in the viscosity of the sample. However, when the irradiation dose reached 5 kGy, a profound decrease in the viscosity of the PAGE solution was observed. This decreasing trend has been attributed to the reduction of the area swept by molecules at higher irradiation doses, which results in a decrease in viscosity.
| Flow properties
Flow properties such as bulk and tapped densities and the angle of repose are determinant factors for the application of powders in food and pharmaceutical systems (Malsawmtluangi et al., 2014). Hamdani et al. (2017) reported bulk and tapped densities for PAGE of 0.66 and 0.82 g/ml, respectively. The tapped density of PAGE was slightly greater than the values reported for other Rosaceae family gum exudates, such as Prunus dulcis (0.517 g/ml; Farooq et al., 2014). Moreover, the bulk and tapped density values of PAGE increased upon exposure to gamma irradiation (Hamdani et al., 2017).
The Hausner ratio and Carr's index are two parameters that have been broadly employed to estimate the flow behavior of powders (Emery et al., 2009). The Hausner ratio is a measure of interparticle friction, whereas Carr's index is a measure of the potential powder arch or bridge strength and stability (Kumar et al., 2001). It has been reported that powders with poor flow properties have a Hausner ratio >1.25, whereas powders with good flowability have a Hausner ratio lower than 1.25 (Wells, 1988). Accordingly, better flowability is expected for PAGE than for such gums. Carr's index can be used to categorize the flowability of powders: 5-15 (excellent), 15-16 (good), 18-21 (fair), and 23-28 (poor flow properties; Carr, 1965). According to the data described by Hamdani et al. (2017), the value of this parameter for PAGE powder was 19.38%, denoting that PAGE had fair flow characteristics. These authors also indicated that when PAGE was treated with gamma irradiation, Carr's index decreased to 12.74%, showing that the treated samples had excellent flow properties.
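The two indices are simple ratios of the tapped and bulk densities; the sketch below computes them for the PAGE densities reported above (taking the larger reported value, 0.82 g/ml, as the tapped density, which reproduces a Carr's index close to the 19.38% quoted):

```python
def hausner_ratio(bulk, tapped):
    """Hausner ratio = tapped density / bulk density (dimensionless)."""
    return tapped / bulk

def carr_index(bulk, tapped):
    """Carr's (compressibility) index = 100 * (tapped - bulk) / tapped."""
    return 100.0 * (tapped - bulk) / tapped

bulk, tapped = 0.66, 0.82   # g/ml, densities reported for PAGE
print(f"Hausner ratio: {hausner_ratio(bulk, tapped):.2f}")
print(f"Carr's index:  {carr_index(bulk, tapped):.2f} %")
```

The resulting Hausner ratio (~1.24) falls just below the 1.25 threshold for good flowability, and the Carr's index (~19.5%) lands in the "fair" band, matching the classification given above.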
It should be emphasized that powder flowability is a complex phenomenon, and several factors, such as powder properties and the physicochemical characteristics of the system, may affect the flow behavior (Kim et al., 2005). Some of these factors are summarized in Table 3. Particle size is one of the most important factors affecting powder flowability, and increasing it generally improves flowability. Solid particle flow is a complex interaction among particle size, shape, and density.
| Emulsion capacity and stability
Most hydrocolloids are used to control the shelf life of emulsions (Dickinson, 2009). Generally, to test the efficiency of gums as emulsifiers, the gum concentration required to obtain an emulsion with the lowest droplet size is measured (Dickinson, 1988).
A contact angle between 0° and 45° indicates a hydrophilic environment, which increases sedimentation of the dispersed phase (Chen, 1988; Staicopolus, 1962). Chichoyan (2015) measured the contact angle of PAGE aqueous solutions to elucidate the concentration at which the gum can act as a stabilizer. The contact angles of gum solutions with concentrations of 5%-15% were <45°. However, for the higher concentration (20%), the contact angle exceeded 45°, indicating that 20% is the minimal PAGE concentration for colloidal system stabilization.
| Effect of gamma irradiation
The influence of gamma irradiation on the emulsion capacity of PAGE was investigated by Hamdani et al. (2017). The emulsion capacity of PAGE was 24.33%, which increased to 24.75% when the gamma irradiation dose was increased to 5 kGy. The good stabilizing ability of polysaccharides is commonly associated with their high molecular weight and gelation capacity (Liu et al., 2019). Similarly, a slight increase in emulsion stability was also observed with increasing gamma irradiation. The improved emulsion stability and capacity upon exposure to gamma irradiation can be related to the cleavage of glycosidic bonds in the polysaccharides, which increases the exposure of both hydrophilic and hydrophobic groups.
TABLE 3 Some of the parameters affecting the flowability of powders: particle properties, intrinsic factors, and external factors

| Effect of polymer ratio

The influence of the PAGE/apple pectin complex as an emulsifier on nano-emulsion stability was evaluated by Shamsara et al. (2017).
They determined the creaming index, droplet size, and zeta potential and found that the emulsions prepared with the PAGE/apple pectin complex at a ratio of 21.4:1 were the most stable. Their results revealed that PAGE/pectin at a ratio of 21.4:1 (w/w) showed the best creaming stability (CI ≈ 84%) after 10 days.
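Creaming index conventions vary between papers; a common definition, assumed here since Shamsara et al.'s exact formula is not given in this excerpt, expresses the separated-layer height as a percentage of the total emulsion height:

```python
def creaming_index(layer_height, total_height):
    """Creaming index (%) = 100 * separated-layer height / total emulsion
    height, read from a graduated tube after storage (hypothetical values)."""
    return 100.0 * layer_height / total_height

# e.g. a 1.6 cm separated layer in a 10 cm emulsion column
print(f"CI = {creaming_index(1.6, 10.0):.0f}%")
```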
| Effect of pH change
The impact of pH (2, 3, 4, 5, 6, and 7) on the stability of PAGE-lactoglobulin two-layer nano-emulsions was investigated by Shamsara et al. (2015). The lowest and highest particle diameters were observed at pH 4 and 7, respectively. The surface charge of the particles is a key factor that determines emulsion stability. The authors reported that at pH 4, PAGE and lactoglobulin were oppositely charged; thus, this pH was identified as the optimum for achieving an emulsion with the best stability. The influence of pH changes on the emulsifying ability of the PAGE/apple pectin complex was also tested by Shamsara et al. (2017). The authors indicated that when the pH was increased from 2 to 7, the droplet size of the prepared emulsion significantly increased from 763 to 836 nm. Furthermore, over the same pH range, the zeta potential of the fabricated emulsion decreased from −17 to −21 mV.
This effect has been attributed to the presence of arabinogalactan proteins (AP) in the PAGE structure, which was confirmed by the FT-IR and compositional analyses described above. At pH values below the isoelectric point of AP, such as pH 2, parts of the AP carry a positive charge while the whole of the pectin is negatively charged, and the electrostatic interaction between them improves the emulsion.
| Effect of ultrasonic treatment
The impacts of sonication time (0, 5, 10, 15, and 20 min) and amplitude (25%, 50%, 75%, and 100%) on the emulsifying ability of the PAGE/apple pectin complex were analyzed by Shamsara et al. (2017). They found that when long ultrasonic times with high amplitude were employed, an emulsion with the smallest droplet size was obtained.
Comparatively, the droplet size of the ultrasonic-treated emulsion was significantly smaller than that of the control. Additionally, the stability of the treated emulsion over 10 days of storage was greater than that of the control. These authors also demonstrated that there was no profound difference between the control sample and those treated with various ultrasound amplitudes. Overall, they reported that the emulsion treated with 10 min of ultrasonic treatment at an amplitude of 25% yielded optimum results in terms of droplet size, zeta potential, and creaming index.
| Effect of thermal treatment
The effect of temperature on the creaming index, particle size, and zeta potential of emulsions prepared with the PAGE/apple pectin complex was investigated by Shamsara et al. (2017). For this purpose, the prepared emulsions were incubated at 25, 37, 50, 60, and 80°C, and the results showed an increase in the droplet size of the incubated emulsions with increasing temperature. This phenomenon is associated with the increased kinetic mobility of the polysaccharides at higher temperatures, which results in the formation of bigger droplets (Shamsara et al., 2017). Furthermore, the authors observed that the negative charge decreased with increasing temperature. On the other hand, the creaming index of the emulsion system increased as the temperature increased up to 60°C; with a further increase in temperature, the creaming index decreased.

| Antioxidant capacity

Phenolic compounds react with free radicals such as DPPH to produce products with high stability (Salehi et al., 2019). Due to their ability to donate hydrogen and form stable radical intermediates, these compounds can inhibit the oxidation of food products, especially oils and fatty acids (Cuvelier et al., 1992; Maillard et al., 1996). Based on previous studies, there is a positive correlation between the amount of phenolic compounds and the antioxidant activity of vegetables and fruits (Jayaprakasha et al., 2008; Kornsteiner et al., 2006; Li et al., 2009; Martínez et al., 2012). The total phenol content (TPC) increased in the order PAGE < cherry gum < gum Arabic. The positive relation between phenolic compounds and antiradical activity was also confirmed in this study. Accordingly, it is clear that the phenolic compounds present in PAGE are strongly involved in its antioxidant activity as determined by the DPPH assay. Babken et al. (2018) determined the phenolic profile of PAGE using GC-MS analysis and found that PAGE contained, on average, 7.58% catechols, 4.27% hydroquinones, and 5.69% pyrogallol. Hamdani et al. (2018) compared the antioxidant activity of PAGE to that of acacia and karaya gums, using two solvents (ethanol and methanol) to compare the extractability of antioxidants. Their results showed that the TPC of both ethanolic and methanolic extracts of PAGE was higher than those of acacia and karaya gums. Moreover, the TPC of the methanolic extract of PAGE was higher than that of the ethanolic extract, which is related to the variable solubility of phenolic compounds in the tested solvents.
| Pharmaceutical applications
Plant-based gums have very wide applications in pharmaceutical science (Seyedabadi et al., 2020). They are mostly used as suspending, stabilizing, emulsifying, thickening, and binding agents and also as matrices for sustained release of drugs (Efentakis & Koutlis, 2001).
Gums are interesting polymers because they often show unique biological and physicochemical activities at lower cost than synthetic polymers (Prajapati et al., 2013). Azam Khan et al. (2012) investigated the sustained release ability of PAGE and P. domestica gum compared with hydroxypropyl methyl cellulose (HPMC). They showed that when PAGE and P. domestica gum were combined at a 1:1 ratio, the release efficiency was improved, and at the optimum formulation, the release profile was comparable to a standard marketed formulation. These authors suggested that PAGE can be used as a matrix former in tablet formulations. The synergistic binding potential of PAGE and P. domestica gum in tablet formulations was also investigated by Rahim et al. (2015) and Rahim et al. (2018), who reported that these gums had better binding capacity for the preparation of uncoated tablet dosage forms than PVP K30. In a similar study, the binding features of the gums were compared with those of gum Arabic and polyvinylpyrrolidone (Şensoy et al., 2006).
Results illustrated that PAGE is a promising pharmaceutical binder in tablet formulations.
Additionally, in a patent described by Rhodes (1989), it was found that the release rate of an ingredient in a water-soluble matrix may be decreased by changing the arrangement of the ingredients.
| Application in corrosion inhibition
Several studies have examined natural substances as corrosion inhibitors for iron and steel materials in acidic media (Abdallah, 2004; El-Etre, 2003; El-Etre & Abdallah, 2000; Oguzie, 2005). Alwaan and Mahdi (2016) evaluated the effect of PAGE concentration and temperature (17-40°C) on the ability of this gum to inhibit the corrosion of mild steel. They found that adding PAGE to the acid solution (1 M HCl) decreased the weight loss of the mild steel; owing to the carbonyl and hydroxyl groups in its composition, PAGE can adhere to the iron metal. At low temperature (17°C), applying PAGE had no effect on the weight loss of the steel, possibly because of the low solubility of the polymer at low temperature.
| Acting as organic additive in tissue culture media
In order to grow tissue, tissue culture media need a supply of carbohydrates (Kozai, 1991). Commonly, sugars such as sucrose, glucose, and sorbitol are added as carbon sources. To date, various studies have been carried out to evaluate the effect of adding new carbon sources on in vitro callus growth. In a recent study, Khorsha et al. (2016) investigated the usefulness of PAGE as an organic additive for the growth of carrot, stevia, and grapevine. With the incorporation of PAGE in the media, the fresh weight and volume increased and the pigmentation improved. Furthermore, with the addition of PAGE, the shoot multiplication and rooting parameters of stevia and grapevine improved. Considering these positive effects, the application of PAGE in commercial tissue culture protocols is recommended.
| Application to fabricate nanoparticles with biological activity
The synthesis of nanoparticles using natural products, such as gums, resins, and medicinal plants, is a growing area of research, and in recent years there has been an upsurge of interest in using diverse natural products for the synthesis of metallic nanoparticles (Kumar & Yadav, 2009). In a study conducted by Islam et al. (2019), PAGE was used to synthesize gold and silver nanoparticles (Au- and Ag-NPs) with diverse biological activity and high thermal stability. The biosynthesized Au and Ag nanoparticles had diameter ranges of 10-40 and 5-30 nm, respectively. The fabricated particles were mostly spherical, with a small number of anisotropic nanostructures such as nanotriangles. A disk diffusion test demonstrated that the developed nanoparticles had moderate antibacterial activity against Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa. The nanoparticles had greater antimicrobial activity against S. aureus, a gram-positive bacterium, than against E. coli and P. aeruginosa, gram-negative strains. According to the literature, gram-negative bacteria have a thin peptidoglycan layer and outer and inner membranes that protect the bacteria against antimicrobial agents (Ahmad et al., 2020; May & Silhavy, 2017; Narita, 2011; Tortora et al., 2004).
| SUMMARY AND FUTURE TREND
In the present review article, the chemical composition, structure, rheological and functional properties, and potential applications of PAGE as an attractive source of polysaccharide were reviewed.

PAGE is a heterogeneous polysaccharide with an arabinogalactan structure. Due to its specific structure, it can be considered a suitable candidate for fiber formation using the electrospinning technique.

A future study is needed to evaluate the potential application of PAGE in the formation of nanofibers. PAGE has a polyelectrolyte nature and thus can be used to encapsulate phytochemicals using the coacervation method. To date, several studies have focused on the potential capacity of PAGE in food, pharmaceutical, and other industries; however, further studies should be carried out to explore other applications of this gum.
DATA AVAILABILITY STATEMENT
Research data are not shared.
Individual and collective empowerment and associated factors among Brazilian adults: a cross-sectional study
Background: Empowerment in the health field is defined as a process that can facilitate control over the determinants of health of individuals and populations as a way to improve health. The aim of this study was to verify the association of individual and collective empowerment with sociodemographic conditions, lifestyle, health conditions, and quality of life. Method: A cross-sectional analytical study was conducted with 1150 individuals (aged 35 to 44 years). Empowerment was assessed by questions from the Integrated Questionnaire for the Measurement of Social Capital (IQ-MSC). Quality of life was measured using the WHOQOL-Bref (World Health Organization Quality of Life-Bref). Lifestyle and health conditions were obtained from adapted questions of the Fantastic Lifestyle Questionnaire. The DMFT index was incorporated in the health conditions questions. Logistic regression or multinomial regression was performed. Results: The practice of physical activity was related to individual (OR: 2.70) and collective (OR: 1.57) empowerment. Regarding individual empowerment, people with a higher education level (5-11 years, OR: 3.46; ≥12 years, OR: 4.41), who felt more able to deal with stress (OR: 3.76), who presented a high score on quality of life (psychological domain) (OR: 1.23), and who smoked (OR: 1.49) were more likely to feel able to make decisions and participate in community activities. An increase in the DMFT index represented a lower chance of individuals feeling able to make decisions (OR: 0.96). Regarding collective empowerment, being religious (Catholic) (OR: 1.82), not drinking or drinking only a little (OR: 1.66 and 2.28, respectively), and an increased overall quality of life score (OR: 1.08) were associated with a greater likelihood of reporting that people cooperate to solve problems in their community.
Conclusion: The two approaches to empowerment, individual and collective, are connected, and physical activity was shown to be a good strategy for the construction of empowerment.
Background
The Lalonde report [1], published in 1974, is considered a starting point in the worldwide Health Promotion movement. It brought a new understanding of the determinants of health and of the need for action beyond health care: in addition to clinical care, there is a need for interventions in the environment, risk moderation, and a better understanding of the complexity of the individual and their social context.
After this report, the resulting actions regarding investments in lifestyle and self-care brought changes in the 1980s. A better understanding of the social context in which human beings live, and of its influence on behavior change for health, exposed a certain weakness in the expected results [2].
Health promotion is seen as the main strategy for reducing morbidity and early mortality. This strengthens one guideline: the individual and collective empowerment to participate actively in the health-building process. With the need to involve all segments of society, the concept of empowerment is incorporated as a centerpiece of Health Promotion [3].
Historically, the term empowerment has its origin in movements for social justice in the 1960s, such as the mobilization of blacks, women, and homosexuals, all in defense of human rights and social justice [4]. Health empowerment is defined as a process that can facilitate control over the determinants of health as a fundamental strategy for achieving health [3].
Empowerment can occur at two levels: the psychological/ individual and the community/collective. The individual level refers to the greater ability of individuals to feel strong to make choices in their lives. The collective empowerment refers to the capacity of a community, through the participation process, to achieve collectively defined goals [5].
An effective empowerment process requires the involvement of both the individual and collective levels. A community with established social values can influence the lifestyles of its individuals, even among the empowered [6]. Lifestyle is one of the four groups of determinants of health [1]; if addressed in isolation, it does not encompass social and cultural determinants.
While the empowerment process has been significantly associated with health outcomes, including self-care behaviors [7,8], especially when the modifications include a supportive social environment [9,10], it must not be considered a solution in itself and may produce positive or negative results [7,10].
Some studies have been dedicated to the empowerment theme, most of them measuring social capital, since it is considered a domain of empowerment [11]. The dimensions evaluated in these instruments can indicate the capacity to act for one's own benefit or for the community. This ability was observed [12] in those who were classified as having high social capital.
Some authors observed a strong inverse association between collective social capital and dental pain [13], dental caries [14], or tooth loss [15], all relating the results to the power of the community.
However, others observed an inverse association between high social capital and health benefits, such as lower dental care utilization among adults with high social capital [16].
Few studies have evaluated the factors associated with empowerment, that is, those that facilitate or hinder its construction. In a study of young refugees who participated in an integration program for newcomer youth in Canada [17], difficulties in the construction of empowerment were observed. The authors identified that sense of belonging, positive self-identity, emotional well-being and self-determination could enable or restrict the building of individual empowerment. These factors can be found in populations of different ages, but particularly among the marginalized.
This subject is therefore relatively unexplored. To better understand what contributes to the construction of individual and community empowerment, the aim of this study was to verify the association between empowerment and sociodemographic characteristics, lifestyle, health conditions and quality of life.
Methods
This study is part of a project ("The oral health of adults in the Metropolitan Region of Belo Horizonte (MRBH): objective and subjective aspects") started in 2010 and developed in the Public Health Graduate Program at the Faculty of Dentistry of the Universidade Federal de Minas Gerais.
It is a cross-sectional study of adults, male and female, aged 35 to 44 years (the standard age group for the surveillance of oral health conditions in adults [18]), living in the MRBH (32 municipalities).
Belo Horizonte is the capital of the state of Minas Gerais, in southeastern Brazil, and is the sixth largest city and third largest metropolitan area in Brazil [19]. The MRBH is the political, financial, educational and cultural center of Minas Gerais, accounting for around 25 % of the population and 40 % of the economy of the state.
The sample size was calculated to estimate proportions, with a correction for the finite population. The methodology and data collection instruments were tested in a pilot study involving 98 participants of the same age group, randomly selected in one municipality of the region that was not included in the main study.
The pilot study allowed verification of the distribution of adults across the parameters to be investigated and of the items related to empowerment [20]. The frequency of individuals who responded positively to the items was used in the sample size calculation, assuming 80 % power, a 5 % significance level and a design effect of 2.0 to compensate for the reduced variation in the cluster sample. The calculated sample was 758 individuals; a 20 % addition to compensate for possible losses yielded a total of 934 individuals. Although this is a cross-sectional study, losses could occur because individuals might not accept to take part or might not be located at home after three attempts.
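The sample size reasoning described above can be sketched as follows. This is a minimal illustration only: `z = 1.96` corresponds to the 5 % significance level, while the proportion `p` and population size `N` are hypothetical placeholders, since the pilot frequencies and the exact regional population are not reported here.

```python
import math

def sample_size_proportion(p, N, e=0.05, z=1.96, deff=2.0, loss=0.20):
    """Sample size to estimate a proportion p with margin of error e,
    corrected for a finite population of size N, multiplied by the
    design effect (deff) and inflated by an addition for expected losses."""
    n0 = (z ** 2) * p * (1 - p) / e ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)            # finite-population correction
    n *= deff                              # cluster design effect
    n *= (1 + loss)                        # e.g. 20 % addition for losses
    return math.ceil(n)

# Illustrative call with placeholder values (not the study's pilot data):
print(sample_size_proportion(p=0.5, N=1_000_000))
```

With `deff = 1` and no loss inflation, the same function reproduces the classic n of about 385 for p = 0.5 at a 5 % margin of error.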
For sample selection, the total number of inhabitants in each municipality was considered, grouped according to quartiles of population size. Two municipalities in each quartile were randomly selected. Cluster sampling was used to select blocks in municipalities with up to 50 thousand inhabitants (Cluster I) and census tracts and blocks in municipalities with over 50 thousand inhabitants (Cluster II). The number of adults examined was proportional to the population of each municipality. All residences located in the randomly selected blocks were visited.
Data were collected through a structured questionnaire administered as interviews and through oral health evaluations in the participants' households between May and December 2010. The oral health evaluations were conducted under natural light using mirrors, dental probes and wooden spatulas. The condition of the tooth crown was recorded according to WHO criteria [18], excluding third molars. This was the only clinical examination datum included in the "health conditions" variable group.
For the oral health evaluations, an expert performed the theoretical calibration of five researchers, presenting photographs of the clinical conditions to be studied. Agreement on the clinical examination was tested with 12 volunteers at a dental clinic of a teaching institution, yielding interobserver kappa values ranging from 0.81 to 0.92 and intraobserver values between 0.80 and 1.00.
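Examiner agreement of this kind is typically quantified with Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch of the computation follows; the crown-condition codes below are invented for illustration and are not the study's calibration data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning categorical codes to the same subjects."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes for six teeth rated by two examiners:
a = ["sound", "decayed", "filled", "sound", "decayed", "sound"]
b = ["sound", "decayed", "filled", "sound", "filled", "sound"]
print(cohens_kappa(a, b))  # about 0.74
```

Values above 0.80, as reported in the study, are conventionally interpreted as almost perfect agreement.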
The dependent variables for the analyses were individual empowerment and collective empowerment, built from items of the Integrated Questionnaire for the Measurement of Social Capital (SC-IQ) [20]. The individual empowerment variable was created from the combination of two questions from the empowerment and political action dimension: "In the last 12 months, have you or someone in your household participated in any community activity in which people gathered to do some work for the benefit of the community?" (yes; no), and "Do you feel you have the power to make decisions that could change the course of your life?" (totally unable; neither able nor unable; fully capable).
From the combination of the answers to these two questions, four categories were established for this variable: 1. did not participate in community activities and felt unable to make decisions; 2. participated in community activities and felt unable to make decisions; 3. did not participate in community activities and felt able to make decisions; 4. participated in community activities and felt able to make decisions. Thirty-seven individuals (3.6 %) in category two were excluded from the data analyses, since keeping this category in the bivariate and multivariate analyses resulted in very imprecise estimates. Another design, with a sufficient sample of individuals who participate in community activities yet feel unable to make decisions, would be needed to include this group.
Collective empowerment was assessed through a question from the collective action and cooperation scale: "If there was a problem of water supply in this community, what is the probability that people would cooperate to solve the problem?" This question was selected because water supply is a common problem in smaller municipalities and an issue that troubles the population. The answer options were grouped into two categories: 1. unlikely/neither likely nor unlikely; 2. likely.
The intention was to measure the empowerment of the community in which the individuals live.
The independent variables considered were: sociodemographic characteristics: sex (male or female), age (35 to 39 or 40 to 44 years), total years of education (up to 4, 5 to 11, or 12 and more years), ethnicity (black/yellow/indigenous/brown or white), marital status (with or without companion), religion (no religion, Catholic, or other: Protestant/evangelical/spiritualist/Jehovah's Witness), per capita income and time residing at the current location; lifestyle: physical activity (very, little or no), smoking (very, little or no) and drinking (very, little or no); health conditions: general health perception (very bad/poor, fair, very good/good), health problem that causes pain (yes or no), ability to handle the stress of everyday life (very, little or no), ability to relax and enjoy leisure (very, little or no), presence of toothache in the last three months (yes or no) and the mean DMFT Index (number of decayed, missing and filled permanent teeth).
Besides these data, quality of life was measured using the WHOQOL-Bref (World Health Organization Quality of Life, brief version) with 26 items, validated in Brazil [21]: two general questions and 24 questions assessing the physical, psychological, social and environmental domains. The quality of life scores were computed on a 4-24 point range, with higher scores indicating better quality of life. Fifty-two participants who answered fewer than 21 WHOQOL questions were excluded from the analyses, following the guidelines for the application and analysis of the instrument. The lifestyle variables related to stress and the ability to relax were obtained from questions adapted from the Fantastic Lifestyle Questionnaire [22].
The variables per capita income, time living in the same place, DMFT Index and quality of life scores were included in the analyses as quantitative variables.
Statistical analyses were conducted using Stata Statistical Software (StataCorp LP). Initially, descriptive analyses were performed to obtain means, standard deviations and proportions. Univariate analyses were then performed to identify factors associated with individual and collective empowerment, followed by logistic regression and multinomial regression. For the final model, only variables associated with empowerment at p < 0.25 were included. All analyses were performed with design effect correction. The svyset command in Stata was used to analyze the complex sample data, considering the sampling levels and sample weights (svyset sector [pweight = pesoamostral] || block). All further analyses were conducted using the svy command.
Results
Regarding lifestyle, the practice of physical activity was still restricted (61.95 % reported none), and the consumption of alcoholic beverages (61.13 % none) and smoking (79.1 % none) were low. In the questions related to health conditions, participants rated their own health as very bad/poor or fair (68.49 %) and reported health problems that caused pain (40.46 %) and toothache in the last six months (24.10 %). Almost fifty-seven percent (56.65 %) of the participants declared themselves capable of relaxing, and 62.97 % of handling daily stress. The mean DMFT Index was 16.91 (SE = 0.27) (Table 1). Regarding individual and collective empowerment, most adults reported that, even though they felt able to make decisions to change the course of their lives, they did not participate in community activities (59.82 %); 62.91 % reported that the community would probably collaborate on the water supply problem (Table 2).
The results of the descriptive analyses of the independent variables and of individual and collective empowerment, with the deff associated with each estimate, are shown in Table 3. Regarding the choice of deff = 2, the representativeness of the sample was confirmed, as the sample was sufficient for all variables, with a minor shortfall only for the "relax and enjoy leisure time" variable (deff = 2.21).
In the bivariate analyses, the variables associated with individual empowerment at p < 0.25 were: sociodemographic (years of schooling, time living in the same place, religion), lifestyle (physical activity, smoking, alcohol consumption), health conditions (perceived health, ability to cope with stress, ability to relax and enjoy leisure), quality of life (physical, social, environmental, psychological and overall) and the DMFT Index.
The multiple analyses (Table 4) demonstrated that individual empowerment, in the two categories used (did not participate/felt capable; participated/felt capable), was positively associated with higher education, plenty of physical activity, smoking, greater ability to handle daily stress, quality of life (psychological domain) and the DMFT Index (unadjusted analysis only). The higher the DMFT, the lower the chance of feeling capable of making decisions to change the course of life.
The same factors, except the DMFT Index, were associated with greater participation in community activities combined with the feeling of being able to make decisions that change the course of life. However, the OR values were higher, indicating an additional association of the independent variables with participation in community activities (Table 4).
The variables associated with collective empowerment (p < 0.25) in the univariate analyses were skin color, marital status, religion, physical activity, alcohol consumption, general health perception, and the overall, social and environmental domains of quality of life.
In the multiple model, the variables that remained significantly associated with collective empowerment were: being Catholic, engaging in plenty of physical activity, consuming little or no alcoholic beverages, and higher overall quality of life (Table 5).
Discussion
Measuring the empowerment of individuals and communities is challenging. The use of a questionnaire as the measurement tool was considered feasible and was the choice of this study. The municipality of Belo Horizonte itself was excluded from the study sample because of its size (approximately 2.5 million inhabitants) and the peculiarities of a large urban center, since it shows the highest social indicators compared with the other municipalities in the region [19].
Since sample selection was made according to population size (two municipalities in each quartile), the 32 municipalities of this study included municipalities with fewer than nine thousand inhabitants (quartile 1) and municipalities with more than seventy thousand (quartile 4).
The demographic characteristics of our sample are representative of the Brazilian population standard [11,23], not only in the range of population sizes but also in aspects such as the high percentage of women (66.63 %), low education among adults (91.73 % with less than 12 years of schooling), the minority of White people (27.25 %), living with a partner (71.49 %) and an average per capita income below $250 (Table 1).
The study population can be considered residentially stable, with an average of 14.8 (SE = 0.70) years living in the same location [19]. This is a considerable time that probably allowed the population to bond socially, which must be considered when analyzing empowerment, especially collective empowerment.
The participants considered their health poor/very poor or fair (68.49 %), which could be explained by the high presence of health problems causing pain (40.46 %) and dental pain (24.10 %) in the last six months. Signs and symptoms are significant in the self-evaluation of health.
The results for participation in community activities and for the variable on being able to decide about one's own life show that 77.83 % of the participants felt empowered to solve their personal issues, but for only 18.03 % did this empowerment translate into involvement with the community. However, when a practical issue was raised (lack of water supply), common to several Brazilian municipalities, the participants indicated the possibility of considerable community participation (62.91 %). Table 6 summarizes the significant associations observed.
Individual and collective empowerment and their associations
Among the sociodemographic variables used in this study, after the multivariate analyses, education was associated with individual empowerment and religion with collective empowerment. Individuals with more years of schooling (5-11 years and 12 or more years) had a higher chance of feeling able to make decisions than those with less schooling. Education influences cognitive resources, the understanding of information, and knowledge [24]. The Catholic religion was associated with collective empowerment. In the literature, studies have associated religion with individual empowerment: for behaviors of disease prevention and cure, religion can influence values, lifestyle, cognitions, emotions and behaviors. The power of faith has been shown to be an important facilitator of individual empowerment; for collective empowerment, however, the limitations and potential negative influences of religion in community settings are debated. Beliefs and values may negatively affect decision-making power, but religion has a strong capacity to mobilize human and institutional resources, positively or negatively. The association between empowerment and religion may be related more to group involvement than to the choice of religious belief [25][26][27].
Regarding lifestyle, physical activity was associated with both individual and collective empowerment. Although a policy guideline of the Brazilian Ministry of Health created the Health Fitness Program (Programa Academia da Saúde) in 2011 [28], the practice of physical activity among adults is not an established habit. In this study, 61.95 % reported no physical activity at work or as a sport; it is a common practice for only 18 % of this population. Men reported being more active, with 54.2 % practicing physical activity, compared with 29.9 % of women (p < 0.001).
The municipalities encourage physical activity by installing the Health Fitness Program in organized public spaces, and all municipalities included in the study had one or more centers for physical activity at the time of data collection (http://www.mg.gov.br). Even so, physical activity is not a routine practice for the predominant gender in this sample (female), which could be explained by culture, poverty, the social construction of gender or biological determinism [29].
The smoking habit was associated with individual empowerment. Non-smokers (79.42 %) were less likely to participate in community activities and to feel able to make important decisions for their lives or their community. The survey PETab: Brazil Report, published in 2011 by the National Cancer Institute (INCA) in partnership with the Pan American Health Organization (PAHO), reported a significant decline in smoking in Brazil (from 34.8 % in 1989 to 18.2 % in 2008) and a lower percentage of smokers among women, the predominant gender in our sample. Furthermore, there is population awareness about smoking in Brazil, with decreasing social acceptability of smokers [30]. Tobacco control has raised among smokers a concern about the degree of harm that may be caused to passive smokers [31][32][33]. Anti-smoking norms influence social relationships. According to the Brazil report [30], the decrease in smoking is smaller in population groups with lower income and schooling, similar to this study population. The association observed in this study may be related to the psycho-emotional sensations caused by the habit: according to participants in smoking cessation activities, smoking calms, distracts, gives pleasure, relieves sadness and decreases anxiety. This makes the association of smoking with individual empowerment, and the absence of an association with collective empowerment, understandable [34].
Little or no consumption of alcoholic beverages (94.26 %) was associated with collective empowerment. Alcoholic beverage use has been more strongly associated with disarray and violent behavior than with sociability or social support [35,36].
Regarding the health conditions variables, only the ability to handle stress (62.97 %) was associated with individual empowerment, both for those who participated in community activities and for those who did not. Studies have demonstrated an association between stress and daily work [37,38] and the need to know how to deal with it, diminishing its negative effects on health and quality of life. A quality of life measure was also common to individual and collective empowerment. At the individual level, there was a significant association only with the psychological domain, related to aspects such as self-esteem, body image/appearance and feelings, among others. At the collective level, an association with the overall WHOQOL index and all of its domains (physical, psychological, social and environmental) was observed. A study of institutionalized elderly receiving nursing care demonstrated an association between quality of life and the empowerment perceived by this group [39].
[Table notes: a Loss of 98 individuals: 52 excluded for answering fewer than 21 WHOQOL questions, 37 who reported participating in community activities while feeling unable to make decisions that change the course of life, and 9 adults who did not answer the questions composing the individual empowerment variable. b Loss of 67 individuals: 52 excluded for answering fewer than 21 WHOQOL questions and 15 adults who did not answer the question composing the collective empowerment variable.]
The major limitation of this study is its cross-sectional design, measuring subjective questions with quantitative methods. Since this is an underexplored theme, however, generating hypotheses can bring significant value to new evaluations and to guidance on empowerment.
Conclusions
This study raised several hypotheses about community empowerment. The data presented and the associations observed lead us to some reflections directly related to building individual and collective empowerment.
The existence of an association between empowerment and physical activity, whether at the individual or collective level, suggests that this health promotion practice should be seen as a valuable ally in the construction of empowerment.
A behaviorist approach that demonizes sedentary behavior, blames the individual and presents physical activity solely as a way to reduce the epidemiological burden [6] can drive individuals away from this habit, to the detriment of building health empowerment. Approaches that consider that physical activity can bring benefits to individuals and communities are preferable. Individual empowerment was also associated with higher educational level, the capacity to handle stress and the psychological domain of the quality of life index. These are plausible associations, since issues such as self-esteem and well-being are first conditions for individual action.
Collective empowerment was associated with religion, no consumption of alcoholic beverages and good quality of life. The lack of an association between collective empowerment and higher educational level is a point of concern regarding the empowerment of individuals and communities. Education, in Pedro Demo's words, "is the incubator of citizenship". This is a known association and one of the critical points of empowerment.
The issue of smoking and its association with individual empowerment needs further reflection. Despite government initiatives to control this behavior, tobacco companies invest heavily in advertising that, in a way, offers power, pleasure and happiness. This publicity is uneven, with higher investment targeting more vulnerable people [4], disseminating a false power. The fact that smoking was associated with empowerment in this study confirms the false power felt through the consumption of a drug, an unhealthy habit.
The two approaches to empowerment, individual and collective, were connected, and physical activity proved to be a good strategy for building empowerment. This reinforces the need to understand individuals in their social context and the weakness of expecting changes in behavior and lifestyle from efforts aimed solely at changing behavior. This aspect is fundamental in the training of individuals and communities.
|
v3-fos-license
|
2019-06-15T13:07:31.470Z
|
2019-06-13T00:00:00.000
|
189817226
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-019-6858-2",
"pdf_hash": "9a20861eaeeee0c1e024d26e91b986129999ec80",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2704",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "9a20861eaeeee0c1e024d26e91b986129999ec80",
"year": 2019
}
|
pes2o/s2orc
|
A four-decade analysis of the incidence trends, sociodemographic and clinical characteristics of inflammatory bowel disease patients at single tertiary centre, Kuala Lumpur, Malaysia
Background Inflammatory bowel disease (IBD) was once considered as a Western disease. However, recent epidemiological data showed an emerging trend of IBD cases in the Eastern Asia countries. Clinico-epidemiological data of IBD in Malaysia is scarce. This study aimed to address this issue. Methods Retrospective analysis of ulcerative colitis (UC) and Crohn’s disease (CD), diagnosed from January 1980 till June 2018 was conducted at our centre. Results A total of 413 IBD patients (281 UC, 132 CD) were identified. Mean crude incidence of IBD has increased steadily over the first three decades: 0.36 (1980–1989), 0.48 (1990–1999) and 0.63 per 100,000 person-years (2000–2009). In the 2010 to 2018 period, the mean crude incidence has doubled to 1.46 per 100,000 person-years. There was a significant rise in the incidence of CD, as depicted by reducing UC:CD ratio: 5:1 (1980–1989), 5:1 (1990–1999), 1.9:1 (2000–2009) and 1.7:1 (2010–2018). The prevalence rate of IBD, UC and CD, respectively were 23.0, 15.67 and 7.36 per 100,000 persons. Of all IBD patients, 61.5% (n = 254) were males. When stratified according to ethnic group, the highest prevalence of IBD was among the Indians: 73.4 per 100,000 persons, followed by Malays: 24.8 per 100,000 persons and Chinese: 14.6 per 100,000 persons. The mean age of diagnosis was 41.2 years for UC and 27.4 years for CD. Majority were non-smokers (UC: 76.9%, CD: 70.5%). The diseases were classified as follows: UC; proctitis (9.2%), left-sided colitis (50.2%) and extensive colitis (40.6%), CD; isolated ileal (22.7%), colonic (28.8%), ileocolonic (47.7%) and upper gastrointestinal (0.8%). 12.9% of CD patients had concurrent perianal disease. Extra intestinal manifestations were observed more in CD (53.8%) as compared to UC (12%). Dysplasia and malignancy, on the other hand, occurred more in UC (4.3%, n = 12) than in CD (0.8%, n = 1). Over one quarter (27.3%) of CD patients and 3.6% of UC patients received biologic therapy. 
Conclusion The incidence of IBD is rising in Malaysia, especially in the last decade. This might be associated with urbanization and changing diets. Awareness of this emerging disease among the public and clinicians in Malaysia is important for timely detection and management.
Background
Inflammatory bowel disease is a chronic condition characterized by relapsing and remitting inflammation of the gastrointestinal (GI) tract. It encompasses Crohn's disease (CD) [1], which can affect any segment of the GI tract, and ulcerative colitis (UC), which involves exclusively the rectum and colon. Although UC and CD share a number of similar clinical features, each has distinct intestinal manifestations [2]. Patients with UC and colonic CD most commonly present with chronic diarrhoea and per rectal bleeding, accompanied by abdominal pain. On the other hand, ileocolonic CD mainly manifests as abdominal pain localized at the periumbilical region or right lower quadrant, with or without watery diarrhoea. Vague abdominal pain might be the only symptom of small bowel CD, although more extensive small bowel involvement causes postprandial abdominal pain, nausea, vomiting and watery diarrhoea. In contrast to UC, perianal disease such as perianal abscess, fistula and fissure can occur in CD [3]. Extra-intestinal features of IBD include fever, weight loss, arthralgia, mucocutaneous lesions such as oral ulcers, erythema nodosum and pyoderma gangrenosum, and ophthalmologic complications such as episcleritis, iritis and uveitis [3].
At present, there is no cure for IBD, and management is therefore aimed at the induction and maintenance of disease remission. Owing to its chronicity, IBD can result in significant long-term morbidity, impairment of patients' health-related quality of life and excess health care resource use. A study by Graff et al. revealed that IBD patients with active disease had higher levels of distress, health anxiety and perceived stress, lower social support and poorer disease-specific quality of life compared with those with inactive disease [4]. Longobardi et al. examined the health care resource utilization of patients captured in the University of Manitoba IBD database. They reported that IBD patients, compared with healthy controls, were more likely to have an outpatient visit (RR, 1.18; CI, 1.17-1.19) and an overnight hospital stay (RR, 2.32; CI, 2.16-2.49) [5]. Examining the financial burden of the disease in Canada, the country with the highest prevalence and incidence rates of IBD in the world, Rocchi et al. documented an estimated total cost of $2.8 billion in 2012 ($12,000 per IBD patient), with direct medical costs exceeding $1.2 billion and indirect costs dominated by long-term work losses of $979 million [6].
IBD was once considered a Western disease. Based on a systematic review in 2012, the highest annual incidence of IBD was recorded in Europe (UC: 24.3 per 100,000 person-years, CD: 12.7 per 100,000 person-years), followed by North America (UC: 19.2 per 100,000 person-years, CD: 20.2 per 100,000 person-years) and Asia plus the Middle East (UC: 6.3 per 100,000 person-years, CD: 5.0 per 100,000 person-years) [7]. While IBD incidence rates in Western countries have remained relatively stable or increased steadily over time, there has been a rapid rise in Asian countries. For instance, a population-based study from South Korea showed a 10-fold increase in the incidence of IBD over two decades (UC: 0.34 to 3.08 per 100,000 person-years, CD: 0.05 to 1.34 per 100,000 person-years) [7]. This epidemiological shift is likely caused by urbanization and a changing dietary pattern towards a Western diet, together with increased disease awareness and improved diagnostic tools [8]. Additionally, regularly dining out and high use of food flavourings and preservatives are among the risk factors for developing colorectal cancer among Malaysians, which is one of the long-term complications of IBD [9].
Locally, IBD is perceived as a rare disease, and its incidence, clinico-epidemiological and sociodemographic data in Malaysia are therefore scarce. Malaysia's annual incidence of IBD has been reported as 0.94 per 100,000 person-years [10]. The first Malaysian study on the incidence and prevalence of IBD, by Hilmi et al., published in 2015, revealed a crude incidence of IBD of 0.68 per 100,000 person-years. In addition, the trend of IBD incidence was increasing over the past two decades (0.07 to 0.69 per 100,000 person-years), being highest among the Indians (1.91 per 100,000 person-years) [11]. A more recent study from Southern Peninsular Malaysia (state of Johor), published in 2018 by Pang et al., showed a comparable result, with a crude incidence of IBD of 0.68 per 100,000 person-years (UC: 0.27 per 100,000 person-years, CD: 0.36 per 100,000 person-years) [12].
The aim of this study was to determine the time trends of the incidence of IBD over the last four decades at a tertiary referral hospital, Universiti Kebangsaan Malaysia Medical Centre (UKMMC), Kuala Lumpur, Malaysia. This study also examined the sociodemographic and clinical characteristics of this IBD cohort. We hypothesized that there was an increasing trend in IBD incidence at our centre over the last four decades, reflecting the overall incidence in Malaysia.
Study design and data collection
We performed a retrospective analysis of all IBD patients who were treated under the gastroenterology and colorectal surgery units of UKMMC from January 1980 to July 2018. Data were collected from the UKMMC IBD registry, patients' medical records, the hospital online information system and during follow-up reviews. The UKMMC IBD registry is a prospectively maintained database initiated in 2013; all data prior to 2013 were collected retrospectively and added to the registry. It aims to capture all relevant information on IBD patients treated in UKMMC, including sociodemographic details (age at diagnosis, gender, ethnicity, smoking status, education level and family history), disease characteristics (symptoms, Montreal classification, presence of extra-intestinal manifestations and disease complications), investigation results (blood tests, stool tests, radiology, endoscopy and histology) and treatment modalities (medical and surgical). The collected data were stored in an electronic spreadsheet and managed by the gastroenterology team; the registry is kept confidential, is not accessible to the public and is updated regularly every 1-3 months. Quality control of the database was maintained by random checks performed by two independent medical staff and further validated by the Head of the Gastroenterology Unit. UKMMC is one of the four university teaching hospitals in Malaysia and is located in Cheras, Kuala Lumpur. Kuala Lumpur, the capital of Malaysia, covers an area of 243 km2 and had an estimated population of 1.79 million in 2017, with a population density of 7670 people per km2 of land area. This tertiary hospital was founded in 1997, has 36,000 admissions per year and covers an urban multi-racial population in the Klang Valley.
Malay and Chinese are the two major ethnic groups in Kuala Lumpur (47.2 and 41.4%, respectively), followed by Indian (10.2%) and others (1.2%) [13]. UKMMC provides a full gastroenterology service, including inpatient, outpatient and endoscopy services.
Diagnosis of inflammatory bowel disease
Diagnosis of IBD requires combined assessment of clinical signs and symptoms; blood tests (such as haemoglobin, platelet count, and the inflammatory markers erythrocyte sedimentation rate and C-reactive protein); endoscopic findings; histopathological findings; and relevant imaging such as computed tomography of the abdomen/pelvis and magnetic resonance imaging (MRI) of the small bowel and/or pelvis. All IBD diagnoses were made by UKMMC gastroenterologists after considering all the available diagnostic information. Any patients who did not meet the criteria for IBD were excluded from the analysis. Using the Montreal classification, UC was classified according to disease location, while CD was classified according to both disease location and disease behaviour.
Incidence trend
The incidence trend of IBD, UC and CD was determined by comparing their mean crude incidence in each of the last four decades, i.e. 1980-1989, 1990-1999, 2000-2009 and 2010-2018. Population data (together with the average annual population growth rate) of Kuala Lumpur were obtained from the Department of Statistics, Malaysia and used as the denominator. The mean crude incidence was expressed as number of cases per 100,000 person-years.
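The mean crude incidence described above is a simple ratio of new cases to accumulated person-years. A minimal sketch of that arithmetic follows; the case counts and populations below are hypothetical placeholders, not the study's figures (which came from the Department of Statistics, Malaysia), and the study itself used SPSS rather than Python.

```python
def mean_crude_incidence(new_cases, mean_population, years):
    """Mean crude incidence per 100,000 person-years over a period."""
    person_years = mean_population * years
    return new_cases / person_years * 100_000

# Hypothetical inputs for illustration only.
decades = {
    "1980-1989": (12, 1_000_000, 10),
    "1990-1999": (35, 1_200_000, 10),
    "2000-2009": (90, 1_500_000, 10),
    "2010-2018": (180, 1_750_000, 8.5),  # partial decade (Jan 2010 - Jul 2018)
}

for label, (cases, pop, yrs) in decades.items():
    print(label, round(mean_crude_incidence(cases, pop, yrs), 3))
```

The denominator is the mean population over the interval multiplied by its length, so the partial final interval (2010 to July 2018) contributes fewer person-years than a full decade.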
Prevalence
The prevalence of IBD, UC and CD was calculated based on the whole Kuala Lumpur population in 2018 and expressed as number of cases per 100,000 persons. Population data of Kuala Lumpur stratified by ethnicity (Malay, Chinese and Indian) in 2018 were also obtained from Department of Statistics, Malaysia. These data were used to calculate the prevalence of IBD stratified by ethnicity.
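The ethnicity-stratified prevalence is the same kind of ratio, with cases and the 2018 population both restricted to one ethnic group. A sketch, with invented counts and populations purely for illustration (the real denominators came from the Department of Statistics, Malaysia):

```python
def prevalence_per_100k(cases, population):
    """Point prevalence per 100,000 persons."""
    return cases / population * 100_000

# Hypothetical 2018 figures for illustration only.
population_2018 = {"Malay": 850_000, "Chinese": 745_000, "Indian": 184_000}
cases_2018 = {"Malay": 120, "Chinese": 60, "Indian": 130}

for ethnicity in population_2018:
    rate = prevalence_per_100k(cases_2018[ethnicity], population_2018[ethnicity])
    print(f"{ethnicity}: {rate:.2f} per 100,000")
```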
Statistical analysis
The data were compiled and analysed using IBM SPSS Statistics version 24.0 (IBM Corporation, New York, USA). The continuous variables collected were age at diagnosis and duration of disease. The majority of data were summarized into categorical variables, which included gender, ethnicity, smoking status, education level, family history positivity, major comorbidity, duration of disease, age group at diagnosis, disease location and behaviour, presence of extra-intestinal manifestations and disease complications, treatments received and type of surgery. Continuous variables with a parametric distribution were presented as mean and standard deviation. Categorical variables were presented as absolute values and percentages. Pearson's chi-square test and one-way ANOVA were used for the analysis of clinical characteristics. The duration of disease of IBD patients was stratified as follows: less than 5 years (< 5) was labelled short disease duration; 5 to 10 years (5-10) was considered long disease duration; and more than 10 years (> 10) was labelled very long disease duration. Age at IBD diagnosis was classified as: adolescence if less than 19 years old; young adults, 19 to 35 years old; middle-aged adults, 36 to 55 years old; and older adults, those above 56 years old.
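The stratification and tests above were run in SPSS; as a hedged illustration, the same binning and tests can be sketched in Python. The age and duration bands follow the definitions in the text, while the sample data, seed and column names are invented for this example.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Invented sample data for illustration only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "diagnosis": rng.choice(["UC", "CD"], size=200),
    "age_at_dx": rng.integers(12, 80, size=200),
    "duration_years": rng.uniform(0.1, 30, size=200),
})

# Age at diagnosis: adolescence <19, young adults 19-35,
# middle-aged adults 36-55, older adults above that.
df["age_group"] = pd.cut(
    df["age_at_dx"],
    bins=[0, 18, 35, 55, np.inf],
    labels=["adolescence", "young", "middle-aged", "older"],
)

# Disease duration: short <5, long 5-10, very long >10 years.
df["duration_group"] = pd.cut(
    df["duration_years"], bins=[0, 5, 10, np.inf],
    labels=["short", "long", "very long"],
)

# Pearson's chi-square for categorical comparisons (e.g. age group vs diagnosis).
table = pd.crosstab(df["age_group"], df["diagnosis"])
chi2, p, dof, _ = stats.chi2_contingency(table)

# One-way ANOVA for a continuous variable across groups.
groups = [g["age_at_dx"].to_numpy()
          for _, g in df.groupby("duration_group", observed=True)]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"chi2={chi2:.2f} (p={p:.3f}), ANOVA F={f_stat:.2f} (p={p_anova:.3f})")
```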
Prevalence of IBD
The prevalence rates of IBD, UC and CD were 23.0, 15.67 and 7.36 per 100,000 persons, respectively. When stratified according to ethnic group, the highest prevalence of IBD was among Indians: 73.
Sociodemographic characteristics of IBD patients
Of all IBD patients, 61.5% (n = 254) were males. UC was slightly more common in males than females (male-to-female ratio 1.9:1), while CD occurred equally in both sexes (male-to-female ratio 1:1) (Table 4).
Discussion
Inflammatory bowel disease is a global disease and contributes to the public health burden, although it was initially regarded as a rare disease in developing countries including Malaysia. Malaysia is a multi-racial country whose three major ethnicities are Malays, Chinese and Indians, making it unique when dealing with the rising incidence of IBD. The incidence of IBD differs across demographic categories, which means the clinical presentation of IBD patients is distinctive for a given population. As IBD emerges in Malaysia, only a limited number of studies have documented the trend of IBD incidence over the last 40 years. Raising awareness and improving understanding of IBD among both physicians and patients may create new research opportunities and subsequently improve the quality of life of IBD patients. The publication of these data may also stimulate IBD research, which has previously been under-funded by grant providers. We conducted a retrospective study aimed at revealing the incidence trends, together with the sociodemographic and clinical characteristics, of IBD over the last four decades at a tertiary referral hospital, UKMMC. Data were collected primarily from the UKMMC IBD registry, which is updated every 1 to 3 months and retained for ongoing research purposes, subsequently improving the management and care of IBD patients. The diagnostic rates of both UC and CD were indeed increasing, with UC more common than CD. However, we observed a reverse trend from the year 2000 until July 2018, with a reduction in the UC-to-CD ratio. This depicts the emergence of CD cases in Malaysia, which resembles the current disease pattern in parts of Asia including Hong Kong, Japan and Korea [14].
Environmental factors such as breastfeeding for more than 12 months (aOR 0.10, 95% CI 0.04 to 0.30) and antibiotic use before the age of 15 years (aOR 0.19, 95% CI 0.07 to 0.52) have been documented to be protective against the development of CD among Asians [15]. However, in this study we did not capture dietary factors or other environmental factors that may influence the incidence of CD.
The majority of UC cases were seen among males, but there was no gender difference for CD. This result differs from the local data published previously by Hilmi et al., in which a gender difference was observed in CD but not UC cases [11]. Previous studies postulated that the gender difference in IBD is caused by multiple factors. A study among Dutch IBD patients, involving 2118 CD and 1269 UC cases, concluded that gender differences were feature based [16]. A meta-analysis of the Chinese population, with a median of 69 CD and 189 UC cases per study, identified male predominance in both CD and UC, with a median male-to-female ratio of 1.28 [17]. The mean age of diagnosis for UC in this study fell between 36 and 55 years, with more than 40% of cases among middle-aged adults, while the mean age for CD fell between 19 and 35 years, with more than 50% among young adults. These observations are similar to most studies reported in Western and Asian countries [11,18]. Malaysia is a multi-racial country with a population of 30 million people who practise various religions; the three major races are Malays, Indians and Chinese. Our recent data showed that IBD was predominantly seen among Indians, followed by Malays and Chinese. Local data previously reported that IBD (both UC and CD), albeit in a limited number of patients, was predominantly seen among Indians, followed by Malays and Chinese [11,19,20]. This finding highlights that IBD can occur among high-risk groups, i.e. young adults of Indian ethnicity, and should be made known to primary care physicians so that a timely referral to a gastroenterologist can be made.
Among the recruited patients, the majority were non-smokers, again similar to previously reported data in Malaysia [11,19,20]. We cannot conclude whether smoking is a risk or protective factor among the IBD population in this region, as we did not examine a non-IBD group. Based on Western population studies, cigarette smoking is thought to increase the risk of CD and to be protective against UC. A recent study encompassing Chinese and Indian populations as representative of Asians failed to establish an association between smoking and IBD [21]. Another interesting finding from this study was that most IBD patients had tertiary education, although this was a biased population attending a tertiary hospital. The level of education attained by individuals is influenced by socioeconomic status. Based on the National Health & Morbidity Survey 2015, 94% (95% CI) of Malaysian adults did not take adequate fruits and/or vegetables as recommended by the WHO [22]. The low consumption of fruits and vegetables may explain the higher incidence of chronic diseases, including IBD, in this country, even among IBD patients with higher socioeconomic status [22]. In terms of familial penetrance, fewer than five UC patients had a family history of IBD or CRC; similarly, less than 4% of CD patients had a family history of IBD or CRC. This affirms the lack of familial penetrance among Asians [18], in contrast to a study of more than 8000 Danish individuals with CD, in which risk increased exponentially from third-degree to first-degree relatives [23].
We used the Montreal classification of IBD as it gives good inter-observer agreement for the extent of disease in UC [24]. About half of the UC patients (~50%) had left-sided disease and 40% had extensive disease, slightly more than the 37.3 to 39% reported previously [11]. This group of patients therefore has a higher tendency to develop complications and IBD-related neoplasia in the future [25]. Almost half of the CD patients had ileo-colonic disease and three-quarters had a non-stricturing, non-penetrating disease character, portraying the overall lesser aggressiveness of CD. Upper gastrointestinal CD has been reported to be rare among Asians, and our findings echoed this, with only a single patient diagnosed with isolated upper GI involvement [26]. This study also showed more than 12% of An alarming feature of our observations at our centre was the number of UC patients with co-morbidities associated with metabolic syndrome. The link between metabolic syndrome and IBD has been described, with possible explanations including adipose tissue dysregulation, chronic inflammation and an ineffective immune system [27]. More than three-quarters of our UC-related neoplasia patients had type 2 diabetes mellitus (T2DM) that was poorly controlled at the time of neoplasia detection. Disease-linked inflammation is the essence that links UC, CRC and T2DM, resulting in up-regulation of cytokines along with transforming growth factor beta (TGFβ), tumor necrosis factor alpha (TNFα), nuclear factor kappa-light-chain-enhancer of activated B cells (NFκB), reactive oxygen species (ROS) and other signaling molecules, consequently leading to an imbalance in the intestinal microbiota that contributes to the progression to neoplasia [28,29]. Hence, understanding how T2DM contributes to disease progression and prognosis is essential [30].
Patients should be counselled on the importance of diabetic control, and all patients with IBD should be encouraged to undergo regular screening for metabolic syndrome.
Almost all of our UC patients (94%) and 30% of CD patients received 5-ASA, given its proven efficacy in IBD treatment [31]. The majority of CD patients with moderate to severe disease were treated with immunomodulators, compared with less than 30% of UC patients. Biologic agents were given to almost a third of our CD patients, as this treatment has been proven effective for the maintenance of remission in CD [32]. A small percentage (~3%) of our IBD patients did not receive any treatment, as their mild disease was in full remission. Surgical treatment among IBD patients has been reduced over the years owing to early diagnosis, comprehensive guidelines, promotion of IBD medical education and a shift of care from surgeons to gastroenterologists [33]. The low surgical incidence among our UC patients can be attributed to optimization of medical therapy. Almost one-third of our CD patients had undergone various forms of surgery, which is considerably low compared to the general surgical likelihood. With the emergence of anti-tumor necrosis factor agents and the use of immunomodulators, both proven to reduce CD-related surgeries, the management of CD is indeed evolving [33]. Long disease duration and extensive disease extent among the general UC population are non-debatable risk factors for the development of CRC [25]. Notably, however, none of our 12 UC-related neoplasia patients had a family history of IBD or CRC, further affirming that familial penetrance is lacking even among patients with the aggressive spectrum of UC in this region. It is worth investigating possible gene dysregulation across different IBD disease durations [34].
Thus, an endoscopic surveillance program for high-risk IBD patients is essential in IBD management. Based on the European Crohn's and Colitis Organisation (ECCO) guidelines for UC, surveillance colonoscopy is recommended 8-10 years after disease onset in patients with extensive disease and 15 years in patients with left-sided disease [35]. Although the average duration to neoplasia development among our long disease duration patients was 26.91 years, early detection through a comprehensive colonoscopy surveillance program would be essential for the future of IBD-related neoplasia in this region.
Our study's strengths include a reasonably large sample size (n = 413 IBD patients), a prolonged study period (40 years) and the fact that UKMMC is a tertiary care centre for IBD in Kuala Lumpur, the capital city of Malaysia. These enabled us to examine IBD incidence trends and to provide more representative data on IBD patients in Malaysia. The main limitation of our study is its retrospective design. In addition, we did not capture any data on dietary factors that might be relevant as risk or protective factors for IBD. This could open up opportunities for future research investigating possible environmental risk factors such as dietary intake and lifestyle, especially given the lack of genetic susceptibility among IBD patients in this region of Asia.
Conclusion
This four-decade study concludes that there is an emerging trend of IBD in Kuala Lumpur, prevailing mostly among Indians, followed by Malays and Chinese. These patients were predominantly male, non-smokers, highly educated, diagnosed at a young age and without a family history of IBD.
Reliability of a Pendulum Apparatus for the Execution of Plyometric Rebound Exercises and the Comparison of Their Biomechanical Parameters with Load-Matching Vertical Drop Jumps
The inability to control the body center of mass (BCM) initial conditions when executing plyometric exercises constitutes a restrictive factor in accurately comparing jumps executed vertically and horizontally. The purpose of the study was to present a methodological approach for the examination of BCM initial conditions during vertical drop jumps (VDJ) and plyometric rebound jumps performed with a pendulum swing (HPRJ). A system consisting of two force plates was used for the evaluation of VDJ. A bifilar pendulum, equipped with a goniometer and accelerometer, was constructed for the evaluation of the HPRJ. Kinematic parameters from both jump modalities were obtained by means of videography (100 Hz). Thirty-eight physically active young males executed VDJ and HPRJ with identical BCM kinetic energy at the instant of impact (KEI). Results revealed that participants produced higher power and lower force outputs at HPRJ (p < 0.01). The rate of force development was larger in VDJ, while hip movement was less in HPRJ. The use of the presented methodology provided the means to reliably determine the exact BCM release height during the execution of the examined jumps. This provided an accurate determination of the amount of KEI, the main parameter for calculating load during plyometric exercise.
Introduction
Drop jumps (DJ) are the most recognized and commonly used method of plyometric training [1][2][3][4][5]. When executing a DJ, athletes drop from a raised surface and perform a maximal vertical jump after landing on the ground in the shortest possible ground contact time. Storage and utilization of muscle elastic energy are characteristic in DJ. During the eccentric (downward) phase, gravity forces the body to move downwards and energy is stored in the elastic components of the stretched muscles. This stored energy is utilized and summed to the energy produced during the concentric (upward) phase, e.g., when the body moves upwards [6].
Vertical ground reaction force (vGRF) and power output are suggested to distinguish the level of ability in terms of DJ performance [7]. Power production is a very important factor that affects drop jumping, which is essential for performance in individual and team sports [8,9]. During DJ, the stretch-shortening cycle of muscle function (SSC) is evident [10], since the impact to the ground forces the activated lower limb muscles to lengthen by acting eccentrically during the braking phase, followed by a concentric (shortening) action during the propulsion. The above mechanism results in enhanced jumping ability.
DJ performance is suggested to be characterized by high reliability and low variability [7,9,[11][12][13]]. Nevertheless, the kinetic energy at the instant of impact (KEI) is one of the main parameters for calculating the load during plyometric exercise. In addition, it was also hypothesized that the requirement to overcome the increased loading would result in a larger joint angle range of motion (ROM) in the HPRJ compared to the VDJ.

Design of the Study

At first, the validity of the methods to evaluate HPRJ performance was tested. Then, the VDJs were performed to define the target KEI to be set for the execution of the respective HPRJ. Finally, the HPRJs were performed with the same KEI and their parameters were compared to those of the VDJs.

Participants

Thirty-eight physically active young males (n = 38, age: 22.4 ± 4.0 years, height: 1.85 ± 0.05 m, body mass: 81.8 ± 8.2 kg) volunteered to participate in the study. All participants were in good physical condition, were physically active for at least 6 h/week, and had no apparent or reported injury or disability. All participants were informed about the purposes of the study and provided signed informed consent. The study was conducted following the guidelines of the Declaration of Helsinki and of the Institution's Research Committee Ethics Code.

Instruments

Vertical Drop Jumps

For the evaluation of the VDJ, a system consisting of two force plates was used. A one-dimensional force plate (1-Dynami, ©: Biomechanics Lab AUTh, Thessaloniki, Greece) was used to record the vGRF during the step-off [37] from the raised platform and to calculate the exact BCM dropping height, using the vGRF data and the duration of the impulse. An AMTI Mod. OR6-5-1 (AMTI, Newton, MA, USA) force plate was used to record the vGRF during contact with the ground. This setup is depicted in Figure 1a and was used to determine VDJ performance variables as described elsewhere [7].
Horizontal Pendulum Rebound Jumps
For the evaluation of the HPRJs, a bifilar pendulum was constructed, which allowed participants to swing toward a dynamometer attached to the wall (Figure 1b). The benefit of using the bifilar pendulum, in comparison to the simple pendulum apparatus used in the previous related literature, is that, when the pendulum is rotated about its two solid axes of rotation, the level determined by the lower ends of its arms remains constant and parallel to the horizontal. Thus, the motion of the pendulum's seat is always parallel to the ground and the HPRJs can be executed perpendicularly to the dynamometer mounted on the wall. Another advantage is that its arms can be constructed at any desired length, without any effects from their mass [38]. The bifilar pendulum comprised a seat suspended by four parallel 250 × 6 × 3 cm aluminum arms. The back of the seat had a 145° inclination. The total mass of the seat and the bifilar pendulum was 42.5 kg. Additional details of the mechanical properties of the pendulum are presented in Appendix A.
The pendulum arms were rotated round two parallel bars attached to a fixation plate on the ceiling. The fixation plate was adjustable in order to allow subjects with different body heights to have contact with the wall dynamometer with fully extended legs, while the seat was at the lowest position of its trajectory. A custom-made dynamometer (2-Dynami, ©: Biomechanics Lab AUTh, Thessaloniki, Greece) was mounted on the wall and was used to record the horizontal wall reaction forces (hWRF). The procedure to calibrate the wall dynamometer and its validity are presented in Appendix B.
For the purpose of monitoring the kinetics of the pendulum and the seated subject, the following instruments were attached to the pendulum:
1. A pendulous foothold with a shock-absorbing system connected to a Kistler 932-1B force transducer (FTD; Kistler Instrumente AG, Winterthur, Switzerland). It was used to guide subjects' lower extremities to the wall dynamometer and to calculate any contribution of the lower extremity in the vertical component.
2. A Lucas R60D (Lucas Control Systems Products, Hampton, VA, USA) electronic goniometer, which was used to monitor the temporal angular position of the bifilar pendulum. It was attached at the front-up parallel bar.
3. A Kyowa AS-20GB (Kyowa Electronic Instruments Co., Chofu, Tokyo, Japan) accelerometer, which was used to monitor the instantaneous velocity of the bifilar pendulum.
Signals from the wall dynamometer and the accelerometer were amplified using Kyowa DPM-601B (Kyowa Electronic Instruments Co., Chofu, Tokyo, Japan) amplifiers. Signals from the force-transducer were amplified using a Kistler 5037A-1211 (Kistler Instrumente AG, Winterthur, Switzerland) amplifier. All signals were simultaneously recorded and stored in a Pentium II PC, using a 12-bit analog-to-digital converter (PC-LabCard PCL-812, Advantech Co., Ltd., Taipei, Taiwan) A/D card. Sampling frequency was set to 500 Hz. Signals were digitally smoothed using a 4th-order low-pass Butterworth filter, with cut-off frequency set at 15 Hz.
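The smoothing step above (a 4th-order low-pass Butterworth filter with a 15 Hz cut-off on 500 Hz data) can be sketched as follows. This is a minimal illustration on a synthetic trace; `filtfilt` (zero-phase, forward-backward filtering) is used here as one common choice, since the paper does not state whether filtering was applied in one or both directions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0      # sampling frequency (Hz), as in the text
cutoff = 15.0   # cut-off frequency (Hz)
order = 4       # filter order

# Normalized cut-off: scipy expects the frequency as a fraction of Nyquist.
b, a = butter(order, cutoff / (fs / 2), btype="low")

t = np.arange(0, 1, 1 / fs)
# Synthetic "force" trace: a slow 3 Hz component plus 80 Hz noise.
raw = np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 80 * t)
smoothed = filtfilt(b, a, raw)
```

The 3 Hz component lies well below the cut-off and passes essentially unchanged, while the 80 Hz component is strongly attenuated.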
Video Recording
Both VDJs and HPRJs were filmed using a JVC GR-DVL 9600 EG (Victor Company of Japan Ltd., Yokohama, Japan) digital video camera, operating with a sampling frequency of 100 fps. The camera was placed 5 m perpendicular to the plane of motion and was based on a fixed tripod at a height of 1.2 m. A 2.5 m × 1.25 m calibration frame with 12 markers was also recorded to conduct a 2D-DLT analysis for the calculation of the lower limb joints' kinematics.
Experimental Procedure
The warm-up and familiarization procedure has been described in detail previously [7]. At first, the VDJs were performed and the participants were informed about the execution of the step-off from the drop platform and to keep the arms akimbo during the execution. The instruction was to "jump as high as you can with the minimum ground contact time". Each participant performed, bare-footed, three VDJs from 40 cm. A minimum 60 s interval, in order to avoid fatigue, was allowed between trials. The raised platform dynamometer was adjusted to permit subjects to land at the center of the ground force plate. Such an arrangement contributed to a safe execution and an accurate evaluation of the jumps.
Fifteen minutes after the completion of the last VDJ, the participants were adjusted on the pendulum seat with a five-point fixing belt. The pendulum was fixed in a position so that participants could touch the wall force dynamometer with the joints of their lower extremities fully extended when the pendulum was at its lowest position of its trajectory. Identical KEI to the wall dynamometer was accomplished by elevating the bifilar pendulum to the proper release height (H R ) using a Kabit SHZ-500 (Kabit Deutschland GmbH, Ismaning, Germany) electrical hoist. Participants were instructed to execute the HPRJs utilizing a "jump as far as you can with the minimum wall contact time" pattern. All three HPRJ trials were executed bare-footed, while upper extremities were held crossed on the torso. A minimum 60 s interval between trials was also provided.
Kinematic and Kinetic Parameters Derived from the Force Recordings
The analysis of the recorded time curves provided the following parameters [7,36,39,40] using the modules of the K-Dynami (©: Iraklis A. Kollias) software:
• Spatial parameters: jump height (H JUMP); actual drop take-off height (h DROP); height of release (H R) of the pendulum; BCM vertical displacement during the braking (S BR) and propulsion (S PR) phases.
• Temporal parameters: total ground contact time (t C); braking phase duration (t BR); time to achieve maximum vGRF/hWRF (tF MAX); time to achieve peak power during the propulsion phase (tP MAX).
• Kinematic parameters: BCM velocity at the instants of touchdown (V TD) and take-off (V TO).
• Kinetic parameters: peak force output (F); peak rate of force development (RFD); power in the propulsion phase (P); vertical stiffness (K VERT); leg stiffness (K LEG).
Definition of the KEI for the Horizontal Pendulum Rebound Jumps
HPRJ performance was calculated based on the initial BCM conditions after the push-off phase, which was verified by the signals from the electronic goniometer, the accelerometer, and from the video analysis [40]. During the rest period between the jumping modalities, the analysis of the best VDJ (criterion: H JUMP) provided the exact KEI that was used as input to set the H R for the HPRJ. The bifilar pendulum was set to be released from a H R that would allow identical KEI compared to VDJ, as shown in Equation (1):

H R = (m S × h DROP) / (m S + m P)    (1)

where H R is the BCM release height for the HPRJ condition, m S is the participant's body mass, m P is the mass of the bifilar pendulum, and h DROP is the BCM drop height for the VDJ condition. The best HPRJ attempt, defined by the criterion of maximal H JUMP calculated from V TO, was selected for further analysis, namely the comparison with the VDJ.
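The release height follows from requiring identical KEI in both modalities: in the VDJ only the participant falls (KEI = m_S·g·h_DROP), while in the HPRJ the participant plus pendulum system falls through H_R (KEI = (m_S + m_P)·g·H_R), giving H_R = m_S·h_DROP/(m_S + m_P). A short numerical check of this relation, using the mean values reported in the paper (body mass 81.8 kg, pendulum mass 42.5 kg, actual h_DROP 30.1 cm):

```python
G = 9.81  # gravitational acceleration, m/s^2

def release_height(m_s, m_p, h_drop):
    """H_R such that (m_s + m_p) * g * H_R == m_s * g * h_drop."""
    return m_s * h_drop / (m_s + m_p)

m_s = 81.8      # mean participant body mass (kg)
m_p = 42.5      # pendulum mass (kg)
h_drop = 0.301  # mean actual BCM drop height in the VDJ (m)

h_r = release_height(m_s, m_p, h_drop)
kei_vdj = m_s * G * h_drop
kei_hprj = (m_s + m_p) * G * h_r
print(f"H_R = {h_r:.3f} m, KEI_VDJ = {kei_vdj:.1f} J, KEI_HPRJ = {kei_hprj:.1f} J")
```

The computed H_R of about 0.198 m is consistent with the H R of 20.0 ± 0.1 cm reported in the Results.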
Kinematic Parameters Derived from the Video Analysis
The anatomical points that were manually digitized at each field using the K-Motion (K-Invent, Montpellier, France) software and used for the kinematic analysis were the following: head of the 5th metacarpal, ulna-styloid process, lateral epicondyle of the humerus, acromion, top of the head, 7th cervical vertebra, greater trochanter, lateral epicondyle of the femur, posterior surface of the calcaneus, lateral malleolus, tuberosity of the 5th metatarsal, and proximal medial phalanx. In the case of the HPRJs, pairs of markers on each of the pendulum's arms were also digitized. The coordinates of the center of mass were calculated for every field using the method of segments [41], as follows (Equation (2)):

C BCM = Σ (i = 1 to n) m i [D i + Q i (P i − D i)]    (2)

where C BCM is the coordinates of the BCM, P i is the coordinates of the proximal point of the ith segment, D i is the coordinates of the distal point of the ith segment, Q i is the distance of the center of mass of the ith segment from its distal point (expressed relative to segment length), m i is the relative mass of the ith segment compared to whole body mass, and n is the number of body segments (n = 14). The temporal position of the center of mass of the pendulum + participant system was calculated using Equation (3):

C Σ = (m S C BCM + m P C PCM) / (m S + m P)    (3)

where C Σ is the coordinates of the center of mass of the system, C BCM is the coordinates of the subject's BCM, C PCM is the coordinates of the bifilar pendulum's center of mass, m S is the participant's body mass, and m P is the mass of the bifilar pendulum (= 42.5 kg). A 2nd-order low-pass Butterworth filter, with a cut-off frequency ranging from 4 to 6.5 Hz depending on the noise calculated with residual analysis [42], was used for smoothing the data. The examined angular kinematic parameters were the ankle (ANK), knee (KNEE), and hip (HIP) angle (θ) at the instants of touchdown (td), maximum BCM displacement during contact with the force plates (low), and take-off (to).
In addition, the peak angular velocity (ω) and range of motion (ROM) of the lower limb joints during the braking and propulsion phases were calculated. Furthermore, for the calculation of K LEG , the leg length was extracted as the perpendicular displacement of the greater trochanter relative to the lateral malleolus.
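The segmental BCM computation and the combined pendulum + participant center of mass can be sketched as below. This is an illustration, not the study's 14-segment model [41]: the two segments, their endpoint coordinates and mass fractions are invented, and Q_i is treated as a fraction of segment length measured from the distal end (an interpretation of the text's definition).

```python
import numpy as np

def body_center_of_mass(proximal, distal, q, rel_mass):
    """Segmental method: C_BCM = sum_i m_i * (D_i + Q_i * (P_i - D_i)).

    proximal, distal: (n, 2) arrays of segment endpoint coordinates,
    q: per-segment COM location as a fraction of length from the distal end,
    rel_mass: per-segment mass fractions (summing to 1)."""
    seg_com = distal + q[:, None] * (proximal - distal)
    return (rel_mass[:, None] * seg_com).sum(axis=0)

def system_center_of_mass(c_bcm, c_pcm, m_s, m_p):
    """Mass-weighted mean of subject BCM and pendulum COM."""
    return (m_s * c_bcm + m_p * c_pcm) / (m_s + m_p)

# Two hypothetical segments for illustration (x, y coordinates in m).
P = np.array([[0.0, 1.0], [0.0, 0.5]])   # proximal endpoints
D = np.array([[0.0, 0.5], [0.0, 0.0]])   # distal endpoints
q = np.array([0.5, 0.4])                 # COM fraction from distal end
m = np.array([0.6, 0.4])                 # relative segment masses

c_bcm = body_center_of_mass(P, D, q, m)
c_sys = system_center_of_mass(c_bcm, np.array([0.3, 0.2]), 81.8, 42.5)
```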
Signal Synchronization
The synchronization of the kinematic and kinetic data was accomplished with Lagrange interpolation, using the K-Motion (K-Invent, Montpellier, France) software. The time instants of take-off, achievement of maximal BCM velocity, and achievement of peak BCM acceleration from both signals, as extracted from both the force and video recordings, were used for reference.
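Resampling the 100 Hz video-derived signal onto the 500 Hz force time base with Lagrange interpolation can be sketched as follows. The use of a sliding local window and a cubic polynomial is an assumption for this illustration; the paper states only that Lagrange interpolation was used, not its order or windowing.

```python
import numpy as np
from scipy.interpolate import lagrange

def lagrange_resample(t_src, y_src, t_dst, order=3):
    """Evaluate a local Lagrange polynomial of the given order at each t_dst."""
    out = np.empty(len(t_dst), dtype=float)
    npts = order + 1
    for j, t in enumerate(t_dst):
        i = np.searchsorted(t_src, t)
        lo = max(0, min(i - npts // 2, len(t_src) - npts))
        window = slice(lo, lo + npts)
        out[j] = lagrange(t_src[window], y_src[window])(t)
    return out

t_video = np.arange(0, 0.5, 0.01)          # 100 Hz kinematic time base
t_force = np.arange(0, 0.49, 0.002)        # 500 Hz force time base
y_video = np.sin(2 * np.pi * 2 * t_video)  # synthetic BCM coordinate trace
y_on_force_base = lagrange_resample(t_video, y_video, t_force)
```

A low-order local polynomial is used per query point because a single global Lagrange polynomial over many samples oscillates badly (Runge's phenomenon).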
Statistical Analysis
The collected data were checked for normality in their distribution using the Kolmogorov-Smirnov test (p > 0.05). Intra-test reliability was tested using the two-way random with absolute agreement intraclass correlation coefficient (ICC) for both VDJ and HPRJ on the values using the three trials for each jumping task. Inter-instrument reliability of the HPRJ assessment was also tested using the same ICC test correlating the mean values for each participant among the three instruments (wall dynamometer, accelerometer, and goniometer). For all cases, the single measure ICC values were used, with confidence intervals set at 95%. ICCs of <0.40, 0.40-0.75, and >0.75 were interpreted as poor, fair to good, and excellent reliability, respectively [43].
For the comparison of the kinetic and kinematic characteristics of VDJ and HPRJ, a paired-samples t-test was used. Cohen's d was calculated for every comparison to quantify the effect size, with values of ≤0.49, 0.50-0.79, and ≥0.80 interpreted as small, medium, and large effect sizes, respectively [44].
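The comparison step can be reproduced with SciPy. Cohen's d for paired samples is computed here as the mean difference over the standard deviation of the differences, one common convention; the exact formula used in [44] may differ.

```python
import numpy as np
from scipy import stats

def paired_comparison(a, b):
    """Paired t-test plus Cohen's d (mean of diffs / SD of diffs)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    t, p = stats.ttest_rel(a, b)    # paired-samples t-test
    diff = a - b
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

# e.g. the same participants' relative peak force in VDJ vs HPRJ
```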
The level of significance for all analyses was set at α = 0.05. All statistical procedures were performed using the IBM SPSS Statistics v.27.0.1.0 software (International Business Machines Corp., Armonk, NY, USA).
Spatiotemporal, Kinetic, and Kinematic Parameters
For the VDJ, h DROP was 30.1 ± 4.5 cm instead of the nominal h DROP of 40.0 cm. On the other hand, the monitored bifilar pendulum allowed the initiation of the HPRJ at a H R of 20.0 ± 0.1 cm. Thus, KEI was almost identical between the two jumping modalities (Table 1). Table 1 presents the comparison of the spatiotemporal and kinematic parameters of the VDJ and the HPRJ. Performance (H JUMP) was not different (p > 0.05) between the two jumping tests. Data analysis revealed significant (p < 0.05) differences between VDJ and HPRJ for S BR, tF MAX, V TO, and V TD. Table 2 depicts the comparison of the kinetic parameters between VDJ and HPRJ. F was significantly (p < 0.05) larger in VDJ compared to HPRJ. However, F relative to body mass was significantly (p < 0.05) larger in HPRJ. In addition, P was significantly (p < 0.05) larger in HPRJ. However, when P was expressed relative to body mass, no differences (p > 0.05) were observed between the two jumping tests. Concerning the examined stiffness parameters, only K VERT differed significantly (p < 0.05) between VDJ and HPRJ. The above-mentioned differences are also observed in the mean (n = 38) time curves of the examined parameters (Figure 2). Although similarly progressed during the contact phase, the lower F and S BR resulted in lower K VERT in HPRJ compared to VDJ. Table 3 presents the comparison of the joint angular kinematic parameters of the VDJ and the HPRJ. With the exception of the θ KNEE and θ HIP at the maximum BCM displacement during the braking phase, all other examined angles were significantly (p < 0.05) different. At the same instant, θ ANK was significantly (p < 0.05) more extended in the HPRJ than in the VDJ.
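One plausible reading of the KEI matching is simple energy bookkeeping: for the VDJ, KEI = m_S·g·h_DROP, while for the pendulum the participant + pendulum mass falls through H_R, giving KEI = (m_S + m_P)·g·H_R. A sketch under that assumption; note that the 84 kg body mass below is a hypothetical illustration value, not a figure reported here.

```python
G = 9.81           # m/s^2, gravitational acceleration
M_PENDULUM = 42.5  # kg, bifilar pendulum mass reported above

def kei_vdj(m_s, h_drop):
    """Kinetic energy at initial contact for a vertical drop jump (J)."""
    return m_s * G * h_drop

def matched_release_height(m_s, h_drop, m_p=M_PENDULUM):
    """Pendulum release height H_R giving the same KEI as a drop from h_drop,
    assuming KEI_pendulum = (m_s + m_p) * G * H_R."""
    return m_s * h_drop / (m_s + m_p)

# hypothetical 84 kg participant, measured h_DROP of 0.301 m:
# matched_release_height(84.0, 0.301) comes out near the reported 0.20 m H_R
```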
All examined lower extremity joints were significantly (p < 0.05) more extended in the VDJ compared to the HPRJ at the instants of touchdown and take-off. In addition, a significantly (p < 0.05) larger ROM was observed in the VDJ for both the braking and propulsion phases. With the exception of ω HIP, no differences (p > 0.05) were observed between the jumping tests for the peak joint angular velocities. The mean (n = 38) time curves of the examined joint angular kinematic parameters are presented in Figure 3. It was observed that, in the HPRJ, the ankle and hip joints remained at their maximum flexion point for a relatively longer period compared to the VDJ. In addition, it seems that the knee joint was rapidly extended during the last third of the support phase of the HPRJ.
Joint Angular Kinematic Parameters
The time history of lower extremity joints' angular velocity revealed the existence of a similar progression pattern throughout the contact phase in both jumping tests (Figure 3b,d,f). Although larger leg length values were recorded during the VDJ, a similar progression pattern was also present (Figure 3g).
Discussion
The purpose of the present study was to examine the reliability of a novel pendulum swing apparatus for the execution of HPRJ that could subject participants to an identical KEI as in the VDJ in order to allow the comparison of HPRJ and VDJ biomechanics when performed with the same initial loading conditions. The present findings suggest that the execution of HPRJ using a bifilar pendulum was highly reliable. Given the fact that the same initial conditions were applied, jumping height and relative power output were not different, while the relative force output and the lower limb joints' ROM were larger in the VDJ compared to the HPRJ.
The results of the present study concerning the VDJ biomechanical parameters are consistent with those reported in previous studies [7,16,[45][46][47][48]. Differences between the nominal drop height and h DROP have also been found in the past [7,46,49,50]. However, the h DROP values reported in previous research for VDJs from a nominal 40 cm drop [16,48,51], which ranged from 35 to 45 cm, are not in agreement with the h DROP measured in the present study (30.1 ± 4.5 cm), which is closer to the findings of Geraldo et al. [52].
Comparing the present results with previous studies examining HPRJ, t C is almost identical to what was reported in the past [28]. Reduced relative F and RFD in HPRJ compared to VDJ have also been reported previously [27,28], in agreement with the present findings. In addition, larger P values in the propulsive phase of the HPRJ than the VDJ have been reported as well [26,32]. Nevertheless, it is not evident that KEI was controlled in previous studies. The ankle and knee joint angles at the maximum BCM displacement during the contact phase are similar to those reported by Fowler and Lees [28]. With respect to the findings of the same study, although ω ANK was similar, different trends concerning the differences in ω KNEE and ω HIP patterns between VDJ and HPRJ were observed in the present research. This finding seems to be connected with the higher mass of the participant + pendulum system during the execution of the HPRJ while keeping the same kinetic energy at initial contact as in the VDJ.
The lower relative F, S BR, and S PR values during HPRJ can be attributed to the immobilization of the torso, caused by its fixation to the pendulum seat, and the consequent inability of the hip extensors to optimally respond to the prerequisites for the execution of the SSC. This could also explain the decreased lower extremity joints' ROM during the HPRJ. The changes observed in the lower extremity joints' angular displacement between VDJ and HPRJ may have altered the force-length relationships of the lower extremity muscles, leading to differences in force and power production capabilities, as it has been suggested that the muscle-tendon lengths of the biarticular muscles spanning the knee and hip joints are altered under different pendulum seat arrangements [34,35].
Force and power outputs are considered to define VDJ performance [7,9]. The increased loading imposed on the SSC during the braking phase of the VDJ leads to larger power output compared to the squat and countermovement vertical jumps [46,53,54]. In the case of the HPRJ, the larger power output with lower force output can be interpreted as the absence of the necessity to overcome body weight. During the VDJ propulsion phase, additional force must be produced along the vertical axis to overcome the gravitational force acting on the body mass. Contrarily, the HPRJ is executed entirely in the horizontal plane, so no additional force output is required to counteract gravity; consequently, the applied force is efficiently utilized to enhance jumping ability because of the lack of postural control during the contact phase [36]. The same interpretation has been used for jumping performance on a sledge ergometer [25]. However, in the case of the HPRJ, the participants had to negotiate, besides their body mass, the mass of the pendulum as well. When the stretch load is increased, force output increases and tF MAX decreases in VDJs [46,55,56]. This was also observed during collisions using a human pendulum device [57]. In the present study, relative F was lower and tF MAX was higher in the HPRJ than in the VDJ. Further research should therefore examine HPRJs under different loading conditions. A larger knee flexion and a larger shortening velocity induced by higher stretch loads are factors that enhance the effectiveness of the SSC [10].
It has been suggested that the knee joint angular kinematics is the regulating factor of HPRJ performance [35,36]. Alterations in the knee joint angular kinematics due to the increment of the stretch load were observed in previous impact [57] and SSC studies [25,49,58]. However, in the present study, the maximum knee flexion joint angle and velocity were not different between the two jumping modalities. In addition, the execution of a plyometric exercise in the horizontal plane was found to alter the muscle activation characteristics [49]. Thus, future research in HPRJ should examine its electromyographic characteristics.
Although its optimal regulation enhances performance and power output [59][60][61], stiffness was not found to be a determining feature for VDJ performance [7,62]. Nevertheless, K VERT was significantly higher in the VDJ than in the HPRJ. This can be related to the increased BCM velocity at the instant of impact in the VDJ, which possibly resulted in higher stimulation of the neuromuscular system during the braking phase to optimally regulate the power output and stiffness in the VDJ [45,63,64] compared to the HPRJ.
The findings of the present study should be interpreted taking into account its limitations. First, the comparison of VDJ and HPRJ was conducted with reference to only one dropping height. However, this selection was based on the fact that most DJ research has been conducted using VDJs with drop heights up to 40 cm [65] and on previous recommendations [46]. Furthermore, SSC effectiveness during a DJ is affected by both the direction of the movement relative to the gravitational acceleration and the duration of preactivation [66]. Thus, as mentioned above, recording the electromyographic parameters in the HPRJ test could provide additional information about the neuromuscular function and the mechanisms involved when executing a controlled SSC in the horizontal plane.
In the present study, the usage of two dynamometers for the execution of VDJ provided the opportunity to define h DROP and, thus, H R accurately. This assisted in the calculation of the exact amount of KEI for the HPRJ, which has been reported to be the main parameter for evaluating loading during plyometric exercise [67]. It has been reported that the H R deviation compared to the nominal release height for plyometric jumps performed with a sledge ergometer is ±3 cm [49]. The lower H R deviation (±0.1 cm) compared to the nominal H R set for the HPRJ condition allows the constructed bifilar pendulum to be classified as a valid and reliable device for executing controlled pendulum rebound exercises. In addition, the excellent intra-test reliability scores for HPRJ performance verified past findings [68]. This can be attributed to the fact that the trunk was constrained by the bifilar pendulum's seat, which reduced the number of degrees of freedom and led to a higher consistency in the execution of the pendulum plyometric rebound exercise [34]. Furthermore, the utilization of four different methods (dynamometry, goniometry, accelerometry, and video kinematic analysis) for monitoring and accurately measuring HPRJ performance parameters provides a strong methodological tool for further insight into the examination of different modalities of plyometric exercise.
In conclusion, further research should examine the responses of the neuromuscular system and the coordination patterns of the HPRJ in different KEI conditions. Insights into the optimization of the lower limbs' mechanical efficiency in the HPRJ could provide further information concerning the possible improvement in the training process to provoke adaptations in mechanical power production.
Conclusions
The use of two force plates is suggested as a requisite for examining VDJ or landing experiments, as proposed in earlier literature [16]. Furthermore, HPRJs are favorably executed with a bifilar pendulum, since its mechanical properties allow the execution of plyometric movement in the horizontal plane. The instrumented bifilar pendulum used in the present study had excellent inter-instrument reliability for the calculation of HPRJ performance. Furthermore, based on the findings of the present study, HPRJs performed with the examined bifilar pendulum apparatus were characterized by excellent intra-test reliability scores. The latter enhances the comparison of plyometric exercise in the vertical and horizontal directions, since the initial BCM conditions can be accurately defined. Such an arrangement allows an athlete's KEI to be defined when executing a VDJ or a HPRJ. As a result, a practitioner can define the desired level of loading when executing a plyometric jump, whatever the jumping modality (vertical or horizontal). Furthermore, the lower extremity joints' function and range of motion can be selected, so that the execution of the jump can fulfill the principle of specificity and, thus, meet the sport-specific plyometric training requirements. Finally, it is concluded that future research should take into consideration the initial BCM conditions for the accurate determination of the parameters of a plyometric jump.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data that were used in the present study can be provided by the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A

Table A1 depicts the mechanical properties of the pendulum used for the HPRJs.
Appendix B
The 2-Dynami dynamometer (©: Biomechanics Lab AUTh, Thessaloniki, Greece) was constructed using ST42 steel, on which pairs of Kyowa KL-10-A4 (Kyowa Electronic Instruments Co., Chofu, Tokyo, Japan) strain-gauges were attached. Signals were amplified using Kyowa DPM-601B (Kyowa Electronic Instruments Co., Chofu, Tokyo, Japan) amplifiers and were simultaneously stored in a Pentium II PC, after being converted to digital form using a PC-LabCard PCL-812PG (Advantech Co. Ltd., Taipei, Taiwan) 12-bit analog-to-digital converter. The dynamometer plates were calibrated statically and dynamically, using an AMTI Mod. OR6-5-1 (AMTI, Newton, MA, USA) force plate. To check the dynamometer's concurrent validity, free weights of known mass (commercial plates used in weightlifting) were used. The weight plates were weighed with a Delmac PS400L scale (Delmac Scales PC, Athens, Greece) prior to their use in the calibration procedure. The dynamometer was fixed on the ground and a series of combinations of the known weights, ranging from 1.25 to 194.5 kg, was placed on the middle of the dynamometer plates. In total, 170 different combinations of weights were placed. For each weight, the equivalent measure from the dynamometer was stored (Figure A1). The calibration procedure and the subsequent linear regression analysis (enter method) revealed that the constructed dynamometer was linear (Y = 3.586 + 0.642 × X; F = 9261.467, p < 0.001, R 2 = 0.999) and valid (average error = 0.084 ± 0.330 N).
Freedom and Gender Equality in EU Family Policy Tools
The aim of this article is to look critically at the implications of gender equality concepts for individual freedom as conceptualised by the philosopher Isaiah Berlin. The scientific literature addressing the problem of freedom and gender equality with regard to public policy is considerably fragmented. Based on contextual literature, this article will offer four concepts of freedom that serve as analytical categories. I will analyse work/family reconciliation policy tools as introduced at the level of the European Union and reconnect them to three traditions of gender equality. The article reflects on the historically embedded dichotomy between positive and negative freedom visible in the gendered distinction between public and private. The main findings show that the relationship between freedom and equality is mediated by the selected policy tools, suggesting that some policy tools expand the freedom of all individuals while others indicate a possible limit for freedom.
Introduction
In recent decades, prominent feminist scholars have comprehensively discussed the relationship between social policy and gender equality. These accounts have brought attention to the gendered normative assumptions embedded in policy theory and practice: the gendered division of labour that underlies the welfare state (about the conjunction of work and care and connected family policy models, see Orloff, 1996), the feminist critique of theories of citizenship (O'Connor, 1993; Knijn and Kremer, 1997), the critique of the typologies of the welfare state (see Lewis, 1997), etc. Because women's life experiences were theoretically invisible or neglected in social policy theorising, the design of welfare states reproduced women's economic dependence and placed women's primary responsibility in the domestic sphere. The attention of feminists thus brought the gender dimension to the idea of social rights, making the case that, for women, the possibility of making autonomous choices is primarily rooted in contextual social relations of work and family. Social policies thus should serve to rectify gender inequalities by addressing men's and women's ties to unpaid labour, care and to employment.
This feminist argument relies on three premises. First, it presupposes a specific concept of freedom. Second, it places the crux of the issue on the gendered distinction between the public and private spheres. Third, social policy is connected to a normative notion of gender equality that can have multiple meanings.
First, the possibility of exercising freedom is dependent on the embedded gender order. Gendered practices are derived from the organisation of the social structure, as more equal societies grant women 'choice' (Lewis, 2006a) or the possibility of self-government (Pateman, 2004). This gives legitimacy to state intervention, as it increases positive freedom, linking gender equality to a broader conception of social liberalism. There is, however, an alternative notion of freedom connected to classical liberalism. Negative freedom is understood as the space of a person or a group of people in which no other entity can interfere. The role of a state is to guarantee such space and minimise its interventions. The aim of this article is to examine the complex interrelations between freedom, gender equality and social policy when accounting for both notions of liberty.
In existing scholarship, the issue has been considered from a historical perspective (by tracking the changing relationships between liberalism and gender equality, e.g. O'Connor et al., 1999) or a critical perspective (by reflecting on the role of a woman as an individual during classic liberal historical periods, e.g. Pateman, 1988). This article decontextualises both notions of freedom from their historical groundings and offers a conceptual perspective. The article will thus develop an analytical framework based on concepts of freedom to examine selected family policy tools that are connected to gender equality.
Second, the discussion on freedom and feminism is linked to the conceptual dichotomy between the public and private/domestic spheres in asking the question of how much state intervention in family functioning is justifiable. Feminism aims to recognise the importance of unpaid work and care while simultaneously encouraging women to enter the public sphere (Ray et al., 2010: 197). Even within this argument, however, the encouragement for men to engage in unpaid labour by social policy tools is implemented mainly in countries of social democratic regimes (Pfau-Effinger, 2005: 323-324). This indicates sensitivity to state intervention in the private sphere, and the complex nature of freedom in gender equality.
The problematic relations between the public and private perspectives concerning freedom are translated in this article into two conceptual and methodical choices. First, this article will ground the concept in a singular perspective that pays adequate attention to both notions of freedom. The dichotomy has existed in Western political thought since the eighteenth century (Cherniss, 2013: 146), but the most influential theoretical elaboration of both notions was presented in the essay Two Concepts of Liberty by the philosopher Berlin (2005). As Berlin's essay is not grounded in a social policy discussion, it does not reflect on the role of women, and it does not take a contemporary critical outlook on the public/private dichotomy. This is its greatest strength, as it stays focused exclusively on the notion of freedom and elaborates on interconnections between the subject's freedom and society. Simultaneously, this focus is its greatest weakness, as the essay does not reflect on the complexity of relations between social context and an individual developed in the gender equality discussion.
Third, to introduce the contemporary public/private dichotomy problem, this article will analyse social policy tools that aim to address asymmetric gendered behaviour concerning the organisation of paid and unpaid labour. During the past decades, the erosion of traditional two-parent male-breadwinner families associated with the emergence of new social risks has led to an increase in tension between work and family (Lewis, 2006a). The burden was primarily on women who entered the labour market and were still expected to do unpaid domestic work (Lewis, 2006b). The changes were echoed in the social policy literature by systematising welfare state family models (see Sainsbury, 1996 or Korpi, 2000) or incentivising the creation of new dimensions for comparing the capacity of the welfare state to address such issues, for example, according to women's capacity to form and maintain independent households (Orloff, 1992) or according to the extent welfare states reinforce the male breadwinner model (Sainsbury, 1999). Policy practices also reflected these challenges and brought the harmonisation of work and care to the fore (Lewis, 2006a). This article will thus examine the un/paid work reconciliation tools introduced between 1982-2000 at the EU level (Stratigaki, 2004).
The tools presented adhere to different policy strategies and policy goals, which are derived from different notions of gender equality. Gender equality was conceptualised in the sameness/difference debate on citizenship status. The concept of 'sameness' (vis-à-vis men) stressed women's inclusion in the public sphere, and the concept of 'difference' called for recognising women's care work (Gornick and Meyers, 2002). In the social policy debate, this schism was translated into three conceptions of gender equality: as 'sameness' or 'inclusion', linked to promoting equal opportunities; 'difference' or 'reversal', which affirms difference from male norms (linked to, e.g. positive action); and 'transformation' or 'displacement', which aims to transform gender norms (e.g. by gender mainstreaming, see Verloo and Lombardo, 2007). I will use the latter definitions provided by Squires (1999) as a heuristic to systematise the tools relating to paid and unpaid work.
The article is structured as follows. First, I will introduce the concepts of negative and positive freedom and develop a theoretical framework by linking it conceptually to the notion of gender equality. Second, the analysis section will introduce the three conceptualisations of gender equality and the family policies associated with them. Later, the policies will be linked to gender equality concepts, and their implications for both notions of freedom will be discussed.
Negative freedom
Negative freedom is the absence of barriers, limitations, and outside interference from other people or institutions. In other words, it is the absence of obstacles that are outside the subject and that prevent her/him from acting ('freedom from', Berlin, 2005).
Berlin explains this concept based on the question 'What is the area within which the subject - a person or group of persons - is or should be left to do or be what he is able to do or be, without interference by other persons?' (Berlin, 2005: 233). The answer to this question is the space within which an individual can choose among many alternatives without external interference (Lewandowska, 2016: 146-147).
The nature of external intervention concerns the definition of what can be considered an 'intervention' or 'restriction'. It is a restrictive act carried out by others that may be deliberate or unintentional (Berlin, 2005: 233-234; Pitsoulis and Groß, 2015: 483). In addition to constraints and barriers, there may be indirect coercion or social pressure that is strong enough to have the same effect as direct coercion (Berlin, 2005; Cherniss, 2013: 30 uses the term 'social control').
Restrictions can be interpreted as an opportunity to interfere with an individual's affairs or to force an individual to do something, regardless of the type of power being used (Berlin, 2005; Weinstock, 2009: 848). Kramer (2008) argues in this context that even an unused opportunity to interfere with a subject's affairs may regulate the subject's behaviour through the effort to avoid direct confrontation. In other words, by modifying their behaviour, subjects are limited in terms of negative freedom. When individuals are negatively free, they exist within a space where no such external interference exists. However, if no one is coercing them from the outside because they are doing nothing 'forbidden', they are not negatively free, but they enjoy negative freedom as a state of their existence (i.e. they do not realise that outside interference exists; Cherniss, 2013: 2-3).
Coercion to decide is also an interference (i.e. the 'money or life' dilemma represents coercion and constrains negative freedom). In this case, there is a double coercion. First, the subject is pushed to choose one option because only one option is rationally available (i.e. 'life'). Second, the pressure to choose between only two predetermined possibilities also limits negative freedom (Putterman, 2006: 424).
Another aspect of negative freedom concerns the idea that it is a goal in itself, not the means to achieve something else (Putterman, 2006: 418). People are seen as rational beings who, when making their own choices, will do what is good (at least for themselves). Thus, negative freedom stems from a belief that an individual is able to build his or her life better than other people would have built it for them (Putterman, 2006: 420-421); therefore, it can be held as a universal value.
As a universal value, negative freedom is not limited to being a goal of an individual's pursuit (i.e. as a range of choices of individuals who are not limited by restrictions, interventions, barriers, pressures, and coercion), but it can be a wider societal and political goal. Historically, negative freedom has been linked to the tradition of classical liberalism (Gray, 1980; Lewandowska, 2016). Negative freedom respects the division between the state and society and seeks to reduce the role of the state. People's personal goals and values are understood as the result of their individual choices independent of the social environment. Thus, at the political level, negative freedom focuses on protecting the rights and freedoms of individuals (e.g. movement, speech), which guarantees a space of personal freedom that cannot be violated (Berlin, 2005: 238).
Berlin's concept of negative freedom should, however, be understood within its limits. McBride (1990) argues that Berlin inadequately develops the distinction between negative freedom and social conditions. Understanding freedom as an absence of restrictions and barriers is a low-resolution perspective, because historical and sociocultural conditions determine the range of options a subject is capable of conceptualising. The possibility of exercising negative freedom is thus nested in a social context that establishes the content of a subject's choices.
Positive freedom
Positive freedom represents the idea that everyone is free to act in accordance with what they consider meaningful. Being free means being determined to go one's own way towards one's own understanding of what is the highest value in life (Berlin, 2005: 238-239). Positive freedom is therefore a possibility to move towards some higher value (as 'freedom to', Cherniss, 2013; Lewandowska, 2016: 146).
Lucie Novotna
Berlin explains this type of freedom with a question: 'What, or who, is the source of control or interference that can determine someone to do, or be, this rather than that?' (Berlin, 2005: 233).Positive freedom is not about the range of options (like negative freedom) but it relates to the determinants of freedom, which are ideals and a sense of meaning (Lewandowska, 2016: 147).
At the individual level, positive freedom means the ability to be a ruler of oneself, based on one's own reasons and ability to set personal goals.It encompasses being reflective of one's own thoughts and actions, responsible for one's own choices, and being able to act on one's own ideals and sense of meaning (Berlin, 2005: 238-239).In this regard, positive freedom presupposes the existence of two selves: higher, rational and moral, and lower, irrational, emotional, and reactive (Berlin, 2005: 240).To achieve positive freedom, it is necessary for the lower self to submit to the higher self.Controlling irrational desires leads to a state of 'real freedom' (positive freedom ;Paulíček, 2016: 90).
A very important aspect of achieving individual positive freedom is self-reflection.Self-reflection, as defined by Haworth (1986: 39) in this context of positive freedom, means the ability to reflect critically on one's own goals and personal beliefs and values that would otherwise be passively accepted.Thus, self-reflection is a process of judging beliefs, desires, and meanings.The higher self can control the lower self, and the individual can control what beliefs constitute his or her life.A subject is positively free if he is the active creator of his life and makes decisions by himself (i.e.no one is deciding for him and determining his direction, and he is not acting on the basis of external influences or the wishes of others; a subject is a 'doer'; Berlin, 2005: 238-239;Elford, 2012: 241).
The political notion of positive freedom means conceptualising a collective entity as an individual. Society, the church, the community, and the state represent collective entities that are driven by higher goals and that determine which behaviour or state of being is right/good and which is not. Right or good might be extracted from various noble values, such as equality, justice, happiness, culture or security (Berlin, 2005: 239-240). Political and legal systems are built on the assumption that the interests of the collective entity are superior to those of the individual.
Positive freedom assumes that people should live according to a given higher value, and if necessary, they should be taught what this higher value is and how to translate it into practice. To make people positively free, it is necessary to educate/direct them towards the intended concept of morality (Berlin, 2005: 240-241). While negative freedom is a value in itself, positive freedom alludes to some other value or ideal (as a means of achieving it). To achieve this value in practice, it is essential that collective efforts be directed towards obtaining it by public policy interventions (Berlin, 2005: 240; Putterman, 2006: 418).
At the core of the problematic nature of the positive concept of freedom is the belief that every individual has a higher and a lower self and that 'good' individuals are those whose higher self matches the interests of the collective entity (Berlin, 2005: 239-241). A coercive entity personified in the proponents of a collective value assumes that it knows the 'true wishes' of individuals and society (Gustavsson, 2014: 272). It is necessary that the collective entity protects individuals from the threats posed by their lower selves. The aim of such an arrangement is therefore to lead individuals to act according to their 'real' interest and to live the 'right life' (McBride, 1990: 302; Berlin, 2005: 240).
There are, however, two limits of Berlin's conceptualisation of positive freedom that should be taken into account. First, Berlin does not pay sufficient attention to contextual social conditions. The explanation of positive freedom encoded in the distinction between 'higher' and 'lower' self represents a simplification of complex social processes connected to policy and regulatory institutions (see McBride, 1990). Second, the dynamics of a political arrangement are not in the top-down process of the dominance of a ruling class over individuals. Especially in the context of democratic societies, the vision of a 'good life' is based on shared values mediated via a network of interconnected individuals. Public opinion on what is considered desirable is shaped and communicated within the public discourse. Visions of the 'right life' would be a result of discursive processes and held by collective institutions (Lewandowska, 2016: 147).
Positive freedom is realised when this public discourse is not communicated in the language of imposition but in the language of liberation (hence, positive freedom).
Positive freedom is about imposition that is communicated as liberation.The proponents of restrictive policies may not realise that it is an imposition, since the actions they establish are communicated as 'reasonable' and lead to 'increased freedom' (Gustavsson, 2014: 269).
The concept of positive freedom can be distinguished into two types: monism and pluralism. Monism is the belief that there is one absolute universally valid value or value order. Positive freedom is the freedom to choose and act upon that value or value order (Gustavsson, 2014; Lewandowska, 2016: 150). However, according to Berlin, there may be multiple equally relevant, true, and valid values (and value orders) that are incompatible with each other. This concept of incompatible, equally relevant values (including the choice between them) is pluralism. Unlike monism, this view is essentially neutral: there is a plurality of values. Pluralistic positive freedom represents respecting and protecting incompatible value frameworks and orders (O'Neill, 2004: 474; Lewandowska, 2016: 150).
Negative and positive freedom and gender equality

I have identified three concepts of liberty (monistic positive freedom, pluralistic positive freedom, and negative freedom) that relate differently to the concept of gender equality. In this section, I will summarise this relation, but first I will introduce a new concept of liberty: positive negative freedom (for a summary of all notions of freedom see Table 1).
The concept of positive negative freedom is a concept of positive freedom that takes negative freedom as its reference value. It is positive freedom because it takes into account the social context of individuals and attempts to rearrange their social conditions to grant choices to men and women within the public/private domain. Simultaneously, it is negative freedom because the aim of such rearrangement is to increase the scope of non-interference from subtle forms of coercion arising from the embedded gender order, and thus to broaden the negative freedom of all individuals within a society. In the context of gender equality, positive negative freedom responds to classic liberalism. Classic liberalism regarded the organisation of the domestic sphere as unregulatable and private (following the negative notion of freedom; O'Connor et al., 1999). Consequently, it failed to conceptualise women as free individuals with a full range of citizenship rights (Pateman, 1988; O'Connor, 1993). Both of these problems are addressed by the concept of positive negative freedom.
First, negative freedom was formalised as the freedom of non-interference, linked primarily to non-interference from larger societal bodies (as classic liberalism was about emancipation from tradition; O'Connor et al., 1999: 47). Berlin was concerned with the question of what constitutes such interference. Framing the answer as existing only within the public sphere thus limited its meaning only to certain forms of barriers, coercion and social control. However, it failed to address the embedded gender order existing in an implied public/domestic distinction. Positive negative freedom extends the space of interference to the domestic sphere. In this way, the requirement for achieving negative freedom encompasses more subtle forms of interference, such as exploitation. Moreover, it provides an imperative for state interference to grant negative freedom to everyone. This was pointed out by Gould's (2013) feminist critique of Berlin, which concluded that the political rights of individuals should be protected to grant freedom of choice to both men and women. This broadening of negative freedom, however, is connected to the state interfering with the labour organisation within families to ensure a space of non-interference for all individuals (thus, positive negative freedom).
Second, negative freedom was historically linked to the public sphere, where its subject was conceptualised primarily as an individual (to whom liberal rights should be granted). Women's theoretical invisibility and their connection to the private sphere made their status as individuals problematic, as their exercise of freedom was rooted in contextual social relations (see Pateman, 1988). The social aspect was, however, fully omitted in doctrines derived from negative freedom, leading to the persistence of the disparities that arose from the gender order. Understanding negative freedom as encompassing all individuals (male and female) makes the case for rearranging society in a manner that allows for choice and grants a space of non-interference to all. This points to the necessity of positive negative freedom as a prerequisite for the existence of negative freedom (because absolute non-interference from the state would be anarchy).
Another concept, negative freedom in the context of gender equality, represents a reduction in state intervention in the reconciliation of family and work. Gender equality may seem to fall under the positive concept of freedom, but as Orloff and Schiff (2015: 7-8) explain, there were already streams within the feminist movement that supported neoliberal politics in the 1980s. For example, these feminists promoted women's employment by eliminating the right to social assistance. Negative freedom can be connected to gender equality as reducing state intervention (tax relief, social assistance, etc.).
Freedom and Gender Equality in EU Family Policy Tools
Gender equality in the context of monistic positive freedom means that a specific doctrine (e.g. a certain conceptualisation of gender equality) is taken as the only correct and rational principle upon which society should be organised. However, pluralistic positive freedom aims to protect the plurality of values and different approaches to life. Plurality serves as the highest value towards which the organisation of society should aim. In terms of gender, it is an approach that encourages the creation of a society where everyone can choose their way of life regardless of gender stereotypes or gender order.
The analysis
The previous section of the article introduced an analytical framework that will be used to examine selected family policy tools that are connected to gender equality. Different concepts of gender equality differ in their strategies for combating inequality, following the traditions of inclusion, reversal, or displacement (Squires, 1999; Verloo and Lombardo, 2007). First, inclusion perceives gender equality as sameness. According to this tradition, incorporating women in public life should be based on understanding people primarily as individuals who have the same rights and opportunities and are judged by the same principles and standards regardless of their gender. Second, reversal understands men and women as inherently different. Following the gendered public/private dichotomy, it perceives the male gender norm as dominant in public life and the female norm as primarily domestic. Thus, policy tools should compensate for differences. The last concept, displacement, aims to transform all norms and standards about what is/should be associated with male and female identity. This conception challenges all aspects of gender norms by reconstructing the policy discourse.
In this section, the concepts of gender equality will be used as a heuristic to classify identified social policy tools regarding unpaid and paid work. The distinctiveness of the three equality concepts is exclusively conceptual, while in practice, policy tools are designed to fulfil specific policy aims from which the three normative approaches are theoretically abstracted (for a summary, see Table 2, Concepts of Gender Equality).
I have derived the social policy tools that will be analysed from a comprehensive overview created by Stratigaki (2004). She evaluated and compiled a timeline of strategic documents that show the development of the concept of the harmonisation of paid and unpaid work in the EU from 1982 to 2000. The European Union level was selected because of the cultural sensitivity of the issue. The tools at the EU level were formulated in a general manner to fit member states with different welfare state regimes. The timeframe covers the formation of the policy agenda of gender equality, the time of its greatest prominence and the beginning of the shift of policy strategies away from ensuring gender equality towards employability, investments, and other economic goals (Jenson, 2008: 146). The period of the 1980s and 1990s thus covers a broad range of policy strategies connected to gender equality concepts while maintaining the policy agenda on gender equality. I have reconstructed the timeline and identified the proposed tools (as shown in Table 3, Timeline of Work/Family Harmonisation Tools).
The following sections are structured based on the concepts of gender equality. In each section, I will introduce each tool and analyse it based on the four identified concepts of freedom (negative, positive negative, monistic positive and pluralistic positive; the summary can be found in Table 4, Freedom, Gender Equality and Care-Work Harmonisation Policy Tools).
Inclusion
Inclusion encompasses policies that address women's integration into the public sphere. The first policy field connected to this concept of equality aims for labour market inclusion by the elimination of discrimination on the labour market, equal pay, and equal treatment at work.
I have identified the elimination of discrimination as a type of positive negative freedom. Although these policies interfere with the behaviour of companies, groups, and individuals, they aim to create a space of non-interference that increases the overall level of negative freedom. Analogous to the establishment of a law, certain actions must be penalised to ensure the broadest possible scope of choices for everyone. The goal of antidiscriminatory policies is the creation of a gender-equal world where both men and women can coexist without interference.
Equal pay, however, is a different type of freedom, as it directly interferes with the way companies operate. In the strict sense of the word, it is not a restriction but a prescription of the way employees should be rewarded. It is therefore a monistic type of positive freedom. It aims to create a gender-equal world where rewarding employees is based not only on an evaluation of results by an employer but also on the political dimension. Equal pay acts as a corrective mechanism of structural inequality between genders. It is not, therefore, an extension of choice but a salary prescription that pursues a single goal: achieving equality.
Equal treatment is a policy that prescribes the conduct of individuals, groups, or institutions to create an environment in which women and men are treated equally and restricts conduct that treats men and women differently. Thus, depending on the nature of the policy, it is a monistic type of freedom, i.e. a prescription of conduct for the purpose of equality, or a positive negative freedom (i.e. a restriction of gender stereotyping that limits oppressive actions based on gender).
The second inclusion policy field includes tools to reconcile work and family. In general, work and family harmonisation tools can fit into multiple gender equality traditions. Therefore, in this section, I will discuss only the instruments that are designed to reconcile work and family and that use gender-neutral language, specifically creating a network of good-quality institutional care for children, promoting flexible and part-time work for all workers and institutionalising parental leave. The document focused on gender mainstreaming as a new way to resolve the issue of gender equality.
https://doi.org/10.1017/S1474746421000737 Published online by Cambridge University Press

These policies are, however, paradoxical. Because they use gender-neutral language, they nominally pursue the goal of creating a society where men and women will not be pressured into fulfilling the prescribed gender role. Hence, they should be regarded as positive negative freedom. However, in practice, they preserve the gender-conditioned division of unpaid and paid work. Because of their nature, they do not reflect on social pressure to maintain traditional gender roles. This limits the effect of creating a space of non-interference, as they do not address cultural pressure directly but merely remove certain obstacles in women's career paths.
Similarly, flexible working arrangements are a gender-neutral policy tool designed to reconcile family and work. I have identified this tool as a positive negative freedom, as it increases the scope of possibilities for employees to choose how to conduct their work time.
Last, monitoring for gender equality aims to create a world in which men and women are equal in terms of the organisation of society (equity). In this respect, it is an instrument supporting the reorganisation of societal activities to achieve the goal of equality of outcomes. It does not take into account any other relevant factors that may play a role in gender organisation or any other type of outcome. For this reason, it is a monistic type of positive freedom.
Reversal
Reversal is tied to policies that compensate for gendered differences. The first policy field subsumed under the reversal conception of gender equality is family policies that aim at sharing work and family responsibilities between partners. Specifically, they create high-quality institutional childcare, institutionalise parental leave (including paternity leave) and support men's participation in unpaid labour.
The objective of sharing responsibilities aims at the reorganisation of the activities of men and women in the public and private spheres. Equality thus represents a way in which individual families choose the 'right' way of life. It is therefore a monistic type of positive freedom. On the other hand, some measures increase the choices available for both men and women, both nominally and in their implementation, because they reflect structural inequalities between genders and address social pressure based on gender norms (institutional care, parental leave). These represent positive negative freedom.
The second policy field focuses on women's status in the professional environment, i.e. positive action for equal opportunities, which takes gender as the principle of difference. These are equal opportunity policies that compensate women's disadvantages in the labour market. They aim to ease women's entry into the labour market, maximise women's job opportunities, desegregate the market, and help women in the sectors where they are underrepresented. In addition, they address women's training, the reduction of women's risk of unemployment and social exclusion.
I have identified these tools as positive negative freedom, as they aim to increase women's choices for employment by expanding their competencies and to create an environment that is not limited to gender-stereotyped standards. They increase women's range of choice and reduce social pressure; thus, these tools increase the negative freedom of a society as a whole. However, I have excluded tools, such as quotas, that pursue the above objectives in a prescriptive manner. Such policies prescribe specific work organisations and therefore are a monistic type of positive freedom.
The last policy field concerns a promotion of cultural change, such as the elimination of stereotypical attitudes in society, the promotion of sharing household chores and benchmarking based on gender equality. The elimination of stereotypical attitudes in society and the promotion of sharing work and family responsibilities assume a 'right' way of life. These policies thus aim to 'educate' the public to conform to a specific representation of gender equality. They therefore support a monistic type of positive freedom.
Benchmarking is a tool designed to encourage countries to create equal societies. However, more research is necessary to determine the exact type of freedom (monism, pluralism, positive negative) because it depends on the variables measured.
Displacement
Policy tools that stem from the concept of displacement aim to reconstruct public discourse and transform existing gender norms. Gender mainstreaming is the only policy measure connected to the displacement conception of gender equality. It can be classified into two types of positive freedom. When conceptualising gender mainstreaming as a policy aim, it is a pluralistic type of positive freedom. This tool seeks to create a society in which every individual (regardless of gender) can choose the way she/he wants to live life (without complying with gender norms). Gender mainstreaming is not prescriptive in the way one 'should' live; rather, it seeks to create a system in which every individual can choose his or her way of life.
However, when conceptualising gender mainstreaming as policy practice, I argue that it is a monistic type of positive freedom. Gender mainstreaming means assessing the implications of a specific policy on men's and women's interests and concerns. In its implementation, gender mainstreaming usually means raising an issue of gender identity in sectors that are primarily concerned with other activities. Thus, restructuring the discourse means in practice prescribing a way in which policy institutions should act to achieve equality.
Conclusion
The gender equality discussion in social policy was based on a specific notion of freedom connected to social liberalism. Existing theoretical arguments connected to gender equality and freedom were derived from the dichotomy between the masculine/public and feminine/private/domestic spheres. This gendered dichotomy translated into the conceptualisations of freedom, whether considering only individuals and their own choices (negative freedom) or understanding subjects as contextually social in their capability to make choices (positive freedom). This article reflected on these underpinnings by transcending established dichotomies of positive/negative freedom and public/private.
The nature of any social contract endows a popular sovereign with power over its subjects, giving legitimacy to state intervention and governance over people. The nature of such interventions is not, even in classic liberal doctrine, divorced from the private sphere. This means that restrictions imposed on individuals to protect freedom of movement, speech or property are justifiable if they lead to broadening the overall space of non-interference (negative freedom). Positive negative freedom is necessary for negative freedom to exist. What proved problematic from the feminist perspective, however, is the historical continuity of an implicit gender order embedded in the public-private dichotomy. The successor of classic liberalism, neo-liberalism, still maintains this gendered public/private distinction. It has failed to address the familial assumptions of classic liberalism despite its claims of genderless individualism (O'Connor et al., 1999). Taking the perspective of positive negative freedom, this article attempted to divorce from these historical contingencies. Positive negative freedom understands all individuals, male and female, as entitled to a space of non-interference, as it addresses the conditions under which such a space can arise. Thus, positive negative freedom fits gender equality into liberal thought by blurring the historically established line between the public and domestic spheres.
Going beyond the binary distinction between positive and negative freedom, there are four main conclusions that can be drawn from this article. First, the article suggested a complex relationship between freedom and equality. Gender equality shares with positive freedom its reflection on the social conditions of individuals, reflecting particularly on the gender order embedded within social institutions and social structure. Simultaneously, gender equality can be translated into policy tools that expand the scope of choices for both men and women concerning the organisation of domestic matters and that grant women choices in their activities within the public sphere. Gender equality and freedom thus share various traits that can be identified in proposed policies. In this analysis, many work-family reconciliation tools were identified as positive negative freedom, which suggests there might be an influence of other principles connected to both gender equality and freedom (i.e. equality of opportunity). These relations are not yet elaborated in social policy theorising.
Second, the analysis showed that the specific notion of gender equality (i.e. the specific representation of what gender equality means) does not have direct implications for an individual's freedom, as it is mediated via the selected policy tool. The precise impact on an individual's freedom stems from the formulation, aim and form of implementation of a specific policy tool. This suggests that reconciliation and boundaries between freedom and gender equality are not purely a theoretical problem, but also an issue of policy formulation and bureaucratic processes (see Carlsson, 2020). The formulation of policy aims, tools and their implementation suggests that freedom and equality are closely tied to the conditions that produced a given policy. The contemporary policy debates connecting health crises and financial challenges in the post-COVID world opened a window of opportunity for more progressive, equal and inclusive policies (Ihlamur-Öner, 2020). This article offers a framework that can contribute to these debates by reflecting on the impact new policies might have on freedom and equality.
Third, negative freedom is understood as a space of non-interference, historically connected to the public sphere. The article, however, broadens the notion of interference to social control derived from the embedded gender order and focuses on policies which intervene within families. From this perspective, the gendered division of labour suggests a limited possibility for women to be free individuals. The work-family reconciliation policies analysed in this article transcend the public-private dichotomy, while simultaneously many of these tools were identified as positive negative freedom. This suggests that specific family policy tools and policy intervention within families are a prerequisite of a liberal society. The effects of social policies on individuals' freedom can be assessed and discussed based on the presented conceptual perspective.
Fourth, this article also hinted at the boundaries of gender equality within the liberal tradition. Policies assigned to monistic positive freedom have inherent totalitarian and authoritarian tendencies because they are closely tied to a specific value or value order (see Brochard et al., 2020). Such policies are prescriptive in instituting the 'right' way of life citizens should follow. Thus, gender equality policies can not only free women and men from the shackles of gendered social order but also contain an inherent danger for freedom. To avoid simplification, this danger can manifest seriously only if the inherent value order and its institution take over the policy agenda. What is, however, immediately relevant about this tendency is the fact that monistic positive freedom represents a limit within the liberal tradition. In other words, the more restructuring a society undergoes to achieve a singular value or value order, the less liberal the society becomes.
This article thus showed that social liberalism can exist in many different variants based on the specific nature and implementation of the proposed social policies. This finding opens up many new themes worth investigating. First, within social liberalism and gender equality, the impact of specific social policy strategies and tools on people's freedom should be assessed. Second, the article opened a question of what level of state intervention in the private sphere is justifiable when taking into account all four concepts of liberty and, specifically, where one can redraw the line between the public and private spheres. Finally, the specific and subtle boundaries between all three notions of positive freedom in various policy contexts remain open to discussion.
Table 1. Types of negative and positive freedom
Table 2. Concepts of gender equality
Table 3. Timeline of work/family harmonisation tools
Table 4. Freedom, gender equality and care-work harmonisation policy tools
Non-invasive characterisation of a laser-driven positron beam
We report on an indirect and non-invasive method to simultaneously characterise the energy-dependent emittance and source size of ultra-relativistic positron beams generated during the propagation of a laser-wakefield accelerated electron beam through a high-Z converter target. The strong correlation of the geometrical emittance of the positrons with that of the scattered electrons allows the former to be inferred, with high accuracy, from monitoring the latter. The technique has been tested in a proof-of-principle experiment where, for 100 MeV positrons, we infer geometrical emittances and source sizes of the order of $\epsilon_{e^+} \approx$ 3 $\mu$m and $D_{e^+} \approx$ 150 $\mu$m, respectively. This is consistent with the numerically predicted possibility of achieving sub-$\mu$m geometrical emittances and micron-scale source sizes at the GeV level.
I. INTRODUCTION
In the past decade, significant experimental effort has been put into generating relativistic positron beams using high-power lasers in an all-optical configuration 1. Broadly speaking, two main schemes have been adopted in this case: a direct one, where the laser is directly focussed onto a high-Z thick solid target [2][3][4], and an indirect one, where the laser first accelerates a population of ultra-relativistic electrons via laser-wakefield acceleration (LWFA), which then interact with a high-Z solid target [5][6][7][8][9][10]. The latter approach has been numerically shown to be able to produce GeV-scale positron energies, with appealing spatial properties 11.
The search for novel methods to generate high-energy positrons is mainly motivated by the current need to explore alternative particle acceleration schemes. Currently, the largest particle collider that is operational is the 27 km Large Hadron Collider (LHC) at CERN, which provides proton-proton collisions with a maximum centre-of-mass energy of 13 TeV 12. Before that, the Large Electron-Positron collider (LEP) provided electron-positron collisions with a maximum centre-of-mass energy of 209 GeV 13. Despite several iconic results, including the recent detection of the Higgs Boson 14, there still are several unsolved issues that demand a higher centre-of-mass energy lepton collider, ideally in the range of, if not beyond, a TeV.
Several international projects based on radio-frequency technology have been proposed, such as the Compact Linear Collider (CLIC), which is aiming at reaching TeV energies over a 13 km accelerator length 15. However, the sheer scale of these accelerators is currently imposed by the maximum accelerating field that they can sustain, usually of the order of 10s of MV/m. This makes their realisation considerably expensive, and alternative acceleration methods are currently actively studied. Plasma-driven acceleration is arguably one of the most promising schemes, since it can allow for much higher accelerating gradients compared to radio-frequency systems. Landmark results have already been obtained in this area, including accelerating fields exceeding 100 GV/m 16, the demonstration of energy doubling of a 42 GeV electron beam in less than one metre of plasma 17, a 2 GeV energy gain of a positron beam in one metre of plasma 18, acceleration in a proton-driven wakefield 19, highly efficient electron acceleration in a laser-driven wakefield 20, charge-coupling in a multi-stage accelerator 21, and the laser-driven acceleration of electrons up to 8 GeV in only 20 cm of plasma 22.

a) Presently at: Clarendon Laboratory, Department of Physics, University of Oxford, OX1 3PU UK
b) Electronic mail: g.sarri@qub.ac.uk
Large-scale international projects are thus now studying the feasibility of building a plasma-based electron-positron collider. For instance, plasma-based particle acceleration for the next generation of colliders is included as a major area of investment in the Advanced Accelerator Development Strategy Report in the USA 23, it is the main driver for the European consortium ALEGRO 24, and it is one of the main areas of development identified by the Plasma Wakefield Acceleration Steering Committee (PWASC) in the UK 25.
While the plasma-based acceleration of electrons is rapidly progressing, positron acceleration is far more challenging, owing to the much narrower region of the wakefield suitable for positron acceleration and focusing. There are four main regimes currently being investigated: the quasi-linear regime 26,27 , the nonlinear regime 28 , the hollow-channel regime 29 , and the wake-inversion regime 30,31 .
Whilst each regime has unique advantages and attractive characteristics, every one of them presents significant challenges that must be overcome before reaching maturity, justifying the considerable attention they receive from the international research community. One of the major experimental challenges is to provide a positron beam with sufficient spectral and spatial quality, which can then be synchronised with the positron-accelerating region of a plasma wakefield. In particular, one would need low-emittance, short (≤ tens of fs) beams with a non-negligible charge (≥ 1 pC).
It has recently been shown numerically that appealing positron beam characteristics can be achieved by firing a high-energy wakefield-accelerated electron beam through a cm-scale high-Z solid target 11 . For instance, a 5 GeV, 100 pC electron beam interacting with a 1 cm-thick lead target can produce up to 1 pC of 1 GeV positrons in a 5% bandwidth, with sub-micron geometrical emittance and a duration comparable to that of the primary electron beam (as short as a few fs 32 ). Generating GeV-scale, µm-size positron beams with sufficiently good emittance would provide experimentalists with an ideal platform to study plasma-based acceleration of positrons; for example, a dedicated experimental area for this kind of work has been included in the Conceptual Design Report for the European plasma-based accelerator facility EuPRAXIA 33 .
For these studies, it would be highly beneficial to have an online monitoring system for the laser-driven positron beam, with which the energy, emittance, and source size can be measured on a shot-to-shot basis without interfering with the positron beam. In this laser-driven scheme, the positrons arise from the quantum electrodynamic cascade initiated by the laser-wakefield-accelerated electron beam inside the solid target 5-7 . The main by-products of this process are a dense population of gamma-ray photons and a broadband population of electrons. For high-quality laser-wakefield-accelerated electron beams, we show here that the spatial characteristics of the electrons and positrons escaping the solid target are tightly linked. We therefore propose to characterise the scattered electrons as a means of inferring the positron beam properties in a non-invasive manner.
The paper is structured as follows: numerical simulations showing the correlation between the properties of the electrons and positrons escaping the converter target are shown in Sec. II. A proof-of-principle experiment will then be discussed, with the experimental setup and the characterisation of the parent electron beam and secondary positrons presented in Sec. III. The characterisation of the emittance and source size of the electron beam post-converter and how those relate to those of the positrons are discussed in Sec. IV. Conclusive remarks are given in Sec. V.
II. NUMERICAL MODELLING
To study the correlation between the emittance of the electrons and positrons at the rear of the converter target, a series of Monte-Carlo simulations using the scattering code FLUKA 34,35 have been performed. We simulate 10^7 mono-energetic electrons contained in a pencil-like beam with different energies: 0.15, 0.5, 1, 2, and 5 GeV. These interact with a 10 mm-thick Pb converter (corresponding to approximately 1.8 radiation lengths), where the target thickness has been chosen so as to maximise the positron yield at the rear surface 6 . In principle, the divergence and source size of the primary electron beam should be included, since they might affect the spatial properties of the particles escaping the converter target. However, as shown later, the geometrical emittance of the positrons escaping the target is of the order of a few microns, as dictated by the spread induced by the quantum electrodynamic cascade inside the converter. As long as the emittance of the primary electron beam is much smaller than this value, as is usually the case in laser-wakefield acceleration 36 , it can be ignored. As an example, we show, in the supplementary material 37 , a negligible difference between the calculated positron emittance for a primary electron beam with a 5 mrad divergence and that for a primary electron beam with zero initial divergence.
Nonetheless, it should be understood that the results shown here are for demonstration purposes only and will be used to infer the positron emittance and source size for our proof-of-principle experiment. Even though the same qualitative behaviour will hold, slight quantitative differences in the results will be obtained for each specific setup adopted (e.g., different converters and different spectra of the parent electron beam), and numerical modelling of the specific configuration to be used should be performed before implementing this technique.
An example of the simulation results is shown in Fig. 1. The electron and positron geometrical emittances (examples in frames 1.a and 1.b) are strongly energy-dependent following a decreasing power law, in agreement with recently published numerical results 11 . Interestingly, the positron emittance is seen to be consistently smaller than that of the scattered electrons (see, for example, Fig. 1.c).
This can be intuitively understood with the following reasoning. For a target thickness (L_c) of the order of a radiation length (L_RAD), positrons in the target are mainly generated via a two-step process (bremsstrahlung followed by pair production in the nuclear field), whereas the scattered electrons can be generated either during pair production or during the emission of bremsstrahlung radiation. However, for these target thicknesses, the number of electrons generated by pair production can be ignored, so the population of electrons escaping the solid target arises almost exclusively from scattering of the primary electron beam. On average, the positrons are thus created deeper into the target and exit, for each defined energy, with a smaller source size.
These assumptions break down in the limiting cases of ultra-thin or thick targets. As previously discussed 5,7 , for L_c/L_RAD ≤ 10^-2 , direct electro-production (sometimes referred to as the trident process) 38 will dominate, resulting in pairs being generated directly as an electron traverses the nuclear field, without the intermediate step of generating a real photon via bremsstrahlung. On the other hand, thick targets will allow for multi-step cascades up to the point where the numbers of escaping electrons and positrons become approximately equal, since both arise from pair production. In this case, occurring at approximately L_c/L_RAD ≥ 5 6 , the emittance of the escaping electrons and positrons will be approximately equal. The positron emittance will thus be smaller than that of the scattered electrons as long as we can neglect trident pair production and multi-step cascades, i.e., for 10^-2 ≤ L_c/L_RAD ≤ 5. In the case of lead, this corresponds to 60 µm ≤ L_c ≤ 2.5 cm. Figure 1.d depicts the ratio between the positron and electron geometrical emittance as a function of their energy. The results are seen to be fairly independent of the initial energy of the primary electron beam, and depend mainly on the ratio of the energy of the escaping particle to that of the primary electrons. This gives, in the ultra-relativistic regime, a general scaling between the electron and positron emittance that is practically independent of the energy of the parent electron beam. The trend extracted from the simulations follows a power law: ε_e+/ε_e- = -(0.5 ± 0.1) E_frac^0.6 + 1, with E_frac the ratio between the particle energy and that of the parent electrons. A certain level of uncertainty is present, mostly due to the non-ideal statistics in extracting the positron emittance from the simulations, as illustrated by the error bars in Fig. 1.c.
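As a rough numerical illustration (not from the paper's analysis code), the fitted trend and its stated window of validity can be evaluated directly; only the central value of the fit is used, and the lead radiation length of 5.6 mm quoted later in the text is assumed:

```python
L_RAD_PB_MM = 5.6  # radiation length of lead [mm], as quoted in the text


def emittance_ratio(e_frac: float) -> float:
    """Central value of the fitted trend eps_e+/eps_e- = -(0.5 +/- 0.1) * E_frac**0.6 + 1.

    e_frac is the escaping-particle energy divided by the parent-beam energy.
    """
    return 1.0 - 0.5 * e_frac**0.6


def trend_valid(thickness_mm: float, l_rad_mm: float = L_RAD_PB_MM) -> bool:
    """True when trident pairs and multi-step cascades are negligible,
    i.e. 1e-2 <= L_c/L_RAD <= 5 (60 um to 2.5 cm for lead)."""
    ratio = thickness_mm / l_rad_mm
    return 1e-2 <= ratio <= 5.0
```

At E_frac = 0.4 the central value gives a ratio of about 0.71, consistent with the (69 ± 6)% quoted later for the 80 MeV comparison.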
III. EXPERIMENTAL SETUP
In order to experimentally verify the viability of this technique, an experiment was carried out using the UHI-100 laser facility at CEA Saclay. The system delivers laser pulses with an energy of E_p = 2.5 ± 0.1 J before compression (∼0.9 J in focus), a duration of τ_p = 24 ± 2 fs, and a central wavelength of λ_0 = 800 nm. The laser was focussed down to a 28 µm FWHM (full width at half maximum) spot using an F/15 off-axis parabolic mirror combined with an adaptive optic, producing a focal spot with a peak intensity I_0 ≈ (4.7 ± 0.7) × 10^18 W cm^-2 (dimensionless amplitude a_0 = 1.5 ± 0.1). The laser was focussed onto the entrance of a gas cell with a variable length ranging from 0 to 5 mm, filled with a H_2 + 5% N_2 gas mix. The experimental setup is schematically depicted in Fig. 2.
The laser-driven electron beam was then made to propagate through a wedge-shaped Pb converter, placed 13 mm downstream of the rear of the gas cell. The thickness of the converter could be varied in the range 1-20 mm by laterally displacing the wedge. Both the spatial and spectral properties of the leptonic beam were characterised. The spectral characterisation was performed using a magnetic spectrometer, consisting of a 50 mm, 0.8 T dipole magnet and a detector screen. The magnet was placed 124 mm downstream of the back of the gas cell, whereas the detector screen was placed 384 mm away from the gas cell. Image plates (Fujifilm BAS-MD) were used as detectors for signal accumulation, whereas Lanex screens were used for single-shot measurements. Suitable shielding (not shown) was inserted to minimise the background noise at the detectors. In addition, the magnet could be removed from the beamline to characterise the spatial properties of the electron beam, as well as its pointing fluctuations. To do so, an additional Lanex screen was placed at a distance of 1083 mm from the target (not shown).
The emittance of the beam was characterised using the pepper-pot technique 39 . In particular, due to the strong dependence of the emittance on the particle energy, a 1D pepper-pot was employed. In this case, an array of slits is used to select a number of beamlets, which are then propagated through the spectrometer, allowing an energy-resolved measurement of the emittance to be retrieved. It must be noted that the pepper-pot technique is known to overestimate the emittance of a beam in which the positions and momenta of the particles are strongly correlated 40 , as could be the case in a laser-wakefield accelerator. Even though this correlation is expected to degrade rapidly during propagation through the converter target, the values reported here should still be considered an upper limit for the geometrical emittances.
The spectral properties of the leptonic beam after the converter have been shown to be fairly independent of the spectral shape of the parent electron beam 11 . For this reason, the laser-plasma interaction was optimised to produce primary electron beams with the highest possible charge and energy in an ionisation-injection regime 41 . This was achieved for a gas-cell backing pressure of 2.75 bar (corresponding to an electron density of n_e ≈ 10^19 cm^-3 ), a gas-cell length of 1 mm, and a detuning of the optimum compressor grating position of 0.1 mm. A summary of the parameter scan for the parent electron beam is shown in Fig. 3, where the left column (Figs. 3(s1-s4)) depicts the average spectrum for each set of parameters, with the shaded area representing the standard deviation of the data from the average. The right column (Figs. 3(c1-c4)) shows the dependence of the total electron charge in the beam on each of the parameters.
In these experimental conditions, a reproducible electron beam with the following characteristics was obtained and used for the rest of the experiment: maximum energy E_s = (200 ± 20) MeV, divergence θ_s = (5 ± 1) mrad 42 , and a total charge of the order of tens of pC. Assuming operation in a heavily loaded blowout regime, the electron beam duration can be estimated as τ_s ≈ 2√a_0/ω_p ≈ 12 fs, with ω_p = 2 × 10^14 Hz the plasma frequency of the background gas. Similarly, the upper limit for the electron beam source size is given by the size of the accelerating bubble in the wakefield: D_s ≈ 2c√a_0/ω_p ≈ 4 µm 43 . Even though compressor de-tuning might induce cylindrically asymmetric electron beams 44 , these deviations appear negligible in our experimental setup 45 , justifying the cylindrical symmetry assumed throughout this work.
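The back-of-the-envelope estimates above can be reproduced with a short script; the constants are standard CODATA values, and the expressions for τ_s and D_s are the blowout-regime scalings quoted in the text:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge [C]
EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]
M_E = 9.1093837015e-31       # electron mass [kg]
C_LIGHT = 2.99792458e8       # speed of light [m/s]


def plasma_frequency(n_e_cm3: float) -> float:
    """Electron plasma frequency omega_p [rad/s] for a density given in cm^-3."""
    n_e_m3 = n_e_cm3 * 1e6
    return math.sqrt(n_e_m3 * E_CHARGE**2 / (EPS0 * M_E))


def bunch_duration_fs(a0: float, omega_p: float) -> float:
    """Blowout-regime estimate tau_s ~ 2*sqrt(a0)/omega_p, in femtoseconds."""
    return 2.0 * math.sqrt(a0) / omega_p * 1e15


def bubble_size_um(a0: float, omega_p: float) -> float:
    """Upper limit on the source size, D_s ~ 2*c*sqrt(a0)/omega_p, in microns."""
    return 2.0 * C_LIGHT * math.sqrt(a0) / omega_p * 1e6
```

For n_e ≈ 10^19 cm^-3 this gives ω_p ≈ 1.8 × 10^14 rad/s, and with a_0 = 1.5 and the quoted ω_p = 2 × 10^14 Hz one recovers τ_s ≈ 12 fs and D_s ≈ 4 µm.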
The electron beam was then directed onto a Pb converter of variable thickness. The resulting positron beam was characterised using the magnetic spectrometer described above. The experimentally measured positron spectra for different converter thicknesses are shown in Fig. 4, compared with those resulting from FLUKA simulations 34,35 , showing good quantitative agreement.
The spectra resemble a relativistic Maxwellian distribution, with the maximum positron charge observed for a converter thickness of ∼9 mm, which corresponds to approximately 1.6 radiation lengths (L_Pb ≈ 5.6 mm). This observation is in good agreement with previous experimental results 5-7,9,10 . Numerical simulations indicate a temporal broadening of the positrons at 100 MeV of the order of 50-100 fs, resulting in a peak positron current, at the rear surface of the converter, of the order of 1 A.
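The quoted peak current follows from the simple arithmetic I ≈ Q/τ; a minimal sketch (the 0.1 pC and 100 fs figures are illustrative, consistent with the sub-pC charge and 50-100 fs broadening reported here):

```python
def peak_current_amp(charge_pc: float, duration_fs: float) -> float:
    """Order-of-magnitude peak current I ~ Q/tau for a bunch of
    charge Q (in pC) and duration tau (in fs), returned in amperes."""
    return (charge_pc * 1e-12) / (duration_fs * 1e-15)
```

For example, 0.1 pC delivered over 100 fs corresponds to a peak current of about 1 A.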
FIG. 3. Optimisation of the primary electron beam.
Average spectra (left) and charge (right) of the electron beams produced when different parameters of the laser-plasma interaction were modified, namely (1) the compressor position (shortest pulse corresponds to 0 mm), (2) the position of the gas cell entrance with respect to the laser focal plane, (3) the length of the gas cell, and (4) the backing pressure of the gas filling the cell. The shaded areas in the spectra plots, and the error bars in the charge plots, represent the standard deviation of the data with respect to the average.
IV. EMITTANCE AND SOURCE SIZE AFTER THE CONVERTER
For our experimental parameters, the quantum cascade process has a relatively low efficiency, resulting in a sub-pC positron charge. This low charge, in addition to the high-noise environment, prevented a direct characterisation of the positron beam emittance using a pepper-pot mask. For this reason, and since the emittances of the electron and positron beams generated during the cascade are strongly correlated, the emittance of the electron beam after the converter was characterised instead.
For the emittance measurement, a pepper pot mask was placed along the electron beam propagation axis. A typical raw image of the scattered electrons after propagation through the mask and dispersion by the magnetic spectrometer is shown in Fig. 5(a). Each of the horizontal lines visible in the figure corresponds to a beamlet propagating through a single aperture in the pepper pot mask, which is spectrally resolved along the horizontal direction inside the spectrometer.
The electron source size was estimated by means of a penumbral imaging technique, in which spatial information is recovered from the shadow produced by an aperture (see inset in Fig. 5.a), in our case the slits. Considering that the spatial profile of the source can be well approximated by a Gaussian, the FWHM of the source is given by the distance between the points at which the signal is 12% and 88% of the maximum, respectively. The source size can thus be estimated by measuring this distance at the detector and transforming back to the source plane, taking into account the magnification, the slit function, and the detector resolution.
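A minimal sketch of this reconstruction, assuming Gaussian profiles throughout (so the 12%-88% edge width equals one FWHM, ≈ 2.35σ, at the detector plane) and treating the slit and detector contributions by quadrature subtraction; this is a simplification of the full analysis:

```python
import math


def source_fwhm_um(edge_12_88_um: float,
                   magnification: float,
                   slit_fwhm_um: float = 0.0,
                   detector_fwhm_um: float = 0.0) -> float:
    """Source FWHM from penumbral imaging.

    For a Gaussian source, the 12% and 88% points of the edge-spread
    profile are separated by ~2.35 sigma, i.e. one FWHM, at the detector
    plane. Slit and detector blur are removed in quadrature (Gaussian
    approximation) before demagnifying back to the source plane.
    """
    w2 = edge_12_88_um**2 - slit_fwhm_um**2 - detector_fwhm_um**2
    return math.sqrt(max(w2, 0.0)) / magnification
```

For instance, a 500 µm edge width at magnification 2 with no blur corrections maps back to a 250 µm source FWHM.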
The dependence of the electron source size on the electron energy is shown in Fig. 5(b), alongside a comparison with the electron and positron source sizes obtained from FLUKA simulations assuming the experimentally measured primary electron beam. The uncertainties in the source size account for the slit size, the resolution of the detector, and the magnification of the system, whereas the main contribution to the uncertainty in energy is given by the size of the slits in the pepper-pot mask, leading to a possible overlap of an energy range at a given point on the detector. The source size is found to vary slowly with the particle energy, with an average value of (230 ± 100) µm. As can be seen in Fig. 5(b), both the measured source size and its quasi-constant value in the energy range of interest are in good agreement with the expected values obtained from the simulations. It is of particular interest to note that the positron source size D_p at a given energy is consistently lower than that of the electrons D_e over the energy range considered, in agreement with numerical modelling 11 .
The emittance of the scattered electron beam was extracted using the pepper-pot equations 39 , taking into account that a 1D system was used (see Fig. 5.c). Similarly to the source size, the emittance is found to be approximately constant over the energy range considered, with an average value of ε_x ∼ 3.5 µm (normalised emittance at 100 MeV of ε_n ≈ 200 π µm). The measured values are in good agreement with FLUKA simulations, considering the relatively large uncertainty in the energy of the particles at a given position on the detector, particularly for lower-energy particles, for which magnetic fringe fields could introduce additional sources of error in the measurement.
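For reference, the statistical (RMS) emittance underlying the pepper-pot analysis can be sketched as follows; this is the textbook definition applied to unweighted samples, whereas the full pepper-pot equations additionally weight each beamlet by its measured intensity:

```python
import math


def rms_emittance(x, xp):
    """Statistical RMS emittance eps = sqrt(<x^2><x'^2> - <x x'>^2)
    of a beam sampled at positions x and angles xp (e.g. beamlet data
    reconstructed from the mask). Result is in the product of the
    input units (e.g. um * mrad)."""
    n = len(x)
    mx = sum(x) / n
    mxp = sum(xp) / n
    var_x = sum((xi - mx) ** 2 for xi in x) / n
    var_xp = sum((xpi - mxp) ** 2 for xpi in xp) / n
    cov = sum((xi - mx) * (xpi - mxp) for xi, xpi in zip(x, xp)) / n
    return math.sqrt(max(var_x * var_xp - cov**2, 0.0))
```

Note that a fully position-angle-correlated beam has zero RMS emittance by this definition, which is why the pepper-pot method overestimates the emittance when such correlations are present but unresolved.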
Again, the positron geometrical emittance appears consistently lower than that of the scattered electrons. For example, at 80 MeV (E f rac = 0.4) the ratio between the simulated positron emittance and the measured emittance of the scattered electrons is 66%, in good agreement with the (69 ± 6)% predicted by the trend shown in Fig. 1.
V. CONCLUSIONS
We report on a non-invasive method to characterise the positron beam generated during the interaction of a laser-wakefield electron beam with a high-Z converter. In the ultra-relativistic regime, and for converter thicknesses of the order of a radiation length, the positron geometrical emittance is found to be consistently smaller than that of the scattered electrons, with a general trend that is virtually independent of the energy of the primary electron beam. For a ≈ 10 pC broadband electron beam with a maximum energy of 200 MeV, the positron beam is found to exit the converter target with a sub-pC charge, a broadband spectrum extending up to 140 MeV, a duration of the order of 100 fs, and a geometrical emittance at 100 MeV of ε_e+ ∼ 3.5 µm. These results confirm recently published numerical work and are thus consistent with the potential of state-of-the-art PW-scale laser systems to generate GeV-scale positron beams with fs-scale duration and sub-micron geometrical emittance in a fully optical configuration. Positron beams with similar characteristics, once energy-filtered by conventional magnetic elements, will be usable as test beams to study advanced plasma-based positron acceleration schemes.
|
v3-fos-license
|
2017-08-17T05:38:46.215Z
|
2014-01-01T00:00:00.000
|
7249653
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "d0ce5cf9df2adb1f5fd6ac9dd531e5f11f7c60eb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2711",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d0ce5cf9df2adb1f5fd6ac9dd531e5f11f7c60eb",
"year": 2014
}
|
pes2o/s2orc
|
Diabetic patients infected with Helicobacter pylori have a higher degree of insulin resistance.
BACKGROUND
The association between H. pylori (HP) infection and the degree of insulin resistance (IR) has not been evaluated in diabetic patients so far. In this study, we evaluated the association between HP seropositivity and the homeostatic model assessment for insulin resistance (HOMA-IR) in diabetic patients.
METHODS
In this study, 211 diabetic patients admitted to the endocrinology clinic of Shahid Beheshti Hospital of Qom for routine diabetic check-ups were evaluated. The patients were divided into HP(+) and HP(-) groups based on the seropositivity of helicobacter pylori IgG antibody. The serum H. pylori IgG antibody, blood sugar, serum insulin, HbA1c, LDL, HDL, cholesterol, triglyceride, HOMA-IR and BMI were measured.
RESULTS
The mean age of the 72 HP(-) patients was 51.5±8.3 years, and that of the 139 HP(+) subjects was 53.5±9 years (P=0.128). The mean HDL in HP(-) cases was 69.2±29.2 mg/dl and in HP(+) cases 60.7±26.7 mg/dl (P=0.037). The mean serum insulin in HP(-) subjects was 6.97±5.64 IU/ml and in HP(+) subjects 10.12±7.72 IU/ml (P=0.002). The HOMA-IR for HP(-) cases was 3.2±3.3 and for HP(+) cases 4.5±3.8 (P=0.013). There were no significant differences between the groups in the short-term or long-term indices of glycemic control, or in most of the diabetic risk factors and complications. The treatment type was also not significantly different between the groups.
CONCLUSION
It seems that the HP(+) diabetic patients require higher levels of serum insulin to reach the same degree of glycemic control compared to the HP(-) ones.
Characterized by insulin resistance (IR), some degree of impairment in insulin secretion, and hyperglycemia, type 2 diabetes mellitus (T2DM) is a metabolic disease that is linked to different pathophysiological mechanisms. The role of inflammatory mechanisms in the pathogenesis of this disease has been highlighted in recent studies (1). It is believed that inflammation may increase IR. IR is a pathologic state in which normal insulin concentrations produce a subnormal response in the peripheral tissues. In the "Sacramento Area Latino Study on Aging" (SALSA) cohort study, seropositivity for Helicobacter pylori (H. pylori) was associated with a greater rate of incident diabetes (2). The results of this study, however, have been criticized by Eshraghian and Pellicano (3). For example, they claim that the SALSA study paradoxically shows that H. pylori infection and the "homeostatic model assessment for insulin resistance" (HOMA-IR) are not associated.
Vafaeimanesh J, et al.
Nevertheless, if we assume that H. pylori infection is a risk factor for the initiation of T2DM (4), then the arising question is: "Do H. pylori-infected T2DM patients have a higher degree of IR?" (4-6). To answer this question, in this study we evaluated the relationship between seropositivity for H. pylori and the HOMA-IR in diabetic patients who were receiving appropriate medical treatment (except insulin) for their condition.
Methods
This study involved 211 diabetic patients who were admitted to the endocrinology clinic of Shahid Beheshti Hospital in Qom for routine diabetic check-ups. Patients with a history of H. pylori treatment or of using proton pump inhibitors, H2 blockers, bismuth, or insulin were excluded from the study. Smokers were also excluded. After 12 hours of overnight fasting, venous blood samples were obtained and stored at 4°C. Serum was acquired by centrifugation of the blood samples at 2000 r/min for 15 minutes immediately after sampling. Serum H. pylori IgG antibody (ELISA, Padtan Elm, Iran), blood sugar, serum insulin (ELISA, DiaMetra, Italy), HbA1c, LDL, HDL, cholesterol, and triglyceride were measured. Seropositivity for H. pylori was defined as a serum IgG titer higher than 30 AU/mL. The HOMA-IR was calculated by multiplying the fasting glucose value (mg/dL) by the serum insulin value for each person and dividing by 405. Body mass index (BMI) was calculated as body weight (in kg) divided by the square of the height (in meters). Hypertension, coronary artery disease (CAD), and peripheral artery disease (PAD) were identified from the patients' medical histories. Autonomic neuropathy, gastroparesis, and dyspepsia were identified based on the presence or absence of symptoms such as nausea and delayed gastric emptying. Dental disease was diagnosed by physical examination. Diabetic neuropathy was identified using the tuning fork (diapason) and monofilament tests. Retinopathy and cataract were diagnosed after clinical examination by an ophthalmologist. This research was conducted in compliance with the Helsinki Declaration. All subjects were informed about the study protocol, and written consent was obtained from all participants. Statistical analysis: The data were collected and analyzed with SPSS version 11. All data are reported as mean±standard deviation.
The chi-square test was used to compare qualitative variables. P-values less than 0.05 were considered statistically significant.
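The HOMA-IR and BMI formulas described in the Methods can be expressed as a short illustrative script (the variable names are ours, not from the study):

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_iu_ml: float) -> float:
    """HOMA-IR as described in the Methods:
    (fasting glucose [mg/dL] x fasting serum insulin [IU/mL]) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_iu_ml / 405.0


def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: body weight [kg] divided by the square of height [m]."""
    return weight_kg / height_m**2
```

For example, a fasting glucose of 90 mg/dL with a fasting insulin of 9 IU/mL gives a HOMA-IR of 2.0.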
Results
Two hundred and eleven diabetic patients, including 135 (64%) females and 76 (36%) males, with a mean age of 52.8±8.8 years and a mean T2DM duration of 7.4±5.4 years, were included in this study. H. pylori was positive in 139 (65.9%) patients (HP(+) group) and negative in 72 (34.1%) patients (HP(-) group). The characteristics of the two groups are summarized in table 1.
There was no statistically significant difference between the groups with respect to IR risk factors, complications of diabetes, or dental disease. The lipid profiles of the two groups were not significantly different, except for the serum HDL level, which was slightly higher in the HP(-) group (table 1).
Discussion
T2DM is the epidemic disease of the modern age, and IR is one of its characteristics. In this study, we evaluated the association between H. pylori infection and HOMA-IR in 211 diabetic patients who received appropriate medical treatment other than insulin. The main finding of this study was that HOMA-IR and serum insulin are significantly higher in T2DM patients who are seropositive for H. pylori than in seronegative ones. Our results also showed no significant difference in the long-term or short-term glycemic control between the groups, since HbA1c, FBS, and the prevalence of DM complications did not differ significantly between the HP(+) and HP(-) groups. Therefore, it seems that HP(+) patients require higher levels of serum insulin to reach the same degree of glycemic control as HP(-) ones. The association between IR and H. pylori infection among otherwise healthy individuals has been addressed in several previous studies, but as far as we know, this relation has not been evaluated in diabetic patients so far. In addition, these studies have been limited in number and show no consensus among them (2,4,7-19). A recent meta-analysis, for instance, shows that H. pylori infection is more frequent in diabetic patients (20). Another meta-analysis, however, found it impractical to analyze the association between H. pylori infection and IR because of the biasing effect of the small percentage of patients that could be included (21). Consequently, further studies are required in this regard (22). Overall, although a positive association between these two conditions is not yet established, the general trend points towards it (21,22). Most of the studies that showed no association between H. pylori infection and IR were not specifically designed for this purpose.
For instance, serum insulin, but not HOMA-IR, was considered as the index of IR in the study by Gillum et al. Naja et al. were criticized for not considering the previous history of H. pylori treatment or the use of insulin, antacid, and bismuth medications as exclusion criteria (23). These methodological flaws do not apply to our study, and our results are consistent with the above-mentioned trend.
Jeon et al. proposed a possible role of altered gut microbiota in the pathogenesis of insulin resistance and T2DM (2). Concentrations of circulating lipopolysaccharides (a component of the bacterial cell wall) have been reported to be higher in obese patients with T2DM than in thin non-diabetic individuals and to correlate with insulin resistance severity (24). Serum lipopolysaccharides originate from the gastrointestinal tract, and their levels increase after eating a meal rich in lipids. H. pylori gastritis, through mucosal activation of innate immunity and upregulation of IL-1β, has also been suggested to play a role in the pathogenesis of IR (25). H. pylori gastritis and its effects on ghrelin may also affect appetite and insulin sensitivity (26).
It seems that H. pylori eradication treatment may help lower IR in T2DM patients. Nevertheless, there is no agreement among researchers in this regard. Gen et al. showed that H. pylori eradication reduced the HOMA-IR in dyspeptic non-diabetic patients (13). After one year of follow-up, however, Park et al. showed that HOMA-IR was not significantly different in patients receiving appropriate medication for the eradication of H. pylori compared to the control group (27). In our study, medication type and short-term or long-term glycemic control did not differ between the HP(+) and HP(-) groups, in agreement with the findings of other studies (27). Yet, we cannot exclude the possibility that drug doses were higher in the HP(+) group.
According to previous studies, HP(+) non-diabetic individuals have a higher HbA1c level than HP(-) ones (28). The eradication of H. pylori, however, does not change the HbA1c level in T2DM patients, which was predictable since these patients receive appropriate medications (29). Our findings are in agreement with these results.
In our study, most of the known risk factors for T2DM were the same between the HP(+) and HP(-) groups, so it seems that systematic bias was minimized. On the other hand, we are aware of the limitations of the epidemiological indices that we used for the detection of H. pylori infection and the calculation of the degree of IR. Consequently, although our study shows an association between H. pylori infection and HOMA-IR in diabetic patients, we suggest that better indices be used for the detection of H. pylori infection and the calculation of IR in future studies. We also suggest that the association between the dose of T2DM medications and IR be evaluated in future studies.
|
v3-fos-license
|
2024-02-08T16:15:22.603Z
|
2024-02-01T00:00:00.000
|
267538408
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1999-4915/16/2/258/pdf?version=1707198515",
"pdf_hash": "967b4cd520ab049063160dc68e5a153176a49e12",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2717",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "17cd0ef6198610d04a4cf4c367b4ef2f5ba3c2d4",
"year": 2024
}
|
pes2o/s2orc
|
Development of Colloidal Gold-Based Immunochromatographic Strips for Rapid Detection and Surveillance of Japanese Encephalitis Virus in Dogs across Shanghai, China
Japanese encephalitis virus (JEV) causes acute encephalitis in humans and is of major public health concern in most Asian regions. Dogs are suitable sentinels for assessing the risk of JEV infection in humans. A neutralization test (NT) or an enzyme-linked immunosorbent assay (ELISA) is used for the serological detection of JEV in dogs; however, these tests have several limitations, and, thus, a more convenient and reliable alternative test is needed. In this study, a colloidal gold immunochromatographic strip (ICS), using a purified recombinant EDIII protein, was established for the serological survey of JEV infection in dogs. The results show that the ICSs could specifically detect JEV antibodies within 10 min without cross-reactions with antibodies against other canine viruses. The test strips could detect anti-JEV in serum with dilution up to 640 times, showing high sensitivity. The coincidence rate with the NT test was higher than 96.6%. Among 586 serum samples from dogs in Shanghai examined using the ICS test, 179 (29.98%) were found to be positive for JEV antibodies, and the high seropositivity of JEV in dogs in China was significantly correlated with the season and living environment. In summary, we developed an accurate and economical ICS for the rapid detection of anti-JEV in dog serum samples with great potential for the surveillance of JEV in dogs.
Introduction
JEV infection leads to neurological disease, and it is one of the leading viral encephalitides in the world [1]. According to World Health Organization (WHO) reports, about 24 countries in Asia and Western Pacific regions have been exposed to JEV, where it accounts for ~35,000 to 50,000 cases and 10,000 to 15,000 deaths each year [1]. However, the exact number of JEV cases probably remains under-reported [2]. The first JEV epidemics were reported in Japan in the nineteenth century [3]. JEV infections occur across a large range of Asian countries, with outbreaks occurring in Japan, China, Taiwan, Korea, the Philippines, and India [4].
JEV harbors a positive-sense RNA genome belonging to the family Flaviviridae. The JEV genome is approximately 11 kb in length and is proteolytically processed into three structural (Cap, prM, and E) and seven non-structural (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) proteins by a complex combination of host and viral proteases [5][6][7][8]. Phylogenetically, JEV is classified into a single serotype with five genetically different genotypes (GI, GII, GIII, GIV, and GV). JEV GIII had been the most dominant strain, with several outbreaks in the past. However, recent data show the emergence of the GI strain as the most common JEV genotype [9]. The JEV zoonosis life cycle contains both invertebrates (mosquitoes) and vertebrates (wild birds and pigs) [10,11]. In addition to mosquitoes as a vector, pigs and ardeid birds play the role of amplifying/reservoir hosts [12]. Recent theoretical models of vector-borne pathogen transmission show that the pathogen transmission rate mainly depends on the proportion of vector blood meals taken from competent hosts versus dead-end hosts [13]. The E protein (53-55 kDa) is a typical membrane glycoprotein, and it is responsible for a number of important processes, such as viral attachment, fusion, and virulence [13]. The ectodomain of the E protein can be separated into three structural domains: E domains I (EDI) to III (EDIII). EDIII is also involved in the binding to host receptors and contains specific epitopes that elicit neutralizing antibodies [13]. Thus, the EDIII protein could be employed as a candidate antigen for a diagnostic or subunit vaccine of JEV.
Previously, several serological surveys have been conducted on pig farms and wild boars, which tend to show high seropositivity in different regions of the world, including China [14][15][16][17][18]. As these animals live apart from human populations, serosurveys of pigs and wild animals may not indicate the prevalence of JEV in urban/residential areas. However, additional monitoring of the risk of JEV infection in humans in JEV-endemic areas can be carried out by examining seroprevalence in companion animals. Previous experimental data demonstrate that, after JEV challenge, dogs do not develop any clinical signs or viremia, but JEV seroprevalence in dog populations, as sentinels, may be valuable in evaluating the JEV risk to humans in urban/residential areas [19,20]. All over China, people keep dogs as companion pets and to guard their property. Dogs live closest to human dwellings, and they could be exposed to arboviruses to the same extent as their owners.
Serological tests, such as the virus neutralization test (VNT), hemagglutination inhibition (HI) test, and enzyme-linked immunosorbent assay (ELISA), have been performed to detect JEV-specific antibodies in serum [21][22][23][24]. The HI test requires a large volume of serum and fresh erythrocytes, and VNT requires a special facility (e.g., biosafety level 2 or 3) and a high level of technical skill. However, specific ELISA tests have been developed and evaluated for serological surveys among humans, pigs, bats, and dogs [25][26][27][28][29]. Immunofluorescent assays (IFAs) have been developed for the detection of antibodies against JEV, and they have been effective for the diagnosis of different flaviviruses, such as Yellow fever virus and West Nile virus [30][31][32]. However, these tests have lengthy procedures, with the requirements of expensive reagents and skilled persons.
The ICS was developed for the diagnosis of contagious human diseases and has been used for the last three decades, and it has recently been introduced to veterinary fields because it is easy to use, it has a short running time (within 15-20 min), and the results can be seen with the naked eye. For example, the technique is now used to detect antigens or antibodies of animal viruses, such as avian influenza virus [33], porcine reproductive and respiratory syndrome virus [34], porcine circovirus-2 [35], and JEV [36].
To improve the JEV serosurveillance in dogs that share a living space with humans, we developed immunochromatographic strips (ICSs) based on domain III (EDIII) of the JEV envelope protein, and we successfully applied them for the surveillance of JEV antibodies in dogs in China to assess the risk of human infection with JEV. This may provide technical support for controlling the spread and prevalence of JEV.
Virus and Serum Samples
The JEV SA14-14-2 strain (GenBank accession no. AF315119) was propagated on BHK-21 cells, and a 50% tissue culture infective dose (TCID50) was determined for VN [37]. The JEV SA14-14-2 strain was also used as a template for the cloning and expression of the JEV recombinant EDIII protein in competent Escherichia coli. A total of 586 serum samples were collected from numerous pet immunization centers, hospitals, farms, and abandoned dog shelters across various districts in Shanghai in 2019-2020 for the detection of JEV antibodies.
The following were verified using VN and provided by the China Animal Health and Epidemiology Center (Shanghai Branch): serum samples positive for Japanese encephalitis virus, canine adenovirus (CAdV), canine coronavirus (CCV), canine distemper virus (CDV), canine Leptospira virus (CLV), canine parainfluenza virus (CPIV), canine parvovirus (CPV), and canine rabies virus (CRV) from experimentally infected dogs; anti-E monoclonal antibodies; and negative serum samples.
Cloning, Expression, and Purification of Recombinant JEV EDIII Protein
The nucleotide sequence of the EDIII (UniProtKB: P27395, D586-T696, a partial sequence of the JEV polyprotein) of the JEV SA14-14-2 strain was amplified via a polymerase chain reaction (PCR) with specially designed oligo primers (forward: 5′-CTAGGATCCGACAAACTGGCTCTGAA-3′; reverse: 5′-TCTCTCGAGTTACGTGCTTCCAGCTTTG-3′). The desired gene fragment was digested with the restriction endonucleases BamH I and Hind III (Takara, Dalian, China), and it was ligated into the pET-28a vector (Takara, Dalian, China). Subsequently, the recombinant pET28a-EDIII plasmid was transformed into Escherichia coli BL21-competent cells (DE3), and the resultant transformants were selected on Luria-Bertani (LB) agar plates supplemented with 50 µg/mL of kanamycin [36]. Single colonies were picked from an LB agar plate and further amplified in 5 mL of LB broth. Finally, positive clones harboring the correct insert were confirmed via PCR and sequencing. The confirmed positive clones were allowed to grow further in a 250 mL LB medium containing kanamycin (50 µg/mL) [36]. Protein expression was induced using isopropyl β-D-1-thiogalactopyranoside at a final concentration of 1.5 mM. The expressed protein was purified on a Ni column using a His-Bind purification kit (BioRad, Hercules, CA, USA), according to the manufacturer's instructions. The protein expression was confirmed with SDS-PAGE and Western blot, as previously described, using anti-E monoclonal antibodies [38,39].
Preparation of the Colloidal Gold-Labeled Suspensions
Suitable antibody concentrations for the test lines, control lines, and conjugation with the colloidal gold reagent were determined as reported in previous studies [36,40,41]. Colloidal gold particles were prepared with an improved reduction of chloroauric acid by sodium citrate, as described by Wang et al. [42]. The colloidal gold solution (pH 8.6) was mixed with purified recombinant EDIII proteins under electromagnetic stirring and stirred rapidly for 30 min. Bovine serum albumin (BSA) was added at a concentration of 1% to inhibit the excess reactivity of the gold colloid. The blend was centrifuged at 15,000 rpm for 1 h at 4 °C. After discarding the supernatant, the obtained conjugate pellet was resuspended in 0.2 M TBST (pH 8.6) and stored at 4 °C.
Preparation of the Immunochromatographic Strip
The composition of the immunochromatographic strip (ICS) is shown in Figure 1, and it was prepared as follows: The ICS was divided into four compartments, i.e., an absorbent pad, a nitrocellulose membrane, a conjugate pad, and a sample pad. Staphylococcal protein A (SPA) (1.0 mg/mL; Sigma, St. Louis, MO, USA) and anti-E monoclonal antibodies (0.1 mg/mL) were blotted on the nitrocellulose membrane and incubated for the development of a test line and a control line, respectively, using an XYZ3050 dispense workstation, and the NC membrane was then dried for 1 h at 37 °C before being stored at 4 °C. The capture test and control band were situated 0.5 cm apart in the center of the membrane. The conjugate pad, composed of a glass fiber membrane, was treated with a recombinant EDIII protein-colloidal gold conjugate solution and then dried under a vacuum. All components of this ICS kit were adhered to a backing plate (300 mm × 25 mm, SM31-25, Shanghai Kinbio Biotechnology Co., LTD, Shanghai, China) in proper order, as illustrated in Figure 1A. The plate was then sliced into 4 mm wide strips using an automatic cutter. Each strip was assembled on a plastic cassette (A-1, Shanghai Joey Biotechnology Co., LTD, Shanghai, China) and stored at a broad temperature range (4-30 °C) before use.
Working Principle of Immunochromatographic Strip (ICS)
In this ICS kit, dog serum samples are diluted 100-fold with a normal saline solution and added to the sample pad. A test line will only appear if the serum sample contains JEV antibodies. When the serum samples reach the conjugate pad, the dog JEV antibodies interact with the colloidal gold JEV recombinant EDIII protein to form a dog JEV antibody EDIII-colloidal gold complex. The complex travels through the NC membrane via capillary action. When it passes through the test line, the complex reacts with SPA, resulting in a dark red band, and the excess of the antigen-antibody complex travels to the control line, where anti-dog JEV antibodies interact with the recombinant EDIII protein complex and form another red band; in this case, the results are judged as positive (Figure 1B). In contrast, in samples lacking JEV antibodies, the free EDIII-colloidal gold conjugate that cannot bind to the samples will travel to the control line. At the control line, dog anti-JEV IgG will react with the SPA, and a dark band will appear. When there is only one red band on the control line (position C), the results are considered negative; the absence of two bands (at positions C and T) suggests an invalid result. Therefore, after the addition of the serum sample, two bands will appear for positive samples within 10 min (one on the test line (position T) and one on the control line (position C)), whereas only one band will appear on the control line (position C) for negative samples (Figure 1B).
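The read-out rules above reduce to a small decision function. A minimal sketch in Python (the function name and band flags are illustrative, not part of the kit):

```python
def interpret_strip(test_band: bool, control_band: bool) -> str:
    """Interpret an ICS read-out from the presence of the T and C bands.

    A valid run always shows the control (C) band; the test (T) band
    then distinguishes positive from negative samples.
    """
    if not control_band:
        # Missing control band -> the run cannot be trusted.
        return "invalid"
    return "positive" if test_band else "negative"

print(interpret_strip(test_band=True, control_band=True))  # positive
```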
Specificity, Sensitivity, and Stability of the ICS
The specificity of the developed ICS was evaluated with serum samples positive for Japanese encephalitis virus, canine adenovirus, canine coronavirus, canine distemper virus, canine Leptospira virus, canine parainfluenza virus, canine parvovirus, and canine rabies virus. Anti-JEV-positive serum was used as a positive control, and negative serum was applied as a negative control.
To evaluate the sensitivity of the ICS kit, we serially diluted positive anti-JEV serum in PBS, and 50 µL of each dilution was used for the ICS test. The sensitivity of our ICS kit was determined by finding the highest dilution that still produced a positive result.
Seroprevalence of JEV among Dogs in Shanghai, China
A total of 586 serum samples were examined for antibodies against JEV using the developed ICSs and NT, as described previously [20,24,28,43,44]. The coincidence rate of the ICS test was compared with that of the NT [20,24,28,43,44].
Statistical Analysis
All data were analyzed using the Prism 5 software (GraphPad Software, La Jolla, CA, USA) with a two-tailed Student's t-test. p < 0.05 was considered statistically significant.
Expression and Purification of the Recombinant EDIII Protein
The domain III peptides from various flaviviruses, including JEV, are useful antigens for serological diagnoses [45]. The EDIII protein sequences of JEV were successfully cloned into the pET-28a vector, and, after IPTG induction, the JEV EDIII protein was expressed in competent E. coli BL21 (DE3) cells. The expressed protein was purified using Ni columns. The JEV-EDIII fusion protein was found mainly as inclusion bodies. The SDS-PAGE (Figure 2A: Lanes 1, 2, 3, and 4) and Western blot analyses (Figure 2B: Lanes 5 and 6) demonstrated that the expressed JEV EDIII protein was highly purified under native conditions, with a molecular weight of 14.5 kDa, and it reacted strongly with anti-E monoclonal antibodies (Figure 2, Lane 6).
Specificity Evaluation of ICS
All serum samples positive for different canine viruses were used to evaluate the specificity of the ICS. Positive results were seen for dog sera containing JEV-positive antibodies, while the antisera of all other tested viruses produced negative results, as shown in Figure 3A. These data show that the ICSs are highly specific for JEV antisera and do not cross-react with other pathogenic canine viruses.
Sensitivity Evaluation of ICS
To determine the sensitivity of our developed ICS test, we prepared two-fold serial dilutions of the JEV-antibody-positive dog serum, i.e., 1:10, 1:20, 1:40, 1:80, 1:160, 1:320, 1:640, and 1:1280, in PBS (Figure 3B). The negative dog serum was diluted in a similar fashion for use as a negative control. No red line was observed (at position T) for the negative dog serum samples. A clear solid red line was observed for the positive serum samples (at position T) on the strips until the 1:640 dilution, indicating that the minimum detection limit is 1:640 (Figure 3B).
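The endpoint reported above is simply the highest dilution factor that still gives a visible T band. A minimal sketch of that calculation (the dilution series and read-outs mirror the reported results; the function name is illustrative):

```python
def endpoint_titer(dilutions, readouts):
    """Highest dilution factor still scored positive, or None.

    `dilutions` are fold-factors (10 means 1:10); `readouts` are the
    corresponding T-band observations.
    """
    positive = [d for d, seen in zip(dilutions, readouts) if seen]
    return max(positive) if positive else None

dilutions = [10, 20, 40, 80, 160, 320, 640, 1280]
# Band visible through 1:640, absent at 1:1280, as reported.
readouts = [True] * 7 + [False]
print(endpoint_titer(dilutions, readouts))  # 640
```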
Stability Evaluation of ICS
The ICSs were evaluated for their stability to identify JEV antibodies after six months of storage at room temperature and in a refrigerator (4 °C). The strips still had the same sensitivity and specificity as the freshly produced strips after six months of storage, indicating that the ICSs have good stability.
Surveillance of JEV Antibodies in Dogs in Shanghai
A total of 586 dog serum samples were collected from numerous pet immunization centers, hospitals, farms, and abandoned dog shelters across various districts of Shanghai. We tested all of these serum samples by using our developed ICSs and NT. Out of the 586 samples tested, 179 (29.98%) were found to be positive for JEV antibodies. The coincidence rate of detection with these two methods was 96.6% (Table 1). This indicates that about 30% of dogs were seroconverted to JEV during the study period, which might be a public health concern.
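The coincidence rate quoted above is the fraction of samples on which the two assays agree. A minimal sketch of the arithmetic (the per-cell agreement counts below are hypothetical, chosen only to be consistent with the reported totals; they are not taken from Table 1):

```python
def coincidence_rate(agreements: int, total: int) -> float:
    """Percentage of samples on which two assays give the same call."""
    return round(100 * agreements / total, 1)

# Hypothetical agreement split consistent with the reported totals
# (586 samples, 96.6% coincidence); NOT the actual Table 1 cells.
both_positive, both_negative = 170, 396
print(coincidence_rate(both_positive + both_negative, 586))  # 96.6
```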
Relationship between Dog JEV-Antibody-Positive Rate and Season in Shanghai
We further examined the seroprevalence of JEV during different months and seasons of the year in 2019-2020. From June to September, the environmental conditions are conducive to mosquito growth in Shanghai, and this is also the epidemic period of JE [46][47][48]. We observed that the average JEV-antibody-positive ratio was 16.4% during spring (March to May), which was substantially below the average positive rate of JEV antibodies in the epidemic late autumn (46.3%) and winter (35.7%) seasons (p < 0.05) (Table 2). These data show that the rate of JEV antibody positivity in dogs in China has a certain seasonality. Therefore, JEV infections in dogs have some seasonal tendencies in Shanghai. Values with the same superscript letter (a) showed no statistically significant difference (p > 0.05), whereas those with a different superscript letter (b) showed a statistically significant difference from the others (p < 0.05).
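As a rough cross-check of the seasonal contrast above, a two-proportion z-test can be sketched; this is a common alternative to the two-tailed t-test run in Prism, not the paper's own analysis, and the per-season sample sizes below are hypothetical, chosen only to reproduce rates close to the reported 16.4% and 46.3%:

```python
import math

def two_proportion_z(pos1, n1, pos2, n2):
    """Two-sided two-proportion z-test using a pooled standard error.

    Returns (z, p_value); the p-value comes from the normal CDF,
    evaluated via math.erf.
    """
    p1, p2 = pos1 / n1, pos2 / n2
    pooled = (pos1 + pos2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical per-season counts: ~16.4% (spring) vs ~46.3% (autumn).
z, p = two_proportion_z(20, 122, 56, 121)
print(p < 0.05)  # True
```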
Relationship between Dog JEV-Antibody-Positive Rate and Living Environment in Shanghai
We further categorized the seroprevalence of JEV antibodies according to living environment. To this end, dogs were divided into three categories, i.e., domestic dogs (pet clinics and immunization centers), breeding dogs (dog farms), and stray dogs (shelters, etc.). The highest prevalence of JEV antibodies was found in stray dogs (49.5%), followed by breeding dogs (43.3%) and domestic dogs (20.6%) (p > 0.05). Domestic dogs had the lowest JEV antibody prevalence, possibly due to their indoor feeding behaviors and fewer outdoor activities resulting in less exposure to bites from JEV-carrying mosquitos.
Discussion
Japanese encephalitis virus (JEV) causes encephalitis and reproductive disorder in humans and pigs, respectively, having a serious impact on public health and the pig industry [1,49]. Previously, JE was considered to be mainly limited to rural areas because of the presence of rice fields, a suitable habitat for Culex mosquitoes, which play the main role as a vector [50,51], and because of the presence of the pig/bird population as a reservoir/amplification host for JEV [12,52].
However, recent data show that JEV has been isolated from local mosquitoes collected from urban areas, as well as from urban vertebrate hosts, including humans, as evidenced by seroconversion [53][54][55]. Past data on JEV seroconversion show that dogs can be infected with JEV and might play a role in JEV transmission [20,56]. Furthermore, dogs live in close proximity to humans, are not vaccinated against JEV, do not show symptoms when infected, and maintain virus neutralization titers for long periods after JEV infection. All of these factors indicate that dogs are good sentinels for assessing the risk of human infection with JEV [57,58]. Therefore, we developed a convenient, rapid, sensitive, and specific ICS kit for JEV seroconversion surveillance in dogs.
The ICS is based on the JEV recombinant EDIII protein, which conjugates with colloidal gold to produce an immunogold complex, and it is capable of more rapid and sensitive diagnosis and monitoring of JEV antibodies in dog sera. JEV envelope protein domain III (EDIII) harbors the antigenic determinant that is responsible for eliciting neutralizing antibodies [59]. In this study, the recombinant EDIII protein conjugated with colloidal gold could bind to JEV antibodies in dog sera, and the bound antibodies were captured by immobilized SPA to form a red band, indicating the presence of JEV antibodies in the samples.
Previous studies have reported that various serological methods, such as VN, ELISA, and HI, can be used for JEV antibody surveillance in dogs [17,29,60]. However, these methods are expensive and time-consuming and require skilled persons, special equipment, well-developed labs, and a live virus. Our developed ICS kit is easy to perform, and the results can be obtained within 15 min (Figure 3). Furthermore, we tested 586 dog serum samples collected under field conditions and showed that the developed ICS kit has high specificity and sensitivity. These results are comparable with those of previously developed ICS kits used for JEV detection in pigs or for the detection of other canine virus antibodies [36,61,62].
The dog is one of the most important companion animals for human beings. With the development of urbanization, people often keep dogs as companions, and contact between dogs and humans is increasing. A total of 586 dog serum samples were collected from numerous pet immunization centers, hospitals, farms, and abandoned dog shelters of different districts of Shanghai and tested with the ICS method, and 179 samples were positive for JEV antibodies. A 29.98% JEV-antibody-positive rate was found, which indicates a high JEV infection rate in Shanghai. A study conducted in Japan showed that 25% of dogs have high JE virus-neutralizing antibodies, with relatively high seropositivity detected in the Shikoku (61%) and Kyushu (47%) districts of western Japan [19]. In another study conducted in a Cambodian village, a high JEV seroprevalence of 35% was detected in dogs [15]. Humans and pet dogs live in the same area and do not receive JEV vaccines; therefore, the JEV-antibody-positive sera from dogs suggest that mosquitoes carry JEV in human environments, indicating that further virome surveillance of local mosquitoes is required [63].
Shanghai is located near the sea and has a subtropical climate, high temperatures (20-31 °C), and high rainfall and humidity, which is suitable for mosquito breeding [46,47]. The abundance of mosquito vectors means that JEV can easily spread during the mosquito breeding season (summer, June to September) [47]. In the present investigation, we also found an association between the JE epidemic season (August to October) and the positive rate of JEV antibodies in dogs. After the epidemic season, the positive rate of JEV antibodies significantly increased from summer (21.5%) to autumn (46.3%) as compared to the pre-epidemic season (the average positive rate was 16.4% during March and April). This indicates that JEV infection in dogs is seasonal in China.
In conclusion, our data indicate that the prevalence of JEV is relatively high in Shanghai, China. The ICS was developed for the detection of JEV antibodies in dogs, and the assay was applied in a serological survey on JEV infection to assess the risk of JEV infection to humans.
Figure 1.
Figure 1. Schematic diagram of the ICS. (A) Illustration of strip components. (B) Interpretation of the results using ICSs. Positive samples produce two red bands on the membrane strips; a negative sample shows only one band on the control line. If there is no colored band at all or there is one colored band only on the test line, the test is invalid. C, control line; T, test line.
Figure 2.
Figure 2. Expression and purification of the recombinant EDIII protein. (A) SDS-PAGE analysis of the EDIII protein expressed in Escherichia coli. The EDIII protein was 15.4 kDa in SDS-PAGE. The expression of pET-28a-EDIII in E. coli was induced with isopropyl β-D-1-thiogalactopyranoside (IPTG). Inclusion bodies were collected 16 h after induction and subjected to supersonic schizolysis. The recombinant EDIII protein was purified via affinity chromatography on a Ni+ spin column. Lane 1: E. coli containing pET-28a-EDIII induced with IPTG; Lane 2: inclusion bodies of IPTG-induced E. coli containing pET-28a-EDIII; Lane 3: supernatant from IPTG-induced E. coli containing pET-28a-EDIII; Lane 4: purified E. coli containing pET-28a-EDIII; M, protein marker. (B) Western blot analysis of the EDIII protein using anti-E monoclonal antibodies. Lane 5: uninduced E. coli containing pET-28a-EDIII; Lane 6: E. coli containing pET-28a-EDIII induced with IPTG. These protein samples were resolved electrophoretically on 12% polyacrylamide gel and transferred to a 0.2 µm polyvinylidene difluoride membrane. Membranes were treated with an anti-E monoclonal antibody followed by an HRP-conjugated goat anti-mouse IgG antibody. The reaction was visualized with a Western blot kit.
Figure 3.
Figure 3. Specificity and sensitivity testing of the ICS: (A) Specificity of the ICS. Sera positive for different canine viruses were used to evaluate the specificity of the ICS. (B) Sensitivity of the ICS. JEV-positive serum was diluted from 1:10 to 1:1280 to determine the sensitivity of the ICS.
Table 1.
Comparison of the results between ICS and virus neutralization among dogs in Shanghai, China.
Table 2.
Seroprevalence of JEV determined using ICS in dogs in Shanghai, China.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2010-02-06T00:00:00.000
|
33175742
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://journals.iucr.org/e/issues/2010/03/00/bt5181/bt5181.pdf",
"pdf_hash": "2ef0ae4c51c5fb57dbec0942e2dfa710f14ecef2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2718",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "2ef0ae4c51c5fb57dbec0942e2dfa710f14ecef2",
"year": 2010
}
|
pes2o/s2orc
|
(E)-2-Acetyl-4-[(3-methylphenyl)diazenyl]phenol: an X-ray and DFT study
The title compound, C15H14N2O2, an azo dye, displays a trans configuration with respect to the N=N bridge. The dihedral angle between the aromatic rings is 0.18 (14)°. There is a strong intramolecular O—H⋯O hydrogen bond. Geometrical parameters, determined using X-ray diffraction techniques, are compared with those calculated by density functional theory (DFT), using hybrid exchange–correlation functional, B3LYP and semi-empirical (PM3) methods.
Comment
Azo compounds are very important in the field of dyes, pigments and advanced materials (Klaus, 2003). It has been known for many years that the azo compounds are the most widely used class of dyes, due to their versatile applications in various fields such as the dyeing of textile fibers, the coloring of different materials, colored plastics and polymers, biological-medical studies and advanced applications in organic synthesis (Bahatti & Seshadri, 2004;Catino & Farris, 1985;Fadda et al., 1994;Taniike et al., 1996;Zollinger, 2003).
In the title compound, C15H14N2O2, the two aromatic groups attached to the azo bridge adopt the (E) configuration.
The molecule is planar and the dihedral angle between the two aromatic rings is 0.18 (14)°. All the bond lengths are in agreement with those reported for other azo compounds (El-Ghamry et al., 2008). The title molecule (Fig. 1) has a strong intramolecular hydrogen bond between the hydroxyl group and the carbonyl O atom.
Density-functional theory (DFT) (Schmidt & Polik, 2007) and semi-empirical (PM3) calculations with full geometry optimization were performed by means of the GAUSSIAN 03W package (Frisch et al., 2004). Selected bond lengths and angles obtained from the semi-empirical and DFT/B3LYP (Becke, 1988; Becke, 1993; Lee et al., 1988) calculations are given in Table 2. As can be seen in Table 2, the bond lengths and angles obtained by the DFT method agree better with the X-ray values than those obtained by the PM3 method.
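The agreement between computed and experimental geometries can be quantified with a mean absolute deviation (MAD). A minimal Python sketch of that comparison; the bond lengths below are purely illustrative placeholders, not the values of Table 2:

```python
# Hypothetical bond lengths (Angstrom): X-ray reference vs computed geometries.
# These numbers are illustrative only; the real values are in Table 2.
xray = {"N1=N2": 1.253, "C7-N1": 1.431, "O1-H1": 0.98}
dft  = {"N1=N2": 1.261, "C7-N1": 1.419, "O1-H1": 0.99}
pm3  = {"N1=N2": 1.230, "C7-N1": 1.452, "O1-H1": 0.95}

def mad(calc, ref):
    """Mean absolute deviation of a calculated geometry from the reference."""
    return sum(abs(calc[b] - ref[b]) for b in ref) / len(ref)

print(f"DFT/B3LYP MAD: {mad(dft, xray):.3f} A")
print(f"PM3       MAD: {mad(pm3, xray):.3f} A")
```

With such a metric, the statement that DFT values are "better" than PM3 values reduces to comparing two numbers.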
Experimental
A mixture of 3-methylaniline (0.83 g, 7.8 mmol), water (20 ml) and concentrated hydrochloric acid (1.97 ml, 23.4 mmol) was stirred until a clear solution was obtained. This solution was cooled to 0-5 °C and a solution of sodium nitrite (0.75 g, 7.8 mmol) in water was added dropwise while the temperature was maintained below 5 °C. The resulting mixture was stirred for 30 min in an ice bath. A 2-hydroxyacetophenone (1.067 g, 7.8 mmol) solution (pH 9) was gradually added to a cooled solution of 3-methylbenzenediazonium chloride, prepared as described above, and the resulting mixture was stirred at 0-5 °C for 2 h in an ice bath. The product was recrystallized from ethyl alcohol to obtain solid (E)-2-acetyl-4-[(3-methylphenyl)diazenyl]phenol.
Refinement
All H atoms (except for H1) were placed in calculated positions and constrained to ride on their parent atoms, with C-H = 0.93-0.97 Å, O-H = 0.98 Å and U iso (H) = 1.2U eq (C) or 1.5U eq (C). The hydroxyl H atom was refined isotropically. Fig. 1. A view of the title compound with the atomic numbering scheme. Displacement ellipsoids are drawn at the 30% probability level. The dashed line indicates the intramolecular hydrogen bond.
Special details
Experimental. 330 frames, detector distance = 80 mm.

Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 , conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > σ(F 2 ) is used only for calculating Rfactors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
|
v3-fos-license
|
2019-12-19T09:22:02.789Z
|
2019-12-14T00:00:00.000
|
212501316
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ijaers.com/uploads/issue_files/29IJAERS-1220195-Environmental.pdf",
"pdf_hash": "c5e0196cc8e93a15ce3fc433d76a13629d2e4ef5",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2720",
"s2fieldsofstudy": [
"Education"
],
"sha1": "338417e782f0a251ab49de0c55a43f0aeb897dc3",
"year": 2019
}
|
pes2o/s2orc
|
Environmental Impact due to incorrect waste disposal in River Miriti-AM
The present work is an exploratory and descriptive study that seeks to expand knowledge about the degradation of the Miriti River, located in the municipality of Manacapuru-AM, and to present the main characteristics of this degradation. The research was conducted in July and August 2019 and relied mainly on the observational method, which highlights the characteristics of the observed facts. The work is classified as field research with unsystematic observation, because it did not follow a guiding script. Data collection took place on Sundays, the day when the flow of people at the Miriti resort is highest, and sought to verify people's care regarding the waste produced while using the resort's recreational space. The results indicate the presence mainly of plastic and metallic waste, such as beverage bottles and cans. Informal conversations with people on site indicate that many agree that users of the site lack awareness about the disposal of solid waste. Keywords— Solid Waste; Miriti; Degradation.
INTRODUCTION
One of the most common practices today is the disposal of solid waste in rivers, which creates a sanitary problem in urban areas, as it has reached numerous ecosystems such as rivers, lakes and oceans. According to Botelho (2011), people's search for leisure spaces and other activities leads them to straighten and channelize rivers, thus modifying their ecosystems without considering fluvial geomorphology and hydrology, and thereby causing various types of environmental degradation.
According to Cempre (2010), the responsibility to take care of the environment and to give the best disposal to the waste produced by human beings is an obligation of public authorities, companies and, of course, the population; that is, it belongs to everyone. This is also made very clear in the new solid waste law (Law No. 12,305 of August 2, 2010), which clarifies that everyone should give the best destination to the waste produced and consumed.
Also in this context of environmental damage and environmental preservation, just as governments play a key role, other actors are also relevant, such as educators, Non-Governmental Organizations (NGOs) and the media. In this sense, Silva (2010) describes that environmental education is very important in solving this problem, since the actors mentioned can encourage people to know and do their part, such as avoiding the waste of water and the improper disposal of solid waste. In addition, a more educated society is able to consume products from environmentally friendly companies and to demand more of its representatives in complying with environmental legislation. Santos (2015) reports that the disposal of solid waste in rivers has caused major environmental damage in these ecosystems and is a permanent concern of all involved. Water resources have been greatly affected by pollution, which is worrying given the importance of this natural resource for all forms of social organization to carry out their activities. It is worth remembering that throughout the history of humanity, rivers became the backbone of cities, which organized themselves near the riverbanks as a way to promote human development through the benefits of maintaining life. Rivers are able to structure the urban fabric and become axes of development in the design of cities, but their importance is forgotten when people, through their actions, begin to degrade this resource.
We see in Almeida's words (2010) that, if we take into account the history of the occupation, modification and degradation of rivers, especially those located near cities, these processes are relatively recent: the history of human beings on land is at least two million years old, while the processes cited increased on a larger scale from the First Industrial Revolution, i.e. 150 to 200 years ago. The river that is the object of this study is of great importance to the city of Manacapuru, both environmentally and touristically, aspects that justify the study of the environmental damage the river has been suffering over the years. The river shows clear signs of environmental problems, caused of course by the large amount of solid waste discarded on its banks, which produces bad smell, dirt and various other types of damage. All the aspects mentioned justify this study, because its data, research and examples of combating environmental damage will serve as a socio-environmental parameter for all those who care about future generations.
In view of the problem, the objective of this work was to report and analyze the physical processes (environmental problems or damage) resulting from the inappropriate disposal of solid waste in the Miriti River, located in the municipality of Manacapuru/AM, in the area of the Miriti Resort. The disposal of solid waste in the river is directly connected to the environmental damage the river has been suffering over time, as it is this waste that causes problems such as water pollution, bad smell, siltation and various other problems that should be avoided.
II. MATERIAL AND METHODS

The Research
Regarding its objectives, this work is exploratory research, defined by Zanella (2013) as research that "aims to expand knowledge about a given phenomenon".
Thus, the work that is presented now seeks to expand the knowledge of the characteristics that define river degradation and the recognition of the reasons that lead to this degradation.
The work is also descriptive research, explained by Zanella (2013) as "the one centered on the concern of identifying determinant or contributing factors in the triggering of phenomena". In seeking to identify the reasons that cause the degradation of the Miriti River, the work presents the damage stemming from them and seeks to describe the origin of these causes.
As for the approach, the research is classified as qualitative, defined by Zanella (2013) as "the one that is based mainly on qualitative analyses, [...], characterized in principle by the non-use of statistical instruments in the analysis of data." Thus, the identification of the characteristics of the environmental degradation of the Miriti River, in Manacapuru-AM, was based on the observation of the impactful actions and processes.
Collection Area
The Miriti River is located about 7 km from the urban area of the city of Manacapuru/AM (Figure 1), which has a population of 96,236 inhabitants (IBGE, 2018); the site covers approximately three hectares, including the beach and river area. The Miriti River is also one of the main sights of the municipality, welcoming many visitors during the week. For the study, on-site visits were performed every 15 days during July and August 2019. The present study is characterized as qualitative research, since it aims to observe, record and correlate facts or phenomena, trying to describe, classify and interpret them, in order to study and observe the damage caused by the disposal of solid waste in the Miriti River, located in the municipality of Manacapuru-AM. All the content obtained was analyzed, that is, the reports of the observations, the analysis of documents and the other information available, such as reports and on-site observations, in addition to seeking grounding in theorists who treat the environmental cause from different perspectives, using texts of their works in the theoretical foundation.
Data collection consisted of on-site visits aimed at observational verification, described by Zanella (2013) as highlighting, within a set of objects, people or animals, their main characteristics.
As for the observation made to elaborate the present work, it can be classified as field work, given its particularities, and as unsystematic observation, because it does not follow a guiding script or guide, but focuses on the objectives and problem of the work.
Data Analysis
Data analysis consists of reading and interpreting the content of the collected documents, as well as on-site observation records. This research procedure is a tool for always renewed action due to the increasingly diverse problems that it proposes to investigate.
In this way, one can describe and interpret the contents of the entire class of documents and publications read about the subject covered. This analysis leads to systematic, qualitative or quantitative descriptions, helping to interpret messages and achieve an understanding of their meanings at a level that goes beyond a common reading.
The analysis of the collected data is fundamental so that we can know whether there are in fact serious environmental problems, as well as their causes and consequences. With the analysis of the observations, for example, we note that often, because there are not enough containers for the collection of these materials, and because of a lack of education, residents and bathers end up throwing their waste on the banks of the river, demonstrating a lack of concern for the environment. Another issue that favours this practice is the lack of on-the-spot surveillance, since many people only realize the mistake made when they receive a warning.
III. RESULTS AND DISCUSSIONS

Fresh water with characteristics suitable for consumption is of great importance for humanity, and its availability is limited to rivers, lakes and other surface sources; according to Silva (2018), if we consider only water with characteristics suitable for consumption, the available water represents 0.4% of the total.
In a study on the main characteristics of the springs used as supply sources in the municipalities along the Solimões-Amazonas River, Azevedo (2016) points out the Miriti River in Manacapuru-AM as a surface supply source of the municipality, but indicates that the river is threatened by the discharge of domestic sewage; according to the data collected for the present work, the threat also lies in the large-scale presence of solid waste, since its presence in water bodies can harm animal life.
The Miriti River is an important source of leisure and supply in the city of Manacapuru/AM, and for these reasons an analysis of the possible environmental impacts caused by the disposal of solid waste in the region was carried out. Technical visits were made on 07, 14, 21 and 28 July and on 04, 11 and 18 August 2019, with the objective of visually verifying possible impacts and their causes. It was observed that along the Miriti River there is a wide variety of solid waste discarded on the banks, especially in the Balneário region of the same name. In Table 1, we can see some of the materials found.
Glass: pieces of glass and bottles.
Metal: soft drink cans, iron.
Wood: popsicle sticks, barbecue skewers, crates and matchsticks.
Organic: food leftovers, fruits and animal feces.
Waste in general: construction debris, cigarette packs, fabrics, lighters, etc.
Lopes and Jesus (2017) report that historically societies have always developed their economic and social activities based on water resources, and the increasing diversification of human activities for economic and social development has required larger volumes of water to meet various demands. Among the many forms of use of water resources, Souza (2014) points out that there are also non-consumptive uses, namely activities related to recreation, leisure, landscape harmony and tourism.
In the case of the Miriti resort, the use is public and the area can be visited at any time; however, it is noticeable that some users do not contribute to maintaining the quality of the environment they use. It is therefore necessary that users adopt the view of Brum et al. (2013) that public spaces should be the main icons of defense of the environment, because their absence in the urban context should be compensated by the correct use of these spaces, especially when water resources are in evidence.
It is essential that people know the various types of waste or garbage and, whenever possible, dispose of them correctly, because if we do this we will certainly have a less polluted environment. Another factor that can make a difference in how people dispose of their waste is for governments to provide adequate places for waste collection, recycling and disposal. Table 1 shows how solid waste is disposed of on days of heavy movement of people at the resort.
International Journal of Advanced Engineering Research and Science (IJAERS), Vol-6, Issue-12, Dec-2019. https://dx.doi.org/10.22161/ijaers.612.29 ISSN: 2349-6495(P) | 2456-1908

According to MMA (2010), the use of packaging in commercial products and by-products is essential for the protection of inputs during distribution, storage, marketing, handling and consumption. Among the functions of packaging is ensuring safety and quality of life for the population, allowing access to different products, from food and medicines to electronics and utensils in general, in all regions of the country. The incorrect disposal of packaging is the major problem regarding the necessary consumption of these inputs.
The main criterion for knowing whether the site is clean, organized and with adequate waste disposal was on-site observation, because by observing the place we can see its physical characteristics, such as whether there is garbage thrown on the banks and on the beach, as well as the behavior of river-goers in relation to this problem. The data covered above reflect the view I had of the site at the moment of the visit and observation.
When analyzing the characteristics of environmental impacts on water bodies, it is possible to see the elements capable of identifying degradation in these places; according to Braga and Azevedo (2013), degradation is mainly due to increased population density, which, without proper planning, leads to the discharge of domestic effluents into water bodies, causing their pollution. Albuquerque (2014) comments that, although it is one of the most important waterways in the municipality of Manacapuru-AM, the river suffers each year from intense human actions that impair water quality.
On weekends, the site receives a large number of regulars, which facilitates the irregular disposal of waste on the banks of the river, among other harmful practices. As noted in Table 1, the Miriti River resort presents, on some days, waste deposited improperly, in addition to disorganization and a lack of cleaning. During the observations, it was worrying to see how little people care about disposing of waste correctly and keeping the place clean, which suggests a great lack of awareness regarding the environment and future generations.
All these on-site visits were fundamental not only for the collection of real data, but also for a better understanding of the situation and the problem addressed, among other factors. Images 1, 2 and 3 show a small picture of the situation of the river and how it suffers from pollution; the lack of an adequate place, in this case dumpsters for the disposal of waste, is only one example of the problem. In the images we can also see the solid waste deposited both on the banks and in the riverbed, causing numerous environmental damages.
In informal conversations with residents and bathers at the resort about what they think of the irregular disposal of solid waste in the Miriti River, they said they believe that what people lack is more awareness and education in dealing with garbage. Another aspect they raised is that the public authorities should participate more in raising awareness and caring for the environment, because there are still not enough adequate places to dispose of materials; an example is the lack of places for selective garbage collection and of dumpsters in and near the resort. These are just some examples of measures that could be applied to at least minimize the problem of pollution of the Miriti River, by both residents and bathers.

IV. CONCLUSION

Given the observations made in the Miriti River, especially at its bathing area, and considering the changes made out of the need to provide attractions for visitors to the municipality, it was found that the place suffers and will continue to undergo changes caused by incorrect waste disposal. Even with dumpsters present throughout the public use area, it is common to find residues on the ground and in the water, thus increasing environmental impacts such as water pollution and degradation of the area, among other aspects. This is increasingly visible in the fact that the river already presents on its banks materials derived from plastic, metals and others that can take years to decompose.
Polluting sources are diverse; however, measures can be taken, initially by the municipal public authorities in partnership with the community and other environmental workers, who can define and implement environmental education, signage and monitoring of the area with environmental guards. When developed, these measures can at least mitigate the pollution problems of the Miriti River, since only through education and actions aimed at raising people's awareness will a change of attitude be possible, given that the main cause of environmental degradation is human beings. Awareness work should cover residents of the marginal areas of the river, who are also polluting sources, and should be contemplated by government action to reduce the impacts already caused on the water body.
|
v3-fos-license
|
2018-10-13T23:26:48.915Z
|
2018-01-01T00:00:00.000
|
139729829
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ijepr.avestia.com/2018/PDF/003.pdf",
"pdf_hash": "851f9cc0bf2cc27b23d4de82c25b2e774e6bd5a0",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2721",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "851f9cc0bf2cc27b23d4de82c25b2e774e6bd5a0",
"year": 2018
}
|
pes2o/s2orc
|
A Comparison of Antimony in Natural Water with Leaching Concentrations from Polyethylene Terephthalate (PET) Bottles
Antimony (Sb) is one of the trace hazardous compounds in drinking water. We investigated Sb concentrations in the natural environment, including rivers, reservoirs, groundwater and raw water for bottled water. The natural content of Sb in northern Gyeonggi province in South Korea ranged from 0.00 to 1.64 μg/L. The average Sb content in 47 brands of bottled water on the market was 0.57 μg/L. In leaching experiments, Sb was leached from polyethylene terephthalate (PET) bottles under storage at 35, 45 and 60°C; the Sb concentration increased from 1.04 to 9.84 μg/L at 60°C after 12 weeks. UV-ray irradiation of bottled water did not significantly induce antimony leaching over 14 days.
Introduction
Antimony (Sb) is one of the trace hazardous components in drinking water. Polyethylene terephthalate (PET) is widely used as a container material for bottled water. Antimony trioxide (Sb2O3) is one of the most important catalysts widely used for solid-phase polycondensation of PET. It offers high catalytic activity, does not induce undesirable color, and has a low tendency to catalyse side reactions [1]. PET has wide acceptance for use in direct contact with food, can be recycled, and can be depolymerized to its monomer constituents [2]. In a recent study of Sb in bottled water in Europe and Canada, it was shown that the water becomes contaminated during storage because of Sb leaching from PET [3]. People tend to keep bottled water in cars for weeks or months, and the temperature in a car can reach 75℃ at an ambient temperature of 33℃ in summer [4]. It can be expected that storage at high temperature may enhance contaminant release into water from PET bottles [5]. The bottled water market is continuously growing, from 2.1 million tons in 2004 to 3.5 million tons in 2013 in South Korea [6]. Our institute is particularly concerned with the safety of bottled water for customers, because 62% of the manufacturing plants for bottled water are located in Gyeonggi province in South Korea.
The recommended standard for tap water in Korea has regulated Sb at 20 μg/L since 1998. A recommended standard for bottled water, set in 2014, requires inspection within 15 μg/L. The European Union, the World Health Organization, the United States and Japan also have drinking water standards for Sb, at 5 μg/L, 20 μg/L, 6 μg/L and 2 μg/L, respectively [7,8,9,10]. Bottles made using PET typically contain 100~300 mg/kg Sb in the plastic. In contrast to bottles, Sb is found in the natural environment in rocks, groundwater and rivers. The concentration of Sb in crustal rocks is about 0.3 mg/kg, and pristine groundwater and surface water normally range from 0.1 to 0.2 μg/L [11]. The International Agency for Research on Cancer (IARC) classified antimony as possibly carcinogenic to humans (Group 2B). It can cause nausea, vomiting and diarrhea when exposure exceeds the MCL over short periods. Long-term exposure to elevated Sb can lead to increased blood cholesterol and decreased blood sugar [12].
The objective of this study was to investigate Sb concentrations in the natural environment, including source water from rivers or reservoirs, tap water, natural springs, and raw water for bottled water. Next, brands of bottled water on the market were collected and analysed. Then, the Sb content was compared between natural water and PET bottled water.
Finally, Sb leaching experiments on PET bottled water were conducted as a function of storage duration, temperature and ultraviolet (UV) exposure, to determine the amount of Sb migrating from PET bottles into drinking water.
Sb Concentration in Natural Environment
Natural Sb contents were investigated in source water from rivers or reservoirs and tap water in 15 water supply plants, and in groundwater from 50 mineral springs located in northern Gyeonggi province in South Korea. Raw water for bottled water was sampled from 54 intake holes in 13 bottled water manufacturing plants in northern Gyeonggi province. Sb concentrations were analysed by inductively coupled plasma-mass spectrometry (Bruker Aurora, Germany).
Investigation of Sb Concentration in PET Bottled Water and Comparison with Raw Water for Bottled Water
To investigate the leaching of Sb into water from PET bottles, 47 commercial brands of PET bottled water were collected on the market, including domestic and imported products. There were 35 domestic products, 7 imported waters and 7 deep sea waters. The Sb concentrations were then compared with those of PET bottled water produced on the sampling day and of raw water for bottled water sampled from the 54 intake holes.
Leaching Experiment in PET Bottled Water
PET bottled water was stored at 4, 21, 35, 45 and 60℃ for 12 weeks, using two bottles for each temperature. The Sb concentration was analysed every 2 weeks during that period. Polypropylene (PP) and glass bottles were used as control samples to compare leaching amounts of Sb with PET bottles. Next, PET bottled water was stored for 2, 6, 12, 24, 36 and 48 hrs and for 3, 7, 9, 12 and 14 days under ultraviolet (UV) rays; control samples were again PP and glass bottles. A bottled water pH of 6-8 has no effect on Sb release [12], so a leaching experiment on pH influence was not conducted.
Leaching Rate of Sb under Ambient Condition
The Sb concentration of 50 brands of bottled water was analysed after six months under room temperature and natural sunlight, because boxes of bottled water, before reaching customers, are mostly stored in uncontrolled storage yards in the countryside, where PET bottled water can easily be exposed to sunlight and varying temperature. That is why we conducted a leaching experiment under ambient conditions. The initial Sb concentration was analysed in June 2016, and the concentration was checked again six months later.
Temperature- and Time-Dependent Power Function Model for Sb Leaching
The results of the temperature leaching experiment were used for fitting a power function model. The rate of change in Sb leaching was best fit by a power function rather than by first- or second-order reaction kinetics with respect to Sb concentration [12]. The power function model is presented in Eq. 1 as:

C = a·t^k (Eq. 1)

where a is the fitted initial Sb concentration (C, ppb) at time zero, k is a temperature-dependent power function exponent, and t is the time in hours.
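A power-law fit of this kind can be sketched by linear regression in log-log space, since for C = a·t^k we have log C = log a + k·log t. In the minimal Python sketch below, only the 60°C endpoints reported later in the text (4.31 µg/L after 2 weeks, 9.84 µg/L after 12 weeks) are from the source; the intermediate biweekly values are invented for illustration:

```python
import math

def fit_power_model(times_h, concs_ppb):
    """Fit C = a * t**k (Eq. 1) by least squares in log-log space."""
    xs = [math.log(t) for t in times_h]
    ys = [math.log(c) for c in concs_ppb]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the exponent k.
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - k * mx)  # intercept back-transformed to a
    return a, k

# Biweekly 60 C series in hours (2, 4, ..., 12 weeks); endpoints from the
# text, intermediate concentrations illustrative only.
times = [336, 672, 1008, 1344, 1680, 2016]
concs = [4.31, 5.9, 7.1, 8.1, 9.0, 9.84]
a, k = fit_power_model(times, concs)
print(f"a = {a:.2f} ppb, k = {k:.2f}")
```

The fitted a and k are exactly the quantities used below to estimate exposure times.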
Estimation of Exposure Time to Reach Standard Concentration in PET Bottles
When PET bottles are kept under high-temperature conditions, it is necessary to know how long it takes for the Sb concentration to reach the standard. The estimated exposure time was obtained from Eq. 2 [12] as:

t = (STDppb / Sb0)^(1/k) (Eq. 2)

where Sb0 is the initial concentration when PET bottles were stored at 60℃, k is the power function model exponent obtained from the temperature- and time-dependent fit for Sb leaching, and STDppb is the standard concentration for each nation: South Korea (15 ppb), Europe (5 ppb), U.S. (6 ppb) and Japan (2 ppb).
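The exposure-time estimate is simply the power model solved for time. A small Python sketch; the values of Sb0 and k below are placeholders for illustration (the real ones come from the 60°C fit), while the four national standards are from the text:

```python
def exposure_time_hours(sb0_ppb, k, std_ppb):
    """Solve std = sb0 * t**k for t: hours until the standard is reached."""
    return (std_ppb / sb0_ppb) ** (1.0 / k)

# Placeholder fit parameters, for illustration only.
sb0, k = 1.04, 0.4
for nation, std in [("South Korea", 15), ("Europe", 5), ("U.S.", 6), ("Japan", 2)]:
    print(f"{nation} ({std} ppb): {exposure_time_hours(sb0, k, std):.0f} h")
```

Because the standards differ by nearly a factor of eight, the estimated times to exceed them differ even more strongly, since t scales as the standard raised to the power 1/k.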
Natural Sb Concentration for Source and Tap Water in Water Supply Plants
The natural Sb concentration in source water from rivers or reservoirs in fifteen water supply plants averaged 0.13 μg/L. Tap water showed the same average value of 0.13 μg/L. Although the natural concentrations were too low to allow a meaningful comparison of variation, this suggests that the treatment process producing tap water does not influence the Sb concentration. In Korea, the recommended standard for Sb in tap water is 20 μg/L; the average for tap water in the northern Gyeonggi area was 0.65% of the standard. Figure 1 shows the natural Sb concentrations in source water and tap water from the fifteen water supply plants located in the northern Gyeonggi area. The HR, SW and GP plants are located in Gapyeong, and the GY and IS plants in Goyang. Most samples showed approximately 0.1 μg/L. However, source water and tap water from the MS plant located in Paju city had higher concentrations than the others: Sb in source water was 0.72 μg/L and in tap water 0.84 μg/L. In a previous study, the typical concentration of dissolved Sb in unpolluted water was less than 1 μg/L [13].
Natural Sb Concentration for Mineral Springs
The average for drinking water from 50 mineral springs, 10 sites in each of 5 cities in northern Gyeonggi province, was 0.02 μg/L, about one-tenth of the average Sb concentration of groundwater reported by the WHO. According to data from the Ministry of Environment in Korea, the Sb concentration in groundwater was 0.24 μg/L on average.
Figure 2 shows the average Sb concentrations for the mineral springs located in each city. These values were under 0.03 μg/L, which is very low, and Sb was not detected at 6 of the 50 mineral springs. The Sb in source water from river-beds, surface water and reservoirs was 0.13 μg/L, whereas mineral springs in the northern Gyeonggi area showed 0.02 μg/L. The Sb concentration in source water was thus 6.5 times higher than in groundwater from mineral springs, possibly because Sb is leached from soil and sediment together with other metals in rivers.
Sb Concentration in Mineral Water for Bottled Water
We investigated mineral water from 54 intake holes of 13 bottled water manufacturing plants located in northern Gyeonggi province. The mineral water was drawn from aquifers approximately 200 m below ground. The detection rate of Sb was 90.7%. The average concentration in mineral water was 0.32 μg/L, the maximum was 1.64 μg/L, and Sb was not detected at 5 intake holes. Figure 3 shows the Sb concentrations in mineral water sampled from the 54 intake holes. The letters A to M in the graph denote each bottled water company, and the numbers below them indicate how many intake holes each manufacturing plant had. Company C, located in Gapyeong, had the highest concentrations of Sb, with an average of 1.22 μg/L over 5 holes, followed by company I, located in Yeoncheon, with 0.81 μg/L from 4 intake holes. Furthermore, the highest single value, 1.64 μg/L, was sampled from the second intake hole of company I. Companies G and H had the lowest concentration, at 0.01 μg/L. As reported above, source water from rivers and reservoirs contained 0.13 μg/L of Sb and mineral springs 0.02 μg/L; the mineral water for bottled water therefore had a higher Sb content than source water and mineral springs.
Sb Concentration in PET Bottled Water on the Market
We collected 47 brands of PET bottled water from markets: 35 domestic bottled-water products, 7 imported products, and 7 deep-sea-water products. The detection rate of Sb was 100% in these products, whereas it was 90.7% in the mineral water from the manufacturing plants. The overall average concentration of the 47 samples was 0.57 μg/L, compared with 0.32 μg/L for the mineral water. In Korea the monitoring standard for Sb is 15 μg/L, so the mean concentration in PET bottled water was only about 4% of the standard and is considered safe to drink. No change in concentration was observed at 4 and 21 °C over 12 weeks. However, Sb was released from the PET bottles after 2 weeks at 35, 45, and 60 °C. For bottles stored at 35 °C, the initial Sb concentration of 1.6 μg/L rose to a final concentration of 2.50 μg/L after 12 weeks, a 1.6-fold increase. At 45 °C, the initial concentration of 1.71 μg/L increased 2.1-fold after 12 weeks. At 60 °C, significant release was observed at every two-week measurement: after 2 weeks the Sb concentration had risen from 1.04 μg/L to 4.31 μg/L, a 4.1-fold leaching effect, and after 12 weeks it reached 9.84 μg/L.
3.6. Sb Leaching Experiment with UV Irradiation of PET Bottled Water

PET bottled water was exposed to UV rays for 2, 6, 12, 24, 36, and 48 h and for 3, 7, 9, 12, and 14 days. Release of Sb began after 6 h of UV irradiation. Figure 6 shows the variation of the Sb concentration in the PET bottled water, i.e., the amount released. After 6 h, 0.19 μg/L of Sb had leached from the bottles, and no significant change in the released amount was observed thereafter: the variation stayed in a similar range, 0.1 to 0.2 μg/L, up to 14 days, with a mean of 0.16 μg/L. Therefore, the duration of UV irradiation did not contribute to Sb release from PET bottles.
Leaching Rate of Sb from PET Bottled Water under Ambient Conditions
The results of the Sb leaching experiments with respect to temperature and UV irradiation were described in the sections above. In addition, the Sb leaching rate under ambient conditions was observed from June 2016 to December 2016: 50 brands of PET bottled water were stored at room temperature for six months, since most consumers keep PET bottled water somewhere in the kitchen or on a terrace. Figure 7 shows the leaching rates. Sb release was observed in 43 of the 50 PET bottled-water samples; most samples showed 20-60% leaching relative to the initial concentration, and the maximum leaching rate was approximately 160%.
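As a sanity check on these percentages, the leaching rate relative to the initial concentration can be computed directly. The values below are illustrative, not measurements from this study:

```python
def leaching_rate(initial, final):
    """Percent increase of Sb relative to the initial concentration."""
    return (final - initial) / initial * 100.0

# Illustrative concentrations in ug/L (not the study's data):
rate = leaching_rate(0.50, 0.80)
print(f"{rate:.0f}%")  # 60%
```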
Temperature- and Time-Dependent Sb Leaching Fitted with a Power-Function Model
The rate of antimony leaching has been shown to follow a power-function model [10], so we fitted Eq. 1 to the results of our Sb leaching experiments at 35, 45, and 60 °C. Table 3 lists the values of k and R² for each temperature. All R² values were above 0.90, so the Sb leaching data of this study are also well fitted by the power-function model.
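Eq. 1 is not reproduced in this excerpt; power-function leaching models in the literature typically take the form C(t) = C0 + k·t^n. As a minimal sketch under that assumption, k and n can be recovered by linear regression in log space, using illustrative data rather than the study's measurements:

```python
import math

def fit_power_model(times, concs, c0):
    """Fit C(t) = c0 + k * t**n via linear regression on
    ln(C - c0) = ln k + n * ln t; returns (k, n, r_squared)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(c - c0) for c in concs]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    n = sxy / sxx
    ln_k = mean_y - n * mean_x
    # Coefficient of determination in log space
    ss_res = sum((y - (ln_k + n * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return math.exp(ln_k), n, r2

# Synthetic data generated from k = 0.9, n = 0.5, c0 = 1.0 (illustration only):
times = [2, 4, 6, 8, 10, 12]                    # weeks
concs = [1.0 + 0.9 * t ** 0.5 for t in times]   # ug/L
k, n, r2 = fit_power_model(times, concs, c0=1.0)
```

Fitting each temperature series this way yields the per-temperature k and R² values reported in Table 3 of the study.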
Estimation of Exposure Time to Reach Standard Concentration in PET Bottles
We calculated the expected exposure time for PET bottled water stored at 60 °C to reach the Sb standard of each country, using Eq. 2 with Sb0 = 1.04 μg/L and the k value of 0.93 given in Table 3. For South Korea, the Sb concentration would reach the recommended standard of 15 μg/L after 350 days. Reaching the U.S. standard of 6 μg/L would take 15.7 days, the European standard of 5 μg/L could be exceeded after 8.5 days of exposure, and only 9.2 hours would be required to reach the Japanese recommended standard of 2 μg/L.
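Assuming the same power-function form C(t) = C0 + k·t^n for Eq. 2, the exposure time follows by simple inversion. The exponent n is not given in this excerpt, so the value used below is purely illustrative:

```python
def time_to_reach(c_std, c0, k, n):
    """Invert C(t) = c0 + k * t**n for the time t at which C reaches c_std."""
    return ((c_std - c0) / k) ** (1.0 / n)

# Values from the study: Sb0 = 1.04 ug/L and k = 0.93 at 60 C.
# The exponent n of Eq. 2 is not reported in this excerpt; n = 0.5 is illustrative,
# so this does not reproduce the paper's 350-day figure for the 15 ug/L standard.
t_korea = time_to_reach(15.0, 1.04, 0.93, 0.5)
```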
Conclusions
In the northern Gyeonggi province of South Korea, the natural concentrations of antimony in source water (from rivers or reservoirs) and tap water at 13 water supply plants ranged from 0.01 to 0.84 μg/L, with a mean of 0.13 μg/L for both. Fifty public mineral springs showed a very low average of 0.02 μg/L, too low to be of concern, and the mineral water for bottling, investigated at 54 intake holes in 13 bottled-water manufacturing plants, contained 0.32 μg/L on average. For the 47 brands of PET bottled water, the average Sb concentration was 0.57 μg/L and the detection rate was 100%, whereas the detection rate in the raw mineral water was 90.7%. PET bottled water thus contained more Sb than natural waters such as river water, mineral springs, and the mineral water used for bottling. The leaching experiments showed that Sb release from PET bottles is temperature-dependent: at 35, 45, and 60 °C the Sb concentration began to increase after 2 weeks, and at 60 °C it rose rapidly to 9.84 μg/L after 12 weeks, although this is still below the recommended standard in Korea. UV irradiation of bottled water for 14 days increased antimony release only very slightly, with the variation in Sb concentration remaining constant at 0.1 to 0.2 μg/L. Therefore PET bottled water should be stored at 4 to 21 °C, since high temperatures can induce Sb leaching from PET bottles.
Figure 1 .
Figure 1. Antimony in source and tap water sampled at water supply plants in northern Gyeonggi province.
Figure 2 .
Figure 2. The average antimony concentration in mineral springs for each city.
Figure 3 .
Figure 3. Antimony in mineral water from 54 intake holes in 13 bottled water manufacturing plants.
Figure 4 shows the median Sb concentration of mineral water for each area of northern Gyeonggi province. The 3 bottled-water manufacturing plants located in Gapyeong had a high median of 0.80 μg/L, while the 5 companies located in Pocheon had the lowest, 0.06 μg/L. The Sb concentration therefore appears to depend on the properties of the local area. We tried to consult a geologic map for Sb to find more specific evidence, but no specific geologic information on Sb is yet available for Korea.
Figure 4 .
Figure 4. Median value of antimony for each area in northern Gyeonggi province.
3.5. Sb Leaching Experiment from PET Bottled Water According to Time and Temperature

The Sb leaching experiment was conducted by storing PET bottled water at 4, 21, 35, 45, and 60 °C for 12 weeks, with analysis every two weeks. After 2 weeks, Sb leaching from the PET bottles was observed at 35, 45, and 60 °C.
Figure 5 shows the leaching of Sb from the PET bottles according to time and temperature.
Figure 6 .
Figure 6. Antimony variation under UV exposure for PET, PP, and glass bottles.
Figure 7 .
Figure 7. Sb leaching rates of 50 brands of PET bottled water stored under ambient conditions for six months.
Table 1 .
The water supply plants located in northern Gyeonggi province.
Table 2 .
Sb concentration in brands of PET bottled water collected in market (μg/L).
Table 2 shows the Sb concentrations of the 47 brands of PET bottled water collected from 5 markets. The PET bottled water was classified into three groups (domestic mineral water, imported mineral water, and deep sea water), because bottled water is legally divided into mineral water and deep sea water in Korea. Domestic brands of PET bottled water contained 0.75 μg/L of Sb on average; one product contained 1.09 μg/L, and the minimum concentration was 0.12 μg/L. Imported PET bottled water averaged 0.55 μg/L, with maximum and minimum values similar to those of the domestic products. We had guessed that imported PET bottled water would contain more Sb than domestic products because of the long transport time, but the domestic products in fact had the higher average. Deep-sea bottled water contained lower concentrations than the mineral-water products.
Table 3 .
Results of the power-function model fits of the Sb leaching experiments versus exposure time.
|
v3-fos-license
|
2018-12-17T14:04:04.473Z
|
2018-12-17T00:00:00.000
|
55702214
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2018.01473/pdf",
"pdf_hash": "cfed0e5d5c946fbb46c8f1fef7c0fc615273d9af",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2722",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "cfed0e5d5c946fbb46c8f1fef7c0fc615273d9af",
"year": 2018
}
|
pes2o/s2orc
|
A Novel Systems Pharmacology Method to Investigate Molecular Mechanisms of Scutellaria barbata D. Don for Non-small Cell Lung Cancer
Non-small cell lung cancer (NSCLC) is the most common type of lung cancer and accounts for about one third of all cancer deaths. At present, cytotoxic chemotherapy, surgical resection, radiation, and photodynamic therapy are the main strategies for NSCLC treatment. However, NSCLC is relatively resistant to these therapies, resulting in a rather low (20%) 5-year survival rate. It is therefore imperative to identify or develop efficient lead compounds for the treatment of NSCLC. Here, we report that the herb Scutellaria barbata D. Don (SBD) can effectively treat NSCLC through anti-inflammatory effects, promotion of apoptosis, cell cycle arrest, and inhibition of angiogenesis. In this work, we analyze the molecular mechanism of SBD for NSCLC treatment by applying a systems pharmacology strategy. This method combines pharmacokinetic analysis with pharmacodynamic evaluation to screen out the active compounds, predict the targets, and assess the networks and pathways. The results show that 33 compounds were identified with potential anti-cancer effects. Utilizing these active compounds as probes, we predicted 145 NSCLC-related targets mainly involved in four aspects: apoptosis, inflammation, cell cycle, and angiogenesis. In vitro experiments were performed to evaluate the reliability of some of the vital active compounds and targets. Overall, this integrated systems pharmacology method provides a precise probe to elucidate the molecular mechanisms of SBD against NSCLC. Moreover, baicalein from SBD effectively inhibited tumor growth in an LLC tumor-bearing mouse model, demonstrating the anti-tumor effects of SBD. Our findings provide further experimental evidence for the application of SBD in the treatment of NSCLC.
INTRODUCTION

Non-small cell lung cancer (NSCLC) is one of the leading causes of cancer death worldwide (Jemal et al., 2003). Chemotherapy is one of the main palliative treatments, but it may cause severe side effects and often leads to multidrug resistance (Ho et al., 2007). Therefore, the future of NSCLC treatment depends on the exploration and development of more effective drugs. Platinum-based therapies still represent the most common first-line treatment for NSCLC, yet the ideal therapeutic effect remains difficult to achieve.
Traditional Chinese medicines (TCMs) are effective in relieving complicated diseases in a multi-target/multi-component manner, which makes them unique among traditional medicines (Qiu, 2015); they have been used to treat various human diseases for over 4,000 years (Tang et al., 2009). For instance, Scutellaria barbata D. Don (SBD) is a perennial herb natively distributed in northeast Asia. Known as Ban-Zhi-Lian in TCM, it has been used to inhibit inflammation (Dai et al., 2013) and block tumor growth (Wang, 2012). Although SBD has proven dramatically efficient in curing NSCLC, its fundamental molecular mechanisms of action have not been systematically explored: the bioactive compounds, potential targets, and related pathways of SBD remain unknown. With the advancement of analytical tools such as systems biology (Kitano, 2002), network biology (Barabási and Oltvai, 2004), and network pharmacology (Hopkins, 2008), the intricate and holistic mechanisms of TCMs may be elucidated in a fast and highly effective way.
Recently, systems pharmacology has emerged as a novel discipline that integrates pharmacology and systems biology, providing a new approach to explore TCMs across multiple scales of complexity, ranging from the molecular and cellular levels to the tissue and organism levels (Berger and Iyengar, 2009). Systems pharmacology comprises pharmacokinetic (ADME) evaluation, target prediction, and network analysis, offering a platform for identifying the multiple mechanisms of action of a medicine. In our previous work, the systems pharmacology method was successfully applied to uncover the underlying mechanisms of TCM formulas for the treatment of cancer, depression, and cardiovascular diseases (Zhang et al., 2014; Zheng et al., 2014).
Here, we apply the systems pharmacology method to resolve the underlying mechanisms of action of herbal medicines in the treatment of NSCLC. Firstly, we filtered active compounds from the constructed SBD compound database by calculating pharmacokinetic properties and evaluating oral bioavailability (OB) and drug-likeness (DL). Then, based on integrated target-prediction methods uniting biological and mathematical models, the targets of these active compounds were predicted. Subsequently, the obtained targets were validated by function enrichment analysis and target-disease interaction analysis. Ultimately, network pharmacology and NSCLC-related signaling pathway evaluations were carried out to systematically disclose the interplay between active compounds, active targets, and pathways. The results not only significantly improve our understanding of the NSCLC treatment mechanism but also dissect the molecular mechanism of action of SBD, promoting the exploitation of TCM in the treatment of sophisticated diseases. In vitro experiments were conducted to evaluate the reliability of some vital active compounds and targets. Additionally, our in vivo results, subsequently confirmed with in vitro mechanism-based assays, demonstrate that the significant anti-tumor activity of baicalein from SBD is associated with a direct impact of baicalein on improving the tumor-inflammatory microenvironment. Our characterization of baicalein-mediated changes in the enzymes, cytokines, chemokines, and other growth factors associated with the tumor-inflammatory microenvironment offers multiple candidate biomarkers for ongoing clinical trials. The detailed workflow is shown in Figure 1.
Candidate Compound Database
All candidate compounds of SBD were manually collected from wide-scale text mining and from our in-house Traditional Chinese Medicine Systems Pharmacology Database (TCMSP) (Ru et al., 2014), yielding a total of 80 candidate compounds including flavonoids, terpenoids, and others. Because glycosides are readily hydrolyzed into free aglycones that are absorbed by the intestinal mucosa (Németh et al., 2003), the two aglycones of the glycosides in the herb were also added to the compound database. Eventually, we obtained 82 related compounds of SBD.
ADME Screening
To identify the active compounds of SBD that play a role against NSCLC, we predicted their oral bioavailability (OB) and drug-likeness (DL) values.
Oral Bioavailability
Oral bioavailability (OB) indicates the efficiency of active drug delivery to the systemic circulation and is therefore one of the most important ADME properties of oral drugs. In this work, OB was calculated with our in-house system OBioavail 1.1. The OB threshold was set according to two principles: first, extract as much information as possible from the studied medicines using a minimum number of molecules; second, the resulting model should be reasonably explained by the reported pharmacological data. Accordingly, compounds with OB ≥ 30% were selected for the next step of the analysis.
Drug-Likeness
Drug-likeness (DL) is used to estimate how similar the physical properties of a compound are to those of known drugs. To pick out drug-like active molecules from SBD, we used a self-constructed DL model based on molecular descriptors and the Tanimoto similarity (Yamanishi et al., 2010; Liu et al., 2013):

DL(A, B) = (A · B) / (|A|² + |B|² − A · B)

Here, A is the molecular-descriptor vector of a herbal compound and B is the vector of average molecular properties of all compounds in the DrugBank database (http://www.drugbank.ca/) (Wishart et al., 2008). In this work, compounds with DL ≥ 0.18 were selected as active compounds for further study.
In summary, the screening criteria for potential active compounds were defined as OB ≥ 30% and DL ≥ 0.18.
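A minimal sketch of this two-criterion filter, with the DL index computed as a Tanimoto similarity between descriptor vectors. The descriptor values below, and the exact descriptor set used by the paper's model, are assumptions for illustration only:

```python
def tanimoto(a, b):
    """Tanimoto similarity between two descriptor vectors:
    T(A, B) = A.B / (|A|^2 + |B|^2 - A.B)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

def screen(compounds, drugbank_mean, ob_min=30.0, dl_min=0.18):
    """Keep compounds with OB >= 30% and DL >= 0.18 (the paper's criteria)."""
    active = []
    for name, ob, descriptors in compounds:
        dl = tanimoto(descriptors, drugbank_mean)
        if ob >= ob_min and dl >= dl_min:
            active.append(name)
    return active

# Hypothetical descriptor vectors, for illustration only:
drugbank_mean = [1.0, 2.0, 3.0]
compounds = [
    ("baicalein-like", 44.6, [1.1, 2.1, 2.9]),   # passes both filters
    ("low-OB compound", 25.0, [1.0, 2.0, 3.0]),  # fails the OB cutoff
]
print(screen(compounds, drugbank_mean))  # ['baicalein-like']
```

Note that the paper deliberately rescued a few low-OB compounds (quercetin, luteolin) by hand; a hard filter like this is only the first pass.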
Target Prediction
To establish a direct link between the potential active compounds of SBD and their targets, target identification for the active compounds is an essential step, so the compounds were further analyzed at the gene level. Firstly, target exploration was carried out with the weighted ensemble similarity (WES) model and the systematic drug-targeting tool (SysDT). SysDT is a computational model combining mathematics and bioinformatics, while WES is an in silico model that pinpoints the direct targets of the actual bioactive ingredients. Secondly, the targets were mapped to UniProt to unify their names and organisms. The normalized compound targets were then mapped to the CTD database (Davis et al., 2013), the Therapeutic Target Database (TTD) (Zhu et al., 2012), and the Pharmacogenomics Knowledgebase (PharmGKB) (Thorn et al., 2013) to obtain their associated diseases, providing clearly defined target-disease relationships.
GOBP
To probe the biological processes in which the obtained targets are involved, gene ontology (GO) enrichment analysis was performed by submitting the targets to DAVID (the Database for Annotation, Visualization and Integrated Discovery). Terms from the "Biological Process" (GOBP) category were used to annotate gene function, and only GO terms with p ≤ 0.05 were retained. The false discovery rate (FDR) computed by DAVID was used to correct the p-values for multiple-hypothesis testing, with FDR ≤ 0.05 as the cutoff in our analysis.
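DAVID's own statistic is a modified Fisher exact (EASE) score, but the underlying idea can be sketched with a standard hypergeometric enrichment p-value plus Benjamini-Hochberg FDR correction:

```python
from math import comb

def hypergeom_p(k, n, K, N):
    """One-sided enrichment p-value P(X >= k) when n genes are drawn
    from a universe of N genes, K of which carry the GO term."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(n, K) + 1)
    ) / comb(N, n)

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (FDR), same order as input."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        prev = min(prev, pvals[i] * m / rank)  # enforce monotonicity
        adj[i] = prev
    return adj
```

Terms would then be kept when both the raw p-value and the adjusted value fall below 0.05, mirroring the cutoffs used in the paper.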
Network Construction
Having screened and mapped the active compounds and active targets, we next investigated the multi-target mechanism of the active compounds against NSCLC and the relationships between active targets and compounds using Cytoscape 3.6.0 (Liu et al., 2016), a popular bioinformatics package for biological-network visualization and data integration. Two global networks were constructed: a compound-target (C-T) network and a target-pathway (T-P) network. In these networks, compounds, targets, and pathways are represented by nodes, and the relationships between them by edges. The degree, a vital topological parameter, was computed with the Network Analyzer plugin of Cytoscape (Shannon et al., 2003); the degree of a node is defined as the number of edges connected to that node.
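The degree computation itself is simple to sketch outside Cytoscape. The edges below are hypothetical, not the paper's C-T network:

```python
from collections import defaultdict

# Hypothetical compound-target edges, for illustration only:
edges = [
    ("baicalein", "NOS2"), ("baicalein", "PTGS2"),
    ("wogonin", "PTGS2"), ("quercetin", "PIK3CG"),
]

degree = defaultdict(int)
for compound, target in edges:
    degree[compound] += 1   # number of targets hit by the compound
    degree[target] += 1     # number of compounds hitting the target

print(degree["baicalein"], degree["PTGS2"])  # 2 2
```

In the paper's real C-T network (684 edges over 145 targets), the same counting gives the reported average target degree of 4.7.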
To explore the integrative mechanisms of the herb against NSCLC, the active targets were first mapped to the KEGG database (Kanehisa et al., 2017) to obtain basic pathway information. Then, according to the latest NSCLC pathological information, an integrated "NSCLC pathway" was assembled from the key pathways obtained through the T-P and C-P network analyses.
Cell and Mice
Human NSCLC H1975, RAW264.7, and Lewis lung carcinoma (LLC) cells were obtained from the Chinese Academy of Sciences Shanghai cell bank. H1975 cells were cultured in RPMI 1640 (Gibco) medium supplemented with 10% heat-inactivated fetal bovine serum (FBS); RAW264.7 and LLC cells were cultured in DMEM (Gibco, United States) with 10% FBS. All cells were cultured at 37 °C with 5% CO2. Mice were maintained under specific pathogen-free conditions at the Institute of Laboratory Animals, Jiangsu Kanion Pharmaceutical, Co., Ltd. and used under protocols approved by the institutional review board (IRB) of the Institute of Pharmacology and Toxicology; all animal experiments were performed in accordance with national and European guidelines. C57BL/6 mice (6-8 weeks old) were purchased from the Comparative Medicine Centre of Yangzhou University. Female C57BL/6 wild-type mice (6-8 weeks old) were inoculated subcutaneously in the right flank with 5 × 10^5 LLC cells per mouse (day 0). Before treatment, the mice were randomized into two groups: control (n = 6) and baicalein (n = 6). Baicalein (1.5 mg/kg; Yuanye, Shanghai) was administered every day starting after tumor inoculation (day 2); untreated control mice received intraperitoneal (i.p.) injections of physiological saline. Tumors were measured every other day, and tumor volumes were calculated with the formula for a typical ellipsoid: length × width² × 0.5 (mm³). For survival analysis, mice with tumors exceeding the length limit of 20 mm were sacrificed and counted as dead.
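The caliper-based volume formula can be expressed directly; the measurements below are illustrative:

```python
def tumor_volume(length_mm, width_mm):
    """Ellipsoid approximation for caliper measurements:
    V = length * width**2 * 0.5 (mm^3)."""
    return length_mm * width_mm ** 2 * 0.5

print(tumor_volume(10.0, 6.0))  # 180.0
```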
To examine the priming and effector phases of tumor growth, the mice were sacrificed after 17 days of observation and measurement, and the tumors were harvested for analysis.
Cell Viability Assay
Baicalein was purchased from Shanghai Yuanye Bio-Technology, Co., Ltd. (HPLC ≥ 98%, Shanghai, China). Test samples were dissolved in dimethyl sulfoxide (DMSO) (Sigma, United States) to a 100 mM stock solution and stored at 4 °C; the high concentration of DMSO prevented degradation. The final dilution of DMSO in the culture medium never exceeded 0.1%, ensuring no effect on cell viability.
H1975 cells in the logarithmic phase were seeded at a density of 1 × 10^5 cells/ml in 96-well culture plates. After 48 h of incubation, the cells were exposed to different concentrations of baicalein (1.675, 3.125, 6.25, 12.5, 25, 50, 100, and 150 µmol/L); RAW264.7 cells were treated with the same protocol. After 48 h of treatment, 10 µl of CCK-8 reagent (Best Bio, Shanghai, China) was added to each well and the cells were incubated for 1-4 h at 37 °C and 5% CO2. A plate reader was used to measure the optical density (OD) at 450 nm, and cell viability was calculated as OD of treatment/OD of control × 100%.
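The viability calculation can be sketched as follows; the OD450 readings are hypothetical, not data from the study:

```python
def viability_percent(od_treated, od_control):
    """CCK-8 readout: viability = OD(treatment) / OD(control) * 100%."""
    return od_treated / od_control * 100.0

# Hypothetical OD450 readings across baicalein concentrations (umol/L):
od_control = 1.20
for conc, od in [(6.25, 1.08), (25, 0.84), (100, 0.36)]:
    print(conc, round(viability_percent(od, od_control), 1))
```

In practice a background (blank-well) OD is usually subtracted from both readings before taking the ratio.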
Western Blotting
Cells were scraped, collected by centrifugation, and lysed with the Qproteome Mammalian Protein Prep Kit (Qiagen, Germany). The protein concentration of the lysate was measured with a Quick Start Bradford Protein Assay Kit (Bio-Rad, United States). Equal amounts of protein from each sample were separated by 10% SDS-PAGE and electroblotted onto nitrocellulose membranes, which were then incubated in a blocking buffer of 5% bovine serum albumin (BSA) in Tris-buffered saline. Primary antibody incubations were done overnight at 4 °C in blocking buffer. After washing, secondary antibody incubations were done at room temperature for 1-1.5 h in blocking buffer. Primary antibodies recognizing the following proteins were obtained from Abcam: COX-2, iNOS, NF-κB, p38, ERK, p-p38, p-ERK, AKT, p-AKT, Bcl-2, CDK2, Bax, and GAPDH. The membranes were developed with the Clarity Western ECL substrate (Bio-Rad), and the bands were visualized with Image Lab software (Bio-Rad).
Flow Cytometry Staining and Analysis
Tumors were digested with collagenase and hyaluronidase for 1 h at 37 °C. After lysis of red blood cells, the dissociated cells were incubated on ice for 10 min and then spun down at 300 g and 4 °C for 7 min; cells from these tumors were either used for flow cytometry analysis or further processed for functional analyses. Tumor cell suspensions were washed, blocked with Fc Block (anti-mouse CD16/32 mAb; BD Biosciences) on ice for 15 min, and stained with fluorescence-conjugated antibodies against the surface markers CD49b, CD3E, CD8a, and CD25 (purchased from BioLegend, eBioscience, or BD Biosciences). Cells were then fixed in Fixation/Permeabilization buffer (eBioscience) and stained with antibodies against intracellular proteins, including FoxP3 (BioLegend), granzyme B, and interferon-γ (IFN-γ) (BD Pharmingen). Stained cells and isotype-control-stained cells were assayed on a BD FACSVerse (BD Biosciences, United States). Data analysis was performed with FlowJo (Tree Star) software.
Statistical Analysis
All data are presented as means ± standard error, and statistical significance was assessed by one-way ANOVA and Student's t-test. p-values below 0.05 were considered statistically significant.
Active Compounds Screening
In this work, a total of 82 SBD-related candidate compounds were collected. To screen out the active compounds, their ADME properties were evaluated against the criteria of oral bioavailability (OB ≥ 30%) and drug-likeness (DL ≥ 0.18), identifying 33 active components. To obtain more comprehensive results and compensate for the limitations of theoretical screening, certain compounds with relatively poor pharmacokinetic properties, but which are among the most abundant and active compounds of the herb, were also retained as active components for further study. For example, quercetin has poor OB (25%) but was retained because it is a main component of SBD with anticancer and anti-inflammatory effects (Mukherjee and Khuda-Bukhsh, 2015). Likewise, luteolin, with a relatively poor OB (26.5%), was retained because it exerts remarkable tumor-suppressive activity in various cancers, including NSCLC (Jiang J et al., 2018). In the end, we obtained 33 candidate components of SBD (Supplementary Table 1; compound structures were taken from NCBI). Among them, flavonoid compounds have been reported to show significant biological activity, including anti-inflammatory effects, inhibition of tumor angiogenesis, and cell cycle arrest (Anwar et al., 2018), for example apigenin (MOL001, OB = 33.6%, DL = 0.25) and baicalein (MOL070, OB = 44.6%, DL = 0.21). Diterpene alkaloids, such as scutebarbatine F, have shown good cytotoxicity in activity tests and can effectively inhibit the growth of a variety of human tumor cells (Lee and Ychoi, 2010). In addition, SBD contains ursolic acid and β-sitosterol, which have significant antitumor activity (Rajavel et al., 2018).
These active compounds may be the main elements responsible for the therapeutic effect against NSCLC.
Target and Function Analysis
To obtain the targets related to NSCLC, we first identified 225 targets of the active compounds by means of the WES and SysDT algorithms. The results showed that the candidate compounds act on multiple targets and that one target can be linked to multiple candidate molecules; for example, nitric oxide synthase 2 (NOS2) corresponds to 15 compounds, 45% of all active compounds. Because increasing evidence indicates that improving the inflammatory microenvironment plays a crucial role in NSCLC (Lee et al., 2017), targets involved in the biological progression of NSCLC were preserved. The 225 candidate targets were then mapped to the CTD, TTD, and PharmGKB databases to obtain the corresponding target-related diseases. After this screening, we finally retrieved 145 potential targets (Supplementary Table 2).
GOBP Analysis
To determine whether the biological processes associated with the active targets are relevant to NSCLC, GO enrichment analysis (p ≤ 0.05) was performed by mapping the targets to DAVID, which identified 24 vital biological processes (Figure 2). The majority of the targets were strongly associated with processes including negative regulation of the apoptotic process, positive regulation of cell proliferation, positive regulation of cell migration, inflammatory response, and angiogenesis, all of which are related to the mechanisms of NSCLC.
Compound-Target Network Evaluation
To reflect the relationships between targets and compounds more directly, we used Cytoscape 3.6.0 to draw the C-T network diagram. As shown in Figure 3, the C-T network consists of 187 nodes (33 active compound nodes and 145 active target nodes) and 684 edges. Topological analysis showed that the average degree of the targets was 4.7, illustrating the multi-target nature of SBD. Among the 33 active compounds, 25 show a high degree (degree > 10) and may play key roles in the network. Meanwhile, each active compound is associated with multiple targets, suggesting potential synergistic effects among them.
Here, baicalein (MOL070) is the core component of SBD and displays the highest number of target interactions (degree = 46). Baicalein has previously been shown to have anti-cancer activity in several human cancer types, including breast, ovarian, and colon cancer (Yu et al., 2014; Wang et al., 2017; Dou et al., 2018). Beyond that, scutellarin (MOL036, degree = 39), a flavonoid used in Chinese herbal medicine, inhibits the proliferation and migration of human NSCLC cells (Sun et al., 2018). Wogonin (MOL007, degree = 27) is another of the active flavonoids present in SBD extracts; its immune-regulating, anticancer, and anti-inflammatory effects have recently been described (Wang et al., 2010; Shi et al., 2017; Zhao et al., 2018). We therefore speculate that these top three compounds might be the crucial elements in the treatment of NSCLC, exhibiting the anti-tumor and anti-inflammatory effects of SBD. On the target side, phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit gamma (PIK3CG, degree = 7) is targeted by seven active compounds from SBD; PIK3CG induces a transcription process that promotes immune-suppressive tumor growth and inflammation (Kaneda et al., 2017). Prostaglandin G/H synthase 2 (PTGS2, degree = 10) is simultaneously targeted by 10 active compounds and is highly expressed in various tumors, where it promotes tumor growth and regulates the inflammatory response (Zhu et al., 2018). All of this suggests that SBD probably treats NSCLC through anti-inflammatory effects, inhibition of tumor angiogenesis, cell cycle arrest, and promotion of apoptosis.
Target-Pathway Network Evaluation
The results show that the 145 targets map to 108 pathways, with an average degree of 6.85 per pathway and 2.8 per target. Several target proteins (71/145) map to multiple pathways (≥5), indicating that these targets may mediate interactions and cross-talk between different pathways.

FIGURE 3 | C-T network. A compound node and a target node are connected if the gene is targeted by the corresponding compound. Node size is proportional to its degree.

Meanwhile, numerous pathways (70/108) are regulated by multiple target proteins (≥8) and may be the main factors in NSCLC. As shown in Figure 4, these pathways interact tightly with the targets, e.g., the PI3K-Akt signaling pathway (degree = 21) and the VEGF signaling pathway (degree = 11). The PI3K-Akt pathway is an important signaling pathway activated downstream of a series of extracellular signals, affecting cellular processes including proliferation, apoptosis, and survival (Jiang Z. Q. et al., 2018; Tan et al., 2018); it can be targeted by numerous active compounds such as wogonin (MOL007), baicalein (MOL070), quercetin (MOL002), and apigenin (MOL001). Hence the PI3K-Akt signaling pathway, with the highest degree, may be a significant pathway in the regulation of proliferation and apoptosis against NSCLC. Pathways related to tumor angiogenesis, apoptosis, and migration were also enriched: the VEGF signaling pathway mediates the absolute dependence of tumor cells on a continuous blood supply to nourish their growth and facilitate metastasis, making tumor vascularization a vital process for tumor growth, invasion, and metastasis, so SBD may serve as an attractive herb for anti-NSCLC therapy. In addition, the p53 signaling pathway stands out in the enriched pathway list; it mediates cell cycle arrest and cell proliferation (Hsua et al., 2001). It is therefore concluded that SBD likely acts against NSCLC through anti-inflammation, cell cycle arrest, inhibition of tumor angiogenesis, cell apoptosis, and other pathways.
NSCLC-Pathway Construction
Considering the complex mechanism of SBD in the treatment of NSCLC, an integrated "NSCLC-pathway" was constructed by combining the key pathways obtained through the T-P network analysis. The NSCLC-pathway (Figure 5) comprises four signaling pathways: hsa04370: VEGF signaling pathway, hsa04151: PI3K-Akt signaling pathway, hsa04115: p53 signaling pathway, and hsa04064: NF-kappa B signaling pathway. The target proteins of the integrated "NSCLC-pathway" exhibit a markedly close functional relationship with the NSCLC-related proteins. As shown in Figure 5, the NSCLC-pathway can be separated into two representative therapeutic modules: an inflammation-related module, consisting of hsa04064: NF-kappa B signaling pathway, and a tumor-related module, comprising hsa04370: VEGF signaling pathway, hsa04151: PI3K-Akt signaling pathway, and hsa04115: p53 signaling pathway.
Inflammation Related Module
The inflammatory microenvironment plays an important role in all stages of tumor development, with many cytokines and chemical factors participating in this process; at the same time, the tumor microenvironment promotes a continuous inflammatory response. It is therefore worthwhile to control tumor development by targeting key signaling pathways in the tumor-inflammatory microenvironment. The NF-κB family of transcription factors is involved in the activation of a wide range of genes associated with inflammation, differentiation, tumorigenesis, embryonic development, and apoptosis (Ozawa et al., 2018; Zakaria et al., 2018). COX-2, although not expressed at high levels in most normal tissues, is strongly induced by LPS and many cytokines and plays a crucial role in the development of various inflammatory responses. In Figure 5, baicalein (MOL070) acts on transcription factor p65 (NF-κB), which reduces the expression of the downstream proteins prostaglandin G/H synthase 2 (COX-2) and nitric oxide synthase (iNOS). These results indicate that baicalein from SBD could be used to treat NSCLC by regulating the anti-inflammatory activities of NF-κB, COX-2, and iNOS.
Tumor Related Module
As shown in Figure 5, the tumor-related targets among the active targets map to three pathways: the p53, PI3K-AKT, and VEGF signaling pathways. These pathways control tumor development by inhibiting cell proliferation and the cell cycle. For example, in the p53 signaling pathway, baicalein (MOL070) can act broadly on cdk-cyclin complexes and inhibit their activity, especially the G1-phase cdk2-cyclin E complex (CDK2). It has been reported that baicalein treatment blocks the cell cycle and thereby inhibits cell proliferation (Hsua et al., 2001). These results suggest that baicalein from SBD inhibits NSCLC proliferation by interfering with DNA synthesis and cell division. Additionally, some targets in the PI3K-AKT signaling pathway help balance the cell cycle against apoptosis. Apoptotic signaling can be initiated either at the cell surface through a death receptor-induced pathway or within the cell via the release of pro-apoptotic molecules. For example, apoptosis regulator Bcl-2 (Bcl-2) can be regulated by baicalein (MOL070), scutellarin (MOL036), and wogonin (MOL007); previous in vivo data indicate that caspases are linked to the Bcl-2 family, a key regulator of apoptosis in cancer (Timucin et al., 2018). Furthermore, tumor vascularization is an important process in tumor growth, invasion, and metastasis, and anti-angiogenesis has been considered an attractive target for anti-tumor treatment (Ferrara and Kerbel, 2005; Cooney et al., 2006; Tran et al., 2007). The VEGF signaling pathway is modulated by wogonin (MOL007) and baicalein (MOL070), and it has been reported that inhibiting the activation of the VEGF downstream protein kinases mitogen-activated protein kinase 14 (p38) and mitogen-activated protein kinase 3 (ERK) can inhibit cell proliferation, survival, and migration (Galaria et al., 2004).
Thus, all above suggest that SBD may treat NSCLC by regulating the cell cycle, apoptosis, and anti-angiogenesis.
CCK-8 Assay
In our pre-experiment, the viability of RAW 264.7 cells treated with various doses of baicalein from SBD was determined by CCK-8 assay (Figure 6A). High cell viability (>70%) was maintained at baicalein concentrations below 25 µmol/L; therefore, three doses of baicalein (5, 15, and 25 µmol/L) were chosen for the subsequent experiments.
Targets Validation
To further assess the results obtained from the systems pharmacology analysis, we chose baicalein from SBD to examine its potential anti-inflammatory effect in RAW264.7 cells treated with LPS (1 µg/ml). In particular, we conducted western blot analysis of iNOS, NF-κB, and COX-2 protein expression to confirm the anti-inflammatory effects of the predicted compound.
As shown in Figure 6B, the levels of iNOS, NF-κB, and COX-2 proteins were measured in the RAW264.7 cells. After baicalein treatment, the expression of iNOS, NF-κB, and COX-2 declined significantly; baicalein as a single agent caused a decrease in all three proteins. In summary, the in vitro study provides additional support for screening compounds with potential anti-inflammatory effects and demonstrates the reliability of the systems pharmacology screening strategy.
To verify the reliability of the anti-tumor-related targets screened by systems pharmacology, we examined the expression of CDK2 and Bax in H1975 cells after baicalein treatment at different dose levels. After 24 h, an increase in Bax levels and a decrease in CDK2 levels were detected in the p53 signaling pathway (Figure 6D), indicating that activating Bax and inhibiting CDK2 are important for anti-NSCLC activity. In the VEGF signaling pathway, phosphorylation of p38 was significantly down-regulated in baicalein-treated H1975 cells compared with the control group; baicalein also inhibited phosphorylation of ERK, while total protein levels were unaffected (Figure 6C), suggesting that baicalein inhibits angiogenesis and cell proliferation by regulating VEGF signaling. In addition, p-AKT expression was down-regulated in H1975 cells treated with baicalein, and Bcl-2 expression was decreased, indicating that baicalein may suppress the PI3K/Akt pathway by down-regulating p-AKT (Figure 6C). Taken together, these data suggest that baicalein from SBD may treat NSCLC through cell cycle arrest, promotion of apoptosis, and anti-angiogenesis.
In vivo Experiments
To elucidate the molecular mechanism of SBD in treating LLC tumor-bearing mice, we investigated whether the anti-tumor effect of baicalein from SBD is driven by an improvement of the tumor-inflammatory microenvironment (Figure 7A). Baicalein-treated LLC tumor-bearing mice were significantly more resistant to tumor development than control mice, showing decreased tumor outgrowth (Figures 7B-D) and a significant survival benefit (Figure 7E). To evaluate the in vivo therapeutic effect of baicalein on NSCLC, tumor samples were collected from LLC tumor-bearing mice and subjected to fluorescence-activated cell sorting analysis. In the tumor-inflammatory microenvironment, we observed a significant increase in the proportion of cytotoxic CD8+ T cells in baicalein-treated mice, with a sixfold increase in the CD8+/FoxP3+ ratio compared with the control group. A reduction in the MFI of FoxP3+ Treg cells was also observed in the baicalein treatment group (Figure 8B), suggesting that inhibition of Treg function associated with reduced FoxP3 protein expression (Ho et al., 2018) may be a direct result of baicalein. Meanwhile, baicalein therapy induced expansion of an IFNG- and GZB-producing activated CD8+ T cell population in the tumors (Figure 8A). We then examined the density of natural killer (NK) cells in the tumor samples; significant positive correlations were observed between the control and baicalein groups for NK cell density (Figure 8C). These results establish that baicalein from SBD improves tumor-inflammatory-microenvironment-mediated tumor control, resulting in a striking benefit in these clinically relevant advanced mouse models.
DISCUSSION
Non-small cell lung cancer is highly malignant and characterized by early metastasis, and the prognosis of NSCLC patients remains poor despite the many advances in early diagnosis and comprehensive therapy (Testa et al., 2018). The search for effective antitumor drugs is therefore an urgent problem. Recently, SBD has been reported to possess important biological activities, including anticancer activity (Zheng et al., 2018). In this work, the complex mechanism of SBD in the treatment of NSCLC was explored based on systems pharmacology principles. First, with the aid of the evaluation method, 33 active compounds were obtained from SBD and 145 active targets were predicted, revealing that SBD exerts multi-compound, multi-target anti-tumor effects. The target and C-T network analyses together show that key compounds of SBD such as wogonin, baicalein, and scutellarin may play important roles in the treatment of NSCLC, and that SBD, by acting on targets such as Bax, iNOS, and p38, exerts therapeutic effects against NSCLC through anti-inflammatory, pro-apoptotic, and anti-angiogenic mechanisms. In addition, the T-P network and the integrated NSCLC-related pathway indicate that the major compounds of SBD might exert anti-NSCLC effects by modulating several different pathways, including hsa04370: VEGF signaling pathway, hsa04151: PI3K-Akt signaling pathway, hsa04115: p53 signaling pathway, and hsa04064: NF-kappa B signaling pathway. Based on our present study, the in vitro experiments further confirm that baicalein from SBD combats NSCLC by regulating critical proteins of our integrated NSCLC-pathway, including COX-2, NF-κB, Bax, ERK, and CDK2, attesting that NSCLC can be treated through a complex system of multi-compound-target-disease interactions.
Thus, SBD exhibits anti-NSCLC effects in several respects, including cell cycle arrest, anti-inflammation, promotion of apoptosis, and anti-angiogenesis, through its active compounds. In vivo, the therapeutic effect of baicalein was investigated in a tumor-bearing mouse model, where baicalein from SBD showed high efficacy in inhibiting tumor growth compared with the control.
In summary, our study systematically demonstrated the anti-tumor effect of SBD in vitro and in vivo. The molecular mechanisms involved, including improvement of the tumor inflammatory microenvironment, cell cycle arrest, promotion of apoptosis, and anti-angiogenesis, are potentially those by which SBD exerts its effectiveness in cancer treatment.
The LHC String Hunter's Companion
The mass scale of fundamental strings can be as low as few TeV/c^2 provided that spacetime extends into large extra dimensions. We discuss the phenomenological aspects of weakly coupled low mass string theory related to experimental searches for physics beyond the Standard Model at the Large Hadron Collider (LHC). We consider the extensions of the Standard Model based on open strings ending on D-branes, with gauge bosons due to strings attached to stacks of D-branes and chiral matter due to strings stretching between intersecting D-branes. We focus on the model-independent, universal features of low mass string theory. We compute, collect and tabulate the full-fledged string amplitudes describing all 2->2 parton scattering subprocesses at the leading order of string perturbation theory. We cast our results in a form suitable for the implementation of stringy partonic cross sections in the LHC data analysis. The amplitudes involving four gluons as well as those with two gluons plus two quarks do not depend on the compactification details and are completely model-independent. They exhibit resonant behavior at the parton center of mass energies equal to the masses of Regge resonances. The existence of these resonances is the primary signal of string physics and should be easy to detect. On the other hand, the four-fermion processes like quark-antiquark scattering include also the exchanges of heavy Kaluza-Klein and winding states, whose details depend on the form of internal geometry. They could be used as ``precision tests'' in order to distinguish between various compactification scenarios.
Introduction
The Standard Model (SM) of particle physics is a well established quantum field theory that describes the spectrum and the interactions of elementary particles to high accuracy and in excellent agreement with almost all experiments. Only astrophysical observations provide indirect experimental evidence for new physics beyond the SM in the form of not yet directly observed dark matter particles. However, at the conceptual level there exist several unsolved problems which strongly hint at new physics beyond the SM. Probably the most mysterious puzzle is the hierarchy problem, namely the question why the Planck mass M_Planck ≃ 10^19 GeV is huge compared to the electroweak scale M_EW:

M_Planck / M_EW ∼ 10^16 .  (1.1)

In fact, there are some good reasons to believe that the resolution of the hierarchy problem lies in new physics around the TeV mass scale. The LHC collider at CERN is designed to discover new physics precisely in this energy range, hopefully giving important clues about the nature of dark matter and perhaps at the same time about the solution of the hierarchy problem. In fact, there are at least three, not necessarily mutually exclusive scenarios offered as solutions of the hierarchy problem: • Low energy supersymmetry at around 1 TeV.
• Low energy scale for (quantum) gravity and large extra dimensions at few TeVs.
In the latter scenario, the observed weakness of gravity at energies below a few TeV is due to the existence of large extra dimensions [1,2]. In string theory, extra dimensions appear naturally, so it is an obvious question to ask whether they can be large enough to accommodate such new physics at a few TeV. This is possible only if the intrinsic scale of string excitations, called the string mass M_string, is also of order a few TeV. In this case a whole tower of infinitely many string excitations opens up at the string mass threshold, and the new particles follow the well known Regge trajectories of vibrating strings, j = j_0 + α' M^2, with j the spin and α' the Regge slope parameter that determines the fundamental string mass scale, M_string^2 = α'^{-1}. In this work, we discuss the phenomenological aspects of low mass string theory related to experimental searches for physics beyond the SM at the LHC. We focus on its model-independent, universal features that can be observed and tested at the LHC.
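As a quick numerical illustration of the Regge tower (with an assumed string scale chosen purely for concreteness, not a prediction), the masses of the excitations at level n follow M_n = sqrt(n) M_string:

```python
import math

M_string = 5.0  # assumed string scale in TeV (illustrative choice)

# Masses of the first few Regge levels, M_n = sqrt(n) * M_string.
regge_masses = [math.sqrt(n) * M_string for n in range(1, 5)]
print([round(m, 2) for m in regge_masses])  # TeV: [5.0, 7.07, 8.66, 10.0]
```

The characteristic sqrt(n) spacing of the resonances is what distinguishes a string tower from, e.g., an evenly spaced Kaluza-Klein tower.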
Let us list what kind of string signatures from a low string scale and from large extra dimensions can possibly be expected at the LHC: • The discovery of new exotic particles with masses around M_string. For example, many string models predict the existence of new, massive Z' gauge bosons from additional U(1) gauge symmetries (see e.g. [3]). These can also have interesting effects, since they mix with the standard photon through their kinetic terms (see e.g. [4]).
• The discovery of quantum gravity effects in the form of mini black holes (see e.g. [5,6]).
• The discovery of string Regge excitations with masses of order M_string. These stringy states will lead to new contributions to SM scattering processes, which can be measurable at the LHC in the case of a low string scale. Furthermore, there are the Kaluza-Klein (KK) and winding excitations along the small internal dimensions, i.e. KK and winding excitations of the SM fields. Their masses depend on the internal volumes 1 , and they should also be near the string scale M_string.
It is precisely this last item which we want to discuss in this work. So let us be more specific about what we mean by new contributions to SM processes from a low string scale M_string. These are the α'-contributions to ordinary SM processes like the scattering of quarks and gluons into SM fields. As already mentioned, at tree level the α'-corrections to SM processes are due to the exchanges of massive string excitations encompassing all Regge recurrences. In addition, there are contributions of KK states and winding modes, with their spectrum depending on the form of the extra-dimensional geometry. However, a class of amplitudes, e.g. the N-gluon amplitudes of QCD, receive only universal α'-corrections, which are insensitive to the details of specific compactifications and to the extent of supersymmetry preserved in four dimensions [8][9][10][11]. Similarly, other SM processes that involve quarks, leptons and other gauge bosons also receive α'-corrections, leading to characteristic deviations from the SM predictions, and hence can be tested at the LHC. An important step in this direction has already been undertaken in [12,7], where the effects of string resonances have been pointed out as an important signal of string physics. Other string four-point amplitudes that involve four SM fermions, relevant to Yukawa couplings and possibly leading to proton decay and FCNC in specific models, have been computed and analyzed in [13][14][15][16][17][18][19][20][21]. More recently, in [22] and [23] the string effects in the process gg → gγ have been considered.

Footnote 1: In fact, there may be another kind of KK excitations, namely along the large extra dimensions, i.e. KK excitations in the gravitational (bulk) sector of the theory. Their masses can be as low as 10^-3 eV. However, KK modes from the bulk couple to SM fields only at one loop (annulus), resulting in a suppression by a factor of g_string ∼ g^2 compared to tree-level processes on the brane. Hence contributions from KK modes of the bulk are less relevant than those from string Regge excitations, cf. also the discussion in [7].

In the present work we systematically investigate all possible string tree-level α'-corrections to SM processes that involve quarks, leptons and SM gauge bosons, as they arise in intersecting D-brane compactifications of orientifold models 2 . We work in a model-independent way and essentially only need the local information about how the SM is realized on type IIA/IIB intersecting D-branes. Some of the processes exist already at tree level in the SM, and hence the tree-level SM background must be subtracted from the string corrections. Other processes like gg → gγ or gg → γγ do not exist at all in the SM at tree level and can be viewed as "smoking guns" for D-brane string compactifications with a low string scale and large extra dimensions. However, let us note that it requires considerable luck to see these string effects at the LHC along the lines discussed in this work. In particular, we need a low string scale, large extra dimensions and also weak string coupling in order for our calculations to be reliable and testable at the LHC.
The present work is organized as follows. In the next Section we discuss some general aspects and the basic setting of string compactifications with a low string scale and large extra dimensions. Then, in Section three, we recall how the SM can be constructed from type IIA/IIB D-branes on orientifolds. We do not discuss fully consistent global orientifold models which lead to the SM, but rather focus on the local, intersecting D-brane configurations that realize the SM by open strings. Eventually the local D-brane systems have to be embedded into a compact manifold in order to obtain a fully consistent orientifold compactification. However, as we shall argue, the local D-brane systems are sufficient to compute all relevant tree-level scattering amplitudes among the SM open string excitations. In Section four we discuss how large extra dimensions can be realized in Calabi-Yau (CY) orientifolds and how the local SM D-brane system has to be embedded into a large volume CY space. To some extent we follow the recent constructions for large volume compactifications of [24] using "Swiss cheese" CY spaces, with the difference that in [24] the string scale is around an intermediate scale of 10^11 GeV and supersymmetry is broken at the TeV scale, whereas in our case M_string is at a few TeV and low energy supersymmetry is not needed. In Section five we present a complete calculation of all possible four-point string scattering amplitudes of gauge and SM matter fields. We analyze string corrections to scattering processes that involve quarks and gluons, since they are the most relevant processes for the LHC. The computations of the scattering of four gauge bosons and of the scattering of two gauge bosons and two matter fermions are performed in a model-independent and universal way. Our results hold for all compactifications, even for those that break supersymmetry.
The poles of the respective amplitudes are due to the exchanges of massless gauge bosons and universal string Regge excitations only. On the other hand, the amplitudes that involve four matter fields depend on the details of the D-brane geometry, and how the D-branes are embedded into the compact CY space. Here also modes of the internal geometry can be exchanged during the four fermion scattering processes. Finally, in Section six, we compute the squared moduli of all amplitudes, sum over polarizations and colors of final particles and average over polarization and colors of incident particles, as needed for the unpolarized parton cross sections. The results are presented in Tables.
In another publication [25] written in collaboration with Luis Anchordoqui, Haim Goldberg and Satoshi Nawata, we use our results to analyze the dijet signals for low mass strings at the LHC.
Physics of large extra dimensions and low string scale
Large extra dimensions are a very appealing solution to the hierarchy problem [2]. The gravitational and gauge interactions are unified at the electroweak scale and the observed weakness of gravity at lower energies is due to the existence of large extra dimensions.
Gravitons may propagate into the extra space, whereby the gravitational coupling constant is diluted to its observed value. Extra dimensions arise naturally in string theory. Hence, one obvious question is how to embed the above scenario into string theory and how to compute cross sections.
Planck mass and gauge couplings in D-brane compactifications
Here we discuss the gravitational and gauge couplings in orientifold compactifications.
In the following we consider the type II superstring compactified on a six-dimensional compactification manifold. In addition, we consider a Dp-brane wrapped on a (p − 3)-cycle, with the remaining four dimensions extended into the uncompactified space-time. We have d = p − 3 internal directions parallel to the Dp-brane world volume and d⊥ = 9 − p internal directions transverse to it. Let us denote the radii (in the string frame) of the parallel directions by R_i, i = 1, ..., d, and the radii of the transverse directions by R⊥_j, j = 1, ..., d⊥. The generic setup is displayed in Figure 1. While the gauge interactions are localized on the D-brane world volume, the gravitational interactions spread into the transverse space as well, which leads to qualitatively different behavior of the two couplings. In D = 4 we obtain for the Planck mass (up to numerical factors of order one, which we suppress)

M_Planck^2 ≃ (8/g_string^2) M_string^8 V_6 ,  (2.1)

where the internal six-dimensional (string frame) volume V_6 is expressed in terms of the parallel and transverse radii as

V_6 = ∏_{i=1}^{d} R_i × ∏_{j=1}^{d⊥} R⊥_j .

The dilaton field φ_10 is related to the D = 10 type II string coupling constant through g_string = e^{φ_10}. The gravitational coupling constant follows from (2.1) through the relation

κ^2 = 8π G_N = 8π / M_Planck^2 .  (2.2)

On the other hand, in type II superstring theory the gauge theory on the D-brane world-volume has the gauge coupling

g_Dp^{-2} ≃ (M_string^{p-3} / g_string) ∏_{i=1}^{d} R_i .  (2.3)

In (2.3) each factor R_i accounts for a 1-cycle wrapped along the i-th coordinate segment. While the size of the gauge couplings is determined by the size of the parallel dimensions, the strength of gravity is influenced by all directions.
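A small numerical sketch of these scaling relations follows, assuming the standard type II forms M_Planck^2 ∼ (8/g_string^2) M_string^8 V_6 and g_Dp^{-2} ∼ (M_string^{p-3}/g_string) ∏ R_i with all O(1) factors dropped; the input values (string scale, coupling, radii) are purely illustrative.

```python
# Order-of-magnitude sketch of relations (2.1) and (2.3); all O(1)
# numerical factors are dropped and the input values are illustrative.
g_string = 0.2          # assumed weak string coupling
M_string = 5.0e3        # assumed string scale in GeV
p = 6                   # a D6-brane: d = 3 parallel, d_perp = 3 transverse
d, d_perp = p - 3, 9 - p

R_par = [1.0 / M_string] * d     # parallel radii at the string length
R_perp = [6.1e5] * d_perp        # large transverse radii in GeV^-1 (assumed)

V6 = 1.0
for R in R_par + R_perp:
    V6 *= R

# (2.1): four-dimensional Planck mass from string scale and internal volume
M_Planck = (8.0 / g_string**2 * M_string**8 * V6) ** 0.5

# (2.3): gauge coupling on the Dp-brane from the parallel volume only
V_par = 1.0
for R in R_par:
    V_par *= R
g_Dp = (g_string / (M_string**(p - 3) * V_par)) ** 0.5

print(f"M_Planck ~ {M_Planck:.2e} GeV, g_Dp ~ {g_Dp:.2f}")
```

With parallel radii at the string length, the parallel volume factor is one and g_Dp^2 reduces to g_string, while the large transverse radii alone push M_Planck up to ~10^19 GeV despite a TeV-range string scale.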
Large extra dimensions and low string scale
From (2.1) and the gauge coupling (2.3) we may deduce a relation between the Planck mass M_Planck, the string mass M_string and the sizes R_j of the compactified internal directions. For type II we obtain 3 (again up to numerical factors)

M_Planck^2 ≃ (8/g_Dp^4) M_string^{14-2p} ∏_{j=1}^{d⊥} R⊥_j / ∏_{i=1}^{d} R_i .  (2.4)

Hence, by enlarging some of the transverse compactification radii R⊥_j, the string scale has to become lower in order to reproduce the correct Planck mass (p < 7). This is to be contrasted with a theory of closed (heterotic) strings only: in that case the relation between the Planck mass and the string scale does not depend on the volume. It is given by the relation M_string = g_string M_Planck, which requires a high string scale M_string ∼ 10^17 GeV for the correct Planck mass.
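To get a feel for the numbers, one can invert the type II relation between M_Planck, M_string and the radii for a common transverse radius. The sketch below uses the order-of-magnitude form M_Planck^2 ∼ (8/g_Dp^4) M_string^{14-2p} ∏R⊥ / ∏R with O(1) factors dropped, a D6-brane with parallel radii at the string length, and illustrative values of M_string and g_Dp.

```python
# Invert the order-of-magnitude relation
#   M_Planck^2 ~ (8/g_Dp^4) * M_string^(14-2p) * prod(R_perp) / prod(R_par)
# for a common transverse radius R_perp (all O(1) factors dropped).
M_Planck = 1.2e19        # GeV
M_string = 5.0e3         # assumed string scale in GeV
g_Dp = 0.4               # assumed brane gauge coupling
p = 6                    # D6-brane: 3 parallel and 3 transverse directions
d, d_perp = p - 3, 9 - p

prod_R_par = (1.0 / M_string) ** d   # parallel radii at the string length

R_perp_prod = M_Planck**2 * g_Dp**4 / 8.0 * prod_R_par / M_string**(14 - 2 * p)
R_perp = R_perp_prod ** (1.0 / d_perp)   # common transverse radius in GeV^-1

# Convert to centimetres (1 GeV^-1 ~ 1.97e-14 cm)
print(f"R_perp ~ {R_perp:.2e} GeV^-1 ~ {R_perp * 1.97e-14:.2e} cm")
```

For these inputs the required transverse radius comes out far below current short-distance gravity bounds, illustrating why a multi-TeV string scale with several transverse dimensions is experimentally viable.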
A priori, there are no compelling reasons why the string mass scale M_string should be much lower than the Planck mass. In the large volume compactifications of [24,28,29] it was shown that one can indeed stabilize moduli in such a way that the string scale M_string is at intermediate energies of about 10^11-12 GeV. Then the internal CY volume V_6 is of order V_6 M_string^6 = O(10^16). The motivation for this scenario is to obtain a supersymmetry breaking scale around 1 TeV, since one derives the following relation for the gravitino mass:

m_{3/2} ∼ M_string^2 / M_Planck .  (2.5)

However, giving up the requirement of supersymmetry at the TeV scale, one is free to consider CY manifolds with much larger volume. In fact, if it happens that M_string lies within the range of LHC energies, not too far beyond 1 TeV, string theory can be tested. In this case the CY volume is as large as V_6 M_string^6 = O(10^32). Of course one has to find scalar potentials with minima that lead to such big internal volumes.
Some spectacular signatures are expected near the string mass threshold. They are related to the production of virtual or real string Regge excitations with masses of order M_string and to the effects of strongly coupled gravity, like the production and decays of microscopic black holes [30,31]. The reason why gravity is expected to become strong at energies comparable to the string mass is the inevitable presence of Kaluza-Klein excitations of gravitons and other particles propagating in the bulk of large extra dimensions, with (model-dependent) masses expected in the range from order 10^-3 eV to order 1 MeV. Although ordinary matter particles couple to these excitations very weakly, with the strength determined by Newton's constant, the combined effect of a large number of virtual Kaluza-Klein gravitons is to increase the strength of gravitational forces at high energies. In string theory, this effect may occur below or above the fundamental string mass scale, depending on the string coupling constant g_string. For example, black holes are expected to be produced at energies of order M_string/g_string^2 [5], although some strong gravity effects may appear already at slightly lower energies [6]. Thus in weakly coupled string theory with g_string < 1, black hole production and, in general, the onset of strong gravity effects occur above the string mass scale. In this case, the lowest energy signals of strings at the LHC would be due to virtual Regge excitations produced in parton collisions.

Footnote 3: The discussion carries over to type I superstring theory. The type I theory may be obtained from type IIB by an orientifold projection. The world-volume gauge theory on a D-brane sitting on the orientifold plane then becomes SO(2N) or USp(N). In that case, all gauge couplings, derived in the following for U(N) gauge groups, have to be multiplied by a factor of 2.
The corresponding scattering amplitudes can be evaluated by using string perturbation theory, with the dominant contributions originating from disk diagrams. In this work, we discuss the disk amplitudes necessary for studying all 2 → 2 scattering processes of gluons and quarks originating from D-brane intersections.
String Regge resonances in models with a low string scale are also discussed in [7,12], while KK graviton exchange, which appears at the next order in perturbation theory, is discussed in [7,32,33]; cf. Table 1.
Exchanges of string Regge excitations and string contact interactions
Due to the extended nature of strings, the world-sheet string amplitudes are generically non-trivial functions of α' in addition to the usual dependence on the kinematic invariants and degrees of freedom of the external states. In the effective field theory description this α'-dependence gives rise to infinitely many resonance channels 5 due to Regge excitations and/or new contact interactions.
Generically, as we shall see in Section 5, tree-level string amplitudes involving four gluons or amplitudes with two gluons and two fermions are described by the Euler Beta function B(a,b) = Γ(a)Γ(b)/Γ(a+b), depending on the kinematic invariants s = (k_1 + k_2)^2, t = (k_1 − k_3)^2, u = (k_1 − k_4)^2, with s + t + u = 0 and k_i the four external momenta. The whole amplitudes A(k_1, k_2, k_3, k_4; α') may be understood as an infinite sum over s-channel poles with intermediate string states |k; n⟩ exchanged, cf. Figure 2. After neglecting kinematical factors, the string amplitude A(k_1, k_2, k_3, k_4; α') assumes the form

A(k_1, k_2, k_3, k_4; α') = ∑_{n=0}^∞ γ(n) / (s − M_n^2) ,  M_n^2 = n M_string^2 ,  (2.6)

an infinite sum over s-channel poles at the masses of the string Regge excitations. In (2.6) the residues γ(n) are determined by the three-point coupling of the intermediate states |k; n⟩ to the external particles and are given by

γ(n) ∼ (α'u + 1)(α'u + 2) ··· (α'u + n) / n! ,  (2.7)

with n + 1 being the highest possible spin of the state |k; n⟩.
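The pole expansion just described can be checked numerically against the Beta function itself. The sketch below works in string units (α' = 1), picks an arbitrary non-resonant kinematic point, and truncates the pole sum at a finite level; the residues are built up recursively from the Pochhammer-type product above.

```python
import math

# Check numerically that the Euler Beta function equals its expansion
# over s-channel poles (string units, alpha' = 1):
#   B(-s, -u) = -sum_{n>=0} [ (u+1)(u+2)...(u+n) / n! ] / (s - n)
def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def pole_sum(s, u, n_max=400):
    total, residue = 0.0, 1.0          # residue at level n = 0 is 1
    for n in range(n_max + 1):
        if n > 0:
            residue *= (u + n) / n     # builds (u+1)...(u+n)/n!
        total += -residue / (s - n)
    return total

s, u = 0.3, -0.8                        # arbitrary non-resonant point
exact = beta(-s, -u)
approx = pole_sum(s, u)
print(exact, approx)
```

The truncated sum converges (slowly, since the residues fall off only as a power of n) to the exact Beta-function value, confirming that all the α'-dependence of such amplitudes is carried by the tower of Regge poles.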
Another way of looking at the expression (2.6) appears when we express each term in the sum as a power series expansion in α', using 1/(s − n/α') = −(α'/n) ∑_{k≥0} (α's/n)^k:

A(k_1, k_2, k_3, k_4; α') = γ(0)/s − ∑_{n=1}^∞ γ(n) ∑_{k=0}^∞ α'^{k+1} s^k / n^{k+1} .  (2.9)

In this form (2.9) the massless state n = 0 gives rise to a field-theory contribution (α' = 0), while at the order α'^2 all massive states n ≠ 0 sum up to a finite term. The n = 0 term in (2.9) describes the field-theory contribution to the diagram of Figure 2, e.g. the exchange of a massless gluon. On the other hand, the term at the order α'^2 describes a new string contact interaction as a result of summing up all heavy string states. Expanding (2.9) to higher orders in α' yields an infinite series of new string contact interactions for the effective field theory. For example, for a four-gluon superstring amplitude the first string contact interaction is given by the operator α'^2 g_Dp^{-2} tr F^4, which schematically corrects the YM theory as

L_eff ∼ g_Dp^{-2} tr F^2 + α'^2 g_Dp^{-2} tr F^4 + ... .

While the first string correction for four-gluon scattering yields α'^2 contact terms, the scattering of four chiral fermions already yields a correction at order α', cf. Section 5.
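The appearance of an α'^2 contact term from the sum over heavy states can also be verified numerically. The sketch below uses a generic Veneziano-type form factor V = Γ(1 − α's)Γ(1 − α'u)/Γ(1 − α's − α'u) (a standard example, not the paper's full amplitude), whose leading deviation from the field-theory value 1 is −(π^2/6) α'^2 s u:

```python
import math

# Leading alpha'^2 contact term from summing heavy string states:
# the Veneziano-type form factor
#   V(s, u) = Gamma(1 - a*s) * Gamma(1 - a*u) / Gamma(1 - a*(s + u))
# (a = alpha') deviates from its field-theory value 1 by -(pi^2/6) a^2 s u.
def formfactor(s, u, a):
    return math.gamma(1 - a * s) * math.gamma(1 - a * u) / math.gamma(1 - a * (s + u))

a = 1.0
s, u = 0.01, 0.02                       # well below the first resonance a*s = 1
exact = formfactor(s, u, a)
contact = 1.0 - (math.pi**2 / 6) * (a * s) * (a * u)
print(exact, contact)
```

Far below the first resonance the exact form factor and the one-term contact approximation agree to the next order in α', which is exactly the statement that the infinite tower of massive states collapses to a local α'^2 operator at low energies.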
Generalities
We will consider type II orientifolds 6 with several stacks of Dp_a-branes, each wrapped around an individual compact homology (p − 3)-cycle π_a of the internal space. Hence the effective open string gauge theories with groups G_a live in the (p + 1)-dimensional subspaces R^{1,3} ⊗ π_a.

Footnote 6: Alternative GUT constructions from F-theory have recently been discussed in [34].
In order to incorporate non-Abelian gauge interactions and to obtain massless fermions in non-trivial gauge representations, one has to introduce D-branes in type II superstrings. Specifically there exist three classes of four-dimensional models: (i) Type I compactifications with D9/D5 branes This class of IIB models contains different stacks of D9-branes, which wrap the entire space M 6 , and which also possess open string, magnetic, Abelian gauge fields F ab on their world volumes (magnetized branes). These magnetic fields are in fact required if one wants to get chiral fermions from open strings. Because of Ramond tadpole cancellation one also needs an orientifold 9-plane (O9-plane). In addition one can also include D5-branes and corresponding O5-planes.
(ii) Type IIB compactifications with D7/D3 branes Here we are dealing with different stacks of D7-branes, which wrap different internal 4-cycles, which intersect each other. The D7-branes can also carry non-vanishing open string gauge flux F ab , which is needed for chiral fermions. In addition, one can also allow for D3-branes, which are located at different points of M 6 . In order to cancel all Ramond tadpoles one needs in general O3- and O7-planes. A specific class of chiral gauge models can be obtained by placing a stack of D3-branes at a singularity of the internal space M 6 .
(iii) Type IIA compactifications with D6 branes
This class of models contains intersecting D6-branes, which are wrapped around 3-cycles of M 6 . Now, orientifold O6-planes π O6 are needed for Ramond tadpole cancellation. In general, each stack of D6 a -branes, which is wrapped around the cycle π a , is accompanied by the orientifold mirror stack, wrapped around the reflected cycles π ′ a . The chiral massless spectrum is completely fixed by the topological intersection numbers I of the 3-cycles of the configuration, cf. Table 2.
Table 2: Intersection of 3-cycles π a , π b , mirror cycles π ′ a and orientifold plane π O6
In general some of the string U (1)'s are anomalous and receive masses due to the Green-Schwarz mechanism. However, for intersecting brane worlds it may also happen that via axionic couplings some anomaly-free Abelian gauge groups become massive. The condition that a linear combination U (1) Y = Σ i c i U (1) i remains massless reads: Σ i c i N i (π i − π ′ i ) = 0. In general, if the hypercharge is such a linear combination of U (1)'s, Q Y = Σ i c i Q i , then the hypercharge gauge coupling is given by the corresponding combination of the couplings g i of the individual stacks, where we have taken into account that the U (1)'s are generically not canonically normalized (for all possible hypercharge assignments in D-brane orientifolds see [35]). In the following we will describe some local type IIA/IIB D-brane configurations that lead to the SM in a very economic way.
Three stack D-brane models
Here one starts with three stacks of D-branes with initial gauge symmetries U (3) a × U (2) b × U (1) c : The (left-handed) SM spectrum is shown in Table 3. The hypercharge Q Y is given as a linear combination of the three U (1)'s. Here one is forced to realize the left-handed (ū, c̄, t̄)-quarks in the antisymmetric representation of U (3), which is the same as the anti-fundamental representation 3̄. Note that the three stack models with antisymmetric matter are dual to the D3-brane quivers at CY singularities [36,37,38]. Alternative bottom-up constructions of the SM via D-branes can be found in [39].
Four stack D-brane models
One of the most common ways to realize the SM is by considering four stacks of D-branes. There are several simple ways to embed the SM gauge group into products of unitary and symplectic gauge groups (see [40]). We will use as a prototype model four stacks of D-branes with gauge symmetries U (3) a × U (2) b × U (1) c × U (1) d : The intersection pattern of the four stacks of D6-branes can be depicted as in Figure 4.
Fig. 4 Intersection pattern of four stacks of D6-branes giving rise to the MSSM.
The chiral spectrum of the intersecting brane world model should be identical to the chiral spectrum of the SM particles. In type IIA, this fixes uniquely the intersection numbers of the 3-cycles, (π a , π b , π c , π d ), the four stacks of D6-branes are wrapped on. There exist several ways to embed the hypercharge Q Y into the four U (1) gauge symmetries. The standard electroweak hypercharge Q (S) Y is given as a linear combination of three of the four U (1)'s. Therefore, in this case the gauge coupling of the hypercharge is given by the corresponding combination of the couplings of the three stacks involved. Now we turn to the particle content of our prototype model. In compact orientifold compactifications each stack of D-branes is accompanied by an orientifold mirror stack of D ′ -branes. In the next Section about the amplitudes, we will not distinguish between the D-branes and the mirror D ′ -branes. Hence we will use in the following the indices a, b, c, d collectively for the D-branes as well as for their mirror branes. Then self-intersections among D-branes include intersections between D- and D ′ -branes. Furthermore, for simplicity, we will suppress from the spectrum those open string states which one also gets from intersections between D-branes and orientifold planes. With these restrictions the left-handed fermion spectrum for our prototype model is presented in Table 4.
To derive three generations of quarks and leptons, the intersection numbers in Table 4 must satisfy certain phenomenological restrictions: We must have I ab = 3. From the left-handed anti u-quarks, we get that I ac = 3, and likewise for the two types of left-handed anti d-quarks, we infer that I ac + I ad + 1 2 I aa = 3. In the lepton sector we require that I bc + I bd = 3 and 1 2 (I cc + I dd ) + I cd = 3.
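For illustration, one can check how a hypercharge embedding of this type reproduces the SM hypercharges. The combination Q Y = 1/6 Q a − 1/2 Q c + 1/2 Q d and the U(1) charge assignments below are one common Madrid-type choice from the intersecting-brane literature, not necessarily the assignment of Table 4:

```python
from fractions import Fraction as F

# Illustrative check of a Madrid-type hypercharge embedding
# Q_Y = 1/6 Q_a - 1/2 Q_c + 1/2 Q_d. The U(1)_{a,b,c,d} charges
# below are one common choice from the literature (assumption),
# not necessarily those of Table 4.
c = (F(1, 6), F(0), F(-1, 2), F(1, 2))   # coefficients (c_a, c_b, c_c, c_d)

# (Q_a, Q_b, Q_c, Q_d) for each left-handed field
charges = {
    "Q_L":   (1, -1, 0, 0),    # quark doublet, from the a-b intersection
    "u_R^c": (-1, 0, 1, 0),    # from the a-c intersection
    "d_R^c": (-1, 0, -1, 0),   # from the a-c' intersection
    "L":     (0, -1, 0, -1),   # lepton doublet, from the b-d intersection
    "e_R^c": (0, 0, -1, 1),    # from the c-d intersection
}

def hypercharge(q):
    return sum(ci * qi for ci, qi in zip(c, q))
```

With this assignment one finds Y(Q_L) = 1/6, Y(u_R^c) = −2/3, Y(d_R^c) = 1/3, Y(L) = −1/2 and Y(e_R^c) = 1, as required for a left-handed SM spectrum.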
Embedding of Standard Model D-branes into large volume Calabi-Yau spaces
Let us now discuss large extra dimensions in the context of string compactifications. In fact, it is not completely straightforward to construct SM-like D-brane models on CY spaces with large transverse dimensions. In order to combine D-branes with SM particle content with the scenario of large extra dimensions, one has to consider specific types of CY compactifications. The three or four stacks of intersecting D-branes that give rise to the spectrum of the SM are just local modules that have to be embedded into a global large volume CY-manifold in order to obtain a consistent string compactification. For internal consistency several tadpole and stability conditions have to be satisfied that depend on the details of the compactification, such as background fluxes etc. In this work we will not aim to provide fully consistent orientifold compactifications with all tadpoles cancelled, since it is enough for us to know the properties of the local SM D-brane modules for the computation of the scattering amplitudes among the SM open strings. However it is important to emphasize that in order to allow for a large volume compactification, the D-branes cannot be wrapped around untwisted 3- or 4-cycles of a compact torus or of toroidal orbifolds; instead one has to consider twisted, blowing-up cycles of an orbifold or more general CY spaces with blowing-up cycles. The reason for this is that when wrapping the three or four stacks of D-branes around internal cycles of a six-torus or untwisted orbifold cycles, the volumes of these cycles involve the toroidal radii. Therefore these volumes cannot be kept small while making the overall volume of the six-torus very big. Hence, the SM D-branes must be wrapped around small cycles inside a blown up orbifold or a CY manifold. Other cycles have to become large, in order to get a CY space with large volume and a low string scale M string .
The embedding of the local SM D-brane module into a large CY manifold is depicted in Figure 5. At some other corner of the CY manifold there can be possibly other D-branes, which do not intersect the SM branes and build a hidden gauge sector of the theory.
In Section 5 we shall compute open string disk four-point amplitudes involving SM matter fields. For those amplitudes involving four gauge bosons or two gauge bosons and two matter fermions, the amplitudes do not depend on the geometry of the underlying CY spaces. On the other hand, the four-fermion amplitudes depend on the internal CY geometry and topology. Concretely, the four-fermion amplitudes in general depend on the CY intersection numbers, and also on the rational instanton numbers of the CY space.
However, to perform the open string CFT computations for the scattering amplitudes of matter fields we shall assume that the SM D-branes are wrapped around flat, toroidal-like cycles. Therefore the four-fermion amplitudes are functions of toroidal wrapping numbers. When eventually switching from our toroidal-like results to more general CY expressions, some of the factors, which depend on the toroidal geometry, have to be replaced by geometrical or topological CY parameters. However, the kinematical structure of the matter field amplitudes is universal and not affected by the underlying CY geometry. At any rate, as we shall argue at the end of Section 5, in the case that the longitudinal brane directions are somewhat larger than the string length M −1 string , the four-fermion couplings depend only on the local structure of the brane intersections, but not on the global CY geometry.
Type IIB large volume compactifications with wrapped D7-branes
In type IIB orientifolds we assume that the D7-branes are wrapped around 4-cycles inside a CY-orientifold. The relation (2.2) for the volume V 6 applies only for toroidal and orbifold compactifications. Therefore we shall generalize the expressions (2.1) for M Planck to the case of large volume CY compactifications. In the string frame 7 , the volume V 6 of a CY space X is given by V 6 = 1 6 ∫ X J ∧ J ∧ J = 1 6 κ ijk t i t j t k , with t i (i = 1, . . . , h 1,1 ) the (real) Kähler moduli in the string basis and κ ijk the triple intersection numbers of X. The Kähler form J = Σ i t i D i is expanded w.r.t. a basis { D i } of the cohomology H 1,1 (X). On the other hand, the real parts of the physical Kähler moduli T i correspond to the volumes of the CY homology four-cycles D k and are computed from the relation Re T k = ∂V 6 /∂t k = 1 2 κ kij t i t j . (4.2) It follows that the volume V 6 of X becomes a function of degree 3/2 in the Kähler moduli T i . For D7-branes wrapped around the four-cycle D k , the corresponding gauge coupling constant takes the form (4.4), proportional to e −φ 10 times the wrapped four-cycle volume Re T k . In the Einstein frame the Kähler moduli t k are multiplied by the factor e − 1 2 φ 10 . Therefore, in the Einstein frame the CY volume reads in analogy 8 to Eq. (2.3). In the case of magnetic F-fluxes on the D7-brane world-volume the gauge couplings (4.4) receive an additional S-dependent contribution, cf. [18]. Now we consider CY manifolds which allow for large volume compactification. Here we assume that a set of four-cycles D b α (α = 1, . . . , h 1,1 b ) can be chosen arbitrarily large while keeping the rest of the four-cycles D s β small. Since we want the gauge couplings of the SM gauge groups to have finite, not too small values, we must assume that the SM gauge bosons originate from D7-branes wrapped around the small 4-cycles D s β . This splitting of the four-cycles into big and small cycles is only possible if the CY triple intersection numbers form a specific pattern. In addition, the Euler number of the CY space must be negative, i.e. h 2,1 > h 1,1 > 1.
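The degree-3/2 property can be made explicit in the simplest toroidal-like case κ 123 = 1 (all other intersection numbers vanishing), where V 6 = t 1 t 2 t 3 and the 4-cycle volumes are τ k = ∂V 6 /∂t k , so that V 6 = (τ 1 τ 2 τ 3 ) 1/2 . A short numerical sketch, with illustrative moduli values not taken from the paper:

```python
import math

# Illustrative check: for kappa_123 = 1 the volume V6 = t1*t2*t3
# is a degree-3/2 homogeneous function of the 4-cycle moduli
# tau_k = dV6/dt_k (tau_1 = t2*t3, etc.), i.e. V6 = sqrt(tau1*tau2*tau3).
t = (2.0, 3.0, 5.0)                            # sample 2-cycle moduli (assumption)
V6 = t[0] * t[1] * t[2]
tau = (t[1] * t[2], t[0] * t[2], t[0] * t[1])  # 4-cycle volumes

assert abs(V6 - math.sqrt(tau[0] * tau[1] * tau[2])) < 1e-12

# Homogeneity: t -> sqrt(L)*t rescales tau -> L*tau and V6 -> L**1.5 * V6.
L = 4.0
ts = tuple(math.sqrt(L) * x for x in t)
V6s = ts[0] * ts[1] * ts[2]
assert abs(V6s - L ** 1.5 * V6) < 1e-9
```

The same scaling argument goes through for any intersection pattern, since V 6 is cubic in the t i while the τ k are quadratic.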
For a simple class of CY spaces with this property the overall volume V 6 is controlled by one big four-cycle T b . In this case it has been shown [28,29,41] that one may indeed find minima of the scalar potential, induced by fluxes and radiative corrections in the Kähler potential, that allow for T b ≫ T s β . For these CY-spaces, the volume has to take the form V 6 = α ( T b 3/2 − h(T s β ) ), where h is a homogeneous function of the small Kähler moduli T s β of degree 3/2. E.g. one may consider the more specific volume form V 6 = α ( T b 3/2 − Σ β λ β (T s β ) 3/2 ). Looking from the geometrical point of view, these models have a "Swiss cheese" like structure, with holes inside the CY-space given by the small four-cycles. The simplest Swiss cheese example is the CY manifold P [1,1,1,6,9] [18] with h 1,1 = 2. In terms of the 2-cycles the volume is given by (4.8). According to (4.2) the corresponding 4-cycle volumes become (4.9). Then the volume can be written in terms of the 4-cycles as (4.10). Another interesting Swiss cheese example, the CY manifold P [1,3,3,3,5] [15] with h 1,1 = 3, has recently been discussed in [42] in the context of a phenomenologically attractive CY orientifold model where the small cycles are wrapped by D7-branes. However, in order to accommodate the SM with at least three or four stacks of wrapped D7-branes we need more small 4-cycles. Therefore we assume that there exist CY spaces which have a set of small, blowing-up four-cycles that do not intersect the big cycles, i.e. the CY volume is of the form (4.11). The SM D7-branes are wrapped around the four-cycles D β i with volumes T β i that are kept small. At the same time one is allowed to choose the four-cycle volume T b to be large. 8 On the other hand, for (space-time filling) D3-branes the corresponding gauge coupling constant is determined by the dilaton alone and is independent of the four-cycle volumes.
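The decoupling of the big and small cycles can be illustrated with the widely quoted Swiss cheese volume of P 4 [1,1,1,6,9] from the large volume scenario literature, V 6 = (τ b 3/2 − τ s 3/2 ) / (9 √2). The prefactor and the moduli values below are taken from that literature / chosen for illustration, not from the equations of this paper:

```python
import math

# Sketch of the "Swiss cheese" volume quoted in the LVS literature
# for P^4_[1,1,1,6,9] (h^{1,1} = 2): V6 = (tau_b^{3/2} - tau_s^{3/2})/(9*sqrt(2)).
# Prefactor and moduli values are illustrative assumptions.
def volume(tau_b, tau_s):
    return (tau_b ** 1.5 - tau_s ** 1.5) / (9.0 * math.sqrt(2.0))

tau_s = 10.0            # small cycle wrapped by the SM D7-branes
g_inv_sq = tau_s        # g^-2 is proportional to the small 4-cycle volume

volumes = [volume(tau_b, tau_s) for tau_b in (1e2, 1e4, 1e6)]
# The overall volume grows like tau_b^{3/2}, lowering M_string,
# while the SM gauge coupling, set by tau_s, is untouched.
```

This is exactly the splitting invoked in the text: blowing up only τ b sends V 6 (and hence M Planck /M string ) to large values at fixed SM gauge couplings.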
Let us give one example of a hypothetical CY space with three distinct four-cycles D β (β = 1, 2, 3), which still have toroidal-like intersection numbers κ 123 = 1, and whose intersections with the large four-cycles are absent. The dual two-cycles locally form a T 2 × T 2 × T 2 torus inside the CY space. Its volume form is assumed to be (4.12), where the dots stand for the contribution of other possible cycles. The big four-cycle is just (4.13), and the three small four-cycles intersect in one point and are given as (4.14). In terms of the four-cycle volumes, V is then given as (4.15). Hence, this would-be CY has the form of a Swiss cheese geometry, where the intersecting four-cycle holes cut themselves a local T 2 × T 2 × T 2 space out of the entire CY manifold. However we do not know if this kind of CY does exist. Let us comment briefly on IIB large volume orientifolds with D5-branes, which are wrapped around CY 2-cycles. Again, the wrapped 2-cycles have to be kept small, whereas the overall volume of the CY-space is very large. Models of this kind are e.g. possible on toroidal orbifolds, where the D5-branes are located at a singularity in a transversal two-dimensional space, as discussed in [43].
Type IIA large volume compactifications with wrapped D6-branes
Type IIA orientifolds with wrapped D6-branes can be obtained from the type IIB compactifications via T-duality resp. via mirror transformations, which basically exchange the role of the Kähler moduli with the role of the complex structure moduli and vice versa, i.e. going from the type IIB CY space X to its type IIA mirror space, denoted by X̃. The volume of X̃ is still given by eqs. (2.2) resp. (4.11), now expressed in terms of properly defined type IIA radii resp. IIA Kähler moduli T i , which are the 2-cycle volumes on X̃. Moreover the orientifold O3/O7-planes in type IIB become O6-planes in type IIA, which are wrapped around certain homology 3-cycles Π O6 inside X̃. Similarly, the type IIB D3/D7-branes become D6-branes, wrapped around homology 3-cycles Π a , which are suitably embedded into the large volume CY space X̃. The Π a intersect each other at angles θ ab , and their intersection angles with the orientifold cycles Π O6 are denoted by θ a .
The corresponding D6-brane gauge coupling constants are proportional to the volumes of the wrapped 3-cycles, i.e.: (4.16) The volume of the cycle Π a is given in terms of the associated complex structure moduli U a of X. To accommodate type IIA orientifolds with low string scale and large overall volume, the corresponding complex structure moduli U s β , around which the SM D6-branes are wrapped, must be small compared to the volume of X to achieve finite values for the corresponding gauge coupling constants. As in type IIB, the CY spaces X must satisfy certain restrictions for large volume compactifications to be possible. In principle the structure of the allowed IIA CY spaces can be inferred from type IIB via mirror symmetry. E.g. one can wrap the D6-branes around certain rigid (twisted) 3-cycles of orbifold compactifications (see e.g. [44]), which can be kept small, whereas the overall volume is made very large.
To perform the computation of the matter field scattering amplitudes, as in type IIB we assume that the 3-cycles, which are wrapped by the SM D6-branes, are flat and have a kind of toroidal-like intersection pattern. Specifically, we assume that the SM sector is wrapped around 3-cycles inside a local T 2 × T 2 × T 2 , and the D6-brane wrappings around the three 2-tori are described by wrapping numbers (n i a , m i a ) (i = 1, 2, 3), where the lengths L i a of the wrapped 1-cycles in each T 2 are given by Eq. (4.17). Then the gauge coupling on a D6-brane which is wrapped around such a 3-cycle is given by (4.18) [45,46,47]. Here, the 3-cycle Π a is assumed to be a direct product of three 1-cycles with wrapping numbers (n i , m i ) w.r.t. a pair of internal directions 9 . In terms of the corresponding three complex structure moduli U i of the T 2 's this equation becomes (4.20). Finally, the intersection angles of the D6-branes with the O6-planes along the three y i directions can be expressed as (4.21), and the D6-brane intersection angles are simply given as (4.22). More details about the effective gauge couplings, and also about the matter field metrics of these kinds of intersecting D-brane models can be found in [18,48].
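On such a factorized T 2 × T 2 × T 2 background the chiral multiplicity at a brane intersection is the product of the per-torus intersection numbers, I ab = Π i (n i a m i b − m i a n i b ), and the angle of each wrapped 1-cycle follows from tan θ i a = m i a U i / n i a . A sketch with hypothetical wrapping numbers and moduli (chosen only for illustration):

```python
import math

# Sketch: intersection numbers and angles of factorized D6-brane
# 3-cycles on T^2 x T^2 x T^2. Wrapping numbers and complex structure
# moduli U^i are hypothetical, chosen only for illustration.
U = (1.0, 2.0, 0.5)                   # complex structure moduli (assumption)
brane_a = ((1, 0), (1, 0), (1, 0))    # wrapping numbers (n^i, m^i)
brane_b = ((1, 1), (1, -1), (1, 3))

def intersection(wa, wb):
    I = 1
    for (na, ma), (nb, mb) in zip(wa, wb):
        I *= na * mb - ma * nb        # per-torus intersection number
    return I

def angles(w):
    # angle of the wrapped 1-cycle in each T^2 w.r.t. the n-axis
    return tuple(math.atan2(m * u, n) for (n, m), u in zip(w, U))

I_ab = intersection(brane_a, brane_b)                    # chiral multiplicity
theta_ab = tuple(tb - ta
                 for ta, tb in zip(angles(brane_a), angles(brane_b)))
```

For the sample wrappings above one finds |I ab | = 3, i.e. a three-generation intersection of the kind required by the constraints of Section 4.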
Four-point string amplitudes of gauge and matter Standard Model fields
In this Section we compute the four-particle amplitudes relevant to LHC physics at the leading order of string perturbation theory, with the string disk world-sheet incorporating the propagation of virtual Regge string excitations at the tree level of effective field theory 10 . With the protons colliding at LHC, there are always two incident partons, gluons or quarks, while the two outgoing particles are partons fragmenting into jets, electroweak gauge bosons or leptons produced via the Drell-Yan mechanism. In all these processes, the baryonic stack of branes plays a special role. We will call it stack a. Note that in addition to gluons g in the adjoint representation of the SU (N a ) = SU (3) color group, this stack gives rise to a color singlet gauge boson A coupled to the baryon number. This boson combines with gauge bosons associated to other stacks to form the vector boson coupled to the electroweak hypercharge. We will not enter into details of the mixing mechanism because they are model-dependent. Thus we simply consider A as one of the particles possibly produced in parton collisions. Starting from the amplitudes involving A and gauge bosons associated to different stacks, one can easily obtain the physical amplitudes describing the production of photons, Z 0 , or hypothetical Z ′ s in the framework of specific models. Since four-point disk amplitudes can involve as many as four different stacks, it is very important to establish a transparent notation. In this Section, we are still using the string (hep-th) conventions, with the metric signature (− + + +) and some kinematic invariants defined in the string units. In the next Section, we will make the transition to the conventions used in the experimental literature (hep-ex). 9 In type IIB orientifolds, the gauge coupling of a D7-brane, wrapped around the corresponding 4-cycle, is given analogously by (4.19).
For the correct normalization of the various fields we recall the low-energy effective (N = 1 SUSY) action of the gauge and matter sectors, which reads up to the two derivative level: where F r µν is the field strength of A r µ and λ r is its partner gaugino. The first sum runs over all stacks of D-branes, while the second over their intersections. All traces are in the fundamental representations. The gauge covariant derivatives of matter fermions ψ α β associated to the intersection of a and b are given by: and similar expressions for scalars φ α β . Note that all matter fields are canonically normalized in Eq. (5.1), i.e. the moduli-dependent metrics have been absorbed by appropriate field redefinitions.
Four-point string amplitudes and open string vertex operators
Let Φ i , i = 1, 2, 3, 4, represent gauge bosons, quarks or leptons of the SM realized on three or more stacks of intersecting D-branes. The corresponding string vertex operators V Φ i are constructed from the fields of the underlying superconformal field theory (SCFT) and contain explicit (group-theoretical) Chan-Paton factors. In order to obtain the scattering amplitudes, the vertices are inserted at the boundary of a disk world-sheet, and the SCFT correlation function (5.3) is evaluated. Here, the sum runs over all six cyclic inequivalent orderings π of the four vertex operators along the boundary of the disk. Each permutation π gives rise to an integration region I π = {z ∈ R |z π(1) < z π(2) < z π(3) < z π(4) }. The group-theoretical factor is determined by the trace of the product of individual Chan-Paton factors, ordered in the same way as the vertex positions. The disk boundary contains four segments which may be associated to as many as four different stacks of D-branes, since each vertex of a field originating from a D-brane intersection connects two stacks. Thus the Chan-Paton factor may actually contain as many as four traces, all in the fundamental representations of gauge groups associated to the respective stacks. However, purely partonic amplitudes for the scattering of quarks and gluons involve no more than three stacks.
In order to cancel the total background ghost charge of −2 on the disk, the vertices in the correlator (5.3) have to be chosen in the appropriate ghost picture and the picture "numbers" must add up to −2. Furthermore, in Eq. (5.3), the factor V CKG accounts for the volume of the conformal Killing group of the disk after choosing the conformal gauge. It will be canceled by fixing three vertex positions and introducing the respective c-ghost correlator. Because of the PSL(2, R) invariance on the disk, we can fix three positions of the vertex operators. Depending on the ordering I π of the vertex operator positions we obtain six partial amplitudes. The first set of three partial amplitudes may be obtained by the choice (5.4), while for the second set we choose (5.5). The two choices imply the ghost factor ⟨c(z 1 )c(z 3 )c(z 4 )⟩ = z 13 z 14 z 34 . The remaining vertex position z 2 takes arbitrary values along the boundary of the disk. After performing all Wick contractions in (5.3) the correlators become basic [49,11] and generically for each partial amplitude the integral (5.3) may be reduced to the Euler Beta function (5.6). Although we are mainly interested in the amplitudes involving the particles of the SM, we give below, for completeness, the vertex operators V Φ for the full N = 1 SUSY multiplets.
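As a numerical illustration of the last step, one can check the Euler Beta function identity B(a, b) = ∫ 0 1 dz z a−1 (1 − z) b−1 = Γ(a)Γ(b)/Γ(a + b) to which the gauge-fixed disk integral reduces. The exponents below are chosen for convergence of the naive quadrature; in the amplitudes they are combinations of the kinematic invariants:

```python
import math

# Numerical check of B(a,b) = int_0^1 z^(a-1) (1-z)^(b-1) dz
#                           = Gamma(a)*Gamma(b)/Gamma(a+b).
# Exponents are illustrative, picked so the integral converges nicely.
def beta_gamma(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_numeric(a, b, n=200000):
    # composite midpoint rule; adequate here for a, b > 1/2
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))

a, b = 2.5, 1.5
exact = beta_gamma(a, b)      # = pi/16 for these exponents
numeric = beta_numeric(a, b)
```

In the string amplitude the same function appears with arguments that go negative, where B(a, b) develops the pole structure discussed in Section 2.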
(i) Gauge vector multiplet: The gauge boson vertex operator in the (−1)-ghost picture reads as in (5.7), while in the zero-ghost picture we have (5.8), where ξ μ is the polarization vector. The vertex must be inserted on the segment of the disk boundary on stack a, with the indices α 1 and α 2 describing the two string ends. For our purposes, the most important property of the gluon vertex operators (5.7) and (5.8) is that they do not depend on the internal (CY) part of the SCFT. They depend only on the SCFT fields describing the string coordinates X μ in four dimensions, and on their world-sheet superpartners ψ μ . Although the construction of these vertices utilizes SCFT, their form is universal to all compactifications and remains unaffected by eventual SUSY breaking in the bulk or by D-brane configurations. This is the reason why the results for N -gluon disk amplitudes [9,10,49,11] are completely universal and hold even if SUSY is broken in four dimensions.
In the case of N = 1 supersymmetry, the gaugino vertex operators in the (−1/2)-ghost picture are given by (5.9), where S λ and Sλ are the world-sheet spin fields associated to the negative and positive helicity fermions, respectively. The index I labeling gaugino species may range from 1 to 4, depending on the amount of supersymmetry on the D-brane world-volume, while the associated world-sheet fields Σ I of conformal dimension 3/8 belong to the Ramond sector of the SCFT [50,51,52]. In the case of extended supersymmetry on the D-brane world-volume we also have scalars φ a,i in the adjoint representation of the gauge group, whose vertex operators take the form (5.10). For this multiplet, the open string couplings are given in (5.11). These prefactors, together with the universal factor 11 (5.12), which must be inserted in all disk amplitudes with the boundary on a single stack a of D-branes, ensure agreement of the string computations with the effective action (5.1). Indeed, with these normalizations the three-gluon superstring disk amplitude is given by (5.13). Furthermore, the string coupling of one gauge boson (5.7) to two gauginos (5.9) is given by (5.14), in agreement 12 with Eq. (5.1).
(ii) Matter multiplet: The chiral fermion vertex operators of the quarks and leptons are given by (5.15). These vertices connect two segments of disk boundary, associated to stacks a and b, with the indices α 1 and β 1 representing the string ends on the respective stacks. The internal field Ξ a∩b of conformal dimension 3/8 is the fermionic boundary changing operator. In the intersecting D-brane models, the intersections are characterized by angles θ ba . Then Ξ a∩b can be expressed in terms of bosonic and fermionic twist fields σ and s. The spin fields have conformal dimension h s = 1 2 (θ j − 1 2 ) 2 and twist the internal part of the Ramond ground state spinor. The field σ θ has conformal dimension h σ = 1 2 θ j (1 − θ j ) and produces discontinuities in the boundary conditions of the internal complex bosonic Neveu-Schwarz coordinates Z j .
11 See [27] for the derivation of this factor. 12 The results (5.13) and (5.14) can be matched directly with the interaction vertices of Eq. (5.1) by an additional rescaling.
In the case of N = 1 supersymmetry, the vertex operators of chiral matter scalars originating from strings stretching between stacks a and b take an analogous form, where Π a∩b is the scalar boundary changing operator of conformal dimension 1/2. Again, for the D-branes a and b intersecting at angles θ j ba , an explicit representation can be given in terms of bosonic and fermionic twist fields. The spin fields have conformal dimension h s = 1 2 (θ j ) 2 and twist the internal part of the Neveu-Schwarz ground state.
For the chiral multiplet, the open string couplings are: These prefactors, together with the universal factor 13 which must be inserted in the presence of operators changing disk boundary, ensure agreement of the string computations with the effective action (5.1). Indeed, the coupling of fermions (5.15) to the gauge boson (5.7) is in agreement with Eq. (5.1). 13 See [27] for the derivation of this factor.
Four gluon amplitudes
Four-gluon amplitudes have been known for many years [53]. The corresponding string disk diagram is shown in Figure 6. The complete amplitude can be generated from the maximally helicity violating (MHV) amplitudes [9,10]. Usually only one amplitude is written explicitly, namely a partial amplitude associated to one specific Chan-Paton factor. The full expression is necessary, however, for collider applications. Let us start from the partial amplitude [9,10] M P (g − 1 , g − 2 , g + 3 , g + 4 ) = 4 g 2 Tr ( T a 1 T a 2 T a 3 T a 4 ) ⟨12⟩ 4 / (⟨12⟩⟨23⟩⟨34⟩⟨41⟩) V (k 1 , k 2 , k 3 , k 4 ) , (5.24) where ± refer to polarizations. The Veneziano formfactor is given by V (k 1 , k 2 , k 3 , k 4 ) = V (s, t, u) = Γ(1 − s)Γ(1 − u)/Γ(1 + t), with the invariants in string units. Its low-energy expansion reads V (s, t, u) ≈ 1 − (π 2 /6) s u + ζ(3) s t u + . . . It is convenient to introduce the abbreviations V t , V u , V s for the formfactor with cyclically permuted arguments. The color factor can be written as in (5.28), where the totally symmetric symbols d a 1 a 2 a 3 = STr(T a 1 T a 2 T a 3 ) , d a 1 a 2 a 3 a 4 = STr(T a 1 T a 2 T a 3 T a 4 ) (5.29) are the symmetrized traces [54] while f a 1 a 2 a 3 is the totally antisymmetric structure constant.
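The quoted low-energy expansion can be checked numerically by expanding ln Γ(1 ± x) in ζ-values and using s + t + u = 0 (so that s 2 + u 2 − t 2 = −2su and s 3 + t 3 + u 3 = 3stu). The invariant values below are illustrative:

```python
import math

# Check of the low-energy expansion of the Veneziano formfactor
# V(s,t,u) = Gamma(1-s)*Gamma(1-u)/Gamma(1+t) with s + t + u = 0
# (invariants in string units); sample values are illustrative.
ZETA3 = 1.2020569031595943

def V(s, u):
    t = -(s + u)
    return math.gamma(1.0 - s) * math.gamma(1.0 - u) / math.gamma(1.0 + t)

s, u = 0.01, 0.02
t = -(s + u)
approx = 1.0 - (math.pi ** 2 / 6.0) * s * u + ZETA3 * s * t * u
# remaining error is of fourth order in the invariants
```

At the origin the formfactor reduces to 1, i.e. the field-theory limit of the partial amplitude (5.24).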
The full MHV amplitude can be obtained [9,10] by summing the partial amplitudes (5.24) with the indices permuted as in (5.30), where S 4 is the set of all permutations of {1, 2, 3, 4} while Z 4 is the subset of cyclic permutations. As a result, the imaginary part of the color factor (5.28) cancels and one obtains (5.31).
Two gauge bosons and two fermions
We consider the correlation function (5.32). The fact that the fermions originate from the same pair of stacks, say a and b, is forced upon us by the conservation of the twist charges, in a similar way as their opposite helicities are forced by the internal charge conservation. It follows that both gauge bosons must be associated either to one of these stacks, say (x, y) = (a 1 , a 2 ), or one of them is associated to a while the other to b, say (x, y) = (a, b). The corresponding disk diagrams are shown in Figure 7. With the position z 4 = ∞ as in (5.4) and (5.5), the correlator (5.32) becomes (5.33), times the Chan-Paton factor which is determined by the relative position of z 2 with respect to z 1 and z 3 . If (x, y) = (a 1 , a 2 ), there are two allowed orderings of vertex positions: as in (5.4), with z 2 < 0 or 0 < z 2 < 1, see the left side of Figure 7. Then we obtain (5.34), where the kinematic factor is given by (5.35). On the other hand, if (x, y) = (a, b), then there is only one allowed ordering, as in (5.4), with z 2 > 1, see the right side of Figure 7, and we obtain (5.36). The amplitudes (5.34) and (5.36) may be written as sums over infinitely many s-channel poles at the masses (2.7) in exactly the same way as (2.6). On the other hand, for the lowest string correction of Eqs. (5.34) and (5.36), which gives rise to a string contact interaction along the lines of Subsection 2.3, we use the expansion of the Euler Beta function (5.6). Hence, the first string contact interaction appears at the order M −4 string as in the case of four-gluon scattering (cf. Figure 3).
In order to write the kinematic factor (5.35) more explicitly, we choose k 2 as the reference momentum for the polarization vector ξ 1 and k 1 as the reference momentum for ξ 2 . Then it is easy to see that the kinematic factor vanishes if both gauge bosons have the same helicity, while for the opposite helicities ξ ± 1 ξ ∓ 2 ≠ 0 and most of the terms on the r.h.s. of Eq. (5.35) vanish, except for a single term [14].
All other helicity amplitudes can be obtained from the above expressions by appropriate crossing operations. The low energy expansions of the above amplitudes can be obtained by using Eq. (5.6). The leading term agrees with the well-known QCD result.
Four chiral fermions
Let us first discuss what are the possible four-point disk amplitudes among four fermion fields. Of course the amplitudes are constrained by the SU (3) × SU (2) × U (1) Y gauge invariance of the SM. In addition, the allowed disk scattering amplitudes are constrained by the conservation of the additional U (1) charges of the matter fields. Recall that these U (1)'s are part of the original U (N ) gauge symmetries of the different stacks of D-branes, and are in general massive due to the generalized Green-Schwarz mechanism.
Only the SM hypercharge stays as a massless, anomaly free linear combination. Nevertheless, the massive U (1)'s act as global symmetries and provide selection rules that constrain the allowed tree level couplings. Note that space-time instantons, i.e. wrapped Euclidean D-branes, may violate the conservation of the global U (1) symmetries, and can hence lead to new processes, which we do not discuss here.
There are two classes of fermion disk amplitudes. The first class contains the amplitudes describing the processes that occur in the SM by the exchange of massless particles. In the following we discuss the possible amplitudes in the prototype model of Section 4, with particle content as given in Table 4. We indicate which amplitudes occur already in the SM due to the tree level exchange of SM gauge bosons, and which can occur only in the D-brane model under investigation. At this stage we would like to stress that the computations of the four fermion disk amplitudes are performed for intersecting D6-branes wrapped around flat 3-cycles of a six-dimensional torus. It follows that the explicit expressions, which we shall present in the following, may contain also contributions from exchange of massive states (scalar fields), whose masses depend on the intersection angles of the 3-cycles. In addition there are contributions from fields that are due to the extended supersymmetries on the D6-branes on the torus, e.g. moduli fields that describe the positions of the D6-branes on the tori. All these model dependent fields may in general appear in intermediate channels of the four fermion amplitudes.
(i) All four fermions at the same intersection point (one angle).
The simplest case is the scattering of two pairs of fermion/anti-fermion fields located at the same intersection point of two stacks, a and b, cf. Figure 8. The corresponding amplitude exhibits poles due to the exchange of massless gauge bosons from stack a and stack b. In addition, Regge excitations as well as KK and winding states are exchanged. An example is four-quark scattering within one family, such as u ū → u ū, which receives contributions from the exchange of gluons, photons, W, Z bosons, as well as the Regge and KK excitations.
(ii) Two pairs of conjugate fermions at two different intersection points (two angles).

We now consider three stacks of D-branes, a, b and c, with two intersection angles θ := θ_b − θ_a and ν := θ_c − θ_a, cf. Figure 9. We have open strings spanned between a and b, as well as open strings stretched between a and c. The corresponding amplitude exhibits massless poles due to the exchange of the common massless gauge boson from stack a, as well as of its Regge excitations and the KK and winding states thereof. An example is the process q_L q̄_R → q_R q̄_L discussed in case (ii) below.

(iii) Four different fermions at four intersection points (three angles).

Finally we consider four stacks of D-branes, a, b, c and d, with all four chiral fermions originating from different intersections, cf. Figure 10. No massless gauge bosons are exchanged in this amplitude. However, there may be the exchange of the SM Higgs field as well as of some exotic states with masses of the order of the string scale. An example of this kind of amplitude is a contribution to the Drell-Yan process. Note that purely hadronic 2 → 2 parton scattering processes with quarks and antiquarks belonging to the same family involve no more than three stacks, because all partons share the QCD D-brane stack.
In order to compute the four-fermion amplitudes, we evaluate Eq. (5.3) with a correlator involving two chiral matter fermions ψ^{α_1β_1}, ψ^{δ_3γ_3} and two anti-chiral matter fermions ψ^{β_2δ_2}, ψ^{γ_4α_4}. In the type IIA picture the latter are represented by open strings stretched between two intersecting stacks of branes, with the vertex operator (5.15). Although the following discussion is carried out for intersecting D6-branes, it may be translated into other setups. We shall present the explicit expressions for the four-fermion string amplitudes in the case of the prototype four-stack model introduced in Section 4.
The most general case may involve four different D-branes a, b, c and d, which intersect at the four points X_i with the angles θ_i, i = 1, ..., 4, respectively, cf. Figure 11. In addition we have the relation θ_1 + θ_2 + θ_3 + θ_4 = 2.

Fig. 11: Four D-brane stacks and chiral fermions in the target space and on the disk world-sheet.
The explicit expression of (5.47) depends on the number of different branes a, . . . , d involved and the location of the four intersection points X 1 , . . . , X 4 . On the world-sheet of the disk the ordering X i of adjacent D6-branes is translated into a related ordering of vertex operator positions z i through the map z i −→ X i := X(z i ). Due to the chirality and twist properties of the four fermions the dual resonance channels of the full amplitude are very restricted. Further details depend on the specific configuration of intersections and will be discussed below.
In what follows we need the straight-line distance between two different brane intersection points. For a given brane a this separation is decomposed into a longitudinal component L_a^j along the brane a and a transverse component d_a^j, cf. Figure 12.
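As an illustration of this decomposition, the sketch below splits a generic two-dimensional separation vector into its components along and orthogonal to a brane direction (hypothetical numbers, not the paper's notation):

```python
import math

def decompose(v, brane_dir):
    """Split a 2d separation vector v into the component along the
    brane direction (longitudinal, L) and the orthogonal rest (d)."""
    norm = math.hypot(*brane_dir)
    e = (brane_dir[0] / norm, brane_dir[1] / norm)  # unit vector along the brane
    longitudinal = v[0] * e[0] + v[1] * e[1]        # signed length along the brane
    L = (longitudinal * e[0], longitudinal * e[1])
    d = (v[0] - L[0], v[1] - L[1])                  # transverse component
    return L, d

# separation between two intersection points, brane wrapped along (1, 1)
L, d = decompose((3.0, 1.0), (1.0, 1.0))
print(L, d)  # L ≈ (2, 2) along the brane, d ≈ (1, -1) orthogonal to it
```

By construction d is orthogonal to the brane direction, which is the defining property of the transverse component.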
In the following we discuss the three cases introduced before separately. Furthermore, we must specify the intersection classes f_i to which the four intersection points X_i belong. Let us describe these cases in more detail.
(i) One angle θ: In case (i) we may either have (1) chiral fermions stemming from intersections f_i of a single pair of branes a, b; in this case d^j = 0, i.e. c ≃ b, d ≃ a. In the second case (2) we consider chiral fermions stemming from intersections f_i, f_j of two pairs of branes a, b and c, d, which are mutually shifted by some distance d^j orthogonal to the brane directions L^j.

(i.1) The pair of intersecting branes a and b has I_ab intersection points f_i, and all four points X_i are assumed to be elements thereof. A generic case with I_ab = 3 is depicted in Figure 13, with the three different intersection points drawn in black, red and blue, respectively. We must further specify the class of intersection points f_i involved, which leads to the subcases (i.1a) and (i.1b).

(i.1a) All four chiral fermions are located at the same intersection f. In that case all four intersection points X_i differ by an integer lattice shift, i.e. X_i = f + Z L, and hence ǫ^j ∈ Z, d^j = 0. In Figure 13 the class f may be e.g. the set of four black dots, which span one polygon.
(i.1b) A pair of two chiral fermions from the same intersection f_i and another pair from another intersection f_j. In Figure 13 the intersections f_i, f_j may be e.g. the two blue and the two red dots, respectively. Obviously, points from the same set of intersections are separated by a lattice vector, i.e. ǫ^j ∈ Z, d^j = 0. However, the distance between points from different intersections is smaller: it is set by the intersection number I_ab, i.e. ǫ^j ∈ Z/I_ab, d^j = 0. Generically, there are two different configurations for the polygon spanned by the four points X_i. In Figure 13 these two polygons are drawn in red and blue, respectively: ǫ_a^j, ǫ_d^j ∈ Z, ǫ_b^j, ǫ_c^j ∈ Q or ǫ_b^j, ǫ_c^j ∈ Z, ǫ_a^j, ǫ_d^j ∈ Q, and d^j = 0.

(ii) Two angles θ, ν: For this case we consider two different intersecting D-brane pairs (a, b) and (c, d) with the intersection angles θ and ν, respectively, cf. Figure 11. In this case there is no relation between the set of intersections of (a, b) and of (c, d), in contrast to the previous case (i.2). All four chiral fermions may originate from different intersections f_i, cf. Figure 11. One pair of fermions is related to the twist-antitwist pair (θ, 1 − θ) at the intersections (X_1, X_2) and the second pair of fermions is related to the twist-antitwist pair (ν, 1 − ν) at the intersections (X_3, X_4). Hence only one polygon configuration is possible.
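For branes wrapping factorized 1-cycles with wrapping numbers (n^j, m^j) on each T² of a T²×T²×T², the number of intersection points is the topological intersection number I_ab = Π_j (n_a^j m_b^j − m_a^j n_b^j). This is the standard formula for toroidal intersecting-brane models (not spelled out in the text above); a minimal sketch with hypothetical wrapping numbers:

```python
def intersection_number(wrap_a, wrap_b):
    """Topological intersection number of two branes wrapping 1-cycles
    (n, m) on each T^2 factor: I_ab = prod_j (n_a^j m_b^j - m_a^j n_b^j)."""
    I = 1
    for (na, ma), (nb, mb) in zip(wrap_a, wrap_b):
        I *= na * mb - ma * nb
    return I

# hypothetical wrapping numbers for two stacks on T^2 x T^2 x T^2
a = [(1, 0), (1, 1), (1, -1)]
b = [(0, 1), (1, -1), (2, 1)]
print(intersection_number(a, b))  # -6: |I_ab| chiral families, sign fixes chirality
```

The antisymmetry I_ab = −I_ba reflects the opposite chirality of strings stretched in the two orientations.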
(i) Four-fermion amplitudes involving two twist-antitwist pairs (θ, 1 − θ).

We consider the case of two pairs (a, b) and (c, d) ≃ (b′, a′) of D-branes intersecting at the angle θ. In (5.47) we then have two pairs of twist-antitwist fields (θ, 1 − θ) from intersections f_i and f_j, respectively. One pair of fermions is related to a twist-antitwist pair (θ, 1 − θ) at the points X_1, X_2 related to the intersection f_i (or X_2, X_3 related to the intersection f_j), and a second fermion pair to another twist-antitwist pair (θ, 1 − θ) at the points X_3, X_4 related to the intersection f_j (or X_1, X_4 related to the intersection f_i). The explicit expression of (5.47) for case (i) has been computed in [14,16,13] and extended in [21]; the result (5.52) involves the classical instanton part (5.53) and combinations of hypergeometric functions (5.54). The first term of (5.52) accounts for the polygon with the twist-antitwist pairs at X_1, X_2 and X_3, X_4, while the second term describes the polygon with the twist-antitwist pairs at X_1, X_4 and X_2, X_3. In (5.52) the spinor products (u_1L u_3L)(u_2R u_4R) arise from contracting the space-time spin fields of the fermion vertex operators (5.15). A standard identity on these spinor products is useful for extracting the gauge boson exchange channels. The normalization of (5.52) simply arises from our conventions (5.21) and (5.22).
In the limit x → 0 we may extract from (5.52) the s-channel pole contribution (5.57), with the gauge couplings (5.58), while the u-channel of (5.52) gives rise to the pole contribution (5.59), cf. Figure 14.

(i.1a) For all intersections X_i related to one intersection f we have δ^j = 0. From (5.50) we see that massless gauge boson exchange from stack a or stack b appears both in the s- and the u-channel. The case under consideration corresponds e.g. to a scattering process of four quarks from one family, e.g. the scattering of u, ū quarks (cf. Figure 8). For this case we introduce the function V_abab(s, u), defined in (5.60), which will become relevant in the next Section; its instanton action simplifies accordingly. According to (5.57) and (5.59), in the limit s, t, u → 0 the function V_abab(s, u) exhibits the pole behaviour expected from massless gauge boson exchange. In the remaining cases the mass (5.50) is always non-zero and no massless gauge bosons are exchanged, neither in the s-channel (5.57) nor in the u-channel (5.59).
(ii) Four-fermion amplitudes involving two twist-antitwist pairs (θ, 1 − θ) and (ν, 1 − ν).

Here we consider the generic case of four different stacks of D6-branes a, b, c and d with two pairs of twist-antitwist fields, (θ, 1 − θ) and (ν, 1 − ν). We have two intersection angles θ and ν, referring to the pairs (a, b) and (c, d), respectively, cf. Figure 15. In the previous case θ = ν we encountered two possible polygon contributions, depending on how the two twist-antitwist pairs are paired: (X_1, X_2) and (X_3, X_4), or (X_1, X_4) and (X_2, X_3), respectively. Obviously, for θ ≠ ν we have only one polygon, from the twist-antitwist pairs (X_1, X_2) and (X_3, X_4). The explicit expression of (5.47) for θ ≠ ν has been computed in [14,17]; the result (5.63) involves the classical instanton part and the functions I^j(x) and τ^j(x), cf. (5.65) and (5.66), built from hypergeometric functions and the Euler Beta function (5.6). As a result of respecting the global monodromy conditions we have relations such as sin(πν^j) |v_c^j| = sin(πλ^j) |v_b^j|. The function (5.65), which determines the quantum part of the amplitude (5.63), is the square root of the relevant closed string piece [57,58]. For θ^j = ν^j we have β^j = 0 and the functions τ^j(x) and I^j(x) reduce to the expressions (5.54). With this information it is straightforward to show that (5.63) boils down to the first term of (5.52) as ν^j → θ^j.
Due to the chirality and twist properties of the four fermions, the amplitude (5.63) furnishes massless gauge boson exchange through the s-channel only. On the other hand, for θ^j ≠ ν^j the limit x → 1 does not imply massless gauge boson exchange but factorizes onto Yukawa couplings. This property is discussed in more detail for case (iii).
In the following let us consider the case d = a, δ = α and δ_c^j, δ_b^j = 0, which corresponds to Figure 9. Only in the s-channel does a massless gauge boson exchange occur. To extract from (5.63) the s-channel pole contribution we need the limit (5.69), with the gauge coupling (5.58) and the mass m_ba^j of KK and winding states given in Eq. (5.50). In deriving (5.57) a Poisson resummation on the integer p_a is involved. Again, for ν^j = θ^j the limit (5.69) reduces to the corresponding expression of (5.57). The case under consideration describes e.g. the process q^- q̄^+ → q^+ q̄^-, i.e. q_L q^c_R → q_R q^c_L.
The functions V_abac(s, u), V′_abac(s, u) and V″_abac(s, u) are defined in Eqs. (5.70)-(5.72). According to (5.69), in the limit s, t, u → 0 the functions V_abac(s, u) and V′_abac(s, u) furnish massless gauge boson exchange, whereas the function V″_abac(s, u) does not. This behaviour is discussed in more detail for case (iii).
(iii) Four-fermion amplitudes involving fermions from four different intersections
The most general case, depicted in Figure 11, involves four fermions from four different intersections X_i with the four angles θ_i and θ_4 = 2 − θ_1 − θ_2 − θ_3. The explicit expression of (5.47) for this case has been computed in [17]; the result (5.75) involves the classical instanton sum (5.76), with v^r = p^r L^r + δ^r, and the quantum contribution (5.77), expressed through hypergeometric functions. Note that the function (5.77) and the classical action (5.76) have crossing symmetry under the combined manipulations x ↔ 1 − x and θ_1^j ↔ θ_3^j. For θ_1^j = θ^j, θ_2^j = 1 − θ^j, θ_3^j = ν^j and θ_4^j = 1 − ν^j we have α^j = 0 and γ^j, γ̃^j = 1, and the functions I^j(x) and τ^j(x) reduce to the expressions (5.65) and (5.66), respectively. With this information it is straightforward to show that (5.75) boils down to (5.63) in this limit. Furthermore, for θ_1^j = θ_3^j = θ^j and θ_2^j = θ_4^j = 1 − θ^j we have α^j, β^j = 0 and γ^j, γ̃^j = 1, and the functions I^j(x) and τ^j(x) reduce to the expressions (5.54).
The amplitude (5.75) does not furnish massless gauge boson exchange limits. On the other hand, it factorizes onto Yukawa couplings. In the following we investigate the helicity configurations (12)_L(34)_R and (34)_L(12)_R, which correspond to the function V_abcd(s, u) of (5.79) with the instanton action (5.76). For x → 0 and 0 < θ_1^j + θ_2^j < 1 the integral (5.79) gives rise to the s-channel limit (5.80), with the intermediate mass (5.81) and the Yukawa couplings (5.82) [14,19]. Hence, in the limit x → 0, (heavy) string states with mass (5.81) are exchanged. These states may represent the SM Higgs field as well as some exotic states; the latter may give rise to possible stringy signatures at the LHC [59]. The (relevant) four-point fermion amplitudes V(s, u), whose explicit form is given in Eqs. (5.60), (5.70) and (5.79), receive world-sheet disk instanton corrections from holomorphic mappings of the string world-sheet into the polygon spanned by the four intersection points X_i. The three-point couplings (5.82) are derived from the latter by appropriate factorization, whereby the relevant polygon splits into two triangles, cf. Eq. (5.80).
The amplitudes (5.60), (5.70) and (5.79) give rise to string corrections to the contact four-fermion interaction. The first correction appears at the order α′. For (n_1, n_2) = (0, 0), corresponding to the helicity configurations (13)_L(24)_R or (24)_L(13)_R, the latter are extracted by setting s, u = 0 and may be summarized in the expression (5.83). To conclude, in contrast to gluon scattering, the first string contact interaction appears already at the order α′.
There is yet another way of writing the expressions (5.60) and (5.70), following after a Poisson resummation on p_a, cf. Eq. (5.84), with the mass m_ba of KK and winding modes given in Eq. (5.50). If the longitudinal brane directions are somewhat greater than the string scale M_string, the world-sheet instanton corrections are suppressed and the exponential sum in (5.84) may be ignored. In that case the four-fermion couplings are insensitive to how the D6-branes are wrapped around the compact space and depend only on the local structure of the brane intersections, encoded in the intersection angles θ_i^j. In other words, the quantum part of (5.60), (5.70) and (5.79), given by the function I^j(x), depends only on the angles θ_i^j and the string scale M_string, and is not sensitive to the scales of the internal space. In that case the four-fermion couplings may be written as a sum over s-channel poles along the lines of (2.6). The massive intermediate states exchanged are twisted states with masses of the order of M_string.
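The Poisson resummation used to pass between winding and momentum sums is, in its simplest Gaussian form, Σ_{n∈Z} exp(−π n² t) = t^{−1/2} Σ_{n∈Z} exp(−π n²/t). A minimal numerical check of this generic identity (not the model-specific lattice sum):

```python
import math

def theta_sum(t, nmax=50):
    """sum_{n in Z} exp(-pi n^2 t), truncated at |n| <= nmax."""
    return sum(math.exp(-math.pi * n * n * t) for n in range(-nmax, nmax + 1))

t = 0.3
lhs = theta_sum(t)                       # slowly converging for small t
rhs = theta_sum(1.0 / t) / math.sqrt(t)  # rapidly converging dual sum
print(lhs, rhs)  # the two representations agree to machine precision
```

This is exactly the mechanism by which a slowly converging winding sum is traded for a rapidly converging KK sum (and vice versa) when one side of the duality frame is large.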
From Amplitudes to Parton Cross Sections
The purpose of this Section is to present the squared moduli of disk amplitudes derived in the previous Section, averaged over helicities and colors of the incident partons and summed over helicities and colors of the outgoing particles. In order to respect the notation and conventions used in the experimental literature, which are rooted in the classic exposition of Bjorken and Drell, several steps have to be accomplished. The first one is to revert to the (+ − − −) metric signature. Furthermore, appropriate crossing operations have to be performed on the amplitudes, to ensure that the incident particles are always number 1 and 2 while the outgoing ones are number 3 and 4. They carry the initial four-momenta k_1, k_2 and final four-momenta k_3, k_4, respectively, which satisfy the conservation law k_1 + k_2 = k_3 + k_4. A generic process written as ef → gh has the momenta assigned as e(k_1) f(k_2) → g(k_3) h(k_4). The kinematic invariants (Mandelstam variables) are defined in the standard way, s = (k_1 + k_2)^2, t = (k_1 − k_3)^2, u = (k_1 − k_4)^2, and for massless partons they are constrained by s + t + u = 0. Since in the previous Section we implicitly used string mass units M_string ≡ M for the Mandelstam variables s, t, u, and a metric of opposite signature, we need to redefine the universal string formfactor, V(s, t, u) = Γ(1 − s/M^2) Γ(1 − u/M^2)/Γ(1 + t/M^2) (6.5), and similarly the related kinematic functions. Now the low-energy expansion reads V(s, t, u) ≈ 1 − (π^2/6) su/M^4 + ... (6.6). Similarly, we introduce the functions F (6.7), G (6.8) and G′ (6.9) describing the four-fermion amplitudes, where V_abab, V_abac and V′_abac are defined in Eqs. (5.60) and (5.71). The above functions depend on details of the compactification and are therefore model-dependent. Note that the above redefinitions single out the QCD coupling g_a because it is the strongest coupling. Thus the results presented below coincide with the QCD predictions in the limit M → ∞, i.e. V = F = G = G′ = 1. The effects due to electro-weak forces can also be extracted, although with more care in taking this limit.
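The quoted low-energy behaviour of the formfactor can be checked numerically. The sketch below assumes the Veneziano-type form V(s, t, u) = Γ(1 − s/M²)Γ(1 − u/M²)/Γ(1 + t/M²) with t = −s − u, which is the standard shape of such disk formfactors; the comparison against the leading expansion 1 − (π²/6) su/M⁴ is an illustration, not the paper's own code:

```python
import math

def V(s, u, M=1.0):
    """Veneziano-type formfactor with t = -s-u (massless kinematics),
    in units of the string mass M (assumed form, see lead-in)."""
    t = -s - u
    return (math.gamma(1 - s / M**2) * math.gamma(1 - u / M**2)
            / math.gamma(1 + t / M**2))

s, u = 0.05, 0.02                      # well below the string scale M = 1
exact = V(s, u)
approx = 1 - math.pi**2 / 6 * s * u    # leading low-energy expansion
print(exact, approx)                   # agree up to O(s u t) corrections
```

The residual difference is governed by the next term in the expansion, of order ζ(3) s u t/M⁶, and vanishes rapidly as s, u → 0.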
There are two basic operations performed when squaring the amplitudes and summing over colors and polarizations. First, the moduli squared of helicity amplitudes containing spinor products (twistors) are expressed in terms of Mandelstam variables; this involves a repeated use of standard identities such as ⟨ij⟩[ji] = 2 k_i · k_j. The second operation is the summation over color indices. It depends on the representations of the external particles, and we therefore include it case by case in the following discussion of all parton scattering processes.
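The first operation can be illustrated with a light-cone construction of helicity spinors. The construction below (λ(k) = (√k⁺, k_⊥/√k⁺) with k⁺ = E + k_z, k_⊥ = k_x + i k_y) is a standard textbook one, not taken from the text; it verifies |⟨ij⟩|² = |2 k_i · k_j| for massless momenta:

```python
import cmath, math, random

def angle_bracket(ki, kj):
    """Holomorphic spinor product <i j> for massless k = (E, kx, ky, kz)."""
    def lam(k):
        kplus = k[0] + k[3]
        kperp = complex(k[1], k[2])
        r = cmath.sqrt(kplus)
        return (r, kperp / r)
    li, lj = lam(ki), lam(kj)
    return li[0] * lj[1] - li[1] * lj[0]

def dot(p, q):
    """Minkowski product, signature (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def random_massless():
    """Random lightlike momentum with positive energy (kz bounded away from -E)."""
    theta = math.acos(random.uniform(-0.9, 0.9))
    phi = random.uniform(0, 2 * math.pi)
    E = random.uniform(0.5, 2.0)
    return (E, E*math.sin(theta)*math.cos(phi),
            E*math.sin(theta)*math.sin(phi), E*math.cos(theta))

k1, k2 = random_massless(), random_massless()
print(abs(angle_bracket(k1, k2))**2, abs(2 * dot(k1, k2)))  # equal up to rounding
```

With such a construction the squared helicity amplitudes collapse to rational functions of s, t, u, which is what makes the tabulated cross sections below possible.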
6.1. gg → gg, gg → gA, gg → AA

The starting expression is Eq. (5.31), which holds for SU(N) gluons and U(1) vector bosons A coupled to the baryon number. In order to obtain the cross section for the (unpolarized) partonic subprocess gg → gg, we take the squared moduli of the individual amplitudes, sum over final polarizations and colors, and average over initial polarizations and colors; standard formulae for summing over SU(N) colors are used throughout. It is worth mentioning that in the step of inverting the momenta from incoming to outgoing ones, k_3 → −k_3 and k_4 → −k_4, so that 2 k_1 k_3 → −2 k_1 k_3 = t and 2 k_1 k_4 → −2 k_1 k_4 = u.
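For the color sums, the basic SU(3) identities Tr(T^a T^b) = δ^{ab}/2 and Σ_a T^a T^a = C_F·1 with C_F = (N² − 1)/(2N) = 4/3 can be verified numerically with the Gell-Mann matrices (standard results, not specific to this model):

```python
import itertools

# Gell-Mann matrices lambda_a; generators T^a = lambda_a / 2
L = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[3**-0.5, 0, 0], [0, 3**-0.5, 0], [0, 0, -2 * 3**-0.5]],
]
T = [[[L[a][i][j] / 2 for j in range(3)] for i in range(3)] for a in range(8)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def tr(A):
    return sum(A[i][i] for i in range(3))

# trace normalization Tr(T^a T^b) = delta_ab / 2
for a, b in itertools.product(range(8), repeat=2):
    assert abs(tr(mul(T[a], T[b])) - (0.5 if a == b else 0)) < 1e-12

# quadratic Casimir: sum_a T^a T^a = C_F * identity, C_F = 4/3
C = [[sum(mul(T[a], T[a])[i][j] for a in range(8)) for j in range(3)] for i in range(3)]
print(C[0][0].real, C[1][1].real, C[2][2].real)  # 4/3 on the diagonal
```

These are the two ingredients behind factors such as N² − 1 and the C(N)-type coefficients appearing in the averaged squared amplitudes below.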
As an example, the modulus squared of the amplitude (5.30), summed over initial and final colors, is given in Eq. (6.14). The modulus squared of the gg → gg amplitude, summed over final polarizations and colors, and averaged over all 4(N^2 − 1)^2 possible initial polarization/color configurations, is given in Eq. (6.15); its second term is suppressed in the large-N limit with respect to C(N). Furthermore, the corresponding kinematic factor is suppressed in the low-energy limit at the rate O(M^{−8}) with respect to the leading QCD contribution, which emerges from the first term (with the C(N) factor), cf. Eqs. (6.3) and (6.6). Thus the second term in Eq. (6.15) is suppressed both at large N and in the low-energy limit. The U(1) gauge bosons A can be produced by gluon fusion, gg → gA and gg → AA, processes that appear at the disk level as a result of tree-level couplings of gauge bosons to massive Regge excitations [22,23]. It is convenient to relax the normalization constraint on the U(1) generator, T^{a_4} = Q_A 1_N (6.17), where 1_N is the N × N identity matrix and Q_A is an arbitrary charge. In our conventions, the standard normalization corresponds to Q_A = 1/√(2N). Then d^{a_1 a_2 a_3 a_4} = Q_A d^{a_1 a_2 a_3} (6.18), and all non-Abelian structure constants drop out from Eq. (5.31). The corresponding helicity amplitudes can be obtained from the four-gluon amplitudes by the respective replacement of the color factors in Eqs. (5.31) etc. In this way one obtains the averaged squared amplitudes (6.19) and (6.20). In the low-energy limit, the Abelian gauge boson production rates (6.19) and (6.20) are of order O(M^{−8}) compared to the gluon production. However, they can be larger than the QCD rates in the string resonance region [22,23].
6.2. gg → qq̄, gq → gq, gq → qA, gq → qB, qq̄ → gg, qq̄ → gA, qq̄ → gB

All non-vanishing helicity amplitudes involving two quarks and two gauge bosons can be obtained by appropriate crossing operations from Eqs. (5.39) and (5.40). The squared moduli of these amplitudes, summed over initial and final gauge indices, are given in Eq. (6.21). This expression is written in a form suitable for non-Abelian as well as Abelian gauge bosons; in the latter case, the second term drops out from Eq. (6.21).
As an example, consider the gluon fusion gg → qq̄. In this case, the identity (6.22) is used for summing over the color indices. This process takes place entirely on the QCD stack a, while stack b is a spectator whose only effect is to supply the overall factor N_b in Eq. (6.21). Note that summing over quark helicities requires some attention, because left- and right-handed quarks originate from different stacks. We handle this by adding the contributions from both stacks, with the net result of doubling the square of the chiral amplitude (6.21) and replacing N_b by the number of flavors N_f. We will also apply this procedure to other channels of the same reaction. The squared modulus of the corresponding amplitude, summed over final polarizations and colors, and averaged over all 4(N^2 − 1)^2 initial polarization/color configurations, is given in Eq. (6.23). The hadroproduction of B vector bosons from a non-QCD stack b involves at least one incoming quark or antiquark. We average over N_b species of each of them. We also sum over all N_b^2 − 1 SU(N_b) B-bosons; depending on the model, we can always add the U(1)_b boson by hand. Since the B-bosons couple to chiral quarks, we do not add initial quarks of opposite helicity, because these are coupled to other stacks. Thus, in order to average the B production rates over incident helicities, we simply divide by the number of available initial helicity configurations. All amplitudes obtained by using Eqs. (6.21) and (6.22) are collected in Tables 5-8.

6.3. qq → qq, qq̄ → qq̄

These amplitudes are more complicated for several reasons. Their computation is sensitive to the left-right asymmetry of the SM, i.e. to the fact that different helicity states come in different gauge group representations, originating from strings stretching between distinct stacks of D-branes.
Furthermore, by construction, the intermediate channels of quark scattering processes include all N^2 gauge bosons of each U(N); therefore the SU(N) gauge bosons, as well as their string and KK excitations, must be separated "by hand" from their U(1) counterparts. Whenever this problem is encountered, we implement the identity (6.25) on the group factors, where the sum is over all SU(N) generators of N-color QCD and Q_A = 1/√(2N). Note that, due to the Fierz identity, the factor 1/2 [24] can be interpreted as arising from the exchange of intermediate vector bosons in either the s or the u channel, depending on the nature of the accompanying kinematic singularity.
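The identity in question is presumably the U(N) completeness relation, Σ_a T^a_{ij} T^a_{kl} + Q_A² δ_{ij} δ_{kl} = ½ δ_{il} δ_{kj} with Q_A² = 1/(2N), in which the U(1) generator completes the SU(N) set. The sketch below verifies it numerically for N = 2 (Pauli matrices; the relation holds for any N):

```python
import itertools

N = 2
Q_A = (2 * N) ** -0.5           # U(1) normalization Q_A = 1/sqrt(2N)
sigma = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]
T = [[[s[i][j] / 2 for j in range(N)] for i in range(N)] for s in sigma]

def delta(i, j):
    return 1.0 if i == j else 0.0

# completeness: sum_a T^a_ij T^a_kl + Q_A^2 d_ij d_kl = (1/2) d_il d_kj
for i, j, k, l in itertools.product(range(N), repeat=4):
    lhs = (sum(T[a][i][j] * T[a][k][l] for a in range(len(T)))
           + Q_A**2 * delta(i, j) * delta(k, l))
    rhs = 0.5 * delta(i, l) * delta(k, j)
    assert abs(lhs - rhs) < 1e-12
print("completeness relation verified for N =", N)
```

This is what allows the U(1) piece (weight Q_A²) to be added or subtracted at will when isolating the pure SU(N) gauge boson exchange.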
If the amplitude involves left-handed quarks and right-handed antiquarks only, q^- and q̄^+ respectively, then all fermions come from one intersection, say of stack a and stack b. The corresponding amplitude is given in Eq. (6.27), where the functions F_su = F(s, u) and F_us = F(u, s) are defined in Eq. (6.7). The most important difference between this amplitude and the amplitudes involving gauge bosons is that its intermediate channels include not only massless particles and their string (Regge) excitations, but also KK excitations and winding modes associated to the extra dimensions spanned by the intersecting D-branes. Even in the limit M → ∞, the function F(s, u) contains, in addition to the poles due to intermediate gauge bosons, an infinite number of poles associated to such massive particles. In fact, F(s, u) encompasses the effects of gauge bosons from both stacks a and b, as reflected by the residues of its massless poles, cf. Eq. (6.28), and of all their excitations. In order to explain how Eqs. (6.25) and (6.26) are useful for the interpretation of the kinematic poles, let us extract from the amplitude (6.27) the singularities associated to intermediate gluons, coming from the limit g_b → 0 in Eq. (6.28), in which the strength of the other interactions is negligible. To be precise, it is the M → ∞ (string zero slope) limit, with the additional assumption that g_b ≪ g_a = g. Then stack b is a spectator, and we should therefore use Eq. (6.25) to revert the factors involving the stack-b generators back to their original form. Furthermore, we rewrite the kinematic factor by using Eq. (6.26), in order to exhibit an s-channel vector boson exchange in the first term of Eq. (6.27) and a u-channel vector boson exchange in the second term. The resulting amplitude does indeed reproduce the well-known QCD result after setting Q_A = 0, i.e. subtracting the unwanted contribution of the U(1) gauge boson A.
The squared modulus of the amplitude (6.27), summed over initial and final gauge indices, is given in Eq. (6.31). The charges Q should be adjusted in certain regions of parameter space and/or kinematic limits. For instance, in the QCD limit (6.29), with g_b ≪ g, the appropriate choice is Q_A = 0 [U(1) component eliminated] and Q_B = 1/√(2N_b) (stack b treated as a spectator). Note that N_a = N for N-color QCD and N_b = 2 for one [electroweak SU(2)] quark doublet. Thus one obtains Eq. (6.33), and Eq. (6.31) simplifies accordingly. There are five more helicity configurations that remain to be included in the unpolarized cross sections. The amplitude with the helicity assignments reversed with respect to (6.27) (+ ↔ −) is very similar, because it involves right-handed quarks and left-handed antiquarks originating from one intersection (of the QCD stack a with one of the U(1) stacks, c or d), provided that all quarks are of the same flavor. For each flavor, one obtains the same result as on the r.h.s. of Eq. (6.31), with N_a = N and N_b = 1, although the function F is now associated to a different intersection. If the flavors are different, then the amplitude falls into the category discussed below, because it couples the QCD stack to two other stacks and the disk boundary connects three different stacks. Then the function F has to be replaced by G, defined in Eq. (6.8).
The four remaining helicity configurations fall into one class. They involve SU(2) doublets and singlets at the same time, and therefore mix the QCD stack with two other stacks: the SU(2) stack b and one of the U(1) stacks, say c. The corresponding helicity amplitudes contain massless poles in only one channel, due to intermediate gluons and the A-boson. They are given in Eqs. (6.35) and (6.36), together with the two amplitudes describing the helicity configurations reversed by (+ ↔ −); the latter can be obtained from the former by the permutation (1 ↔ 3, 2 ↔ 4), with the net effect of complex conjugation. (The function F appearing in the preceding paragraph is defined as in Eq. (6.7), but starting from V_acac or V_adad.) The functions G′_su = G′(s, u) and G′_us = G′(u, s) are defined in Eq. (6.9). Recall that their low-energy expansions have the same form, G(s, u) ≈ G′(s, u), approaching 1 for s, u ≪ M^2. The squared moduli of the amplitudes (6.35) and (6.36), summed over initial and final gauge indices, are given in Eqs. (6.37) and (6.38), respectively, where we set g_a = g and N_c = 1. The factor N_b = 2 combines the cases of same and different components (flavors) of the SU(2) doublet, which can easily be disentangled if flavor summation or averaging is not desired. We should also set K(N_a) = N^2 − 1 (6.39) in order to eliminate the contributions of intermediate color singlets. At this point we have all ingredients at hand, ready for writing down the squared amplitudes for quark-quark scattering and quark-antiquark annihilation, averaged over the polarizations and colors of the incident particles, and summed over the polarizations and colors of the outgoing quarks and antiquarks. We will consider the cases of identical and different flavors separately. The sum over helicity configurations combines disk diagrams with various stack configurations along the boundary, with the QCD stack a appearing twice and the two others being either b [electroweak SU(2)], c (right-handed u quark) or c′ (right-handed d quark).
We distinguish the functions F associated to disk diagrams with two b stacks, two c stacks or two c′ stacks as F_bb, F_cc and F_c′c′, respectively, see Eq. (6.7). Similarly, G and G′ need an indication of the two non-QCD stacks. Thus we define G_cc′ by Eq. (6.8) with V_acac′, etc.
qq̄ → ll̄
The disk amplitudes involving four D-brane stacks do not contribute to 2 → 2 parton scattering processes, at least in the simplest realizations of intersecting D-brane scenarios. They are, however, relevant to the Drell-Yan process qq̄ → ll̄. The relevant amplitude involves the function V_abcd(s, u) defined in Eq. (5.79); its low-energy expansion is free of kinematic singularities and begins at the order O(M^{−2}). For this process there are also helicity amplitudes receiving contributions from three stacks, as already discussed in the context of quark-quark scattering, cf. Eq. (6.46).
It is clear from the above expressions, especially from Eq. (6.46), which includes all gauge bosons exchanged in the s-channel, that the amplitudes involving leptons are sensitive to the implementation of the electroweak symmetry breaking mechanism in string theory. Since this is a model-dependent problem, we stop short of computing the production rates (averaged squared moduli) for such processes.
Tables
In the Tables below, we collect the squared amplitudes for all parton subprocesses discussed in this Section, summed over the polarizations and colors of the final particles and averaged over the polarizations and colors of the incident partons. The number of colors has been set to N = 3. Recall that A denotes the U(1) gauge boson from the QCD stack, i.e. the "quiver neighbor" of the SU(3) gluons. The corresponding coupling Q_A = 1/√6 is displayed explicitly. Furthermore, B is a generic (massless) gauge boson from another stack, for example an SU(2) boson. We assumed that B couples to left-handed quarks only; the generalization to a left-right symmetric vector coupling is straightforward. We have factored out the QCD coupling factor g^4; in the amplitudes involving B vector bosons, marked with (*), this factor should be corrected to g^2 g_B^2, where g_B denotes the coupling of the B gauge group. In these amplitudes, T^B_{qq′} denotes the (qq′) matrix element of the corresponding group generator. The SU(3) × SU(2) × U(1) SM limit of the amplitudes, with V = F = G = G′ = 1 as s ≪ M^2, is in agreement with Table 9.1 of [60].

Table 8: Quark-antiquark annihilation.
Summary
This article is intended to provide information useful in the upcoming searches for signals of string physics at the LHC, assuming that the fundamental scale determining the masses of string excitations is of the order of a few TeV. While on the theoretical side low mass scenarios face many challenges, it is an experimental question whether string theory describes the physics beyond the SM. Needless to say, since low mass strings require the existence of large extra dimensions, the discovery of fundamental strings at the LHC would revolutionize our understanding of space and time.
The search for string signals should focus on Regge excitations, i.e. the resonances created by vibrating strings. The main message of this work is that string theory provides very clear, model-independent, universal predictions not only for the masses and spins of these particles, but also for their couplings to gluons and quarks. These predictions do not depend on the details of compactification or of the D-brane configuration, and hold even if supersymmetry is broken in four dimensions. The reason why certain amplitudes are universal, independent of the spectrum of Kaluza-Klein excitations, is very simple: at the disk level, the gluon scattering amplitudes involve only one stack of D-branes, so the momentum components along the compactified D-brane directions are conserved, and as a consequence Kaluza-Klein states carrying such momenta cannot appear as intermediate states. Of all 2 → 2 parton scattering amplitudes, only the four-fermion processes are model-dependent, but these are suppressed by group-theoretical factors and usually occur at luminosities lower than gluon collisions. The model dependence of these amplitudes, and also the necessity to avoid FCNCs or proton decay via four-fermion amplitudes, could be useful for "precision tests" that would distinguish between various compactification scenarios.
The resonant character of the parton cross sections should not be difficult to observe. In Refs. [22] and [23], the process gg → gγ, which is absent in the SM at tree level but appears in string theory as a QCD process involving strongly interacting resonances, was examined to that effect. It turns out that a string mass as high as 3 TeV is observable in this process. More recently [25], we examined the dijet invariant mass spectrum, which is sensitive to even higher mass scales. The resonant behavior of the stringy cross sections at parton center-of-mass energies equal to the masses of Regge states is a signal that cannot be missed at the LHC, unless the string scale is too high or the theory does not correctly describe the physics beyond the SM.
We have computed the full-fledged string four-particle scattering amplitudes for the SM fields, as they occur at the (leading) disk level in a large class of orientifold compactifications on an internal manifold with large volume and low string scale. The SM fields arise in these models as open strings ending on a set of intersecting D-branes. Here are the basic characteristics of these amplitudes: (i) Two gluon/two SM gauge boson processes: These amplitudes are given in terms of one kinematic function V (s, t, u), given in (6.5), and can be computed in a completely model independent way. The poles of V (s, t, u) are due to the exchange of massless SM gauge bosons and heavy string Regge excitations. In some particular processes, like gg → gY or gg → Y Y (Y = γ, Z 0 ), the poles due to massless gauge bosons are absent, and the leading contribution originates from heavy string states.
(ii) Two SM gauge boson/two SM fermion processes: As before these processes can be computed in a completely model independent way and are given in terms of the same function V (s, t, u). Hence they receive contributions from the exchange of SM gauge bosons and heavy string excitations. We find that low scale string theory at the LHC leads to model independent string contributions to processes such as qq → gW ± or qq → gZ, which should be a clear signal for new physics. Likewise, exchanges of Regge excitations contribute to processes like gq → qW ± and gq → qZ in a model independent way.
(iii) Four SM fermion processes: The four quark or two quark/two lepton amplitudes like the Drell-Yan process qq → ll are model-dependent and can be expressed in terms of three functions, V abab (s, u), V abac (s, u) and V abcd (s, u), given in (5.60), (5.71) and (5.79), respectively. In general, the latter depend on the string scale and on the parameters describing the internal manifold and the cycles around which D-branes are wrapped. Here one finds poles not only due to exchanges of SM gauge bosons and Regge excitations thereof, but also poles due to internal Kaluza-Klein and winding modes, and open string states with masses depending on the intersection angles.
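For reference, the kinematic function V(s, t, u) that controls the universal amplitudes in (i) and (ii) is commonly quoted in the related literature in Veneziano form; assuming massless parton kinematics, s + t + u = 0, and writing B for the Euler Beta function,

\[
V(s,t,u) \;=\; \frac{s\,u}{t\,M^{2}}\,
B\!\left(-\frac{s}{M^{2}},\,-\frac{u}{M^{2}}\right)
\;=\;
\frac{\Gamma\!\left(1-s/M^{2}\right)\Gamma\!\left(1-u/M^{2}\right)}
{\Gamma\!\left(1+t/M^{2}\right)}
\;\approx\; 1-\frac{\pi^{2}}{6}\,\frac{s\,u}{M^{4}}+\dots
\qquad (s \ll M^{2}),
\]

which makes explicit both the field-theory limit V → 1 for s ≪ M^2 and the s-channel poles at s = n M^2 (n = 1, 2, ...) due to Regge excitations.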
All parton subprocesses receive string contributions which should be separable from the SM background if the string scale is not too high.
The squared amplitudes, summed over the polarizations and colors of final particles and averaged over the polarizations and colors of incident partons, are collected in Tables 5-8. They are presented in a form suitable for the computations of the respective cross sections and are ready to be implemented in the LHC data analysis.
Adipokines Vaspin and Visfatin in Obese Children
BACKGROUND: Adipokines provide new insights into the physiology, pathology and treatment of obesity. AIM: We investigated the association of serum vaspin and serum visfatin concentrations with obesity in Egyptian children. MATERIAL AND METHODS: Twenty-two obese children with a body mass index (BMI) above the 95th percentile (11 males and 11 females) were included in this study. Their mean age was 9.18 ± 2.8 years. After a general clinical examination, fasting blood glucose, triglycerides, total cholesterol and high-density lipoprotein cholesterol were measured in cases and controls (n = 11). Fasting insulin, vaspin and visfatin were measured by ELISA. Insulin resistance was estimated by the homeostasis model assessment method (HOMA-IR). RESULTS: Both systolic and diastolic blood pressure were significantly elevated in obese children, as were serum insulin and insulin resistance (HOMA-IR). Vaspin and visfatin were significantly higher in obese children than in controls. Significant positive correlations were detected between visfatin and BMI, waist circumference, hip circumference and HOMA-IR. CONCLUSION: Vaspin and visfatin are higher in obese children; visfatin, but not vaspin, correlates positively with waist circumference and HOMA-IR.
Introduction
Adipose tissue is the source of adipokines, secreted mainly by adipocytes. The rapidly growing list of adipokines provides new insights about the physiology, pathology and treatment of obesity [1]. Recently, vaspin (visceral adipose tissue-derived serpin protease inhibitor) and visfatin (also known as pre-B-cell colony-enhancing factor 1), have been identified as interesting novel adipokines having insulin-sensitizing and insulin-mimic effects, respectively [2].
Vaspin was originally identified in an animal model of obesity and type 2 diabetes. Increased vaspin mRNA expression in human adipose tissue was found to be associated with obesity [3]. Visfatin, in humans, is expressed more in visceral than in subcutaneous adipose tissue, and it is upregulated during inflammation [4]. Obesity and the metabolic syndrome in children and adolescents are a leading cause of low-grade systemic inflammation [5].
Obesity is associated with an array of health problems in adult and pediatric populations. Adipokines signal to organs such as the brain, liver, skeletal muscle and the immune system, thereby modulating homeostasis, blood pressure, lipid and glucose metabolism, inflammation, and atherosclerosis [6]. The secretion of several adipokines is altered in subjects with abdominal adiposity, and these changes to the endocrine balance may contribute to an increased cardiovascular disease risk [7]. The association of the novel adipokines vaspin and visfatin with atherosclerosis is still obscure [8].
Subjects
Twenty-two obese children with a body mass index (BMI) above the 95th percentile (11 males and 11 females) were included in this study. Their mean age was 9.18 ± 2.8 years. Subjects were free from any other diseases, and genetic or endocrine causes of obesity were excluded. Cases were not on a body weight control regime or exercise program at the time of the study. Eleven age- and sex-matched children were also included and served as controls; all controls had a normal BMI, ranging from the 5th to the 85th percentile [9]. In this study, BMI was determined according to the Egyptian growth charts 2002 [10]. All obese and control children underwent a thorough medical examination and anthropometric measurements performed by one member of our team. Informed consent was obtained from the parents of all children included in this study.
Methods
After 12 hours of fasting, a blood sample was taken and the serum was collected. The blood glucose level was determined immediately and the rest of the serum was stored at -80°C. Fasting blood glucose, triglycerides, total cholesterol and high-density lipoprotein cholesterol were measured using an auto-analyzer (Olympus AU-400). Low-density lipoprotein cholesterol was calculated by the Friedewald formula [11] [12]. Fasting serum visfatin was assessed by ELISA using kits from CUSABIO BIOTECH CO., LTD (Catalogue No. CSB-EO8940h). Vaspin was also assessed by ELISA, using the Human soluble Cluster of Differentiation 100 (sCD100) ELISA kit from CUSABIO BIOTECH CO., LTD.
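The two derived quantities used above follow standard formulas: the Friedewald equation for LDL cholesterol and the HOMA-IR index for insulin resistance. A minimal sketch (the function names are ours; units are mg/dL for lipids and glucose, µU/mL for insulin):

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald formula (mg/dL): LDL = TC - HDL - TG/5.

    Not considered valid when triglycerides reach ~400 mg/dL or above.
    """
    if triglycerides >= 400:
        raise ValueError("Friedewald formula is unreliable for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0


def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR = (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0
```

For example, a child with TC = 200, HDL = 50 and TG = 100 mg/dL gets LDL = 130 mg/dL, and a fasting glucose of 90 mg/dL with insulin of 9 µU/mL gives HOMA-IR = 2.0.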
Statistical Analysis
The Mann-Whitney test was used for non-normally distributed data and Student's t test for normally distributed data; both were performed using Statistica version 10 (StatSoft Inc., Tulsa, OK, USA). The relative strength of correlations was calculated using the Spearman rank correlation coefficient (rs).

Results

Table 1 shows the descriptive data and anthropometric measurements of the cases and the control group. As expected, significant differences were detected between cases and controls in the anthropometric measurements related to obesity. Systolic and diastolic blood pressures were significantly elevated in obese children. Vaspin concentration was higher in obese children than in controls, and a similar difference was found for serum visfatin. There was no significant difference in fasting blood glucose between the two groups, while significant elevations of serum insulin and insulin resistance (HOMA-IR) were observed in obese children relative to controls. Total cholesterol and LDL values were elevated in obese cases; no significant difference was detected in triglycerides or HDL between the two groups (Table 2). No correlation was found between serum vaspin and the demographic, laboratory and clinical data in obese children, except for a positive correlation between vaspin and waist-hip ratio (P < 0.01, r = 0.7020). Table 3 shows the correlations with visfatin in obese children: significant positive correlations were detected between the visfatin level and height, weight, BMI, waist circumference, hip circumference and HOMA-IR.
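The Spearman coefficient used for the correlation analyses is simply the Pearson correlation computed on ranks, with ties given their average rank. A small pure-Python sketch, assuming no external statistics package (function names are ours):

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean of 0-based positions i..j, shifted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman_rs(x, y):
    """Spearman rank correlation: Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because only ranks enter the computation, rs equals 1.0 for any strictly increasing monotone relationship, which is why it suits the non-normally distributed measures analyzed here.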
Discussion
Obesity is one of the most serious risk factors for chronic diseases; it plays a central role in insulin resistance and the metabolic syndrome [13]. Obese children in our study showed significantly elevated serum insulin and insulin resistance (HOMA-IR) compared with controls. Obesity in childhood and adolescence is also associated with established risk factors for cardiovascular diseases and accelerated atherosclerotic processes [14]. Total cholesterol and LDL levels were elevated in obese cases in this study. Accelerated atherosclerotic processes are associated with elevated triglycerides and lower HDL [15]; we detected both changes in obese children, but the levels failed to reach significance in comparison with control children. Systolic and diastolic blood pressures were significantly higher in obese children, and the same changes in blood pressure have been reported by several authors [15,16]. Cekmez et al. in 2011 concluded that children born large for gestational age had higher vaspin and visfatin levels than those born appropriate for gestational age [17]. Our data in obese children showed elevated levels of both vaspin and visfatin, and many studies have reported the same elevation of visfatin in obese children [18,19]. Pagano et al., in 2006, found that plasma visfatin and its mRNA were significantly lower in obese subjects compared with normal-weight controls [20]. Variation in results between studies may be related to genetic variations [21].
Administration of vaspin to obese mice improves glucose tolerance and insulin sensitivity and reduces food intake [22]. Vaspin may have antiatherogenic effects through its potential insulin-sensitizing properties and through its beneficial effects on the asymmetric dimethylarginine-endothelial nitric oxide system [23]. It also protects vascular endothelial cells against free fatty acid-induced apoptosis through a phosphatidylinositol 3-kinase/Akt pathway [24]. In our study we did not find correlations between vaspin and cardiovascular risk factors, except for the waist-hip ratio, which is not as reliable an indicator of abdominal fat in children as waist circumference [25]. The failure to detect such correlations may be related to the different ages of cases across studies and probably to a small sample size.
Visfatin correlated with waist circumference and insulin resistance (HOMA-IR) in our study; such a correlation with insulin resistance was also found by Araki et al. in 2008 [26]. Insulin resistance expressed as HOMA-IR is more significantly interrelated with the metabolic syndrome components [27]. Waist circumference, a proxy measure of abdominal obesity, is associated with cardio-metabolic risk factors in childhood and adolescence [27]. Like atherosclerosis, abdominal fat is also one of the predictive risk factors for morbidity in obese individuals [28]. Adipokines may further contribute to the obesity-atherosclerosis relationship, the full understanding of which will require much more research [29].
In conclusion, vaspin and visfatin levels are increased in obese children. Visfatin relates positively to abdominal fat and to insulin resistance in the form of HOMA-IR; abdominal fat and insulin resistance are important indicators of the metabolic syndrome in children. Vaspin is not as sensitive an indicator of abdominal obesity and insulin resistance as visfatin. More research is needed to explore its role in improving insulin tolerance.
The Focal Play Therapy: An Empirical Study on the Parent–Therapist Alliance, Parent–Child Interactions and Parenting Stress in a Clinical Sample of Children and Their Parents
The present study aims to investigate the outcomes of the Focal Play Therapy with Children and Parents (FPT-CP) in terms of parent–therapist alliance, parent–child interactions, and parenting stress. Thirty parental couples (N = 60; 30 mothers and 30 fathers) and their children presenting behavioral, evacuation and eating disorders took part in the study. Through a multi-method longitudinal approach, data were collected at two time points (first and seventh sessions) marking the first phase of the intervention, which is specifically aimed at building the alliance with parents (a crucial variable for the remission of the child's symptoms) and at assessing the child's symptoms within family dynamics. Therapeutic alliance was assessed by the Working Alliance Inventory, completed by therapists and parents. Parent–child interactions and parenting stress were evaluated using the Emotional Availability Scales and the Parenting Stress Index, respectively. Results showed that a positive parent–therapist alliance was developed and maintained during the first seven sessions. Furthermore, parent–child interactions significantly improved on both parents' and child's dimensions. However, parenting stress levels remained unchanged between the two time points. The findings should enrich scientific knowledge about the role of parental engagement in preschool child-focused treatments so as to better inform practice and improve the quality of care for children and their families.
Introduction
Early parent-child interactions have a major influence on the child's cognitive, emotional and social development [1][2][3]. Their bidirectional nature is well documented (i.e., parents influence children just as children influence parents) [4,5]. Historically, the literature has emphasized various dimensions of adult-child interactions that affect child socio-emotional adjustment as well as the development of language and other cognitive abilities [6,7]. Among them, a growing body of evidence indicates that emotional availability (EA) represents a key determinant of positive parent-child interactions [8][9][10][11][12][13][14][15][16][17]. Based on attachment theory [18] integrated with Emde's [19] perspective on emotions, EA denotes the quality of emotional exchanges between parents and their children, with a focus on their reciprocal accessibility and ability to read and respond appropriately to one another's communications [20][21][22][23].
Parent-Child Interactions and Child Psychopathology
Empirical evidence has shown that problems in parent-child interactions are strictly associated with the development of child psychopathology [21][22][23][24][25]. In preschool years, child developmental tasks center around the acquisition of physical and emotional independence and autonomy [26]. In this context, parents need to balance protective and "letting go" behaviors to stimulate the development of the child's self-regulatory abilities in different areas of his/her development [25,[27][28][29][30][31][32][33][34][35][36][37][38][39]. The difficulties which may arise during this stage of development often lead to child behavioral, social and emotional problems, which in turn become common causes of concern for parents of children aged 2-5 years [24,40]. Among them are the child's oppositional and aggressive behaviors, difficulty with eating and/or evacuation, and so on. In particular, eating disorders include the avoidant/restrictive food intake disorder, eating of non-nutritive substances, and repeated regurgitation and chewing of food, while evacuation disorders consist of constipation, enuresis and encopresis [41]. Without early interventions, these problems tend to persist into school age, with negative consequences on the child's physical and mental health and on family burden [42,43]. Indeed, in most of these cases parents suffer from distress and psychological impairment [44][45][46][47].
The Focal Play Therapy with Children and Parents
In order to prevent adult psychopathology, most clinical approaches today focus on the early identification and treatment of problems in the parent-child relationships [48][49][50][51]. The Focal Play Therapy with Children and Parents (FPT-CP) [25,[36][37][38][39]52,53] is a psychodynamic model of intervention originally developed for eating and evacuation disorders and then adapted to a wide range of problems usually connected to parent-child relationship problems during preschool years. It is based on both the active engagement of parents in the diagnostic-therapeutic process and the use of play as a narrative dimension of the family history [54][55][56][57].
The FPT-CP is structured into weekly alternate play sessions with children and parents together and sessions with parents only. Specifically, they are organized as follows: first session with parents; second session with the child and his/her parents; third session with the child and his/her mother; fourth session with parents; fifth session with the child and his/her father; sixth session with parents; seventh session with the child and his/her parents. Basically, during the FPT-CP joint sessions the therapist introduces the child to a temporal sequence of play where the main character is a plasticine puppet which performs the human basic physiological functions. It seems to enjoy eating and, afterwards, it expresses the need to go to the toilet in a potty made with plasticine. The focus is on the phenomenal qualities of both eating and evacuation of natural functions. In this context the therapist allows the child to project his/her psychological contents, desires, fears and internal conflicts into play. Parents are asked to take part in the play and, afterwards, they discuss with the therapist alone (without the child) what emerged during play sessions, the psychological meanings of the play and parental attitudes which may support (e.g., intrusive and coercive manners) or not support (e.g., tolerance, collaboration) the child's symptoms. Positive parental skills are promoted along with the achievement of self-managed and self-regulated child behaviors into a harmonious family life [36].
Specifically, the FPT-CP first phase (seven sessions) is aimed to understand the child's symptoms within family dynamics and to promote and maintain a positive therapeutic relationship with parents [25,[36][37][38][39]52]. This aspect is crucial since the parent-therapist alliance allows parents to understand the child's problems and to come to an agreement regarding the main goals and tasks of treatment [42,43,[58][59][60][61][62][63][64].
Over the past few decades, a vast amount of literature has evolved around the topic of the alliance in individual psychotherapy, while similar research in family therapy, including the FPT-CP, is recent [65,66] and needs more empirical evidence. The few studies available on schoolchildren and their parents attending separate treatment sessions have shown that a positive parent-therapist alliance is associated with low drop-outs, a decreased youth symptomatology, and improved parenting practices and family functioning [67][68][69][70].
The Present Study
In light of the above-discussed issues, there is a need to collect data in order to better inform clinical practice and to improve the quality of care for children and their families. Specifically, through a multi-method longitudinal approach, the present study aims to investigate the outcomes of the FPT-CP first phase in terms of parent-therapist alliance, parenting stress, and parent-child interactions. Indeed, to our knowledge, while some evidence exists on the benefits of a positive alliance with parents who are treated separately from their schoolchildren, there is a paucity of data on the alliance with parents involved in the therapy sessions with their preschool children.
Data were collected at two time points (T1: first session, T2: seventh session) marking the first phase of the FPT-CP by recruiting a clinical sample of preschool children and their parents. In this context, we aimed to investigate differences between T1 and T2 in: (a) the parent-therapist alliance from each participant's perspective; (b) the levels of parental distress; (c) the quality of mother-child and father-child interactions from both sides. We hypothesized that: (a) the parent-therapist alliance would be positive and stable over time; (b) parental distress would significantly decrease; (c) parent-child interactions would significantly improve.
Participants
Families were recruited consecutively between November 2015 and December 2017 at the "Psychological Consultation Centre for Children and Parents" (Department of Psychology, University of Bologna, Italy; director: Professor Elena Trombini). The center provides psychological assessment, treatment and support for children and their families. Parents were given voluntary access to the Centre for their child's behavioral (e.g., oppositional and aggressive behaviors), eating (e.g., food refusal and selective eating), or evacuation (e.g., constipation, enuresis and encopresis) problems.
This research adopted a longitudinal design. A total of 30 couples (N = 60; 30 mothers and 30 fathers) and their preschool children (N = 30; 21 males and 9 females) took part in the study. Exclusion criteria for the present study were: (a) child's organic diseases, (b) child's neurodevelopmental disorders, (c) parental past or present psychiatric disorders, (d) parents' lack of competence in the Italian language, (e) the refusal of one parent to attend the study. No exclusion criterion was met by any of the families who participated in the study.
Seven psychotherapists carried out the FPT-CP with the children and their families. All therapists were female, experts in psychoanalytic psychotherapy with children and families, and had been previously trained in the use of the FPT-CP methodology.
Procedure
The study was approved by the Ethic Committee of the University of Bologna (Italy). Participation was voluntary and based on the family informed written consent which included confidentiality and the client's right to withdraw at any time. Each family was randomly assigned to a clinician according to his/her availability, and the average caseload for each psychotherapist was about four families, each of which was seen once a week.
At the end of the first and sixth sessions, parents were asked to complete a demographic questionnaire and two self-reports on therapeutic alliance and parenting stress, respectively. Data on alliance were triangulated with the scores obtained by the therapists on the same measure, which they filled in for both mothers and fathers at T1 and T2. Changes in the quality of the parent-child interactions were evaluated at the beginning of the second session (before treatment) and during the seventh session (where only data collection occurred). To this aim, two consecutive 10-minute free-play interactions, first with the mother and then with the father, were recorded.
Measures
The therapeutic alliance refers to the quality of the relationship between the client and the therapist. Specifically, it consists of three dimensions: the agreement on (1) the goals of treatment; (2) the tasks, methods and activities used to achieve the treatment goals; and (3) the development of a personal bond between the client and the therapist [71]. Therapeutic alliance was assessed by the Working Alliance Inventory-Short Form (WAI-SF) [72,73]. The WAI-SF is composed of 12 items rated on a seven-point Likert scale. The total score (range 12-84) is the sum of three subscales: goal, task, and bond (range 4-28 each).
Higher scores indicate a more positive alliance [74].
Parental distress arises when the demands associated with parental role exceed parent's resources to face them [75]. In the present study parental distress was measured using the Parenting Stress Index-Short Form (PSI-SF) [76,77]. The PSI-SF consists of 36 statements which evaluate specific domains of parental distress rated on a 5-point Likert scale. The total score (range 36-180) is a combined score of the three subscales (range 12-60): parental distress, parent-child dysfunctional interaction and difficult child. As indicated by the Italian validation [77], scores between the 15th and 84th percentiles are within the normal range for stress; scores between the 85th and 89th percentiles represent a high level of stress; scores ≥ 90th percentile indicate clinically significant or severe parenting stress.
Interactions between parents and their children were coded with the fourth edition of the Emotional Availability Scales: Infancy to Early Childhood Version (EAS) [78]. The EAS have been widely used in research settings in over 20 countries to evaluate the quality of parent-child relationships, focusing on emotional availability, i.e., the quality of emotional exchanges between parents and children [79]. These scales describe and evaluate six dimensions, four on the adult's side (Sensitivity, Structuring, Nonintrusiveness, and Nonhostility) and two on the child's side (Responsiveness to adult and Involvement of adult). Scores are assigned based on the frequency and the quality of the behavior observed. For each dimension, a total score and a direct score can be obtained. Direct scores for each dimension are rated on a 7-point Likert scale where the lowest scores (1, 2, 3) indicate severe clinical problems, the mid-point ratings (4, 5) refer to mild/moderate problems, and the high-end scores (5.5, 6, 7) represent good/optimal ratings. In the present study, direct scores were used, as is common for research purposes, in order to give an immediate indication of the level of emotional availability displayed by the dyad [80,81]. All videos were scored by two blind raters previously trained in the use of the EAS. The degree of agreement between the two coders was measured using average absolute-agreement intraclass correlation coefficients (ICC) [82] on a random selection of 30% of the videos. ICCs averaged .80.
Statistical Analysis
Demographic data were analyzed using Pearson's χ2 test and Student's t test for independent samples for nominal and continuous variables, respectively. All hypotheses were tested through repeated-measures analyses of variance (ANOVA) or multivariate ANOVAs (MANOVA). Each model included Role (i.e., mother vs. father, and parent vs. therapist in the case of the analysis of therapeutic alliance) and Time (i.e., T1 vs. T2) as within-subject variables. All statistical analyses were performed using SPSS (version 25) for Windows (IBM, Armonk, NY, USA). A p value of less than 0.05 was considered significant.
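Each Time effect tested above reduces to a one-way repeated-measures ANOVA with subjects as blocks. The study used SPSS; the pure-Python sketch below is only illustrative of the underlying computation (function name and data layout are ours):

```python
def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic for a within-subject factor.

    `data` is a list of per-subject score lists, one entry per level of the
    within-subject factor (e.g., [score_T1, score_T2]).
    Returns (F, df_effect, df_error).
    """
    n = len(data)        # number of subjects
    k = len(data[0])     # number of within-subject levels (e.g., T1, T2)
    grand = sum(sum(row) for row in data) / (n * k)
    level_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]

    # Partition the total sum of squares: levels + subjects + error.
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_levels = n * sum((m - grand) ** 2 for m in level_means)
    ss_subjects = k * sum((m - grand) ** 2 for m in subj_means)
    ss_error = ss_total - ss_levels - ss_subjects

    df_effect, df_error = k - 1, (n - 1) * (k - 1)
    f = (ss_levels / df_effect) / (ss_error / df_error)
    return f, df_effect, df_error
```

With only two within-subject levels, as with T1 vs. T2 here, the resulting F equals the square of the paired-samples t statistic.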
Demographic and Clinical Characteristics
All couples were Italian, employed, and married (83.3%) or cohabiting (13.3%). No significant differences between parents were found with regard to education level, as both mothers (77%) and fathers (67%) mostly held a university degree. The difference in age reached significance, with fathers being slightly older than mothers (t = -2.53, p < 0.05; M = 39.9, SD = 4.9 and M = 42.1, SD = 5.1 for mothers and fathers, respectively). Children's ages ranged between 2.1 and 5.8 years (M = 4.1, SD = 1.1), and they were referred for behavioral (43.3%), evacuation (36.7%) or eating (20%) problems.

Table 1 presents statistics for the WAI-SF total scores at T1 and T2. Parents' and therapists' alliance scores were high and indicative of a positive alliance at each time of assessment. Results from the ANOVAs showed no significant main effects of Role or Time, nor of their interaction, on WAI global scores (all ps > 0.05). With regard to the ANOVA pertaining to therapists' ratings, a main effect of Role was detected, as alliance with mothers was overall higher than alliance with fathers (F(1, 29) = 12.8, p < 0.001). Moreover, therapists' ratings of alliance were significantly lower than self-rated alliance by both mothers (F(1, 29)) and fathers.

Table 2 presents descriptive statistics for the PSI scales and global scores at T1 and T2. The ANOVAs showed no significant difference in parenting stress levels between fathers and mothers (all ps > 0.05). These findings seem to attest that parenting stress remained essentially unchanged from session 1 to session 6. In light of these results, the PSI scores obtained at T1 and T2 were averaged in order to obtain a more precise global indicator of parenting distress. This new score was used to check the percentage of parents above the cut-off points suggested by the PSI manual.
In the present sample, 11 mothers (36.7%) and 7 fathers (23.3%) reported clinically relevant distress levels, 3 mothers (10%) and 4 fathers (13.3%) were considered at risk, and 16 mothers (53.3%) and 19 fathers (63.3%) scored below the threshold level.

Table 3 presents the results of two repeated-measures MANOVAs, comparing separately the parent's and the child's dimensions of the EAS. With regard to the parent's dimensions, results showed significant main effects of Role, Time and EAS dimension, as well as a significant Time × EAS dimension interaction (all ps < 0.05). In particular, EAS scores were overall significantly higher for mothers than for fathers. Significantly different mean values emerged across dimensions for both parents, with Non-hostility being the dimension with the highest scores and Structuring the one with the lowest (p < 0.001). Moreover, with the exception of Non-hostility, which remained almost constant, each dimension improved significantly from T1 to T2 irrespective of parental role (see Table 3). On the other hand, children's interaction with mothers was not affected by the mothers' stress levels (i.e., above vs. below the clinically relevant threshold of the 85th percentile). Note: EAS = Emotional Availability Scales. T1 and T2 refer to sessions 2 and 7, respectively. * refers to interaction effects.
Discussion
The goal of this study was to investigate the outcomes of the FPT-CP in terms of parent-therapist alliance, parental distress, and parent-child interactions in a clinical sample of preschool children. Our findings support the hypotheses concerning the parent-therapist alliance and parent-child interactions, but not the hypothesis concerning parental distress, which did not significantly decrease at T2. The FPT-CP is a clinical methodology based on the need for the therapist to build an early, positive therapeutic relationship with children and their parents in order to facilitate treatment success. As previously described, parents are actively involved in play sessions with children and afterwards in sessions with the therapist alone, where they share comments and reflect on the psychological meanings of the child's and family's play. The recent international literature in this field has shown that a positive therapeutic relationship with parents significantly correlates with fewer premature terminations of therapy and with increased youth functioning and family well-being across different types of child and family treatment [65-70]. However, to date, there is a dearth of data on preschool child-focused interventions in which parents are actively involved with the aim of understanding the child's symptoms within family dynamics and restoring healthy family relationships.
In line with our expectations, the results showed that a positive parent-therapist alliance was developed and maintained during the first 7 sessions of the FPT-CP. In this regard, it is important to consider that parents' access to the Centre was voluntary, so their treatment motivation was presumably high; they were also well-educated and had been willing to share the aims and procedures of the intervention since their first sessions. In line with previous studies [83,84], we did not find differences between mothers' and fathers' alliance scores, which were significantly higher than the therapist's ratings of alliance. However, unexpectedly, the therapist's rating of the alliance with mothers was significantly higher than that of the alliance with fathers at both measurements. Although the therapists in this sample were all female, and greater gender similarity might explain these results, several studies have found no significant relationship between therapist-rated alliance and treatment outcomes; thus, the ability of therapists to accurately evaluate various aspects of their treatments has been questioned [85-87].
Regarding levels of parenting stress, no significant differences emerged either between T1 and T2 or between mothers and fathers. Nevertheless, at a qualitative level, mothers and fathers showed different patterns of stress development on the Difficult Child scale. This dimension concerns how parents perceive their children, namely whether they are easy or difficult to care for [76,77], and it is often used in studies with clinical samples of children whose parents struggle to manage their behaviors. While at the end of the first FPT-CP phase mothers' scores were lower and in a subclinical range (T1: 85th percentile; T2: 80th percentile), fathers' ratings slightly increased and retained clinical significance (85th percentile). It may be that, during the first 7 FPT-CP sessions, mothers started to understand the reasons behind their children's maladaptive behaviors and symptoms rather than simply perceiving them as difficult, challenging or disturbing. A similar result did not occur among fathers, who probably needed a longer therapeutic process to fully understand the psychological meanings behind the child's behaviors and to be able to deal with them effectively. Indeed, although highly motivated and psychologically engaged, fathers may perceive therapeutic tasks as difficult because they are not yet equipped with the emotional skills necessary to get through them.
In line with our hypothesis regarding the parent's dimensions of the EAS, both mother-child and father-child interactions significantly improved from T1 to T2, except on the Non-hostility scale, on which parents obtained the highest scores among the EAS dimensions at both time points. Indeed, there was no evidence of parents' negative emotionality towards their children in either its covert or overt components. However, although significant improvements occurred in parent-child interactions, at the end of the first FPT-CP phase fathers' scores on the Sensitivity scale were not yet optimal, suggesting, according to the manual [78], some inconsistency in parental behaviors, the lack of a proper sense of timing, and some difficulty in dealing with conflict situations. Furthermore, both parents still showed problems on the Structuring scale, which measures the caregiver's ability to scaffold the child's activities and set appropriate limits while respecting the child's need for autonomy.
As previously discussed, in the preschool years children strive for independence and autonomy; at this stage, parents should effectively adapt themselves to understand children's desire to do things for themselves and support their emerging autonomy, thus preventing child behavioral, social and emotional problems in such a delicate developmental phase. Due to the nature of the problems in our sample, this aspect of parent-child relationships may require more sessions to reach optimal levels of interaction. Furthermore, mothers' scores on the EAS scales were overall significantly higher than fathers' at both time points, suggesting a higher level of maternal relational competence that was present from the beginning of the intervention.
Regarding the child's dimensions of the EAS, as expected, they significantly improved with both mothers and fathers. Furthermore, at both measurements, children's scores were higher with mothers than with fathers, thus confirming a less problematic scenario in the context of child-mother relationships. Indeed, as for child-father interactions, at T2 children's scores on the Responsiveness scale (i.e., the counterpart of the adult Sensitivity scale) were still not optimal. Specifically, this dimension measures both the child's responsiveness to the adult and the presence of autonomous activities and explorations [78]. In this sample, children showed an affectively positive and responsive attitude towards their fathers, although responsive and exploratory behaviors were somewhat unbalanced in favor of the former. Moreover, moderate problems were reported on the Involvement scale, as children showed some over-involving behaviors towards their fathers which, according to the EAS manual [78], might suggest that they were assuming the lion's share of the responsibility for maintaining contact and interaction with the adult, thus compromising their autonomous initiatives.
The overall differences found on the EAS scales, for both the parent's and the child's dimensions, were explained by neither the child's age nor the child's problems. However, while children's interactions with mothers were not affected by maternal stress levels, children obtained lower scores on the Responsiveness and Involvement scales when fathers' distress ratings were clinically significant, compared to children whose fathers showed normal distress levels. It seems that, unlike paternal distress, maternal distress did not hamper the capacity of the mother-child dyad to share an emotional connection and to enjoy a mutually fulfilling and healthy relationship. We can only speculate about a greater maternal ability to manage stress in ways that children could tolerate.
Overall, the results show that a positive therapeutic relationship with both parents was developed and maintained during the first seven sessions, a necessary condition for the success of the FPT-CP. Alongside it, significant changes in parent-child interactions occurred for mothers, fathers, and their children, toward more positive and healthy relationships. These results are clinically relevant and underscore the importance of involving parents in child-focused treatments where a structured clinical methodology is used. In the present sample, mothers showed a somewhat less problematic profile and greater parental competence from the beginning of the intervention. Hence, despite significant paternal improvements, mothers and fathers may follow different trajectories of individual change, and fathers might need more therapeutic sessions to build and maintain a healthy relationship with their children. In this regard, clinicians should carefully monitor progress as well as mothers' and fathers' individual timelines for achieving it. More studies with longer research designs are needed on this issue.
Several limitations have to be considered. First, a larger sample size is required for more reliable results with greater precision and power; indeed, the limited sample of children may not have allowed us to detect specific differences related to the child's diagnosis (behavioral, evacuation, or eating disorders). Furthermore, since parents' access to the Centre was voluntary and all parents were well-educated, a sampling bias could have occurred. Similarly, a more balanced sample of therapists (not women only) might help clarify some discrepancies found between the therapists' and parents' scores. To this aim, in order to obtain more reliable data on the therapeutic alliance, the use of both questionnaires and observational measures, such as the System for Observing Family Therapy Alliances (SOFTA) [88], would be useful. Moreover, the present study mostly focused on the adults' voice, although data on parent-child interactions were collected from the child's side as well. To obtain a more comprehensive picture of adult- and child-related data [89-92], future studies should capture children's voices in reliable and valid ways. It would also be interesting to collect data longitudinally over the course of the FPT-CP to monitor clinical outcomes in terms of both parental variables and remission of the child's symptoms. Lastly, we could not infer causality due to the nature of the study; therefore, future research should compare treatment approaches based on different levels of parental engagement to understand how these effectively work, thus improving the quality of care for children and their families.
Conclusions
The FPT-CP is a structured psychodynamic methodology aimed at promoting and maintaining the parent-therapist alliance, and it is based on the use of play as a narrative dimension of family dynamics. Parents are actively involved in the child's play sessions, where they gradually understand the psychological meanings behind the child's play and his/her resources and capabilities, well beyond the child's symptoms. In this way, parents become aware of the role they play with regard to the child's difficulties, and the child is no longer seen as simply difficult or challenging. As a consequence, parents are motivated to change non-adaptive parental behaviors to help their children, and family dynamics can improve in a way more compatible with the child's developmental needs.
In the present study, we offered empirical evidence of the associations between the use of the FPT-CP and the improvement of parent-child interactions from its first sessions. Therefore, in light of the foregoing discussion and of the empirical evidence collected in several studies, this methodology could represent an innovative model for preventive psychodynamic interventions, applicable in both public and private clinical contexts for children and their families.
Author Contributions:
We state that all authors have participated in the work with a substantial contribution to conception, design, acquisition, analysis, and interpretation of data. Conceptualization, E.T. and I.C.; Data collection, I.C., P.S., I.M. and E.T.; Formal analysis, F.A., I.C. and P.S.; Methodology, I.C., P.S. and E.T.; Writing-original draft, I.C., F.A. and P.S.; Writing-review and editing, all the authors. Agreement has been reached for all aspects of the manuscript in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Size-segregated compositional analysis of aerosol particles collected in the European Arctic during the ACCACIA campaign
Single-particle compositional analysis of filter samples collected on board the FAAM BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size distributions and size-segregated particle compositions. These data were compared to corresponding data from wing-mounted optical particle counters, and reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified, between air mass trajectory cases based on HYSPLIT analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.
Introduction
The response of the Arctic environment to climate change has received increased interest in recent years due to the visible loss in sea-ice volume over the past three decades (e.g. Serreze et al., 2007; Perovich et al., 2008). The polar regions of our planet have a unique response to a warming atmosphere due to environmental characteristics vastly different from those of the mid-latitudes, including high surface albedo and strong variability in annual solar radiation. These factors cause the Arctic to respond to climatic changes at a heightened pace (Curry et al., 1996). The complexity of the Arctic environment requires detailed observations to further our understanding of the feedbacks and underlying processes involved; however, the ability to carry out such studies is hampered by the remote location, which makes in situ investigation difficult.
Existing numerical models do not effectively reproduce the changing Arctic environment. Discrepancies in forecasted sea-ice coverage, and in predicted dates for 100 % loss, are due to a variety of uncertainties within the models themselves (e.g. de Boer et al., 2014). A key uncertainty in our ability to model how these changes will progress lies in our representation of atmospheric aerosol-cloud interactions (Boucher et al., 2013). Aerosols play an important role in the Arctic radiative balance, and their influence is thought to be amplified by the unique environmental conditions of this region (Quinn et al., 2007). The annual cycle of aerosol concentration in the Arctic varies significantly by season, with highs in spring of approximately 4-5 times that observed in late summer (Heintzenberg et al., 1986), and such variability impacts the microphysics of the mixed-phase clouds commonly observed (Verlinde et al., 2007).

Published by Copernicus Publications on behalf of the European Geosciences Union.
The interaction of aerosol particles with clouds as ice nucleating particles (INPs) or cloud condensation nuclei (CCN) is dependent upon properties such as their size, hygroscopicity and composition (Pruppacher and Klett, 1997). Aerosol particles can therefore influence ice crystal or cloud droplet number, thus affecting properties such as droplet effective radius or cloud optical depth (Zhao et al., 2012); properties which significantly affect the net radiative impact of the cloud (Curry et al., 1996). The study of INPs has developed significantly in recent years via laboratory and field studies (DeMott et al., 2010; Hoose and Möhler, 2012). It is still not clear which properties of aerosol particles promote them to act as INPs in the atmosphere. They are generally thought to be insoluble, super-micron in size, to have a molecular structure similar to that of ice (Pruppacher and Klett, 1997) and to have the potential to produce chemical bonds with ice molecules at their surface (Murray et al., 2012). For example, mineral dusts are known INPs and are used regularly in laboratory studies of ice nucleation (e.g. Zimmermann et al., 2008; Connolly et al., 2009; Kanji et al., 2013; Yakobi-Hancock et al., 2013). Sources of these particles are not ubiquitous across the globe. Internally mixed particles can also act as INPs or (giant) CCN. A complex particle is difficult to categorise clearly as an INP or CCN, as its nucleation will be heavily dependent on the environmental conditions. The presence of coatings on particles can also have a significant impact on their role in aerosol-cloud interactions. Coatings of soluble material could enhance CCN ability and promote secondary ice production via the formation of large cloud drops (Levin et al., 1996), whilst organic coatings could suppress the nucleating ability of an efficient INP (Möhler et al., 2008). It is not well understood which particles, in which mixing state and from which sources, facilitate ice nucleation in the Arctic atmosphere.
Previous studies of Arctic aerosol have indicated that the population is primarily composed of organic material, continental pollutants (e.g. sulfate or nitrate gases), crustal minerals and locally sourced species such as sea salt (Barrie, 1986; Hara et al., 2003; Behrenfeldt et al., 2008; Geng et al., 2010; Weinbruch et al., 2012). A wide range of sources contribute to this population, and it is difficult to quantify the impact of different regions. Extended studies of Arctic aerosol have been conducted which consider the differences in particle properties between seasons, showing that the annual cycles of aerosol particle composition (Ström et al., 2003; Weinbruch et al., 2012) and concentration (Ström et al., 2003; Tunved et al., 2013) are dominated by the influence of the Arctic haze (Barrie, 1986; Shaw, 1995). Between February and April, an influx of aerosol from anthropogenic sources becomes trapped in the stable Arctic atmosphere and persists for long periods of time (up to several weeks) before being removed by precipitation processes (Shaw, 1995). Spring in the European Arctic is routinely characterised by these high particle number concentrations, dominated by the accumulation mode, and by low precipitation rates in comparison with summer, autumn and winter (Tunved et al., 2013). During this time, aerosol particles have the potential to interact with other species, grow and develop with a low chance of being removed from the atmosphere. This promotes an enhanced state of mixing (e.g. Hara et al., 2003), which compounds the difficulty in understanding how these particles interact with the clouds in the region. It is thought that the European continent is the primary source of this aerosol, with only small contributions from North America and Asia (Rahn, 1981); however, long-range transport from the Asian continent has been found to contribute sporadically to this phenomenon (Liu et al., 2015). Improving our understanding of the properties of these aerosol particles will help us to comprehend how they influence the clouds of the Arctic, and a strong method of achieving this is by identifying their chemical composition (Andreae and Rosenfeld, 2008).
By improving our knowledge of aerosol and cloud properties via in situ observational studies in the Arctic, it is possible to reduce the uncertainty associated with aerosol-cloud interactions (Vihma et al., 2014). To this end, the Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign was carried out in the European Arctic in 2013, utilising airborne- and ship-based measurements to collect a detailed data set of the Arctic atmosphere. The campaign was split into spring and summer segments, completed in March-April and July of 2013, respectively. During the spring section of the campaign, the Facility for Airborne Atmospheric Measurements (FAAM) BAe-146 atmospheric research aircraft was flown in the vicinity of Svalbard, Norway, with the capability of collecting in situ samples of aerosol particles on filters. This study presents the analysis of filter samples collected during this campaign, with a focus placed upon identifying the compositional properties and sources of the non-volatile, coarse-mode aerosol particles present in the atmosphere during the Arctic spring, and inferring how these might interact with the cloud microphysics in the region.
Campaign overview
The springtime ACCACIA campaign flights were mainly conducted to the south-east of Svalbard, with the exception of flight B768, which was carried out to the north-west near the boundary with Greenland. Figure 1 details the science sections of each of the flights of interest, with the direction of travel from Svalbard to Kiruna, Sweden, in all cases except B765. Corresponding dates are listed in Table 1.
As part of the springtime campaign, 47 mm diameter Nuclepore polycarbonate filters were exposed to ambient air from the FAAM BAe-146 aircraft to collect in situ samples of accumulation- and coarse-mode aerosol particles (sizes ∼ 0.1 to ∼ 10 µm). Such particle sizes are approximately applicable to the study of CCN and INPs (Pruppacher and Klett, 1997). Analysis of one below-cloud set of filters from each case is shown, followed by a comparison between a below- and above-cloud pair from a single case study.
Aircraft instrumentation and trajectory analysis
A range of cloud microphysics and aerosol instrumentation was used on board the FAAM BAe-146 aircraft to produce a detailed record of the observed Arctic atmosphere (as described by Liu et al., 2015; Lloyd et al., 2015). In this study, data from the Cloud Droplet Probe (CDP-100 Version 2, Droplet Measurement Technologies (DMT), Lance et al., 2010), the Cloud-Aerosol Spectrometer with Depolarisation (CAS-DPOL, DMT, Glen and Brooks, 2013) and the Passive Cavity Aerosol Spectrometer Probe (PCASP 100-X, DMT, Rosenberg et al., 2012) are used to provide context for, and a comparison to, the filter measurements. Throughout this article, the prefix s is used to denote number concentration measurements computed at standard temperature and pressure.
The accumulation-mode aerosol distribution was monitored by the PCASP. The CAS-DPOL measured both coarse-mode aerosol and, along with the CDP, cloud droplet number concentration. These externally mounted aircraft probes size and count their relative species via forward-scattering of the incident laser light through angles of 35-120° and ∼ 4-12° (for both the CDP and CAS-DPOL), respectively. The PCASP measures particle concentrations and sizes in the range of 0.1 to 3 µm, the CAS-DPOL provides similar measurements from 0.6 to 50 µm (Glen and Brooks, 2013), and the CDP measures cloud droplets from 3 to 50 µm (Rosenberg et al., 2012). Out of cloud, the CDP was used to provide an indication of the wet-mode diameter of coarse-mode ambient aerosol particles. The CAS-DPOL also measures coarse-mode aerosol concentrations when out of cloud. Within cloud, the liquid-water content (LWC) was derived from the observations of cloud droplet size. In this study, a LWC threshold of ≤ 0.01 g m−3, derived from CDP measurements, was employed to distinguish between out-of-cloud and in-cloud measurements. This threshold was applied to the CAS-DPOL, CDP and PCASP data to obtain an estimate of the ambient aerosol size distributions. These out-of-cloud observations are used in this study to validate the collection efficiency of the filter inlet system.
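The LWC-based cloud screening described above amounts to a simple mask over the time series. A minimal sketch, with hypothetical 1 Hz records in place of the real probe data, might look like:

```python
LWC_THRESHOLD = 0.01  # g m^-3; at or below this value the air is treated as clear

def out_of_cloud_mask(lwc_series, threshold=LWC_THRESHOLD):
    """True where the CDP-derived liquid-water content indicates clear air."""
    return [lwc <= threshold for lwc in lwc_series]

def screen(values, mask):
    """Keep only the probe measurements flagged as out of cloud."""
    return [v for v, keep in zip(values, mask) if keep]

# Hypothetical 1 Hz records: LWC (g m^-3) and aerosol number concentration (cm^-3)
lwc = [0.000, 0.005, 0.120, 0.300, 0.008]
conc = [210.0, 205.0, 30.0, 12.0, 198.0]

mask = out_of_cloud_mask(lwc)
ambient = screen(conc, mask)  # the two in-cloud points (LWC 0.120, 0.300) are excluded
```

The same mask would be applied consistently to the CAS-DPOL, CDP and PCASP records so that the resulting size distributions describe ambient aerosol rather than cloud droplets.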
In addition to the in situ data gained from the instrumentation aboard the aircraft, back trajectory analyses were carried out to further contextualise the filter exposures. This was achieved using the National Oceanic and Atmospheric Administration HYbrid Single-Particle Lagrangian Integrated Trajectory (NOAA HYSPLIT 4.0) model (Draxler and Hess, 1998), in a similar manner to Liu et al. (2015). Horizontal and vertical wind fields were derived from GDAS reanalysis meteorology (Global Data Assimilation System; NOAA Air Resources Laboratory, Boulder, CO, USA) and used to calculate trajectories at 30 s intervals along the FAAM BAe-146 flight path. This analysis allows the direction of the air mass to be inferred; however, it does not explicitly account for turbulent motions along the derived path and therefore carries a degree of uncertainty (Fleming et al., 2012). Trajectories dating back 6 days are presented to provide an indication of the source regions of the particles collected during the ACCACIA filter exposures.
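The trajectory calculation itself is performed by HYSPLIT, but the underlying idea of stepping an air parcel backward through a wind field at fixed intervals can be illustrated with a toy Euler integrator. The local Cartesian coordinates and constant winds below are simplifying assumptions; HYSPLIT's actual advection scheme is considerably more sophisticated.

```python
def back_trajectory(start, winds, dt=30.0):
    """Step a parcel backward in time with an explicit Euler scheme.

    start -- (x, y) position in metres (hypothetical local coordinates)
    winds -- list of (u, v) wind components (m s^-1), one per dt interval
    dt    -- time step in seconds (the text quotes 30 s intervals)
    """
    x, y = start
    path = [(x, y)]
    for u, v in winds:
        x -= u * dt  # going backward in time: subtract the wind displacement
        y -= v * dt
        path.append((x, y))
    return path

# A constant 10 m s^-1 westerly over two 30 s steps moves the parcel 600 m back west
path = back_trajectory((0.0, 0.0), [(10.0, 0.0), (10.0, 0.0)])
```

As in the real analysis, uncertainty grows with trajectory length because sub-grid turbulent motions are not represented in the wind field.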
Filter collection
The filter collection mechanism on the FAAM BAe-146 aircraft comprises a stacked-filter unit (SFU), which allows two filters (Whatman Nuclepore track-etch membranes) to be exposed simultaneously to the air stream, allowing aerosol particles to be collected on both. In the ACCACIA campaign, a combination of two filters with different nominal pore sizes was used in each exposure: a 10 µm pore filter was stacked in front of a 1 µm pore filter, allowing sub-micron aerosol particles that may pass through the pores of the first to be collected by the second. The design of the inlet follows the same specifications as the UK Met Office C-130 aircraft filtration system described extensively by Andreae et al. (2000). Sub-isokinetic sampling conditions were maintained, potentially leading to a coarse-mode enhancement artefact (Chou et al., 2008). The design of the mechanism removes large cloud droplets from the sampled air using a bypass tube; therefore, contamination from droplets or rain is minimised (Chou et al., 2008; Johnson et al., 2012). Consequently, large particles (> 10 µm) are also thought to be removed from the collected sample, though the collection efficiency of the entire system is not known to have been formally quantified (Formenti et al., 2008; Johnson et al., 2012). Andreae et al. (2000) estimated the sampling efficiency of the inlet to be 35 % by mass for the coarse mode, with a 50 % cut-off threshold of ∼ 3 µm (Formenti et al., 2003) and no losses identified for the accumulation mode. Chou et al. (2008) demonstrated that data collected via this inlet deviated from externally mounted particle counters above ∼ 0.5 µm, after which the coarse-mode enhancement on the filter samples became evident. Additionally, the efficiencies of the filters themselves can be estimated: the 50 % cut-off diameter of the 10 µm Nuclepore filter is approximately 0.8-1 µm at the mean face velocity encountered during this study (∼ 100 cm s−1) (John et al., 1983; Crosier et al., 2007), whilst the 1 µm filter has a 50 % collection efficiency at approximately 0.2 µm (Liu and Lee, 1976).
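The quoted 50 % cut-offs suggest a simple way to picture the combined system response: size-dependent efficiency curves centred on each cut-off diameter. Since the actual efficiency curves were not formally quantified, the logistic shape and steepness below are hypothetical stand-ins, useful only for intuition.

```python
def collection_efficiency(d_um, d50_um, steepness=4.0):
    """Logistic approximation of a filter's collection efficiency.

    Rises with particle diameter and equals 0.5 at the 50 % cut-off d50_um;
    steepness is a hypothetical shape parameter (larger -> sharper cut-off).
    """
    return 1.0 / (1.0 + (d50_um / d_um) ** steepness)

def inlet_transmission(d_um, d50_um=3.0, steepness=4.0):
    """Transmission falls off above the inlet's quoted ~3 um 50 % cut-off."""
    return 1.0 / (1.0 + (d_um / d50_um) ** steepness)

def system_efficiency(d_um):
    """Inlet transmission combined with the 10 um-pore filter response
    (50 % cut-off taken as 0.9 um, the middle of the quoted 0.8-1 um range)."""
    return inlet_transmission(d_um) * collection_efficiency(d_um, d50_um=0.9)
```

The product of the two curves peaks in the low-micron range and drops off at both ends, consistent with the text's picture of an accumulation/coarse-mode sampler that loses the largest particles.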
The filters were exposed on straight, level runs for approximately 10-30 min to obtain a sufficient sample for chemically speciated mass loadings. Although the filter system was designed to remove cloud droplets, the filters were primarily exposed out of cloud to further minimise the potential for contamination. The chosen filters were all exposed within the boundary layer (< 1000 m, see Table 2). Samples from below cloud were preferentially studied in this investigation (cases 1-6), as they likely included the main contributions of CCN and INPs at this time of year; however, one exposure from above cloud (case 7) is considered in Sect. 3.4.
Scanning electron microscopy
Using a Phillips FEI XL30 Environmental Scanning Electron Microscope with Field-Emission Gun (ESEM-FEG) in partnership with an energy-dispersive X-ray spectroscopy (EDS) system, automated single-particle analysis of the ACCACIA filter samples was undertaken at the University of Manchester's Williamson Research Centre (Hand et al., 2010; Johnson et al., 2012). The coupled EDS system moves the sample stage through a pre-set grid to produce automated particle analysis of each sample. Particles are detected via the intensity of the backscattered electron signal. Grey-scale thresholds were set to identify particles under contrast with the background filter. The electron beam was then rastered over 70 % of the detected particle surface to produce an X-ray spectrum: relative weight percentages of elements from C to Zn were recorded from the spectrum, measured and fitted with the EDAX™ Genesis software. For each measurement, standardless ZAF corrections were applied, i.e. corrections relating to atomic number, absorption and fluorescence. The parameters chosen for this analysis are listed in Table 3. A carbon coating was applied to each sample to allow high-vacuum mode to be used. The minimum particle sizes detectable by each scan correspond to 4 pixels in the given image and are listed in Table 3. The total number of particles scanned in the seven cases presented in this study is also listed in Table 3.
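The automated detection step, thresholding the backscattered-electron image and discarding blobs smaller than the 4-pixel minimum, can be sketched as a flood-fill labelling routine. The grey-scale values and threshold below are hypothetical; the real software works on the instrument's raw signal.

```python
def detect_particles(image, threshold, min_pixels=4):
    """Label above-threshold pixel blobs; keep blobs of at least min_pixels.

    image is a 2-D list of grey-scale values (backscattered-electron
    intensity); 4-connected flood fill groups contiguous bright pixels
    into candidate particles.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    particles = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or image[r][c] < threshold:
                continue
            stack, blob = [(r, c)], []
            seen[r][c] = True
            while stack:
                i, j = stack.pop()
                blob.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols \
                            and not seen[ni][nj] and image[ni][nj] >= threshold:
                        seen[ni][nj] = True
                        stack.append((ni, nj))
            if len(blob) >= min_pixels:  # enforce the 4-pixel minimum size
                particles.append(blob)
    return particles
```

Each surviving blob would then be handed to the EDS stage for spectrum acquisition over part of its surface.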
To act as a calibration, a blank filter pair was also analysed, as Nuclepore filters have been shown to carry contaminants (Behrenfeldt et al., 2008). These were taken aboard the aircraft and treated similarly to the exposed filters. A small number of particles were identified: these appeared almost transparent under contrast, and the majority produced a spectrum similar to the background filter. There was also a notable metallic influence, and some particles were found to have moderate Cr or Fe fractions. These particles were found to be few in number and so should not greatly affect the outcome of this analysis.
Previous studies (e.g. Kandler et al., 2007; Hand et al., 2010; Formenti et al., 2011; Weinbruch et al., 2012) have shown that there are limitations to consider with this technique. The polycarbonate filters used during ACCACIA contaminate measurements of C and O in each particle detected. Studies using these filters have excluded C and O from their analysis to combat this issue (e.g. Krejci et al., 2005; Behrenfeldt et al., 2008; Hand et al., 2010). In this study, approximate thresholds of C and O are used to identify carbonaceous and biogenic species. However, only elements with Z ≥ 11 (i.e. sodium and heavier) are used precisely within the classification scheme for the compositional analysis presented.
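Restricting the quantitative analysis to the heavier elements implies renormalising each spectrum once the substrate-contaminated light elements are dropped. A minimal sketch follows; the exact set of excluded elements is an assumption based on the description above.

```python
# Light elements contaminated by the polycarbonate substrate (assumed set)
LIGHT_ELEMENTS = {"C", "N", "O"}

def renormalise(weights):
    """Drop light elements and rescale the rest to sum to 100 wt %.

    weights: dict mapping element symbol -> EDS weight percent.
    Returns {} if nothing heavier than the excluded set was detected.
    """
    heavy = {el: w for el, w in weights.items() if el not in LIGHT_ELEMENTS}
    total = sum(heavy.values())
    if total == 0:
        return {}
    return {el: 100.0 * w / total for el, w in heavy.items()}
```

For example, a spectrum dominated by substrate C and O but containing Si and Fe would, after renormalisation, be described purely by its Si : Fe ratio.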
The electron beam produced by the scanning electron microscope (SEM) can negatively interact with some particle species, causing them to deform (Behrenfeldt et al., 2008).This is caused by the evaporation of the volatile components of the particles, either under the electron beam or as a result of the high vacuum (Li et al., 2003;Krejci et al., 2005).Little can be done to prevent this and it is difficult to manage when applying automated particle analysis.Behrenfeldt et al. (2008) found that this phenomenon only had a small impact on their results and could be disregarded.As a result, it can be assumed that the particles analysed by this method are dry and that any volatile components will have evaporated (Li et al., 2003).a Filter was collected mostly under clear conditions, although some in-cloud sampling was encountered at the end of the exposure.b The total volume of air sampled during case 1 is high given its exposure length due to higher-than-average flow rates applied during that flight.c Contaminated measurement, likely due to condensation on detection surface.There are also several implicit factors, which may contribute some degree of uncertainty to the quantitative composition measurements gained.For example, errors can be introduced by uncertainties in the spectral fitting of the EDAX ™ software (Krejci et al., 2005) or from the differing geometries of the individual particles measured (Kandler et al., 2007).Also, compositional data for particles less than 0.5 µm suffer from increased uncertainty (Kandler et al., 2011).The sample sizes considered here were too large to consider individual corrections; therefore, the measurements from the EDS analysis were taken as approximate values.Similarly, manual inspection of the images and spectra was not feasible due to the sample size and so an algorithm was imposed to remove any filter artefacts.These were typically a result of the software misclassifying the filter background as a particle itself and therefore 
displayed only the distinctive background signature. This background spectrum presented different characteristics from those considered to be carbon based; the artefacts were noisy, with very low detections in all but a few of the elements, whereas the particles thought to be carbonaceous displayed zero counts in some elements, as expected. The fraction of detected particles removed by this algorithm was typically low (∼ 4-5 %), yet it is not possible to conclude whether any real particles were removed. Krejci et al. (2005) estimated the total error involved in this technique to be around 10 % and found this value to be dependent on the sample and elements analysed.
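The artefact screen described above can be expressed as a simple rule. The sketch below is a hypothetical illustration, not the software actually used: the function name, element set and count threshold are all assumptions chosen to mimic the described behaviour (uniformly low, noisy counts in nearly every channel versus true zeros in some channels for real particles).

```python
# Illustrative sketch of the filter-artefact screen (assumed names/thresholds).

def is_filter_artefact(counts, noise_threshold=5):
    """Flag a spectrum as a filter artefact if it shows uniformly low,
    non-zero counts across (almost) all elements, rather than the
    zero-in-some / strong-in-others pattern of a real particle."""
    low = [0 < c <= noise_threshold for c in counts.values()]
    zero = [c == 0 for c in counts.values()]
    # Artefacts: noisy background in nearly every channel, no true zeros.
    return sum(low) >= 0.8 * len(counts) and not any(zero)

# Example spectra (X-ray counts per element, illustrative values):
spectra = {
    "p1": {"C": 120, "O": 40, "Si": 0, "Al": 0, "S": 3},  # carbonaceous-like
    "p2": {"C": 3, "O": 4, "Si": 2, "Al": 3, "S": 2},     # background-like
}
kept = {k: v for k, v in spectra.items() if not is_filter_artefact(v)}
```

With these example spectra, only `p1` survives the screen, consistent with the ∼ 4-5 % removal fraction reported above being dominated by background-like detections.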
Classifications
Elemental information gained from EDS analysis was taken further to identify particle species relevant to the atmosphere. The classification scheme applied in this investigation was derived from a variety of sources (e.g. Krejci et al., 2005; Geng et al., 2010; Hand et al., 2010); however, it is most prominently based upon the detailed scheme presented by Kandler et al. (2011). This scheme is detailed in Table S1.
Carbonaceous and biogenic
Approximate thresholds of C and O were utilised to distinguish carbonaceous and biological particles (Mamane and Noll, 1985). This approach has been adopted by other studies that applied a polycarbonate substrate (e.g. Kandler et al., 2007; Behrenfeldt et al., 2008; Hand et al., 2010). For example, particles included in this category could be soot particles or pollen grains (Behrenfeldt et al., 2008). Soot has been previously identified by introducing other properties into the classification process; for example, Hara et al. (2003) and Hand et al. (2010) categorised it via its characteristic chain-aggregate morphology. Due to the sample size, inspection of particle morphologies was not feasible in this study; therefore, carbonaceous particles were not specifically categorised.
Carbonaceous and biogenic particles have been segregated using compositional information in previous studies. Mamane and Noll (1985) measured distinctive small peaks in P, S, K and/or Ca with a dominating C influence in pollen grains. Similarly, Geng et al. (2010) utilised a comparable threshold, also considering small amounts of Cl, S, K, N and/or P as indicators for biogenic species, as these elements are important nutrients for plant life (Steinnes et al., 2000).
The carbonaceous and biogenic classifications likely include particles with some volatile component, which cannot be measured by this technique (see Sect. 2.3). The partial or complete evaporation of these particles therefore renders the presented fraction a lower limit; i.e. only the non-volatile cases could be measured. Coupled with the difficulty of distinguishing these particles from the filter background, it is important to note that the fractions of the carbonaceous and biogenic classes presented by this study are approximations that likely underestimate the true organic loading on these filters.
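The threshold approach above can be sketched as a rule-based split; the numerical thresholds and nutrient window below are illustrative assumptions only (the values actually applied follow the scheme in Table S1):

```python
def classify_organic(atom_pct):
    """Hypothetical C/O threshold split (values illustrative):
    carbon-dominated spectra -> carbonaceous; if small nutrient peaks
    (P, S, K, Ca, Cl, N) accompany the dominant C signal -> biogenic."""
    c, o = atom_pct.get("C", 0.0), atom_pct.get("O", 0.0)
    if c < 50.0 or c + o < 80.0:          # assumed C/O dominance thresholds
        return None                        # not in the carbonaceous/biogenic group
    nutrients = ("P", "S", "K", "Ca", "Cl", "N")
    # Small (assumed 0.5-5 %) nutrient peaks indicate biological material.
    if any(0.5 <= atom_pct.get(el, 0.0) <= 5.0 for el in nutrients):
        return "biogenic"
    return "carbonaceous"
```

For example, a spectrum with 90 % C and 8 % O and no nutrient peaks would fall into the carbonaceous class, while the same C dominance with a small K and P signal would be flagged biogenic.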
Sulfates, fresh and mixed chlorides
Sodium chloride (NaCl) from sea salt can enter the atmosphere as a consequence of sea-surface winds, and these particles remain predominantly Na- and Cl-based for a short period of time. The lifetime of Cl is hindered by the tendency of these particles to accumulate sulfate in the atmosphere, thus producing particles primarily composed of Na-S (Hand et al., 2010). Due to this short lifetime, its presence is often used to indicate a fresh contribution from the sea surface (Hand et al., 2010). It is a common conclusion that a lack of Cl-containing particles and/or a significant fraction of S in a particulate sample is suggestive of aged aerosol (Behrenfeldt et al., 2008; Hand et al., 2010).
S-containing aerosol can indicate an anthropogenic influence in a sample, as these particles are thought to have undergone a reaction with sulfur oxides (Geng et al., 2010). However, the Arctic Ocean is a natural source of dimethylsulfide (DMS), a gas which can also interact in the atmosphere to form sulfur dioxide. The contribution of this source is greater during the summer months due to decreased sea ice (Quinn et al., 2007) and is thought to have had little influence during the dates of this study. The gas source cannot be concluded here, but it can be stated that Na-S particles will have been present in the atmosphere for a sufficient length of time to allow for the interaction to take place.
The mixed chlorides category requires that particles must still be predominantly Na- and Cl-based, with a notable S contribution. This category also accounts for metallic contributions to the base NaCl species. The sulfate and fresh chloride categories are limited to the extremes of this distribution, with only S- and Cl-dominated signatures allowed respectively.
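The partition described in this subsection can be illustrated with a small decision function; the percentage thresholds are assumptions for illustration, not the values of the applied scheme:

```python
def classify_na_cl_s(atom_pct):
    """Illustrative partition of sea-salt-related particles (assumed thresholds):
      - sulfate:        S-dominated extreme
      - fresh chloride: Na- and Cl-dominated with little S
      - mixed chloride: Na/Cl base with a notable S contribution
    """
    na = atom_pct.get("Na", 0.0)
    cl = atom_pct.get("Cl", 0.0)
    s = atom_pct.get("S", 0.0)
    if s > 10.0 and s > cl:                # S-dominated signature
        return "sulfate"
    if na + cl > 50.0:                     # predominantly Na- and Cl-based
        return "fresh chloride" if s < 2.0 else "mixed chloride"
    return "other"
```

Under these assumed thresholds, a nearly pure NaCl spectrum is classed as a fresh chloride, the same base composition with a few percent S becomes a mixed chloride, and an S-dominated spectrum falls into the sulfate extreme.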
Silicates, mixed silicates, Ca-rich and gypsum
Complex internal mixing in particles is often indicative of a natural origin (e.g. Conny and Norris, 2011; Hoose and Möhler, 2012); however, coagulated particles can also be produced by high-temperature anthropogenic activities. A strong method of sourcing internally mixed particles involves the identification of Si: particles consisting of this element and various mixed metals are likely to be naturally occurring mineral dusts, whereas industrial by-products may lack this element in high quantities (Conny and Norris, 2011). Mineral dusts are typically composed of a variety of elements and tend to include significant fractions of Si and Al, with more minor contributions from Na, Mg, K, Ca and/or Fe amongst others.
Dusts are crucial constituents of the aerosol population as they are proven INPs (Zimmermann et al., 2008; Murray et al., 2012; Yakobi-Hancock et al., 2013). However, they can also act as CCN; for example, Ca-based dusts have been shown to form hygroscopic particles after reaction with nitrates in the atmosphere (Krueger et al., 2003). The springtime concentrations of nitrates in the Arctic (measured at the Alert sampling station in Canada) followed an increasing trend over 1990-2003 (Quinn et al., 2007), suggesting it is probable that this interaction could take place in this environment. Alternatively, internally mixed particles consisting of dusts, sulfates and sea salt can act as giant CCN (Andreae and Rosenfeld, 2008). In this study, the presence of such particles may be inferred by the detection of S or Cl alongside the typical dust-like signatures. This can occur if the dust in question has been transported over long distances and has thus undergone cloud processing or acidification reactions (Mamane and Noll, 1985; Behrenfeldt et al., 2008). Or, more simply, these could be the result of a sea salt or sulfate coating on a mineral dust particle; such mixtures have been modelled to have significant effects on warm clouds by augmenting the CCN population (Levin et al., 2005). Complex internal mixtures containing Si, S and/or Cl are therefore indicated in this study under the classification mixed silicates.
The mineral phase of aluminosilicates cannot be identified using the EDS method as these particles are closely related compositionally. For this reason, the specific phases of dusts observed in SEM studies are often not quantified (Kandler et al., 2007; Hand et al., 2010). Instead, those studies considered the individual X-ray counts and the ratios between the measured elements to classify their sampled particles into approximate groups such as silicates and carbonates. Al, Ca and K have often been considered indicative of aluminosilicates (such as kaolinite), carbonate minerals - such as calcite (CaCO3) and dolomite (CaMg(CO3)2) - and clays/feldspars respectively (Formenti et al., 2011). Due to the lack of a quantitative C measurement, carbonate minerals were inferred from their Ca and Mg abundances in this study. Some mineral classes have a distinct elemental relationship and these can be classified; for example, gypsum (CaSO4·2H2O) samples typically do not deviate from their base chemical formula (Kandler et al., 2007). By this reasoning, gypsum was included as its own classification, whereas the vast majority of mineral dusts observed were grouped into the silicates, mixed silicates and Ca-rich categories, dependent on the relative quantities of Si, S and Ca they contained.
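The mineral grouping described above can be sketched as follows. The ratio windows and percentage thresholds are illustrative assumptions; the key idea is that gypsum (CaSO4·2H2O) implies a Ca:S ratio near 1:1, Si dominance indicates the silicate groups, and Ca (with little Si) suggests carbonate-like "Ca-rich" particles:

```python
def classify_mineral(atom_pct):
    """Illustrative mineral grouping (assumed thresholds, not the applied scheme)."""
    si = atom_pct.get("Si", 0.0)
    ca = atom_pct.get("Ca", 0.0)
    s = atom_pct.get("S", 0.0)
    cl = atom_pct.get("Cl", 0.0)
    # Gypsum: stoichiometric Ca:S close to 1:1 with both clearly detected.
    if ca > 5.0 and s > 5.0 and 0.7 <= ca / s <= 1.3:
        return "gypsum"
    # Si-dominated: silicate, or mixed silicate if S/Cl are also notable.
    if si > 10.0:
        return "mixed silicate" if (s > 2.0 or cl > 2.0) else "silicate"
    # Carbonate-like particles inferred from Ca in the absence of Si.
    if ca > 10.0:
        return "Ca-rich"
    return "other"
```

Ordering matters here: the gypsum test must precede the Ca-rich test, otherwise stoichiometric Ca-S particles would be absorbed into the broader Ca-rich class.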
Phosphates and metallics
These groups include particles with significant influences from P and transition metals. Particles classified as phosphates in this study may include those composed of apatite - a Ca- and P-based mineral group - as factories which process these minerals are common in the nearby Kola Peninsula, Russia (Reimann et al., 2000).
The presence of transition metals can be viewed as an indicator of an industrial origin (Weinbruch et al., 2012). Potential local anthropogenic sources for the metallic particles include the coal burning facilities on Svalbard (in Longyearbyen and Barentsburg) or various metal smelters in the Kola Peninsula, Russia (Weinbruch et al., 2012). The metals included in the EDS analysis were Ti, Cr, Fe, Ni, Cu and Zn. Contributions from these may be attributable to anthropogenic and/or natural sources and could be in the form of metal oxides or constituents of complex minerals (Hand et al., 2010). Of those measured in this study, Fe and Al are the most likely to originate from a variety of sources as they are processed widely (Steinnes et al., 2000) and are common constituents in silicate-based dusts. Similarly, Zn may also be associated with biological material in addition to smelting emissions (Steinnes et al., 2000).
Biomass tracers
This group was introduced out of necessity given the results obtained. The other classifications were expected from hypothesised local sources; this group, however, was needed to account for the high quantity of K-based particles observed in one of the flights. These particles have negligible measurements of Si and are not thought to be mineralogical in nature. This category has been dubbed "biomass tracers" as several studies (e.g. Andreae, 1983; Chou et al., 2008; Hand et al., 2010; Quennehen et al., 2012) have identified particles sourced from biomass burning events to be rich in this element. Such K-rich particles have been found to be prominent in forest fire and anthropogenic combustion emissions. It is unlikely that such particles could be sourced in the Arctic; therefore, their presence may imply transport from elsewhere (Quennehen et al., 2012). Biomass burning produces particles known as bottom ashes, which differ from the fly ash particles that are typically emitted during incomplete fossil fuel combustion (Umo et al., 2015). Activities which may produce these constituents include firewood or agricultural burning (Andreae, 1983) and wildfires in warmer climates (Seiler and Crutzen, 1980).
Other
Particles which are not classified by the applied scheme are classed as other. The implication is that these particles are mixed. Figure 2 illustrates the difficulty with mixed particles; though local sites on the particle may be dominated by certain elements, the SEM analysis does not provide a spatial map of the elemental distribution across each particle surface.
Mixed particles are typically either unclassified or classified by their most abundant elements. The particle illustrated in Fig. 2 would be classified as a silicate dust as it is mixed but has a dominating Si influence. The size of the samples prevents manual inspection of every unclassified particle; therefore, the abundance of mixed particles within a data set must be inferred from the quantity quoted as "Other".
HYSPLIT back trajectories
Air mass histories were calculated using HYSPLIT for each of the filter exposures to provide context on the environmental conditions in which they were sampled. Figure 3 shows the spatial extent of these trajectories in the top two panels, and the mean altitudes covered are displayed in the bottom panel.
The mean altitude of the trajectories remains within the lowest 1.5 km of the atmosphere. The modelled altitude typically increases with time backwards along the trajectory. Case 5 is the exception to this trend, as consistently low-altitude trajectories are modelled for the full duration shown. Also, the majority of these trajectories are reasonably smooth; however, a significant descent in height is modelled in case 4 at approximately −2 days.
A north-easterly wind was observed for cases 1 to 3, bringing air from over the dense Arctic sea ice to the region of interest to the south-east of Svalbard. By −6 days, differences between the air mass histories can be seen. From Fig. 3, cases 1 and 2 show some similarities, with the latter displaying more anticlockwise curvature than the former. Trajectories from case 3 are distinct from these two, with cyclonic curvature around the immediate vicinity of Svalbard and Greenland.
There is a clear partition in the direction of the trajectories as the spring campaign progressed. The first three exposures had source regions to the north and west of the exposure locations, whilst the latter three primarily sampled from the east. These latter trajectories are also more compact than the first three cases (Fig. 3). The air from cases 4 and 5 is traced back across the northern coast of Russia, whilst case 6 covers both the northern coast of Russia and Scandinavia. A large portion of these trajectories are clustered towards the continent, suggesting a strong influence from this region.
These two trajectory groups can be dissected further: two specific pairs can be identified (cases 1 and 2; 4 and 5) which display similar paths, whilst cases 3 and 6 appear unique in comparison. Overall, there appears to be a clear shift in the source region of these boundary layer exposures as the campaign progresses: from over the dense Arctic sea ice, through Greenland and northern Russia, to the European continent.
Aerosol size
To investigate any issues with inlet collection efficiency (see Sect. 2.2), size distributions from the filter data were constructed and compared with arithmetic means of the wing-mounted probe data over each exposure period. Number size distributions were computed similarly to Chou et al. (2008); namely, the total number of particles detected in each scan was normalised by the area covered and the total volume of air sampled, then scaled to the full filter area. Figure 4 illustrates these comparisons for each below-cloud filter pair analysed. Data from the PCASP, CAS-DPOL and CDP instruments are shown for comparison. These data use the standard scattering cross sections for the aircraft probes, and no refractive index corrections were applied due to the expected mixed aerosol population.
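The normalisation described above can be sketched as follows. All argument names are illustrative, and the binning into dN/dlogD is an assumption about how the distributions were presented; the essential steps (scale scanned counts to the full filter area, divide by the sampled air volume) follow the description after Chou et al. (2008):

```python
import math

def filter_number_size_distribution(counts_per_bin, bin_edges_um,
                                    scanned_area_mm2, filter_area_mm2,
                                    air_volume_cm3):
    """Sketch of the filter-derived number size distribution:
    scale counts from the scanned area to the full filter, normalise by
    the sampled air volume, and divide by the log-width of each size bin
    to give dN/dlogD in cm^-3."""
    scale = filter_area_mm2 / scanned_area_mm2
    dNdlogD = []
    for i, n in enumerate(counts_per_bin):
        dlogD = math.log10(bin_edges_um[i + 1] / bin_edges_um[i])
        dNdlogD.append(n * scale / air_volume_cm3 / dlogD)
    return dNdlogD
```

For instance, 10 particles counted in a 1 mm² scan of a 10 mm² filter, over 100 cm³ of sampled air and a decade-wide bin, yield dN/dlogD = 1.0 cm⁻³.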
Agreement between the filter-derived and the probe data appears dependent on the conditions sampled. For example, case 3 was exposed during a section where cloud haze was encountered, whereas cases 1, 5 and 6 were cleanly exposed out of cloud. Cases 2 and 4 sampled small amounts of cloud at the end of their exposures - at which point the probes measured some quantity of cloud droplets and/or swollen aerosol particles - therefore, the probe distributions differ somewhat from the filter-derived particle distributions. Mean relative humidity (RH) values (Table 2) from each exposure were high (> 90 %) and the disagreement between filter and probe data in Fig. 4 appears to correlate with these values. Case 1 displays good agreement under lower RH conditions, whilst cases 2, 3, 4 and 5 display poorer agreement under higher RH conditions. However, case 6 displays good agreement under relatively high RH. The derived RH values are similar; therefore, these trends could be circumstantial. The RH measure for case 3 is not trusted and is likely a consequence of condensation on the detection surface. Qualitatively, there is reasonable agreement between the probe and SEM-derived number size distributions - providing confidence in the analysis presented - but this similarly highlights the limitations of the sample inlets on the aircraft for coarse aerosol, as described by Trembath (2013). The discrepancies between these distributions, in relation to the inlet efficiency issues, are addressed further in Sect. 4.1.
Aerosol composition
The particle classifications detailed in Table S1 in the Supplement were applied to the compositional data obtained for each analysed filter pair. The dependence of composition on size is shown in Fig. 5, where only sizes which display good agreement with the wing-mounted probes have been included (∼ 0.5 to ∼ 10 µm). Data outside this range were viewed as unrepresentative of the population, given the discrepancies at small and large sizes in Fig. 4.
Clear trends become apparent when implementing this size-segregated approach. Silicate dusts are identified in all samples, with greater concentrations found at larger sizes in all cases except the last. These dusts are especially abundant in the first three cases. Cases 4 and 5 are dominated by fresh chlorides at all sizes except the largest bins, and cases 3 and 6 also contained significant fractions of this species. Case 6 differs from the others, displaying increased Ca-rich, mixed chloride and other fractions. Similarly, the high sulfate loading in case 1 is unique, yet the composition trends of this case can be associated with the subsequent flight via the abundance of silicates; a link that is not so clear between cases 5 and 6.
Although the mineral phase cannot be identified, elemental ratios can be used to identify trends in the dust samples. For example, feldspars can be rich in Ca, K or Na, whilst clays may have significant fractions of Mg and/or Fe. The elemental ratios displayed in Fig. 6 are variable across the campaign. This variability is heightened in some ratios with respect to others; from Fig. 6, the K / Al and Ca / Al ratios are changeable but the Mg / Si ratio is low for all cases. The mean and median values of the Si / Al ratio do not differ substantially between the flights, whilst the K / Al, Fe / Si and Ca / Al ratios are heightened in case 6.
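The per-particle ratio statistics behind figures such as Fig. 6 can be computed with a few lines; function and variable names here are illustrative, and the handling of zero-denominator particles (skipping them) is an assumption:

```python
from statistics import mean, median

def elemental_ratios(particles, num="K", den="Al"):
    """Per-particle num/den ratios (e.g. K / Al) for dust-classed particles;
    particles with no counts in the denominator element are skipped.
    Returns the mean and median of the ratio distribution, or None if
    no particle has a usable denominator."""
    ratios = [p[num] / p[den] for p in particles if p.get(den, 0) > 0]
    if not ratios:
        return None
    return {"mean": mean(ratios), "median": median(ratios)}
```

Reporting both the mean and the median, as in Fig. 6, exposes the skew of the ratio distribution: a mean well above the median indicates a tail of high-ratio particles rather than a uniform compositional shift.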
Comparison between below-and above-cloud samples
The samples detailed previously were all exposed below cloud and were chosen as the particles collected likely influenced the microphysics of the clouds that formed above these collection altitudes. Most of these cases appear to be influenced by local sources; cases 4 and 5 in particular are predominantly composed of fresh chlorides. However, these cases do not obviously address the involvement of aerosol particles from distant sources.
As a test case, a filter pair exposed above cloud was analysed to compare the particle compositions. Flight B764 provided consecutive filter exposures below and above (cases 4 and 7) a stratus cloud deck, approximately 1 h apart, allowing for a comparison between the respective compositional characteristics. The cloud located between the exposures was mixed-phase, with a measured sub-adiabatic CDP liquid water content profile. This suggests that entrainment of aerosol from above may have been an important source contributing to changes in the cloud microphysical properties (Jackson et al., 2012), or that the liquid water in the cloud had been depleted via precipitation processes. Air mass back trajectories varied little between the exposures, with both cases influenced by air from over the Barents Sea and the coast of northern Russia (see Fig. 3). The conditions sampled during each of these exposures are summarised in Table 2.
Figure 7 displays the compositional differences between the below- and above-cloud samples. The fraction of unclassified particles is greater in the above-cloud example for sizes > 0.5 µm (panel b), whilst a similar fraction was observed in both cases for sizes ≤ 0.5 µm (panel a). Similarly, a comparable fraction of silicates is identified on both filter pairs. Greater fractions of fresh chlorides are found in case 4; however, a moderate loading of sea salt - and aged sea salt - is still identified in case 7. Case 7 also has a greater sulfate loading, and the absolute number of particles detected was lower than in the below-cloud case. The size-segregated classifications, shown in panel c of Fig. 7, display significant unclassified fractions across most sizes, with increased contributions at < 1 and > 3 µm. The dominating species changes from unclassified to fresh chlorides to silicates as particle size increases, and significant mixed chloride fractions are also observed at small sizes.
Size distributions
The filter-derived and probe-averaged size distributions from Sect. 3.2 compare reasonably well. The disagreement at the size limits (∼ 0.5 and ∼ 10 µm) of these distributions implies that the inlet collection and filter efficiency issues discussed in Sect. 2.2 were influencing these samples. These collection issues have been found to have the greatest impact on the coarse mode (Andreae et al., 2000; Formenti et al., 2003). The results shown in Fig. 4 reflect this, where the agreement between the filter and probe distributions decreases with increasing size (above ∼ 1-2 µm). Coarse-mode enhancement relative to the probe data is not observed to the same extent as in Chou et al. (2008). Reasonable agreement between these data is observed up to approximately 10 µm, as also concluded by Johnson et al. (2012) - whose samples were analysed using the same facilities in the Williamson Research Centre - and Chou et al. (2008).
Disagreement below 0.5 µm could be due to particles either passing through the filter pores at the time of exposure or being left undetected by the EDS analysis due to a decreasing signal-to-noise ratio and increasing interaction volume at this size limit (Kandler et al., 2011). Chou et al. (2008) found that their accumulation-mode filter size distributions derived from transmission electron microscopy (TEM) correlated better with observational data obtained from a cabin-based PCASP variation sampling from a Counterflow Virtual Impactor inlet (CVI-PCASP) than their SEM-derived distributions. Given the similarity between the filtration techniques applied, this may suggest that the disagreement between the accumulation-mode distributions observed here could be a result of the limitations of the SEM technique rather than an issue with the filter sampling on the aircraft. However, Chou et al. (2008) also identified differences between the performance of their CVI-PCASP and externally mounted PCASP - with the former consistently overcounting compared to the latter - suggesting that possible inlet losses could be similarly affecting the wing-mounted PCASP used in this study. In summary, the SEM technique, the filter mechanism collection efficiency, and possible inlet losses could all be introducing some magnitude of error to the comparisons shown in Fig. 4, and it is not trivial to identify which source of error is the most influential in these cases.
Cases 1 to 5
The compositional trends observed in Fig. 5 are typically different between each case. Compositional dominance varies from sulfates to silicates to fresh chlorides through the first five cases. Some particle classes, e.g. carbonaceous or sulfates, are mostly observed at sizes < 1 µm (excluding case 1), whilst others (e.g. silicates) are more common at larger particle sizes.
The influence of sulfates, silicates and fresh chlorides varies substantially in the first five cases; variability which could be inferred from the differences in the respective back trajectories. There are distinct similarities between the trends derived for cases 4 and 5, with dominant fresh chloride and silicate signatures observed (Fig. 5). Both cases display a similar mixed chloride loading between sizes 0.5 and 1 µm; these particles are likely sea salts mixed with sulfates.
The chloride classifications are not ubiquitously observed in the first five cases, with particularly low measurements of these species in cases 1 and 2. This suggests that the ocean was not a strong source of particles in these cases, whereas the significance of this source is clear in cases 3, 4 and 5. This hypothesis is strengthened by the back trajectories calculated for these exposures (Fig. 3); the air mass source for cases 1 and 2 was the frozen Arctic Ocean, whilst cases 4 and 5 both had low-altitude trajectories across the sea surface. During the transition over the ocean, sea salts could have been lifted into the air stream. Case 1 displays a high sulfate signature - a characteristic unique among the cases - suggesting that these particles had sufficient time to interact with sulfate gases (from either anthropogenic or marine sources; see Sect. 2.4.2) during transit over the sea ice. There is a common link between the first three cases in their respective silicate loadings; the measured amount of silicate-based dusts is high in these cases, with a maximum reached during case 2. Potential sources of these dusts are discussed further in Sect. 4.2.
Case 6
Case 6 was exposed in a different location - to the north-west of Svalbard instead of the south-east - than the first five cases (see Table 1). The particle loading was much greater for this case, as indicated by the large number of particles collected (Fig. 5) and the very short sampling time (Table 2). The comparatively greater number concentration measured agrees with the aerosol climatology presented by Tunved et al. (2013) and with results from the Arctic Study of Tropospheric Aerosol and Radiation (ASTAR) 2000 campaign (Hara et al., 2003), where trajectories from northern Russia and Europe coincided with noted "haze" events with increased particle loadings. Additionally, there are distinct compositional differences between cases 1-5 and case 6. This case is the only one not to be dominated by silicates at super-micron sizes and has the greatest proportion of Ca-rich particles, biomass tracers and unclassified particles across the sizes considered. Case 6 is unique in its dominant particle categories, their respective size evolution, and its air mass back trajectory, emphasising its contrast with the other cases.
The biomass tracer fraction is only sufficiently large to be observed in case 6. These particles are mostly small in size, as shown in Fig. 5. Andreae (1983) has previously shown that there is a strong relationship between biomass particle species and particle size below 2 µm. The K measurements in these particles mirror the quantities measured by Umo et al. (2015) for bottom ashes, adding confidence to their identification as biomass products. Modelled back trajectories for case 6 hail from northern Russia and the European continent. Potential sources of these particles could include boreal forest fire events similar to those sampled by Quennehen et al. (2012), which were also observed at approximately the same time of year, or European biomass activities.
The Ca-rich particles observed strongly in case 6 are distinct and not observed to the same magnitude in the other flights, implying a unique source. It is possible that these are naturally occurring carbonate dusts; however, Umo et al. (2015) also measured several species of Ca-based dusts in their wood and bottom ash samples, suggesting that these could also originate from biomass burning activities. The strong detection of Ca-rich particles alongside the K-dominant biomass particles supports this conclusion here. The relative prevalence of K-rich and Ca-rich particles found in the sub- and super-micron ranges mirrors the relationship observed in the biomass burning study by Andreae (1983). The large Ca signature is also observed in the silicate and mixed silicate spectra for this case (Fig. S2 in the Supplement), and consequently affects the K / Al and Ca / Al ratios (shown in Fig. 6). It is unclear whether these enhanced values are a result of internal mixing of silicates with the Ca- or K-rich biomass particles or whether they are real feldspar signatures (as K-feldspar or plagioclase). The Fe / Si ratio is also elevated for this case; this could be due to increased detection of clay-like dusts or hematite, and/or internal mixing with anthropogenic smelting emissions.
Sourcing the dust
Unexpectedly, large fractions of silicate dusts are observed in every case. These filters were collected in March, when the majority of the surrounding surface was snow covered; therefore, there is no obvious local source of mineral dust. Weinbruch et al. (2012) also identified large dust fractions in their samples collected at Ny-Ålesund in April 2008, and these dusts would likely act as a source of ice nucleating particles for clouds in this region. The presence of dust in such quantities could be due to some local source, long-range transport or a combination of these two avenues. To better understand the characteristics of these dusts, the elemental ratios in Fig. 6 can be considered. In general, the consistency in the median Si / Al ratio between each case suggests that the typical composition of the aluminosilicates has low variability, with each distribution skewed differently to account for the differences in the mean and variance values.
Elemental ratios can be used to infer a source of the mineral dusts. Several studies have investigated characteristic ratios of dusts from a variety of arid regions. For example, the African dust study by Formenti et al. (2008) calculated these ratios from airborne filter data and derived Si / Al, K / Al and Ca / Al ratios of approximately 3, 0.25 and 0.5 respectively. These values are within the limits of those calculated in this study (Fig. 6); however, the lack of close agreement suggests that these sources may not be related to the dusts analysed here. Zhang et al. (2001) presented these ratios for dusts collected at various Asian sites, and their Tibetan and Loess Plateau samples were found to have Si / Al ratios of 4.6 and 2.5 respectively. The Loess values are consistent with the mean values obtained in all cases, whereas the Tibetan values lie within the upper bounds of samples 3 and 5. The Loess samples also had a Ca / Al ratio of 2.7, lying between the median and mean values obtained for case 6 and within the upper bound of case 3; however, it is much greater than the average ratio derived for the majority of these cases. Their K / Al ratio was found to be 0.95, consistent with the first five cases but not with case 6. This could be due to the heightened K influence from biomass sources in case 6, but it could also be coincidental, and care must be taken when attributing a transported dust sample to a given source via this method. The dust collected here does appear to have more in common with the Asian samples than the African samples; however, the composition of dusts originating from the same source region is not always consistent and can vary between close geographical locations (Glen and Brooks, 2013). It is also unclear how these ratios would be affected by transportation, as atmospheric processing would likely alter the composition of ageing dust with respect to the freshly emitted dust characteristics reported in these studies. Despite this, it is worth noting that Liu
et al. (2015) identified high-altitude plumes during the springtime ACCACIA campaign which hailed from the Asian continent. It is possible that dusts from these sources were advected over large distances in addition to the black carbon explicitly measured and modelled by Liu et al. (2015). The increase in mean trajectory altitude with time, as shown in Fig. 3, supports this theory, as the descent of air from > 1000 m could be drawing dusts down to the low altitudes considered. The theory that Asian dust contributes to the Arctic haze phenomenon is not new, and observations have indicated that this is the case (e.g. Rahn et al., 1977). However, models have not been able to produce conclusive evidence (Quinn et al., 2007). A key question for this hypothesis is how the dust is lofted up to high altitudes in the atmosphere, and subsequently undergoes this long-range transportation, without experiencing cloud processing. It is possible that frontal uplifts at the source are responsible, with weakly scavenging mixed-phase clouds along the trajectories allowing the dust loading to remain so high.
Mixed aerosol particles
The degree of mixing in each case is different - as displayed by the variability in mean fractions shown in Fig. S2 in the Supplement - thus tying in with the differences between the air mass histories. Particles that have undergone long-range transport would likely have enhanced internal mixing and may not be adequately classified by the scheme employed here. Unclassified particles are prevalent in cases 3, 6 and 7 (Fig. 5). Variability within the categories (as seen in Fig. S2 in the Supplement) highlights the importance of treating the classifications with caution: they provide a good representation of the particle species collected, yet the ability of the criteria to account for mixed species is not always efficient.
The influence of unclassified particles on the population is most evident in the higher-altitude case: case 7 (Fig. 7) is distinctly different from its below-cloud counterpart (case 4, Fig. 5). In addition to the enhanced other fraction, large mixed chloride, sulfate and mixed silicate loadings are also identified above cloud (Fig. 7), classifications which could be attributed to anthropogenic influences. In this case, it is likely that these particles had undergone mixing over long-range transport. The contrast between the below- and above-cloud cases emphasises the segregation of the Arctic aerosol sources: whilst being influenced by local surface sources, the Arctic atmosphere is also affected by this influx of long-range transported aerosol particles - the Arctic haze - during the spring months (Barrie, 1986; Shaw, 1995; Liu et al., 2015). Both of these aerosol pathways will affect the cloud microphysics, and further investigation is required to better understand the importance of each. The particle classes detected in cases 4 and 7 could have interacted with the cloud layer as CCN or INPs, whilst the differences between them can be explained by the cloud restricting any direct mixing between the two populations.
The extent of internal and external mixing observed indicates that some INP predictions may be fraught with inaccuracy in this region; for example, DeMott et al. (2010) related INP concentration to the total aerosol concentration > 0.5 µm under the assumption that most of these aerosol particles are INPs. However, efficient INPs (e.g. mineral dusts) were not found to be consistently dominant in this limit. As suggested by DeMott et al. (2010), this relation may not be applicable in cases heavily influenced by marine sources, and the high loadings of super-micron sea salt identified in some of the ACCACIA cases would qualify these as such. The use of dust-based parameterisations such as Niemand et al. (2012) or DeMott et al. (2015) may provide a more accurate prediction of the INP concentration in these cases.
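For context, the aerosol-number-based parameterisation referred to above relates the predicted INP concentration to temperature and the number of particles larger than 0.5 µm. A minimal sketch is shown below, using coefficient values commonly quoted for DeMott et al. (2010); the exact constants and units should be checked against the original paper before any quantitative use.

```python
def demott2010_inp(n_aer_05, temp_k, a=5.94e-5, b=3.33, c=0.0264, d=0.0033):
    """Predicted INP number concentration (per standard litre) from the
    concentration of aerosol particles larger than 0.5 um (per standard cm^3)
    and the cloud temperature in kelvin. Coefficients a-d are assumed values
    commonly quoted for DeMott et al. (2010)."""
    dt = 273.16 - temp_k  # degrees of supercooling
    if dt <= 0:
        return 0.0  # no ice nucleation predicted at or above freezing
    return a * dt ** b * n_aer_05 ** (c * dt + d)
```

As the text notes, this form assumes most particles > 0.5 µm are potential INPs, which sea-salt-dominated cases violate; dust-surface-area-based schemes would replace `n_aer_05` with a dust-specific measure.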
Whilst it is likely that the dusts observed in this study would act as INPs, it cannot be determined how the unclassified and mixed particle categories would interact with the clouds in this region. In particular, the lack of sound quantitative C and O measurements prevents organic coatings from being identified, coatings which are important in interpreting aerosol-cloud interactions. The mixed particles identified here could likely act as CCN, as they would possess a soluble component provided by the Cl or S signatures. However, it is also likely that they could influence the INP population; whilst soluble coatings may suppress ice nucleating ability, the presence of IN-active coatings and/or complex internal mixing could act to enhance it. Examples of IN-active coatings could include biological material, as some strains of bacteria have been observed to be efficient INPs in laboratory studies (Möhler et al., 2007; Hoose and Möhler, 2012).
G. Young et al.: Particle composition in the European Arctic
Some studies have identified cases where bacteria have survived long-range atmospheric transport by piggybacking on dust particles (Yamaguchi et al., 2012). It is possible that such bacteria could influence the Arctic atmosphere via a similar transportation mechanism. Fundamentally, comprehending how these mixed particles interact with and impact the cloud microphysics is a significant step towards improving our understanding of aerosol-cloud interactions in the Arctic springtime.
Conclusions
During the Aerosol-Cloud Coupling and Climate Interactions (ACCACIA) springtime campaign, in situ samples of Arctic aerosol particles were collected on polycarbonate filters. Analysis of these samples has been detailed, with a focus placed upon identifying the composition of the collected particles and investigating their potential sources. In total, six below-cloud exposures were analysed to infer how the local sources may influence the cloud microphysics of the region (Fig. 1) and one above-cloud case was considered to investigate the composition of transported particles (Fig. 7). The main findings of this study are as follows:

- Single-particle analysis of the filters produced number size distributions which were comparable (from approximately 0.5-10 µm) to those derived from the wing-mounted optical particle counters (Fig. 4). Better agreement between these distributions was achieved in lower RH sampling conditions. The composition of the particles collected was strongly dependent upon size across all samples, with crustal minerals and sea salts dominating the super-micron range. Carbon- and sulfur-based particles were mostly observed in the < 1 µm limit (Fig. 5). Large fractions of mixed particles - as shown by the other, mixed silicate and mixed chloride categories in Figs. 5 and 7 - were identified in each case.
The impact of these particles on cloud microphysics as potential INPs and/or CCN is not quantifiable by this study.
- Distinct size-dependent compositional trends were observed in each sample, with stark differences between cases (Fig. 5). These differences were attributed to variations in the air mass histories; cases 1 and 2 presented a silicate dust dominance, whilst cases 4 and 5 had similar chloride and silicate loadings. These similarities were mirrored by their closely related source regions (Fig. 3). The relationship between composition and trajectory was strengthened by the unique attributes of case 6; both the trends and trajectory were distinct in this case, and the particle classifications identified can be explained by hypothesised sources along the trajectory presented.
- Crustal minerals were identified in all cases, despite the seasonal local snow cover. The HYSPLIT back trajectories (Fig. 3) were variable in direction, yet typically increased in mean altitude over time. These dusts were therefore hypothesised to have undergone long-range, high-altitude transport from distant sources, through regions containing weakly scavenging mixed-phase clouds. Some elemental characteristics (Fig. 6) were found to be consistent with Asian dust sources; however, it is not known how long-range transport may affect the composition of these dusts, and so this theory cannot be proven with these data.
The non-volatile, coarse-mode Arctic aerosol particles analysed by this study showed great variation between subsequent days and different meteorological conditions; therefore, it would be difficult to incorporate these findings into models. However, the measurements from the springtime ACCACIA campaign provide a good opportunity to simultaneously investigate both the properties of aerosol particles in the region and the microphysical characteristics of the clouds observed. Further study of the cloud microphysics of these cases, with reference to these aerosol observations, will allow us to improve both our understanding and the representation of aerosol-cloud interactions in climate models and act to reduce the uncertainty in forecasting the Arctic atmosphere in the future.
The Supplement related to this article is available online at doi:10.5194/acp-16-4063-2016-supplement.
Figure 1. ACCACIA flight tracks of the main science periods undertaken for each flight where aerosol composition analysis was conducted.

Figure 2. Mixed particle from case 5. The circles denote the spots scanned to give the following dominating elements: red - Fe, Si and Al; yellow - Fe, Cr, Ni, Si and Al; blue - Fe, Cr, Ca, Cl, S, Si and Al. A scan of the full particle indicates Si dominance.

Figure 3. HYSPLIT air mass back trajectories for cases 1-6, initialised at the aircraft's position and calculated 6 days backwards. Trajectories at the beginning and end of each exposure are shown. Top left panel: cases 1 (black), 2 (green) and 3 (purple); top right panel: cases 4 (red), 5 (orange) and 6 (blue). The mean altitude covered by each of these trajectory groups is shown in the bottom panel.

Figure 4. Size distributions (dN/dlog10 D) of particle data obtained via SEM analysis compared with averaged distributions from the optical particle counters at the relevant filter exposure times. Number concentrations are quoted with standard temperature and pressure corrections (s cm−3). PCASP, CAS and CDP data are shown in red (diamonds), green (circles) and blue (squares) respectively. Only upwards error bars are shown for clarity. SEM data are shown as scatter points (grey, crosses) and the arithmetic mean of these data is shown in black.

Figure 5. Size-segregated particle classifications applied to each below-cloud case, with each size bin normalised to show the fraction (by number) occupied by each classification. The sizes indicated are the bin centres. The number of particles scanned in each case is listed at the top of each panel.

Figure 6. Mean elemental ratios from each case. Data from the silicates and mixed silicates categories only are included to provide an indication of the mineral phases measured. Box edges indicate the 25th and 75th percentiles, and the cross and the horizontal line dissecting the boxes represent the mean and median values respectively. The outliers extend to the 10 and 90 % thresholds of the data.

Figure 7. Compositional comparison between the below- and above-cloud samples (cases 4 and 7) from flight B764. (a) Averaged particle classifications ≤ 0.5 µm; (b) averaged particle classifications > 0.5 µm; (c) size-segregated classifications from the above-cloud exposure. Each bin is normalised to show the fraction (by number) occupied by each classification, and the number of particles analysed is listed above each panel. The sizes indicated in (c) are the bin centres.
Table 1. Details of FAAM flights undertaken during the spring segment of the ACCACIA campaign which had viable filter exposures. Corresponding filter case studies per flight are listed for reference.

Table 2. Summary of sampling conditions during each filter exposure. The geographic positions are also listed. Values quoted are arithmetic means, with 1σ in brackets where appropriate. In situ temperature data were collected with a Rosemount de-iced temperature sensor and the relative humidity (RH) data were derived from Buck CR2 hygrometer measurements.

Table 3. Main parameters applied with SEM and EDAX™ Genesis software to carry out analysis of the ACCACIA aircraft filters.
A semi-analytical model on the critical buckling load of perforated plates with opposite free edges
Perforated plates are widely used in thin-walled engineering structures, for example, for lightweight designs of structures and access for installation. For the purpose of analysis, such perforated plates with two opposite free edges might be considered as a series of successive Timoshenko beams. A new semi-analytical model was developed in this study using the Timoshenko shear beam theory for the critical buckling load of perforated plates, with the characteristic equations derived. Results of the proposed modelling were compared with those obtained by FEM and show good agreement. The influence of the dividing number of the successive beams on the accuracy of the critical buckling load was studied with respect to various boundary conditions. The effects of geometrical parameters, such as the aspect ratio, the thickness-to-width ratio and the cutout-to-width ratio, were also investigated. The study shows that the proposed semi-analytical model can be used for buckling analysis of a perforated plate with opposite free edges, with the capacity to consider the shear effect in thick plates.
Introduction
In box girders and some load-bearing spars, cutouts are made for special purposes, for example, material saving and weight reduction, access for installation and inspections. [1][2][3][4] Generally, plates with cutouts have a relatively lower structural strength in comparison with those without, and the buckling behaviour is one of the most critical considerations in safety and reliability of these structures. The buckling analysis of a plate with cutouts is more complicated than that of an intact plate. 5 Many relevant studies have involved numerical, experimental and analytical techniques, and their combinations. Recent works in numerical simulation include Tao, 6 who employed a FEM approach for elastic stability of perforated plates under uniaxial compression, and the parameters with significant effects on the performance of the plates were proposed. Paslara 7 investigated infill plate boundary condition effects by FEM on the overall performance of steel plates with circular openings. And Loughlan 8 adopted finite element modelling strategies and solutions procedures to enable the determination of post-buckling failure responses of steel plates with cutouts. More studies were seen on the elastic buckling behaviour of rectangular plates with cutouts for partial edge loading, 9 the elastoplastic buckling behaviour of simply supported rectangular plates with elliptic cutouts 10 and the post-buckling behaviour of thin plates with central circular cutouts subjected to biaxial load 11 using FEM. Experimental studies were also frequently seen together with simulations. [12][13][14] Shin 15 tested five perforated web specimens subjected to simulated loadings, and nonlinear buckling analyses were performed by FEM to compare with the observed inelastic mechanisms. 
A series of experimental tests 16 were carried out for the ultimate buckling load of perforated steel plates, and FEM was used to investigate the coupling relations between the geometrical parameters and the buckling behaviour.
Though there have also been numerous reports on analytical approaches for intact plates, for example, Refs. 17-22, few are seen in the open literature on plates with cutouts. Ovesy 23 adopted a Reddy-type, third-order shear deformation theory of plates for two versions of the finite strip method to predict the behaviour of moderately thick rectangular plates containing central cutouts, though the approaches given are not applicable for all boundary conditions. Abolghasemi 24 applied the Ritz method and expanded the stress function in polar coordinates for circular cutouts to calculate the buckling load. The buckling behaviour of a panel with a rectangular cutout was predicted by applying Lekhnitskii theory and the complex variable method 25 to express the strain distribution around a rectangular opening of an infinite anisotropic plate. These analytical solutions were obtained by expressing the stress and strain distributions along the cutout edges, resulting in rather complicated analytical solutions. Furthermore, these analytical solutions are generally applicable only to special cases, such as circular or rectangular cutouts, 26 and under simple boundary conditions. More recently, a number of studies were seen making use of energy methods. 27,28 And the critical buckling load of new materials such as cellular or corrugated materials was considered, including the adoption of equivalent shapes of opening. [29][30][31][32] Some simplified approaches on the structural profiles were also made, such as equivalent cross-sections for engineering applications. [33][34][35] For perforated plates with opposite free edges, the local stress and strain distributions along the cutout edges are not significant enough to affect the buckling behaviour of the plate. The buckling behaviour is more of a global one, and thus less sensitive to local elements.
To provide a relatively straightforward approach for buckling analysis of plates with cutouts, this work proposes a new semi-analytical model based on the Timoshenko beam theory to solve the critical buckling load, which can be reasonably easily applied to plates with symmetric cutout shapes. The new approach differs from existing published work in its advantage of simplicity in the numerical solution of the analysis model and the versatility in handling a variety of symmetric cutouts.
Problem description
The geometry of the cutout considered in this study can be of the shape of a circle, an ellipse, a rhombus or an evensided polygon, or others, which can be symmetrical to the central axis of the plate. As a circular cutout is most widely used in practice, it is chosen here to demonstrate the analysis. The geometry is described in Figure 1 with the plate of length a and width b. The circular cutout in the middle of the plate has a diameter d. The plate edges AB and CD can be simply supported, fully clamped or have no constraint, respectively, as the boundary condition. And edges AD and BC are assumed free with no constraint. A uniformly distributed compressive load, F P , is applied on AB and CD. Figure 2 shows that the perforated plate is decomposed into three connected sections in the axial direction: the left and right full-width sections, ABB'A' and D'C'CD, respectively; they are intact beam sections, with A'B' and C'D' being tangential to the cutout circle, which is centred at (a/2, b/2). Both sections can be considered as two separated Timoshenko 36 beams. The middle section, A'B'C'D', has a circular cutout, for which the remaining part of the section can be treated as a series of successive Timoshenko sub-beams, with each sub-beam of a rectangular shape in various heights fitting the circumference of the circular cutout. Note that due to the symmetry to the beam axis, there is an identical upper and lower group of sub-beams, correspondingly.
Setting the point O at the middle of AB as the coordinate origin, with the x axis being the central line of the beam, the y coordinate of the circular cutout outline can be expressed by equation (1). The total division number of sub-beams, n, is assumed to be even. If the sub-divisions are taken at equal length for simplicity, then for the i th sub-beam, its end coordinate, x_i, and width (or height), b_i, can be given by equations (2) and (3). Note that both the upper and lower sub-beam groups need to be considered, and, as an approximation for simplicity, they are 'lumped' together in height as b_i accordingly.
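The discretisation described above can be sketched as follows. Since equations (1)-(3) themselves are not shown here, the expressions below are a plausible reconstruction from the stated geometry (circle of diameter d centred at (a/2, b/2), n equal-length sub-beams, upper and lower groups lumped); evaluating the lumped height at the segment midpoint is an assumption for illustration, and the paper's exact expressions may differ.

```python
import math

def subbeam_heights(a, b, d, n):
    """Lumped heights b_i of the n successive sub-beams spanning the
    cutout region [(a-d)/2, (a+d)/2] of a plate of width b with a central
    circular cutout of diameter d. Each b_i is the plate width minus the
    cutout's vertical extent, evaluated at the segment midpoint (assumed)."""
    x0 = (a - d) / 2.0
    dx = d / n
    heights = []
    for i in range(n):
        xm = x0 + (i + 0.5) * dx  # midpoint of the i-th sub-beam
        # vertical extent of the circular cutout at x = xm
        cut = 2.0 * math.sqrt(max((d / 2.0) ** 2 - (xm - a / 2.0) ** 2, 0.0))
        heights.append(b - cut)
    return heights
```

The heights are symmetric about the plate centre and smallest near mid-span, matching the description of short, wide end sub-beams and slender central ones.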
Analytical buckling model of perforated plate structures with opposite free edges
Buckling model expressed by the Timoshenko beam theory
The infinitesimal segment in the i th Timoshenko sub-beam is shown in Figure 3 with the transverse displacement dw_i. Its corresponding bending and transverse shear stiffnesses are D_i = E_i I_i and B_i = G_i A_0i, with E_i, I_i, G_i, A_i and ψ being the Young's modulus, the moment of inertia, the transverse shear modulus, the area of the cross-section and the shear correction coefficient, respectively, where A_0i = A_i/ψ and ψ = 1.2 for a rectangular cross-section. The rotation angles of the cross-section in the Euler-Bernoulli and Timoshenko beam theories are θ_i and φ_i, respectively, and the angle γ_i is caused by considering the shear deformation in the Timoshenko beam.
Considering the shear deformation shown in Figure 3, the rotation angles of the cross-section can be given as in equation (4), where F_Qi(x) ≈ F_Vi(x) + F_Pi(x) dw_i(x)/dx is the shear force perpendicular to the segment axis. Equation (4) can then be written as equation (5), where B_i = G_i A_0i. As there is no transverse load in the i th sub-beam, taking the derivative of equation (5) with respect to x gives equation (6). The governing buckling equations of the i th (i = 1, 2, …, n) Timoshenko sub-beam can be expressed by equations (7) and (9), and the general solution for the transverse displacement and the angle of rotation can be obtained, in which the four coefficients per sub-beam are to be determined. The bending moment and shear force can then be obtained accordingly.
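The displayed equations do not survive in this text; for reference, the general solution of a prismatic Timoshenko sub-beam in buckling is commonly written in the form below. This is a sketch consistent with the cos(k_n a), sin(k_n a) and β_n k_n terms appearing in the boundary-condition coefficients, not necessarily the paper's exact expressions:

```latex
w_i(x) = C_{i1}\cos(k_i x) + C_{i2}\sin(k_i x) + C_{i3}\,x + C_{i4},
\qquad
\varphi_i(x) = \beta_i k_i \left[ C_{i2}\cos(k_i x) - C_{i1}\sin(k_i x) \right] + C_{i3},
```

where k_i and β_i depend on the axial load F_P and on the stiffnesses D_i and B_i, and C_{i1} to C_{i4} are the four coefficients fixed by the boundary and continuity conditions.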
Boundary conditions
Various boundary conditions for edges AB and CD can be expressed. Three of the commonly seen types are given as follows:

(1) Simply supported (S): at x = 0 and x = a, where b_Sn2 = cos(k_n a), b_Sn3 = sin(k_n a), b_Sn4 = −D_n β_n k_n^2 cos(k_n a), b_Sn5 = −D_n β_n k_n^2 sin(k_n a).

(2) Fully clamped (C): at x = 0 and x = a.

Figure 3. Infinitesimal segment in the i th Timoshenko sub-beam.
in which b_Fn1 = −D_n β_n k_n^2 cos(k_n a), b_Fn2 = −D_n β_n k_n^2 sin(k_n a), b_Fn3 = B_n (β_n k_n − k_n) sin(k_n a) and b_Fn4 = B_n (k_n − β_n k_n) cos(k_n a).
Continuity conditions
The physical requirement of continuity between neighbouring sub-beams of a smooth plate structure requires the following conditions to be satisfied for the transverse displacement, the angle of rotation, the bending moment and the shear force: (1) transverse displacements: at x_i = a_i, i = 1, 2, …, n−1; (2) angles of rotation: at x_i = a_i, i = 1, 2, …, n−1.
The buckling solution matrix
For simplicity, simply supported edges were chosen as the example for discussion. One set of equations consists of four boundary conditions from equation (14) and 4(n−1) continuity conditions from equations (17)-(20), leading to a total of 4n equations for 4n unknown coefficients. These equations are linear and can be written in a matrix format, equation (21), where T_i(a_i) and T_{i+1}(a_i) are given accordingly. The critical buckling force, F_Pcr, can be obtained by setting the determinant of the 4n × 4n matrix in equation (21) to zero. By using the bisection method to solve the nonlinear eigenvalue buckling equation, the solution of the lowest value is the first-order critical buckling load of the beam with cutouts.
Finite element model for verification
Detailed results corresponding to buckling of plates with cutouts are very limited in the open literature. In order to verify the outcome of the proposed semi-analytical model, an FE model using ANSYS 37 was developed to compare the results, including a parametric study. In the FE model, SHELL181, a four-node element with six degrees of freedom at each node, was chosen. It is suitable for the analysis of thin to moderately thick shell structures with large rotations and strains. In this work, quadrilateral gridding was adopted to mesh the perforated plate.
Edges AB and CD, as shown in Figure 1, were set to either simply supported or clamped boundary condition and edges AD and BC to free of constraints. A uniformly distributed compressive load was applied on AB and CD. Convergence tests were carried out to ensure good results. Figure 4 shows a typical mesh pattern of a quarter of the model.
Validation with the FE Model
The FE model outcome was first compared with published literature results 5 of solid plates with no hole (d = 0), as shown in Table 1, for various values of the length-to-width ratio a/b and the thickness-to-width ratio h/b. Table 2 gives the comparison with Ref. 24 for perforated plates of different cutout-to-width ratios d/b. Good agreement can be observed. The FE model was therefore used as the benchmark for comparison with the proposed semi-analytical model.
Division number of the sub-beams
As the division number of the sub-beams in the proposed model, N (= n−2), can be selected differently, the influence of the choice was investigated. Due to the geometric symmetry, the division was always evenly numbered and tested from 4 to 18, respectively, with the results compared to the FEM results.
Considering the geometrical parameters in practical engineering applications, cases of three different plate thicknesses were chosen for the relative errors in the critical buckling load with respect to the division number, as shown in Figure 5. For the thinner plates (Figure 5(a), h/b = 0.001 and Figure 5(b), h/b = 0.01), for any division number from 4 to 18, the relative errors are always within 5%. For the thicker plate (Figure 5(c), h/b = 0.1), the relative errors will also not exceed 6% for any division number from 4 to 18. As shown in Figure 5, the relative errors with respect to the corresponding FEA results are broadly small: for instance, within 5% for a division number in the range from 4 to 12, or within 6% from 4 to 18. In other words, results are not sensitive to the selection of the division number, and a larger one does not necessarily help to improve accuracy. The recommended number is between 4 and 18 for calculation efficiency.
Applicability of various boundary conditions
Four different boundary conditions were considered, starting from the left side in the clockwise direction: SFSF, CFCF, SFCF and CFFF. The critical buckling loads corresponding to the four boundary conditions are illustrated in Figure 6. Five values of the aspect ratio (a/b = 1-9) were analysed for d/b = 0.4 and h/b = 0.1. Results from the proposed model are virtually identical to those of the corresponding FE model, especially as the aspect ratio a/b ranges from 6 to 9; the aspect ratio will be chosen from 1 to 5 in the following parametric analysis. The critical buckling load can be seen to reduce with respect to the aspect ratio.
Effect of geometrical parameters on the critical buckling load
Three non-dimensional geometrical parameter sets, that is, the length-to-width (aspect) ratio a/b, the thickness-towidth ratio h/b and the cutout-to-width ratio d/b, were studied, respectively, for their influence on the critical buckling load with the boundary condition case SFSF.
The aspect ratio a/b. Five values of the aspect ratio (a/b = 1-5) were considered with two cutout-to-width ratios (d/b=0.2 and 0.6) and two thickness-to-width ratios (h/b = 0.001 and 0.1), respectively, as shown in Figure 7. It illustrates that the critical buckling load reduces with respect to the aspect ratio. And the orders of the magnitude of the critical buckling load are different due to the difference in the plate thickness, but the changing trend of the critical buckling loads is similar.
A bigger cutout understandably yields a lower critical buckling load due to the long and thin successive subbeams above and below the cutout. And the critical buckling loads of the two different-sized cutouts show a converging trend in terms of the aspect ratio. This is due to the increasing slenderness of the plate in terms of the aspect ratio in which buckling becomes more of a global effect and less sensitive to local features such as the cutout. The thickness-to-width ratio h/b. The influence of the thickness-to-width ratio was studied in five cases (h/b = 0.001-0.1) with two cutout-to-width ratios (d/b = 0.2 and 0.6) and two aspect ratios (a/b = 1 and 5). The critical buckling loads are given in Figure 8; for the thickness-towidth ratio h/b and the critical buckling load, the load can be seen to increase with respect to the thickness-to-width ratio due to the higher moment of inertia of the cross-section of thicker plates. A bigger cutout yields a lower buckling load.
And a bigger aspect ratio leads to a lower critical buckling load, as the slenderness increases.
The cutout-to-width ratio d/b. The effect of the cutout-to-width ratio (d/b = 0-0.6) on the critical buckling load is given in Figure 9, where two thickness-to-width ratios (h/b = 0.001 and 0.1) and two aspect ratios (a/b = 1 and 5) were considered, respectively. The critical buckling loads of solid plates with no hole (d/b = 0) are also included for comparison. It can be seen that the critical buckling load decreases with the cutout diameter over the diameter range considered. As expected, a bigger cutout weakens the plate more, leading to a lower buckling strength. The aspect ratio also has a significant effect on the critical buckling load, with scale changes in the magnitude of the buckling load for both plate thicknesses.
Applicability of different cutouts

1. Side cutouts. The proposed new model can also be applied to analyse plates with side semi-circular cutouts and rhombic cutouts. Figure 10(a) shows two semi-circular cutouts of the same diameter on opposite sides of the beam, parallel to the beam axis; the ligament part between the two side cutouts can be divided into sub-beams and treated as Timoshenko beams. Figure 10(b) shows one rhombic cutout with the length of the long diagonal line, e, and the length of the short diagonal line, f. The successive Timoshenko sub-beams of the plate can be obtained by the same method as that for the central circular cutout shown in Figure 2. As a case study, five values of the aspect ratio (a/b = 1-5) were analysed for the critical buckling load with one cutout-to-width ratio d/b = 0.4 and two thickness-to-width ratios (h/b = 0.001 and 0.1). As illustrated in Figure 11, results from the proposed model are very close to those of the corresponding FE model. The critical buckling load reduces with respect to the aspect ratio. And the plate thickness makes a big difference, as shown by the scale of the magnitude of the buckling load. The overall trend is similar to Figure 7 for a central circular cutout.
Conclusions
A new semi-analytical modelling technique based on the Timoshenko shear beam theory was introduced to calculate the critical buckling load of perforated plates with opposite free edges. The rectangular plate was treated as a series of successive sub-beams using the Timoshenko beam theory. Plates with a central circular cutout were discussed as case studies, and the results were compared with those obtained from FEM, showing good agreement. The selection of the division number of the sub-beams is flexible within a practical range from 4 to 18 for computation, over which good accuracy can be maintained (within an error of 6% relative to the FEA results) with little sensitivity shown in the results to the division number selection. Overall, the proposed model is relatively simple and straightforward to use for calculation of the buckling load of perforated plates with opposite free edges.
Calculations show that with the boundary condition SFSF, the critical buckling load increases by reducing the aspect ratio a/b and increasing the thickness-to-width ratio h/b, respectively, and by the cutout-to-width ratio d/b in an approximately linear relationship or a weak quadratic one in normal linear scales.
One of the clear advantages of the proposed model is its capacity to handle different geometries of cutouts. Cutouts of elliptical, rhombic, evenly sided polygonal and other shapes of profile with a symmetric character to the axis of the perforated plates can be analysed accordingly, including both central and sided cutouts. In fact, one may combine different geometric shapes together for the cutouts.
However, it needs to be pointed out that, specifically for rectangular-shaped cutouts, if the cutout-to-width ratio is big, the proposed model will not give accurate results. As the difference in the heights of the neighbouring sub-beams along the vertical cut line could become too big, there would be a significant jump in the distributed load between the neighbouring sub-beams, yielding big errors. This particular case remains to be studied further.

This work was supported by the Natural Science Foundation of China (51811530311) and the China Scholarship Council (201808515166).
Sensors 2022, 22, 8398 (MDPI; open access, CC BY; published 1 November 2022).
Evaluation of Error-State Kalman Filter Method for Estimating Human Lower-Limb Kinematics during Various Walking Gaits
Inertial measurement units (IMUs) offer an attractive way to study human lower-limb kinematics without traditional laboratory constraints. We present an error-state Kalman filter method to estimate 3D joint angles, joint angle ranges of motion, stride length, and step width using data from an array of seven body-worn IMUs. Importantly, this paper contributes a novel joint axis measurement correction that reduces joint angle drift errors without assumptions of strict hinge-like joint behaviors of the hip and knee. We evaluate the method compared to two optical motion capture methods on twenty human subjects performing six different types of walking gait consisting of forward walking (at three speeds), backward walking, and lateral walking (left and right). For all gaits, RMS differences in joint angle estimates generally remain below 5 degrees for all three ankle joint angles and for flexion/extension and abduction/adduction of the hips and knees when compared to estimates from reflective markers on the IMUs. Additionally, mean RMS differences in estimated stride length and step width remain below 0.13 m for all gait types, except stride length during slow walking. This study confirms the method’s potential for non-laboratory based gait analysis, motivating further evaluation with IMU-only measurements and pathological gaits.
Introduction
The study of human lower-limb kinematics is critical for understanding and improving human function in many contexts including injury prevention, elderly fall risk, rehabilitation, and athletic performance [1][2][3][4][5][6][7][8]. Importantly, many of these contexts require studying kinematics from a wide range of gait types including abnormal gaits; for example, in clinical applications where gait analysis is utilized in identifying injury risk, assessing level of gait pathology, and informing and assessing treatment plans [5,9,10]. Such research is often conducted in controlled, laboratory environments using optical motion capture systems (MOCAP) that track the positions of reflective markers attached to the skin in order to estimate underlying bone movement. However, these traditional MOCAP methods incur many disadvantages starting with relatively high cost, long setup times, and the need for skilled researchers. These disadvantages limit the populations who might participate in and benefit from biomechanical assessments, studies, and treatments. They also limit the researcher's ability to generalize findings from laboratory-based biomechanical studies to broad populations. Additionally, research questions may require continuous monitoring of subjects (e.g., in [11,12]) which is not possible using traditional lab-based (MOCAP) methods. Methods using body-worn inertial measurement units (IMUs) to study human kinematics address many of the disadvantages of MOCAP due to their relatively low cost. […] parameters could be used in conjunction with the presented ErKF method to yield IMU-only estimates (albeit likely with lower accuracy).
The aims of this paper are to: (1) present a new IMU-based ErKF method for evaluating human lower-limb kinematics and (2) compare kinematic estimates from this ErKF method to those obtained from MOCAP on human subjects walking with a variety of gaits, including abnormal gaits such as side stepping. To this end, the prior ErKF method [37] is extended to a full seven-body model of the human lower limbs. Additionally, a novel joint axis measurement is developed for the hip and knee to reduce orientation drift errors without assumptions of gait type or how the hip and knee behave (e.g., hinge-like behavior or not). The method is evaluated across a wide range of six walking gaits, including abnormal gaits. Joint angles, joint angle ranges of motion, stride length, and step width estimates are compared to reference MOCAP data that is processed using two different methods.
ErKF Method for Seven-Body Lower-Limb Model
The underlying formulation of this ErKF method for a seven-body model of the lower limbs (refer to Figure 1) follows and extends that introduced in [37] for a simplified, three-body model of the lower limbs. The ErKF method estimates the state (position, velocity, and orientation) of each IMU (one per body) from which the lower-limb kinematics are derived via rigid body assumptions. Because the primary ErKF equations used in this study are largely the same as those in [37] (which extend from Sola's ErKF formulation for a single IMU [41]), Sections 2.1.1-2.1.3 closely follow [37] and are included here for the reader's convenience. In Section 2.1.4, we summarize the main differences and extensions between the method used for the 3-body lower-limb model of a mechanical walker in [37] and that for the 7-body lower-limb model of human subjects used here. Note that while OpenSim's Gait2354 skeletal model is used here for visualization, the ErKF method treats each segment as an independent body possessing six degrees of freedom.
ErKF States
Each IMU within the lower-limb model is treated as an independent (i.e., six degree of freedom) rigid body. Thus, the state for the jth IMU, x_j, is the (10 × 1) vector

x_j = [p_j ; v_j ; q_j]  (1)

where p_j is the (3 × 1) position vector of the IMU in a world (i.e., lab-fixed) frame, v_j is the (3 × 1) velocity vector of the IMU, and q_j is the (4 × 1) quaternion rotation vector (Hamiltonian convention) that relates a vector in the IMU sense frame, y_b, to its corresponding representation in the world frame, y_w, according to

y_w = q_j ⊗ y_b ⊗ q_j*

(with y_b treated as a pure quaternion), where ⊗ denotes quaternion multiplication and q* denotes the quaternion inverse.
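As a concrete illustration, the body-to-world quaternion rotation above can be sketched in Python/NumPy (function names are ours, not from the paper's code):

```python
import numpy as np

def quat_mult(q, r):
    # Hamilton-convention quaternion product, q = [w, x, y, z]
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_conj(q):
    # conjugate equals inverse for a unit quaternion
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate_body_to_world(q, y_b):
    # y_w = q ⊗ [0; y_b] ⊗ q*, returning the vector part
    y = np.concatenate(([0.0], y_b))
    return quat_mult(quat_mult(q, y), quat_conj(q))[1:]
```

For example, a 90° rotation about the world z axis maps the body x axis onto the world y axis.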
In an ErKF, the filter equations are formulated with respect to the error state (i.e., errors in the state estimates), thus tracking the error state's mean, δx, and covariance matrix, P. These error state estimates are then used to update the state estimates. The ErKF has several advantageous properties over other Kalman filter formulations for this type of application (see [41,42]). The error state for the jth IMU, δx_j, is the (9 × 1) vector

δx_j = [δp_j ; δv_j ; δθ_j]

where δp_j and δv_j denote errors in the position and velocity, respectively, and δθ_j is the (three-component) attitude error vector (assumed to be small) defined such that the quaternion error, δq_j, obeys

δq_j = [cos(‖δθ_j‖/2) ; b sin(‖δθ_j‖/2)]

where b = δθ_j/‖δθ_j‖ is the unit vector in the direction of δθ_j (i.e., the axis of rotation) and ‖·‖ is the Euclidean vector magnitude. The full state, x, and full error state, δx, are the concatenations of those for all n IMUs in the system, namely

x = [x_1 ; x_2 ; … ; x_n]  and  δx = [δx_1 ; δx_2 ; … ; δx_n]
Process Model
The prediction step of the ErKF uses the process model for each IMU,

x̂_j,k+1 = f(x_j,k , u_j,k)

where x̂_j denotes the prediction of x_j, the additional subscript k denotes the kth time-step, and u_j denotes the raw IMU data (acceleration and angular velocity).
Because each IMU is treated as an independent rigid body, the predicted state of each IMU at time-step k + 1 is only a function of the estimated state and the IMU data at the previous time-step k. Thus, strapdown integration is used to write this process model as

p̂_j,k+1 = p_j,k + v_j,k ∆t + ½ (R_j,k a_j,k + g) ∆t²
v̂_j,k+1 = v_j,k + (R_j,k a_j,k + g) ∆t
q̂_j,k+1 = q_j,k ⊗ q{ω_j,k ∆t}

where ∆t is the sampling period of the IMU, R is the rotation matrix corresponding to q, g is the gravitational acceleration vector (in the world frame), a_j is the acceleration measured by the jth IMU, ω_j is the angular rate measured by the jth IMU, and q{·} denotes the quaternion corresponding to a rotation vector.
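A minimal NumPy sketch of one strapdown prediction step (with q{·} implemented as a rotation-vector-to-quaternion conversion; function names are ours):

```python
import numpy as np

def quat_mult(q, r):
    # Hamilton-convention quaternion product, q = [w, x, y, z]
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_to_rot(q):
    # rotation matrix R such that R @ y_b = y_w
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def rotvec_to_quat(w):
    # q{w}: quaternion for a rotation vector w (axis * angle)
    phi = np.linalg.norm(w)
    if phi < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    s = w / phi
    return np.concatenate(([np.cos(phi/2)], np.sin(phi/2) * s))

def strapdown_predict(p, v, q, a, omega, dt, g=np.array([0.0, 0.0, -9.81])):
    # World-frame acceleration, then constant-acceleration integration of
    # position/velocity and quaternion attitude propagation.
    acc_w = quat_to_rot(q) @ a + g
    p_new = p + v*dt + 0.5*acc_w*dt**2
    v_new = v + acc_w*dt
    q_new = quat_mult(q, rotvec_to_quat(omega*dt))
    return p_new, v_new, q_new / np.linalg.norm(q_new)
```

A quick sanity check: a level, stationary IMU (accelerometer reading +g in the body frame, zero angular rate) predicts no change in position, velocity, or orientation.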
As shown in [41], the mean of the error state, δx, will always be zero during the process model in this formulation and thus does not need to be calculated. However, the error-state covariance matrix, P, is not zero and must be predicted during this step. To accomplish this, the Jacobian of the process model for the jth IMU at time-step k with respect to its error state vector, F_xj,k, is calculated as

F_xj,k = [ I_3×3   I_3×3 ∆t   0_3×3
           0_3×3   I_3×3      −R_j,k [a_j,k]_× ∆t
           0_3×3   0_3×3      S{ω_j,k ∆t}^T ]

where I_m×m represents an m × m identity matrix, 0_m×m represents an m × m matrix of zeros, the superscript T denotes the transpose of a matrix, and [y]_× corresponds to the skew-symmetric form of y, specifically

[y]_× = [  0    −y_3   y_2
           y_3   0    −y_1
          −y_2   y_1   0  ]

and S{w} applies the Rodrigues' rotation formula on the vector w. To compute S{w}, w is separated into its magnitude, ϕ, and unit direction vector, s, to yield

S{w} = S{ϕs} = I_3×3 cos(ϕ) + sin(ϕ)[s]_× + s s^T (1 − cos(ϕ))  (11)

The process noise covariance for the jth IMU, Q_j, is

Q_j = blkdiag(0_3×3 , σ_a² ∆t² I_3×3 , σ_ω² ∆t² I_3×3)

where σ_a² and σ_ω² are the noise variances for the acceleration and angular rate signals, respectively.
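The skew-symmetric operator, Rodrigues' formula, and the per-IMU error-state Jacobian can be sketched as follows (a NumPy sketch with our own function names; the block ordering [δp, δv, δθ] follows the error-state definition above):

```python
import numpy as np

def skew(y):
    # [y]_x, so that skew(y) @ v == np.cross(y, v)
    y1, y2, y3 = y
    return np.array([[0.0, -y3,  y2],
                     [ y3, 0.0, -y1],
                     [-y2,  y1, 0.0]])

def rodrigues(w):
    # S{w}: rotation matrix for the rotation vector w
    phi = np.linalg.norm(w)
    if phi < 1e-12:
        return np.eye(3)
    s = w / phi
    return (np.eye(3) * np.cos(phi) + np.sin(phi) * skew(s)
            + (1.0 - np.cos(phi)) * np.outer(s, s))

def error_state_jacobian(R, a, omega, dt):
    # F_xj,k for one IMU; blocks ordered [position, velocity, attitude] errors
    F = np.eye(9)
    F[0:3, 3:6] = np.eye(3) * dt
    F[3:6, 6:9] = -R @ skew(a) * dt
    F[6:9, 6:9] = rodrigues(omega * dt).T
    return F
```

For example, `rodrigues([0, 0, pi/2])` rotates the x axis onto the y axis, and the position-velocity block of F is simply I·∆t.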
Because each IMU's process model is independent from that of all other IMUs, the corresponding Jacobian and process noise covariance matrix for the full system (i.e., all seven IMUs) are formed by the block diagonal matrix composition of those for the individual IMUs. Specifically, the Jacobian of the full system process model relative to the full error state at time-step k, F_x,k, is

F_x,k = blkdiag(F_x1,k , F_x2,k , . . . , F_xn,k)  (13)

where blkdiag denotes the block diagonal matrix composition. Similarly, the full system process noise covariance matrix, Q, is

Q = blkdiag(Q_1 , Q_2 , . . . , Q_n)

The prediction of the full error-state covariance matrix, P̂, is then calculated as

P̂_k+1 = F_x,k P_k F_x,k^T + Q

Within this ErKF formulation, known kinematic states (i.e., when the IMU is still) and constraints (i.e., relationships between the IMUs) are applied probabilistically through the measurement model to correct estimation errors. The four specific measurements used in this ErKF are described in detail below. When multiple measurements are applied during a single time step, batch processing is used to apply the measurement corrections simultaneously. Each measurement model takes the general form

z = h(x) + c

where z is the observed measurement, h(x) is the expected measurement represented as a function of the state x, and c is Gaussian white noise with covariance C. Specific measurement model equations are linearized by defining the Jacobian H evaluated at x̂ according to

H = ∂h/∂x |_x̂

Consistent with [41], the chain rule decomposes H as

H = H_x X_δx

where H_x is dependent on the specific measurement model and X_δx depends only on the estimated orientation at that time. The Kalman gain, K, and error-state mean, δx̂, are then calculated as

K = P̂ H^T (H P̂ H^T + C)^−1

and

δx̂ = K (z − h(x̂))

The error-state mean associated with the jth IMU, δx̂_j = [δp̂_j ; δv̂_j ; δθ̂_j], updates the state mean for the jth IMU per

p̂_j ← p̂_j + δp̂_j ,  v̂_j ← v̂_j + δv̂_j ,  q̂_j ← q̂_j ⊗ δq̂_j

where δq̂_j is the quaternion formed from δθ̂_j as defined above. After the state mean is updated for all IMUs, the full error-state mean is reset to zero.
The error-state covariance is updated to account for the measurement(s) and the error-state mean reset per

P ← G_k (I − K H) P̂ G_k^T

where G_k is the Jacobian of the error-state reset operation with respect to the error state at time-step k, defined as

G_k = blkdiag(G_1,k , G_2,k , . . . , G_n,k)

where

G_j,k = blkdiag(I_6×6 , I_3×3 − [½ δθ̂_j,k]_×)

Note that the process and measurement models are applied for each time step. If no measurements are observed during a time step, the prediction equations alone are used in place of the measurement model equations.
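The gain and update equations can be sketched as follows; this simplified sketch omits the batch stacking of multiple measurements and the reset Jacobian G_k (applied afterwards in the full method), and the function name is ours:

```python
import numpy as np

def erkf_update(P, H, C, z, h_x):
    # Kalman gain, error-state mean, and (pre-reset) covariance update
    S = H @ P @ H.T + C                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    dx = K @ (z - h_x)                    # error-state mean
    P_upd = (np.eye(P.shape[0]) - K @ H) @ P
    return dx, P_upd
```

With identity P, H, and C, and an innovation of [1, 0, 0], the gain is 0.5·I, so the error-state mean is [0.5, 0, 0] and the covariance halves.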
The four specific measurement corrections used in the ErKF consist of zero-velocity update (ZUPT), gravitational tilt, joint center, and joint axis corrections. The specific equations for these corrections are detailed next.
Measurement Model 1: ZUPT correction. When a foot IMU is (nearly) still (i.e., at some point during stance), a zero-velocity update (ZUPT) measurement is utilized to correct error in the estimated velocity. Many studies demonstrate that such ZUPT corrections yield accurate foot trajectory estimates across a wide variety of gait speeds [26,43,44]. For the present ErKF formulation, the measurement equation is written as

h_ZUPT(x) = v_IMU

where h_ZUPT(x) is the expected measurement of foot velocity and v_IMU is the (3 × 1) velocity vector for a foot-mounted IMU. This measurement is applied when the foot IMU is determined to be momentarily still, yielding the (virtual) observed measurement of foot velocity

z_ZUPT = 0_3×1

Measurement Model 2: Gravitational tilt correction. When any IMU is determined to be momentarily still, its accelerometer gives an estimate of the direction of the gravitational vector. Thus, a gravitational (tilt) correction can be applied for the IMU orientation. Assuming an xyz world-frame convention where gravity acts in the −z direction, this correction yields the measurement model

h_tilt(x) = R^T [0, 0, −1]^T  (30)

where h_tilt(x) is the expected measurement of the gravitational direction (unit vector) in the IMU frame and R is the rotation matrix representing the orientation of the still IMU. Note that Equation (30) can easily be modified when using other world-frame conventions. The observed tilt measurement is

z_tilt = −a/‖a‖

where a is the IMU-measured acceleration (which, for a still IMU, points opposite gravity). Measurement Model 3: Joint center correction. At all times, it is assumed that a world-resolved joint center location must be the same as estimated by the two segment IMUs adjacent to that joint [45]. Thus, for IMUs on adjacent limbs 1 and 2, the measurement equation takes the form

h_JC(x) = (p_1 + R_1 r_1) − (p_2 + R_2 r_2)

where h_JC(x) is the expected difference between the joint center locations, the subscript i = 1, 2 denotes IMU_i, r_i denotes the known position of the joint center from IMU_i (in the IMU frame), and R_i denotes the rotation matrix (i.e., orientation) for IMU_i.
The (virtual) observed measurement for the difference between the joint center locations is

z_JC = 0_3×1

Measurement Model 4: Joint axis correction. At times, a joint is expected to have certain axes of the adjacent limbs approximately aligned in the world frame (e.g., the flexion/extension axes of the shank and thigh when the knee acts as a hinge [20,21]). This leads to the measurement model

h_JA(x) = R_1 e_1 − R_2 e_2

where h_JA(x) is the expected difference between the joint axis vectors, the subscript i indexes the two adjacent limbs, and e_i is the aligned joint axis (unit vector) for limb i in the frame of IMU_i. The (virtual) observed measurement for the difference between the joint axis vectors is

z_JA = 0_3×1
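The residuals z − h(x̂) for the tilt, joint center, and joint axis corrections can be sketched as follows (a NumPy sketch; the tilt sign convention assumes gravity along world −z as in the text, and the function names are ours):

```python
import numpy as np

def tilt_residual(R, a):
    # Measurement Model 2: expected gravity direction in the IMU frame vs.
    # the accelerometer-derived direction (valid only when the IMU is still).
    h = R.T @ np.array([0.0, 0.0, -1.0])   # gravity acts in world -z
    z = -a / np.linalg.norm(a)             # still accelerometer points opposite gravity
    return z - h

def joint_center_residual(p1, R1, r1, p2, R2, r2):
    # Measurement Model 3: the joint center resolved from each adjacent IMU
    # should coincide in the world frame (observed measurement is zero).
    return (p1 + R1 @ r1) - (p2 + R2 @ r2)

def joint_axis_residual(R1, e1, R2, e2):
    # Measurement Model 4: nominally aligned joint axes of adjacent limbs,
    # expressed in the world frame (observed measurement is zero).
    return R1 @ e1 - R2 @ e2
```

Consistent geometry (and a level, still IMU for the tilt case) yields zero residuals, as expected for the virtual zero measurements.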
Details of ErKF Method Specific to Human Subjects
In the present method, zero-velocity update (ZUPT) measurements are applied to the foot-mounted IMUs at detected footfalls, gravitational tilt correction measurements are applied to any IMU that is detected to be nearly still, and joint center correction measurements are applied at all time steps for all (six) joints. Note that in this study we refer to near-still instances of the foot where a ZUPT correction is applied as "footfalls" (one per stance phase) to avoid confusion with other still instances (i.e., where gravitational tilt corrections are applied). Measurement noises are summarized in Table 1 and are generally the same as those used in [37], except the process model noises are adjusted to reflect the sampling rate used in the present study (128 Hz) and the hip joint axis correction noise reflects the "soft" hinge constraint described below. In contrast to the three-body mechanical model in [37], the joints of the human lower limbs are not single degree of freedom joints, necessitating a modified approach for the joint axis correction measurements for the seven-body model. Similar to [20,21], our method exploits the fact that the knee predominantly acts like a hinge (i.e., small internal/external rotation and abduction/adduction) during normal gait. Unlike [21], where a hinge constraint is applied only during periods of detectable hinge-like knee movements, the present method assumes a "soft" hinge constraint at all time steps, assuming the knee flexion/extension axes, as estimated separately from the thigh and shank, remain generally aligned. This correction respects the fact that the knee predominantly acts in flexion/extension while still enabling measurements in the other two rotational degrees of freedom. The mathematical formulation for this measurement correction is the same as detailed above (Measurement Model 4). 
Because this constraint for human subjects is an approximation, a larger measurement noise may be used (depending on the hinge-like nature of a particular joint) than in [37], rendering this a "soft" constraint.
Unlike the three-body model of the walker [37], the seven-body model of the human considers soft-tissue deformations of the lower limbs. In particular, soft tissue of the thigh allows significant relative motion between the thigh IMU and the underlying femur. When this movement is ignored (as in the ErKF formulation used here), the hip joint center measurement induces significant bias and/or drift in the estimated hip joint angles as described next, particularly the internal/external rotation. The joint center measurement correction relies on accurate estimates of the joint center locations in the IMU frames (obtained from the sensor to segment alignment) to accurately enforce the kinematic constraints. Additionally, we assume the sensor to segment alignment is constant; however, soft tissue motion causes the sensor to segment alignment to be time-variant, which leads to inaccurate corrections and thus the aforementioned bias or drift in hip joint angle estimates. To diminish this effect, a joint axis correction measurement is employed at all times for the hip that mimics the "soft" hinge constraint for the knee. However, because the hip exhibits full three degree of freedom rotations during gait, a much higher measurement noise is used (57.3 deg for the hip versus 1.15 deg for the knee, refer to Table 1) for the hip joint axis correction compared to the knee. As with the knee, this measurement aids in constraining the estimated hip joint angles to anatomically realistic ranges while permitting three degree of freedom rotations. Note that no joint axis measurement corrections are used for the ankle because the ankle joint angle estimates are typically constrained to anatomically realistic ranges without such a correction.
Human Subject Experiment
Twenty-three healthy adult subjects (inclusion: ability to perform basic tasks of daily living; exclusion: diagnosis of a balance or mobility impairment, inability to perform experimental tasks without assistance, opioid-dependence) participated in a University of Vermont Institutional Review Board approved study (protocol code #08-0518). All subjects gave written informed consent before participating in the study. Subjects wore IMUs (Opal, APDM, ±16 g and ±200 g accelerometers, ±2000 deg/s gyros) and reflective motion capture (MOCAP) markers (14 mm). Importantly, the study used a custom MOCAP marker set with markers shown in Figure 2, which includes markers placed on bony landmarks (ASIS, PSIS, lateral femoral epicondyle, fibular head, tibial tuberosity, lateral malleolus, heel, and the second metatarsal for all trials; medial femoral epicondyle, greater trochanter, medial tibial condyle, medial malleolus, and first and fifth metatarsals for calibration), markers placed on other locations shown, and three markers on each IMU, enabling two different methods for comparisons to MOCAP estimates (detailed in Section 2.6, "Estimated kinematics from two MOCAP methods"). Markers were tracked using a 19-camera system (Vero V2.2 cameras, Vicon, Oxford, UK). Subjects performed various activities of daily living in a laboratory including the six walking gaits described below. MOCAP data and IMU data were collected synchronously at 100 Hz and 128 Hz, respectively. Some data files from three subjects were either missing or created incorrectly, yielding data from twenty subjects analyzed in the present study (11 female, 9 male; mean (standard deviation) age 22.7 (±5.5) years, height 1.73 (±0.09) m (not available for one subject), mass 70.3 (±12.7) kg).
Measurements from the following activities are used in this study: (1) static standing calibrations (three seconds), (2) functional calibrations (set of movements including a modified version of the StarArc hip calibration movements [29,46], knee flexions, and ankle flexions and rotations; performed on both sides), and (3) six constant-speed walking gaits on a treadmill. The treadmill walking gaits include separate trials of forward walking at three speeds (slow, normal, and fast), backward walking, lateral left walking, and lateral right walking. Each walking trial lasted one minute at a self-selected speed. Additionally, for all trials, the subject began standing on the side rails of the treadmill and transitioned to the treadmill belt within the first five seconds. Only data after both feet have left the railing are used to evaluate gait. For normal walking, only nineteen subjects are analyzed due to missing marker data for one subject. For fast and lateral left walking, only nineteen subjects are analyzed due to obvious belt speed changes during the trial for one subject each; thus, these trials are not at constant speed. For slow walking, only eighteen subjects are analyzed for the following reasons. For one subject, the belt speed obviously changed during the trial. For a second subject, the walking speed was particularly slow (<0.2 m/s) and deemed an extreme outlier.
Kinematic Comparisons
To evaluate the performance of the ErKF method, we compare relevant kinematic measures (e.g., joint angles, stride lengths) estimated by the ErKF method to those estimated using two MOCAP-based methods; MOCAP is considered the gold standard for clinical gait analysis. Root mean square (RMS) differences between kinematic estimates from the ErKF method and each MOCAP method are calculated for each subject and trial. We report the mean and standard deviation of these RMS differences across all subjects and separately for each type of gait. Additionally, Bland-Altman plots [47] are used to assess agreement between the ErKF method and a MOCAP method for select metrics that can be obtained from joint angle waveforms (e.g., mean knee range of motion).
In order to facilitate direct comparison of the estimation methods, a similar underlying skeletal model (i.e., same segment lengths and joint center locations) is used for both IMU-based and MOCAP-based methods. OpenSim's Gait2354 model [48,49] is used as the base human skeletal model, but with the knee joint modified to allow three degrees of rotational freedom. Note, this model also allows three degrees of rotational freedom for the hip and two for the ankle. This skeletal model is scaled for each subject using a procedure detailed later in Section 2.4, "Calibration of ErKF and MOCAP models". All joint angles are calculated according to the ISB recommended conventions [50,51] with the modification proposed by Dabirrahmani and Hogg [52] and based on the anatomical frame conventions defined for the Gait2354 model.
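The per-subject, per-trial RMS difference used in the comparisons reduces to a one-line computation (assuming the two series are time-aligned and equal length; the function name is ours):

```python
import numpy as np

def rms_difference(x, y):
    # RMS difference between two equal-length time series (e.g., a joint
    # angle estimated by the ErKF method and by a MOCAP method)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sqrt(np.mean((x - y) ** 2)))
```

Identical series give 0; for example, series differing by [3, 4] over two samples give sqrt((9 + 16)/2) ≈ 3.54.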
In some trials, a simple offset (bias difference) is observed between the joint angles estimated from these two modalities (ErKF and MOCAP) despite otherwise highly consistent estimates of the underlying joint angle waveforms. Consequently, we also report the range of motion of the joint angles for each stride since (1) it is highly relevant for biomechanical studies (e.g., [53][54][55]), and (2) it is also a measure of consistency of the underlying waveforms. For each joint angle, the range of motion is calculated as the difference between the maximum and minimum value of the joint angle during that stride (i.e., between successive footfalls). Range of motion is not reported for any stride should any of the associated joint angle data be missing during that stride (e.g., due to marker occlusion). Additionally, if range of motion estimates are not reported for more than 30% of the strides during a trial, no summary statistics for range of motion are reported for that trial.
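A sketch of the per-stride range-of-motion computation described above, with NaN marking missing samples (e.g., from marker occlusion); the function name is ours:

```python
import numpy as np

def range_of_motion(angle, footfall_idx):
    # ROM per stride: max minus min of the joint angle between successive
    # footfalls; strides containing missing samples (NaN) are not reported.
    roms = []
    for i0, i1 in zip(footfall_idx[:-1], footfall_idx[1:]):
        seg = np.asarray(angle[i0:i1 + 1], dtype=float)
        roms.append(np.nan if np.isnan(seg).any() else seg.max() - seg.min())
    return np.array(roms)
```

For an angle trace [0, 5, 10, 2, 0, 8, 3, 1] with footfalls at samples 0, 4, and 7, the two strides have ROMs of 10 and 8 degrees.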
Similar to [37,56], stride length is calculated as the total horizontal displacement of the heel between consecutive footfalls of the same foot and step width is calculated as the orthogonal distance between the stride length vector and the heel location of the opposite footfall during the intermediate footfall. These definitions are illustrated in Figure 3 for forward, backward, and lateral walking. The first stride and last two strides represent transition strides during a trial, and they are not included in the reported stride length and step width results.
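The stride length and step width definitions above can be sketched as follows (heel positions in world coordinates; only the horizontal components are used, and the function name is ours):

```python
import numpy as np

def stride_metrics(heel_same_foot, heel_opposite_foot):
    # Stride length: horizontal displacement of the heel between consecutive
    # footfalls of the same foot. Step width: perpendicular horizontal
    # distance from the opposite heel (at the intermediate footfall) to the
    # stride length vector.
    s = np.asarray(heel_same_foot, dtype=float)[:, :2]   # two same-foot footfalls
    o = np.asarray(heel_opposite_foot, dtype=float)[:2]  # intermediate footfall
    stride_vec = s[1] - s[0]
    stride_length = np.linalg.norm(stride_vec)
    u = stride_vec / stride_length
    d = o - s[0]
    step_width = abs(d[0] * u[1] - d[1] * u[0])          # 2D cross-product magnitude
    return stride_length, step_width
```

For a 1 m forward stride with the opposite heel offset 0.2 m laterally at the intermediate footfall, this returns a stride length of 1.0 m and a step width of 0.2 m.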
Calibration of ErKF and MOCAP Models
Both the IMU (ErKF) and MOCAP-based methods require determination of the mapping between the skeletal model and the IMUs and markers, respectively. MOCAP data during the static calibration and star calibration movements are used to determine both mappings as follows.
Joint centers of the hips, knees, and ankles are estimated during the static calibration as follows. Hip joint centers are calculated following Hara et al. [57] using the two ASIS and two PSIS markers to determine the pelvic frame and the average distance between the ASIS and medial malleoli to determine the leg length. The knee joint center is estimated following Davis et al. [9] as the midpoint of the lateral and medial femoral epicondyle markers. The ankle joint center is estimated following the recommendation of Siston et al. [58] as the midpoint of the lateral and medial malleoli markers.
OpenSim is then used to scale the base skeletal model to each subject using marker data from the static calibration trial, including the appended joint center estimates. OpenSim's scale tool also determines the location of each marker in its parent segment's frame using the same static calibration MOCAP data and the scaled skeletal model. The three markers attached to each IMU (Figure 2) define an IMU cluster frame and are assigned to the IMU's respective parent segment. Thus, the rotation matrix from each segment's anatomical frame to the attached IMU's marker cluster frame, R_CA, is determined from the IMU marker locations in the parent segment's frame. To obtain the rotation matrix from the segment's anatomical frame to the IMU sense frame, the cluster to sensor frame rotation matrix for each segment, R_SC, must first be calculated. As done in [29], R_SC is computed using the procedure of Challis [59] and comparing the raw IMU angular velocity data to the estimated angular velocity of the cluster frame (i.e., calculated by differentiating the MOCAP-determined cluster orientations) using the data from the star calibration trial. The rotation matrix from the anatomical frame to the IMU sense frame, R_SA, is then calculated as

R_SA = R_SC R_CA

The location of each joint center in the IMU frame is determined from the static calibration trial as well as the location of heel markers in their respective foot IMU's frame. The rotation matrices between IMU and associated anatomical frames and the joint center locations in the IMU frames make up the sensor to segment alignment required for the seven-body ErKF method.
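The cluster-to-sensor rotation can be estimated from corresponding angular-velocity vectors via orthogonal Procrustes (Kabsch); this is a minimal sketch in the spirit of the Challis procedure, not a reproduction of it (the function name is ours):

```python
import numpy as np

def align_rotation(P, Q):
    # Least-squares rotation R such that Q_i ≈ R @ P_i for corresponding
    # rows of P and Q (e.g., angular velocities in the cluster frame and in
    # the IMU sense frame). Kabsch/orthogonal Procrustes via SVD.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # proper rotation
    return Vt.T @ D @ U.T
```

With noiseless, 3D-spanning vector sets related by a known rotation, the rotation is recovered exactly; with real angular-velocity data it returns the least-squares best fit.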
Note that, in this study, MOCAP data are utilized for these calibrations in the ErKF method; thus, it is not yet an IMU-only method. This decision was made for the following reasons. First, because both IMU- and MOCAP-derived joint kinematic estimates are impacted by the determination of the anatomical axes and joint center locations, the IMU and MOCAP methods are more directly compared by controlling for these parameters (i.e., using similar underlying skeletal models). Second, accurately determining these calibration parameters from IMU data alone remains an open challenge in its own right [38][39][40]. Thus, using MOCAP data to establish these calibration parameters enables more direct evaluation of the ErKF method itself (i.e., independent of the input parameters) by reducing the risk of obtaining inaccurate calibration parameters which may significantly affect kinematic estimates.
In summary, the calibration procedures described above establish the required mappings between sensors/markers and the underlying bones for both ErKF and MOCAP methods. More precisely, for the ErKF method, these mappings are required inputs for the joint center and joint axis measurement corrections and critical to estimating segment poses (and thus the lower-limb kinematics) from the estimated IMU poses. In the MOCAP methods, these mappings are critical to estimating the skeletal poses from individually tracked markers.
Estimated Kinematics from the ErKF Method
The ErKF method yields estimates of major kinematic variables including the three-dimensional angles across all six skeletal joints as well as the stride length and step width. The method begins with estimating the positions and orientations of the IMUs (and thus the seven body segments) throughout each trial. MOCAP data are used to estimate the initial pose of each IMU (after both feet are off the rails) for establishing the initial states of the seven body segments for the ErKF. As with the model calibration step, utilizing MOCAP data for these initial poses enables more direct comparison of the IMU and MOCAP-based methods by reducing errors in the initial pose estimates. Still periods for all IMUs are determined using the same criteria as for the experimental walker in [37], but with the angular velocity magnitude threshold set to 60 deg/s. Footfall instances are identified from IMU data during each detected stance. ZUPT and tilt measurement corrections are applied at identified footfall and still period instances, respectively. After IMU poses are estimated through the ErKF, the sensor to segment alignment parameters are utilized to estimate the segment orientations from estimated IMU poses throughout the trial, which are then used to estimate the three-dimensional joint angles across the hips, knees, and ankles. The joint angle estimates are then low-pass filtered using a zero-lag 4th order Butterworth filter at 6 Hz to parallel the filtering used for MOCAP estimates (described below), enabling direct comparison between the methods. To estimate stride metrics, each heel trajectory is estimated using its respective foot's estimated IMU pose combined with knowledge of the heel marker location with respect to the IMU frame (per the above calibration procedure). Estimated heel locations at identified footfalls are then used to calculate stride lengths and step widths as described in Section 2.3, "Kinematic comparisons".
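The still-period detection above is specified only by its angular velocity magnitude threshold (60 deg/s), with the full criteria following [37]. A simplified sketch is below; the minimum-duration requirement is our assumption for illustration, not a detail from the paper:

```python
import numpy as np

def find_still_periods(gyro, fs, thresh_dps=60.0, min_dur_s=0.05):
    """Flag samples where the angular velocity magnitude stays below a
    threshold for at least `min_dur_s` seconds.

    gyro: (N, 3) angular velocity in deg/s; fs: sample rate in Hz.
    Returns a boolean array of length N (True = still).
    """
    mag = np.linalg.norm(gyro, axis=1)
    below = mag < thresh_dps
    still = np.zeros_like(below)
    min_len = int(min_dur_s * fs)
    start = None
    # Keep only runs of below-threshold samples that are long enough
    for i, b in enumerate(np.append(below, False)):
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_len:
                still[start:i] = True
            start = None
    return still
```

In the ErKF, tilt corrections would then be applied at the samples flagged True, and ZUPT corrections at footfalls detected within stance.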
Estimated Kinematics from Two MOCAP Methods
The estimated kinematics from the ErKF method are compared to those estimated from MOCAP. Estimates of stride length and step width are obtained from heel marker locations at IMU-identified footfall instances. To reduce noise in the measured trajectories, heel marker trajectories in the lab frame are low-pass filtered using a zero-lag 4th order Butterworth filter at 6 Hz. Because some trajectories are missing data (i.e., due to marker occlusion), the filter is applied individually to each continuous segment of trajectory data. However, applying the filter to segments that are too short may lead to erroneous results; thus, continuous segments shorter than 0.2 s are removed. Finally, short gaps in marker trajectories (less than 0.1 s) are filled with cubic splines. Recall that the ErKF method estimates positions relative to the treadmill belt frame (due to the ZUPT measurement model), whereas the MOCAP method estimates positions in the lab frame. To compare the results from these two measurement modalities, the MOCAP-based trajectories are converted to the treadmill frame as follows. Because independent belt speed measurements are not available for all trials, the average velocity of the foot IMU markers during the first two stance phases is used as the estimated belt speed, which is assumed to remain constant. The distance the belt has traveled at each instant is then estimated by multiplying the estimated belt speed by time and that distance is added to the heel position (in the direction of travel) to estimate the heel trajectory in the belt frame. MOCAP-based stride lengths and step widths are then computed using the heel marker positions in the belt frame at the IMU-identified footfall instances.
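The lab-to-belt-frame conversion described above amounts to adding the belt's accumulated travel to each heel position. A minimal sketch follows; the axis convention and function name are ours:

```python
import numpy as np

def lab_to_belt_frame(heel_lab, t, belt_speed, travel_axis=0):
    """Convert a lab-frame heel trajectory to the (moving) treadmill-belt
    frame by adding the distance the belt has traveled along the
    direction of travel.

    heel_lab: (N, 3) heel positions in the lab frame [m]
    t: (N,) time stamps [s]
    belt_speed: assumed-constant belt speed [m/s]
    """
    heel_belt = heel_lab.copy()
    heel_belt[:, travel_axis] += belt_speed * (t - t[0])
    return heel_belt
```

For example, a heel that is stationary in the lab frame (as during stance on a treadmill) traces out the belt speed times elapsed time in the belt frame, which is what allows stride lengths to be recovered from treadmill data.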
Two different MOCAP-based methods are employed for estimating the joint angles during walking, using the markers shown in Figure 4. For both MOCAP-based methods, all IMU marker trajectories are low-pass filtered using a zero-lag 4th order Butterworth filter at 6 Hz. Due to marker occlusion, segments of missing marker data are removed and repaired as described above for the heel trajectories. Individual details of these two MOCAP-based methods follow. The first method, called the cluster method, provides estimates that more closely capture the motion of each IMU because it employs the marker data solely from the IMU-mounted markers (i.e., the marker clusters). In this method, the joint angles are estimated based solely on the estimated orientations of the IMU marker clusters. This method is expected to yield estimates nearer to the ErKF method because any soft tissue motion affects the motion of both the IMU and attached cluster markers equally. However, we do not expect them to be identical because this soft tissue motion does not affect each method equally (i.e., violations of rigid segment assumptions affect estimates differently in the two methods).
Whenever all three markers on an IMU are observable, their positions determine a cluster-based orientation of the IMU frame from which the corresponding body segment's orientation is estimated via the previously computed segment to cluster rotation matrix. Joint angles are then calculated from the estimated segment orientations. Note that if marker positional data of any of the six IMU markers adjacent to a joint are missing, no angles across the intervening joint can be calculated at that time step.
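One common way to build an orthonormal cluster frame from three non-collinear marker positions is via cross products. The convention below is illustrative only; the study's actual axis convention may differ:

```python
import numpy as np

def cluster_frame(p1, p2, p3):
    """Build a right-handed orthonormal frame from three non-collinear
    marker positions (one common convention, not necessarily the
    study's).

    Returns a 3x3 rotation matrix whose columns are the frame axes
    expressed in the lab frame.
    """
    x = p2 - p1
    x /= np.linalg.norm(x)
    v = p3 - p1
    z = np.cross(x, v)          # normal to the marker plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)          # completes the right-handed triad
    return np.column_stack([x, y, z])
```

With the cluster orientation in hand, the segment's anatomical orientation follows by applying the previously computed segment-to-cluster rotation matrix.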
Inverse Kinematics Method (IK)
The second method, called the inverse kinematics method, utilizes all marker locations shown in Figure 4 (i.e., both bony landmark and IMU markers) along with the scaled skeletal model (refer to Section 2.4, "Calibration of ErKF and MOCAP models") with its associated kinematic constraints to solve an inverse kinematics problem that estimates the lower-limb kinematics [49]. The inverse kinematics tool within OpenSim is utilized to solve for all segment orientations using all observed markers and the subject-specific skeletal model. To ensure good inverse kinematics solutions, marker weightings are chosen such that marker errors generally remain below 1 cm RMS error and 4 cm maximum lower-limb marker error for each trial per recommendations in the OpenSim documentation [60]. Next, the joint angles are calculated from these segment orientations. Finally, the joint angle estimates are low-pass filtered using a zero-lag 4th order Butterworth filter at 6 Hz.
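The "zero-lag 4th order Butterworth filter" used throughout can be realized with forward-backward filtering. One common reading, assumed here and not stated in the paper, designs a 2nd-order filter so that filtfilt's doubled effective order yields a 4th-order magnitude response:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_lag_lowpass(x, fs, fc=6.0, order=4):
    """Zero-phase low-pass filter via forward-backward filtering.

    filtfilt doubles the effective filter order, so designing at
    order // 2 yields an effective `order`-th order magnitude response.

    x: signal array; fs: sample rate [Hz]; fc: cutoff frequency [Hz].
    """
    b, a = butter(order // 2, fc / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)
```

Applying this to a 1 Hz gait-band signal contaminated by 40 Hz noise passes the gait band nearly unchanged while strongly attenuating the noise, with no phase lag in the output.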
Results across All Twenty Human Subjects
We first report the overall performance of the ErKF method across all twenty subjects and across all joints (hips, knees, and ankles) and all six types of gait before more closely examining representative results on a single subject in Section 3.2, "Representative results on a single subject" (refer to the Supplementary File S1 for results for each individual subject). To begin, we focus on the performance of the ErKF method compared to the cluster method for the joint angles. The RMS difference between the ErKF and cluster estimate of each joint angle (flexion/extension, FE; internal/external rotation, IE; abduction/adduction, AbAd; dorsiflexion/plantarflexion, DP; inversion/eversion, InEv; positive/negative reported values) is calculated for each subject and trial. Table 2 reports the mean and standard deviation of the RMS differences across all subjects and separately for each type of gait. The green, yellow, and red highlighting denotes mean RMS differences less than 5 deg, less than 10 deg, and greater than 10 deg, respectively. Note that mean RMS differences are generally less than 5 degrees for FE (DP for ankle) and AbAd (InEv for ankle) across all joints and across all types of gait. By contrast, mean RMS differences are typically higher for IE joint angles across all gait types, except for the ankle.
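The RMS difference metric used in Tables 2 and 3 can be computed directly from paired joint angle time series; a minimal sketch:

```python
import numpy as np

def rms_difference(a, b):
    """Root-mean-square difference between two equal-length joint angle
    series (e.g., ErKF vs. cluster estimates of hip FE, in degrees)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Note that a constant offset of c degrees between two otherwise identical waveforms yields an RMS difference of exactly |c|, which is why the near-constant IE offsets discussed later inflate RMS differences even when the waveform shapes agree.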
Next, we evaluate the performance of the ErKF method compared to the inverse kinematics method for joint angle estimates. Table 3 reports the RMS differences between the ErKF and inverse kinematics estimates of each joint angle across all subjects and separately for each type of gait. Note that mean RMS differences generally remain less than 5 degrees for AbAd (InEv for ankle) across all joints and across all types of gait. By contrast, mean RMS differences are typically higher for FE (DP for ankle) and IE joint angles across all gait types, except FE for the knee and IE for the ankle. Table 4 compares the range of motion estimates from the IMU ErKF method and MOCAP cluster method. This table reports the mean (and standard deviation) of the (stride by stride) RMS differences in range of motion for each joint angle across all subjects and separately for each type of gait. In the vast majority of trials, frequent marker occlusion (especially of shank IMU cluster markers) precluded estimates of knee and ankle range of motion using the cluster method. Thus, we only report range of motion differences for the hip joint angles here. While such occlusion also affected hip range of motion estimates using the cluster method, these estimates were successfully obtained for at least ten subjects for each hip and gait type (i.e., minimum of ten subjects represented in each entry of Table 4). Next, we compare the range of motion estimates of the ErKF method and the inverse kinematics method in Table 5. This table reports the mean (and standard deviation) of the (stride by stride) RMS differences in range of motion for each joint angle across all subjects and separately for each type of gait. Unlike the cluster method, the inverse kinematics method is capable of estimating joint kinematics even when some markers are occluded. Thus, range of motion is successfully estimated for all joint angles in each trial as reflected in Table 5.
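Stride-by-stride range of motion, as compared in Tables 4 and 5, can be sketched as the max-minus-min of the joint angle between consecutive footfalls of the same foot. This stride segmentation is our assumption of the paper's definition, for illustration:

```python
import numpy as np

def stride_rom(angle, footfall_idx):
    """Range of motion (max - min) of a joint angle within each stride,
    where a stride spans consecutive footfalls of the same foot.

    angle: (N,) joint angle series [deg]
    footfall_idx: sorted sample indices of footfalls for one foot
    Returns one ROM value per stride.
    """
    return np.array([np.ptp(angle[i0:i1 + 1])
                     for i0, i1 in zip(footfall_idx[:-1], footfall_idx[1:])])
```

Because max-minus-min is insensitive to a constant offset, two methods can agree closely on ROM even when their joint angle waveforms are offset from one another.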
We also compare key stride metrics, namely stride length (SL) and step width (SW), estimated by the ErKF method to those estimated using the MOCAP heel trajectories. Table 6 reports both the mean (and standard deviation) SL and SW from MOCAP as well as the mean (and standard deviation) of the RMS differences in SL and SW between the two methods across all subjects and separately for each gait type. Mean RMS differences in stride length are 0.07 and 0.05 m (6% and 3.4% of the mean), respectively for normal and fast walking. Mean RMS differences in stride length for both lateral walks also remain below 0.08 m (below 16% of the mean), noting the mean stride length is much smaller than for the forward walks. For normal and fast walking, mean RMS differences in step width are 0.05 m (~45% of the mean). Note that for both stride length and step width, mean RMS differences for forward and backward walking are much smaller for the faster gait speeds (>0.8 m/s) compared to the slower speeds (<0.5 m/s). Finally, Bland-Altman plots are used to evaluate the agreement between ErKF and IK estimates of two exemplary metrics that are derived from the joint angle waveforms. First, we present Bland-Altman plots for estimates of each subject's mean left knee FE range of motion in Figure 5 for normal ( Figure 5A) and lateral left walking ( Figure 5B). This metric is selected because FE range of motion is relevant in many clinical settings [61,62]. Both Figure 5A,B demonstrate good agreement between the ErKF and IK methods for estimating this metric despite the very different gait type represented in each. We also present Bland-Altman plots for estimates of each subject's mean left hip AbAd range of motion in Figure 6 for normal ( Figure 6A) and lateral left walking ( Figure 6B). 
This case represents an extreme condition because: (1) lateral left walking induces significantly more hip AbAd than normal walking and (2) the hip clearly does not act as a hinge during lateral left walking, but the ErKF method applies a "soft" hinge-like correction at all times. Even with these considerations, Figure 6 demonstrates close agreement of the left hip AbAd range of motion estimates in both movement conditions. Bias (thick, red) and 95% limits of agreement (thin, green) are also displayed. Limits of agreement calculated as Bias ± 1.96 SD where Bias is the mean and SD is the standard deviation of the differences in the estimates.
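The Bland-Altman quantities quoted above (bias and 95% limits of agreement, Bias ± 1.96 SD) can be computed as follows; the function name is ours:

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement for paired
    estimates of the same metric from two methods.

    Returns (bias, lower, upper) with limits = bias +/- 1.96 * SD,
    where SD is the sample standard deviation of the differences.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Here x and y would hold, for example, each subject's mean knee FE range of motion from the ErKF and IK methods, respectively; narrow limits of agreement around a small bias indicate the close agreement reported in Figures 5 and 6.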
Representative Results on a Single Subject
We next examine representative results on a single subject. These sample results are selected to highlight key points about the performance of the ErKF method in estimating joint kinematics. We start by comparing the differences (compared to the cluster method) in joint angle estimates for the left hip using raw integration of IMU data (no error models employed) versus application of the ErKF method (with all error models employed). We note that it is often recommended that a static period be utilized to correct for static bias in the angular rate signals to reduce the rate of orientation drift due to raw integration [63]; however, this strategy could not be employed in the present method because the validation data set does not include sufficiently long still periods to estimate the gyroscope bias. Despite this, the results in Figure 7 demonstrate no observable drift for the ErKF method in any of the three estimated joint angles for the left hip over the one-minute trial while raw integration (i.e., without the ErKF) results in differences greater than 20 deg over this same time due to drift. (Caption to Figures 5 and 6: Comparison is between IMU and IK methods (IMU-IK). Plot represents matched estimates for all 19 subjects evaluated in these movements. Bias (thick, red) and 95% limits of agreement (thin, green) are also displayed. Limits of agreement calculated as Bias ± 1.96 SD where Bias is the mean and SD is the standard deviation of the differences in the estimates.)
We next report in Figure 8 the estimated hip joint angles based on the ErKF (IMU) and cluster (MOCAP) methods for the same subject and trial considered previously in Figure 7, but now for both the left (Figure 8A) and right (Figure 8B) hips. Note that different sized offsets may occur depending on the hip (i.e., right versus left) and joint angle, but these offsets typically converge. Such offsets are commonly observed in this study, but vary in size depending on the subject and joint angle. Typically, the offsets (when they do arise) are smaller for FE and AbAd angles (<10 deg) than for IE angles (<30 deg) as shown in Figure 8B.
Despite such offsets, the joint angle waveforms are often similar between ErKF and MOCAP methods. For example, Figure 9 shows a similar waveform between the ErKF and cluster estimates of the right hip IE angle over two stride cycles (zoom in from Figure 8B).
Finally, we examine the performance of the ErKF method for the other types of gait included in the study. Figure 10 illustrates a side-by-side comparison of the left hip joint angle trajectory estimates for (Figure 10A) normal treadmill walking and (Figure 10B) walking laterally left using all three estimation methods (ErKF, Clust, IK) for the same representative subject. As described above, these two gaits induce limiting case motions of the hip with normal walking inducing predominantly hip FE and lateral walking inducing predominantly hip AbAd. The left hip is chosen for this subject (same considered above) to evaluate the method without the confounding effects of likely sensor to segment alignment errors for the right hip (refer to Section 4.6, "Factors leading to abnormally poor estimates"). Refer to Appendix A for example joint angle estimates over two stride cycles for all six gait types for the same representative subject.
Discussion
This paper extends the ErKF method developed in [37] for a three-body mechanical walker to a full seven-body model of the human lower limbs and then evaluates its performance across a large range of gait types. The results demonstrate that the ErKF method estimates important kinematic parameters comparable to those estimated using two different MOCAP methods across six different walking gaits.
ErKF Estimates of Instantaneous Joint Angles
In general, the method yields small mean RMS differences (<5 deg in 94% of the cases) in estimates of FE and AbAd joint angles (DP and InEv for the ankle) across all joints and gait types with low offsets when compared to the cluster method (refer to Table 2). Higher offsets typically occur in IE joint angle estimates; however, these higher IE offsets vary greatly between subjects as evidenced by the high standard deviations reported in Table 2 for IE angles compared to the other joint angles. Additionally, these higher IE offsets generally stabilize over the one-minute trials to near-constant values with the remaining joint angle waveform well-estimated; see, for example, Figure 9. Importantly, the differences in range of motion estimates remain small (generally <5 deg) between the ErKF and the MOCAP methods, even for the IE angles for the hips and knees (refer to Tables 4 and 5). While RMS differences in IE joint angle estimates are generally higher than the other degrees of freedom (refer to Table 2), this trend appears typical of IMU-based methods for the lower limbs, perhaps due to low signal-to-noise ratio in the transverse plane kinematics [18]. Thus, researchers should exercise caution when interpreting IE estimates obtained from IMU-based methods (including the method presented in this paper), especially when RMS differences in these estimates are large compared to the expected ranges of motion. The differences in estimates of the joint angle between the ErKF and inverse kinematics (MOCAP) methods are generally greater than those between the ErKF and cluster methods (refer to Table 3). This likely arises from several factors including the movement of the underlying bones relative to the sensors/markers.
ErKF Estimates of Stride Parameters
Mean RMS differences in stride length remain below 0.08 m for normal, fast, and both lateral walking gaits with mean RMS differences in step width of 0.05 m for normal and fast walking (refer to Table 6). Importantly, recall that MOCAP estimates of stride length in this study rely on accurate estimates of belt speed. While not reported here, we observe that differences between belt speeds estimated via MOCAP (as used in this paper) and those reported by the treadmill (which were not available for all trials) are often on the order of 0.03 m/s. Thus, errors in MOCAP-estimated belt speed may account for a significant portion of differences in estimated stride length between the ErKF method and MOCAP in this study. Additionally, we observe that trials with low offsets in joint angles typically exhibited smaller differences in stride metric estimates. Thus, similar error sources may be responsible for poor stride metric and joint angle estimates. Such error sources include soft tissue artefacts, sensor to segment alignment errors, MOCAP errors, and/or the imposition of the hinge-like joint measurement.
Utility of ErKF Joint Angle Range of Motion Estimates
In addition to comparing joint angle estimates, we also compare joint angle range of motion estimates between the ErKF and MOCAP methods. Range of motion, and particularly changes in range of motion, have demonstrated significance in biomechanical studies in a variety of contexts, as we emphasize by citing four examples. Devita and Hortobagyi [64] evaluate hip and ankle flexion range of motion during stance for walking and find significant differences between elderly and young populations (3.3 deg for hip and −2.7 deg for ankle, elderly−young). Qu and Yeo [55] observe that fatigue ultimately increases the sagittal-plane hip and knee range of motion by an average of 1.3 and 1.9 deg, respectively, in their study. Sofuwa et al. [65] examine differences between cohorts with and without Parkinson's disease and conclude that the "healthy" cohort exhibits 4.8 and 4.0 deg greater ankle DP range of motion during the push-off and swing phases, respectively. Carmo et al. [61] contrast post-stroke patients and healthy controls and find that knee flexion range of motion is 17.4 and 20.0 degrees lower for stroke patients' affected side compared to their unaffected side and healthy controls, respectively. These example studies highlight the important need to measure joint range of motion for biomechanical studies. Consequently, we anticipate that the ErKF method may prove valuable for future studies, particularly if one can also establish the measurement resolution of range of motion and changes in range of motion. While the present study was not designed to establish measurement resolution, the favorable comparisons across the methods may indeed suggest sufficient resolution for biomechanical studies, as explained next.
The ErKF method demonstrates average RMS differences for hip FE, IE, and AbAd range of motion less than 2, 3, and 4 deg, respectively compared to the cluster method during normal walking (refer to Table 4; recall cluster estimates of range of motion are not available for comparison on the knees and ankles due to frequent marker occlusion). Importantly, note that range of motion can often be estimated similarly between ErKF and both MOCAP methods even in the presence of systematic offsets between the ErKF-and MOCAP-estimated joint angle waveforms (see for example, Figures 9 and 10). Further note that these waveform estimates are more commonly similar in joint degrees of freedom that dominate a particular gait (for example, hip FE for normal walking, hip AbAd for lateral walking; refer to Figure 10). Comparing the ErKF method to the inverse kinematics method, range of motion differences are slightly higher than compared to the cluster method, but still generally below 5 degrees across joints and gait types (refer to Table 5). However, note that the generally larger differences in Table 5 (comparison to inverse kinematics method) versus those of Table 4 (comparison to cluster method) may result from differences between the two MOCAP-based methods themselves (i.e., offsets between Clust and IK estimates; refer to Figure 10). Additionally, the larger differences in ranges of motion for the ankle angles versus those for the knee and hip may derive from increased complexity of the ankle joint in the Gait2354 model used for the IK estimates versus the simpler model used in the ErKF and cluster methods. Nevertheless, the low RMS differences in range of motion estimates between the ErKF method and MOCAP methods (Tables 4 and 5) establish that the ErKF method yields very similar estimates of range of motion compared to MOCAP. 
Importantly, note also that the differences between the ErKF and MOCAP methods in this study are similar to and often smaller than the changes in range of motion observed in the studies highlighted above. This fact supports the claim that the ErKF method may possess sufficient resolution in range of motion estimates to support meaningful biomechanical studies outside (and within) the laboratory.
Further, in clinical settings, typically specific mean ranges of motion (i.e., averaged across a trial) are of greater interest than stride-by-stride changes in range of motion. Thus, Bland-Altman plots are used to evaluate the agreement between ErKF and MOCAP estimates of mean range of motion for two exemplary joint angles, namely the left knee FE angle (Figure 5) and the left hip AbAd angle (Figure 6). These plots demonstrate that the ErKF method agrees well with the inverse kinematics method for estimates of mean range of motion for some joint angles across two very different gait types (normal walking versus lateral left walking). These results motivate future investigation into clinical applications where the ErKF method may prove valid.
Comparison of ErKF Method to other IMU-Based Methods
The differences between ErKF and MOCAP-based estimates of joint angles and ranges of motion in this study are comparable to prior IMU-based methods that focused on normal walking, as detailed below. Importantly though, the present method advances well beyond the prior methods in: (1) estimating three-dimensional joint angles across all lower-limb joints, (2) succeeding over a wide variety of gait types (beyond normal walking), and/or (3) eliminating reliance on prior assumptions (e.g., a clean magnetic field). For example, Adamowicz et al. [29], who employ an overlapping data set with that used in this study, present a method specific for the hip joint and observe mean RMS differences in hip FE, IE, and AbAd of 8.6, 10.0, and 8.0 deg, respectively during normal walking when compared to a similar cluster-based MOCAP method. The present ErKF method demonstrates smaller mean RMS differences for hip FE, IE, and AbAd of 2.4, 7.4, and 3.8 deg, respectively for normal walking (refer to Table 2). However, caution should be exercised when comparing these results since the method of Adamowicz et al. is an "IMU-only" method versus the present study which leverages MOCAP-based initializations. Weygers et al. [24] develop a method specific for the knee joint and observe RMS differences from MOCAP less than 5 deg during walking for knee Euler angles (as opposed to anatomical angles). While acknowledging the differences in Euler angles versus anatomical angles, similar differences (<5 deg) arise in the ErKF method for the knee during normal walking for FE and AbAd, but not for IE (6.6 deg); refer to Table 3. Importantly, these prior studies offer "single-joint" (two-body) methods as opposed to the multi-joint (seven-body) method developed herein.
The present ErKF method also compares well with prior multi-joint methods while removing assumptions and/or reliance on magnetometer data. Teufl et al. develop a seven-body model of the lower limbs and compare IMU-derived results with those from MOCAP using both cluster and inverse kinematics methods for normal speed overground walking [22,56] and for short dynamic movements [28]. For normal walking, they observe smaller RMS differences in joint angles than reported in this study. For example, in [22] they observe RMS differences in FE (or DP), IE, and AbAd (or InEv) across all joints up to 1.6, 2.3, and 1.6 deg, respectively, compared to a similar cluster method and up to 5.4, 5.5, and 4.2 deg, respectively, compared to a MOCAP method relying on bony landmarks. In the current study, the analogous RMS differences are up to 4.2, 8.8, and 4.6 deg, respectively, compared to the cluster method (Table 2) and up to 7.2, 8.6, and 4.7 deg, respectively, compared to the inverse kinematics method, which primarily relies on bony landmarks (Table 3). They also report stride length and step width estimates during walking [56], having RMS differences of 0.04 m and 0.03 m, respectively, compared to 0.07 m and 0.05 m in this study; refer to Table 6. Note that in [22,56], Teufl et al. studied overground walking and thus their results are unaffected by errors in MOCAP-estimated belt speed encountered herein. Additionally, their method relies critically on a level-ground assumption (for both footfall identification and to correct drift errors) which is not assumed in the present ErKF method. Thus, while the present method can be used on level or uneven terrain (i.e., outdoor environments), the method of [22] is restricted to level terrain (i.e., likely restricted to indoor, single-level environments).
Additionally, the process model in [22] assumes constant linear acceleration and angular velocity (with IMU measurements being used in the measurement model) and thus may not be suited for more dynamic movements. Finally, their method is not evaluated for any gait types other than normal speed walking, in contrast to this study, which examines six types of gait. McGrath and Stirling [23] develop a seven-body method for lower-limb kinematic estimation. However, they evaluate it solely for knee FE, yielding mean RMS differences of 4.3 deg compared to a MOCAP-based inverse kinematics method and using a specific set of calibration motions designed to excite all lower-limb degrees of freedom (i.e., different from the walking gaits used for the ErKF method in this paper). As shown in Table 3, the ErKF method yields similar differences with MOCAP for knee FE (4.1 deg), although for a different set of movements. Zhang et al. [66] validate the performance of a commercial Xsens system for walking and stair ascent/descent. They report mean and standard deviation joint angle differences up to 5.1 and 4.2 deg, respectively, for walking (depending on the joint). However, they do not report RMS differences and so direct comparisons cannot be made to this study. Nüesch et al. [67] validate the commercial RehaGait system for treadmill walking and report RMS differences of 9.6, 7.6, and 4.5 deg for FE of the hip, knee, and ankle, respectively. Compared to this study, the ErKF method demonstrates superior or comparable FE estimates for all joints, with RMS differences of 7.1, 4.1, and 5.3 deg for the hip, knee, and ankle, respectively (refer to Table 3). As a further distinction, we note that commercial systems generally also employ magnetometer data (not used by the ErKF) and rely on proprietary algorithms; thus, it is difficult to evaluate their limitations.
Performance of Novel Hip "Soft" Hinge Correction
Recall that an important contribution of the present ErKF method is the novel "soft" hinge correction applied to the hip at all time points (but with a large measurement uncertainty to acknowledge that the hip is often not acting as a pure hinge) to aid in constraining orientation integration drift. In applying this correction, it is critical to assess whether the ErKF yields accurate hip joint angle estimates, particularly the hip AbAd angle, for movements where the hip manifestly does not act as a hinge (e.g., lateral walking). In Tables 2-5, we do not observe any clear degradation in estimates of the instantaneous hip AbAd angle or its range of motion between normal walking (predominantly FE) and lateral walking (predominantly AbAd). Further, in Figure 10, observe the remarkably close agreement between the ErKF and cluster estimates of the left hip joint angles for both normal walking and lateral left walking for a representative subject, especially for the FE and AbAd angles. Finally, Figure 6 shows Bland-Altman plots of the mean left hip AbAd range of motion over a trial for normal walking (Figure 6A) and lateral left walking (Figure 6B). Notice the close agreement in these estimates for both movement conditions. These results confirm the success of the novel hip soft hinge constraint, as the hip joint angles are estimated consistently between the methods even for lateral walking gait, where the hip rotation is dominated by AbAd and not FE.
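The "soft" constraint idea can be illustrated with a minimal scalar Kalman measurement update, assuming the out-of-plane rotation is treated as a pseudo-measurement of zero: a large measurement variance R yields a small gain, so the hinge assumption only gently nudges the state. This is an illustrative sketch with hypothetical numbers, not the actual multi-state ErKF implementation.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: prior estimate x with
    variance P is corrected by a (pseudo-)measurement z with
    variance R. Larger R -> smaller gain -> 'softer' correction."""
    K = P / (P + R)           # Kalman gain
    x_new = x + K * (z - x)   # corrected estimate
    P_new = (1.0 - K) * P     # reduced uncertainty
    return x_new, P_new, K

# Hypothetical drifted out-of-plane angle estimate (deg) and variance
x, P = 5.0, 4.0
# Pseudo-measurement z = 0 encodes "rotation occurs about the hinge
# axis only". A near-hard hinge uses a small R; a soft hinge (as used
# here for the hip) uses a large R so non-hinge motion is tolerated.
x_hard, _, K_hard = kalman_update(x, P, 0.0, 0.1)   # strong pull to 0
x_soft, _, K_soft = kalman_update(x, P, 0.0, 40.0)  # gentle pull
```

The soft update leaves most of a genuine AbAd excursion intact while still counteracting slow integration drift, which is consistent with the behavior observed for lateral walking.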
Factors Leading to Abnormally Poor Estimates
While MOCAP data are used in the present ErKF method in part to reduce the risk of obtaining poor sensor to segment alignment parameters and initializations, we emphasize that MOCAP does not yield ground truth estimates. Thus, large differences between ErKF and MOCAP estimates may be the result of errors in either (or both) method types. Figure 8 provides evidence that errors in the sensor to segment alignment (obtained using MOCAP data) can significantly impact estimated joint angles; however, the ErKF method is capable of overcoming such errors in estimating several key metrics. We arrive at this conclusion from the following observations. For the left hip (Figure 8A), observe the excellent agreement between the ErKF and cluster method for all three joint angles over the entire trial (as expected from Figure 7). However, for this very same subject and trial, the right hip (Figure 8B) exhibits an offset between the two methods. This offset develops when the joint center measurement is utilized for a trial, likely for two reasons. First, the offset is influenced by the joint center measurement, which relies on accurate estimates of the joint center locations in the IMU frames (obtained from the sensor to segment alignment) to accurately enforce the kinematic constraints. Second, given that this offset can vary between the hip joints for the same subject and trial, it likely originates from errors in the sensor to segment alignment parameters (determined using MOCAP data), as they are distinct for the two hip joints.
To further support this claim (i.e., that the observable differences likely result from MOCAP-aided sensor to segment alignment rather than from the ErKF method), note that the MOCAP-based FE estimates for the right hip appear to be qualitatively incorrect: they oscillate between different extremes than the left hip, and the peak extension for each stride (local minima in FE) is much larger than expected for healthy subjects in normal walking [8]. These obvious errors likely stem from errors in marker placement and model scaling assumptions, thus resulting in sensor to segment alignment errors. However, because no sensor to segment alignment method is immune to errors, it is critical that the ErKF method still provide meaningful estimates even in the presence of such errors. Thus, we do not exclude any trials from our analysis for obvious errors in sensor to segment alignment. Additionally, observe that despite these errors, estimates from the ErKF method do not rapidly diverge like estimates using raw integration (refer to Figure 7). Thus, errors in sensor to segment alignment may still lead to converged estimates, only with systematic offsets between methods, since these errors affect both ErKF and MOCAP estimates (albeit differently). In rare cases, the differences in the joint angle waveforms between methods did not converge to a steady offset over the one-minute trials. However, we observed that these cases of larger drift arise for specific subjects rather than for specific gait types, further indicating that poor sensor to segment alignment is likely the primary cause.
We duly note that a minority of the kinematic estimates in this study are poorer for specific gaits. For example, while we observe overall excellent estimates of ankle, knee, and hip joint angles across gait types, the hip IE estimates exhibit much higher differences for slow, backward, and lateral walking than for the faster (>0.8 m/s) forward walking trials (refer to Tables 2 and 3). We observe this same trend for estimates of stride length and step width (i.e., smaller differences for the two faster forward walks than for the other gaits; refer to Table 6). However, while not reported here, we also observe that these cases of larger mean RMS differences for specific gaits are largely attributable to certain subjects who consistently exhibited higher differences in estimated joint angles and stride metrics across gait types compared to others. This suggests that subject-specific systematic errors (i.e., in sensor to segment alignment, marker placement, and joint center locations) may be the primary cause behind these rare, but poorer, kinematic estimates. Thus, better methods for sensor to segment alignment may yield significant improvements to the results presented here. Finally, we note the poorer stride metric estimates for slow walking compared to the other gaits; refer to Table 6. However, this gait is much slower (<0.5 m/s) than is typical of most populations, and the larger differences may be associated with lower signal-to-noise ratios in the IMU data (assuming the same sensor hardware selection). Despite the exceptions duly noted above, the ErKF method provides kinematic estimates that closely replicate those from MOCAP across a broad range of gait types. Additionally, the method does not rely on "laboratory-like" assumptions (e.g., level ground), nor does it rely on magnetometer data (susceptible to pollution by magnetic interference in both indoor and outdoor environments).
Limitations
Several limitations exist with the current method. First, MOCAP data are used to determine sensor to segment alignment; thus, it is not yet an "IMU-only" method. However, determining sensor to segment alignment for wearable IMUs represents a major research topic in itself [38][39][40], and promising existing methods could be incorporated into the ErKF method to yield an IMU-only method. Another promising solution is to expand the ErKF method into a "self-calibrating" method that estimates the sensor to segment alignment parameters simultaneously with the kinematic estimates (i.e., as part of the full state). As described in Section 4, while this study does not assume laboratory-like conditions (e.g., level ground), the method is evaluated only on data from treadmill walking trials. Future studies should examine the accuracy of the method on uneven terrain, including outdoor environments. In the present study, trials are only one minute in length, and future work should evaluate the method on longer trials to ensure that drift errors remain constrained over long times (e.g., hours). The accuracies of joint angle estimates are compared to two different MOCAP-based methods to evaluate the performance of the ErKF method. Comparison against the MOCAP cluster method enables evaluation of the ErKF method without some of the soft tissue artefacts. However, we emphasize that errors in sensor to segment alignment parameters (including those due to marker misplacement and inaccurate joint center location estimates) and movement of the IMU relative to the underlying bone all affect the method. Thus, errors in MOCAP marker placement will affect all three estimation methods used in this study. Similarly, the current methods rely on static estimates of joint centers (i.e., estimation from marker locations and anthropometrics).
These methods rely on new subjects having similar characteristics to those used to determine the estimation equations and then on accurate marker placement (as emphasized above). While the present study demonstrates good accuracy for select kinematic metrics, future studies should evaluate accuracy for other relevant biomechanical metrics (e.g., segment velocities). Finally, note that comparisons between this study and other methods are inherently difficult due to a multitude of differences in experimental hardware and procedures, including marker placement, recruited subject populations, MOCAP reference systems, IMU hardware selection (see [26]), tasks performed, study design, and data processing techniques. Thus, future comparison of filtering methods alone will require commonly shared data sets, including those containing highly accurate ground truth data for reference.
Conclusions
In this study, a novel ErKF method is extended to a full (seven-body) model of the lower limbs for human subjects. Doing so brings new challenges including: (1) increasing the degrees of freedom, (2) characterizing complex biological (versus mechanical) joints (e.g., joint center location and sensor to segment alignment), and (3) accommodating soft tissue artefacts. Importantly, this paper contributes a novel application of the joint axis measurement correction in the ErKF for the hip and knee to reduce angle drift errors. In contrast to previous IMU-based joint axis correction methods (specific to the knee), this new method reduces drift errors without assuming strict hinge-like behavior during certain times. Thus, the correction herein is also applicable to the hip (but with different measurement noise parameters). Significantly, this work validates the ErKF method on human subjects walking with six different gait types including forward walking (at slow, normal, and fast speeds), backward walking, and lateral walking (both left and right). The method's demonstrated agreement (compared to MOCAP) in estimating joint angles, joint angle range of motion, stride length, and step width across all six gait types studied with healthy subjects (including forced abnormal gaits) motivates its future evaluation on subjects with gait abnormalities (e.g., due to injury and/or disease). In particular, for all gait types studied, RMS differences between ErKF and MOCAP cluster-based joint angle estimates generally remain below 5 degrees for all three ankle joint angles and for flexion/extension and abduction/adduction of the hips and knees. Additionally, RMS differences between ErKF and MOCAP inverse kinematics estimates in (stride-to-stride) range of motion generally remain below 5 degrees for all hip and knee joint angles (with slightly higher differences for the ankle joint angles) and across all gait types.
Finally, mean RMS differences between ErKF and MOCAP estimates for both stride length and step width remain below 0.13 m across all gait types (except stride length for slow forward walking) and below 0.07 m for the two fastest walking gaits (>0.8 m/s). The overall comparability between estimates obtained using the ErKF method and MOCAP confirms the significant promise of the ErKF method for non-laboratory-based biomechanical studies of the human lower limbs in broad contexts.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of The University of Vermont (protocol code #08-0518, initial approval 24 May 2018).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: All relevant data can be found in the Supplementary spreadsheet file, File S1.
Conflicts of Interest: RDG and RSM receive funding for consulting with the University of Washington. RSM reports stock ownership in Epicore Biosystems, Inc.; Impellia, Inc.; and Allostatech, LLC. RSM reports research funding from MC10, Inc.; Epicore Biosystems, Inc.; the US National Science Foundation; and the US National Institutes of Health. RSM also receives funding from consulting for HX Innovations Inc. and Happy Health Inc. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. This material is based upon work supported under Grant NNX15AP86H received by RDG. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
Appendix A. Example Trajectories for Six Walking Gaits
Genetic regulation of l-tryptophan metabolism in Psilocybe mexicana supports psilocybin biosynthesis
Background Although Basidiomycota produce pharmaceutically and ecologically relevant natural products, knowledge of how they coordinate their primary and secondary metabolism is virtually non-existent. Upon transition from vegetative mycelium to carpophore formation, mushrooms of the genus Psilocybe use l-tryptophan to supply the biosynthesis of the psychedelic tryptamine alkaloid psilocybin with the scaffold, leading to a strongly increased demand for this particular amino acid, as this alkaloid may account for up to 2% of the dry mass. Using Psilocybe mexicana as our model and relying on genetic, transcriptomic, and biochemical methods, this study investigated whether l-tryptophan biosynthesis and degradation in P. mexicana correlate with natural product formation. Results A comparative transcriptomic approach of gene expression in P. mexicana psilocybin non-producing vegetative mycelium versus producing carpophores identified the upregulation of l-tryptophan biosynthesis genes. The shikimate pathway genes trpE1, trpD, and trpB (encoding anthranilate synthase, anthranilate phosphoribosyltransferase, and l-tryptophan synthase, respectively) were upregulated in carpophores. In contrast, genes idoA and iasA, encoding indole-2,3-dioxygenase and indole-3-acetaldehyde synthase, i.e., gateway enzymes for l-tryptophan-consuming pathways, were massively downregulated. Subsequently, IasA was heterologously produced in Escherichia coli and biochemically characterized in vitro. This enzyme represents the first characterized microbial l-tryptophan-preferring acetaldehyde synthase. A comparison of transcriptomic data collected in this study with prior data of Psilocybe cubensis showed species-specific differences in how l-tryptophan metabolism genes are regulated, despite the close taxonomic relationship.
Conclusions The upregulated l-tryptophan biosynthesis genes and, oppositely, the concomitantly downregulated genes encoding l-tryptophan-consuming enzymes reflect a well-adjusted cellular system that routes this amino acid toward psilocybin production. Our study has pilot character beyond the genus Psilocybe and provides, for the first time, insight into the coordination of mushroom primary and secondary metabolism. Supplementary Information The online version contains supplementary material available at 10.1186/s40694-024-00173-6.
Introduction
The Basidiomycota have collectively evolved a prolific specialized, so-called secondary metabolism. These pathways elaborate a rich and structurally diverse repertoire of bioactive natural products, among them toxicologically, pharmaceutically, or ecologically relevant molecules [1]. Ubiquitous compounds of the central or primary metabolism, such as acetyl-CoA or amino acids, serve as precursors to supply the main building blocks to the biosynthesis pathways [2,3]. Generally, primary metabolism uses salvage pathways to regenerate metabolites, whereas secondary metabolism culminates in accumulated or secreted end products. Therefore, upon eliciting natural product pathways, the demand for the precursors increases massively, which implies a well-adjusted interplay between primary and secondary metabolism. However, knowledge of how basidiomycetes coordinate their primary and secondary metabolism is very limited.
Mushrooms of the basidiomycete genus Psilocybe, notorious for their perception-altering effects [4][5][6], produce psilocybin, which serves as prodrug for psilocin, the psychotropic and chemically reactive dephosphorylated follow-up compound (Fig. 1). Psilocybin biosynthesis is initiated by l-tryptophan decarboxylation, mediated by the decarboxylase PsiD [7]. The activity of this metabolic pathway depends on the developmental stage and increases strongly upon fructification, which, in return, is triggered by light [8,9]. Consequently, during carpophore formation, the demand for l-tryptophan increases drastically, given that psilocybin accounts for up to 2% of the mushroom dry mass [10][11][12][13][14][15]. In P. cubensis, the psiD gene is 395-fold upregulated when mushroom primordia are formed [7,8]. However, the adjustment of metabolic pathways supplying or degrading l-tryptophan is unknown, and it has remained unclear how the fungus meets the demand when psilocybin production sets in.
Aromatic l-amino acids are biosynthesized by the shikimate pathway [16]. From the intermediate chorismate, the anabolism of l-tryptophan branches off by anthranilate synthesis, catalyzed by TrpE (Fig. 1 and Additional file 1: Figure S1). Three further reactions ultimately lead to the formation of l-tryptophan to supply protein biosynthesis and other pathways that require tryptophan and that represent tryptophan sinks, besides psilocybin assembly. For example, indole-2,3-dioxygenases (IDOs) initiate the pathway to 3-hydroxyanthranilate via kynurenine as the starting point for nicotinamide metabolism [17]. Likewise, indole acetaldehyde synthase depends on l-tryptophan supply (Fig. 1). In this study, we present a transcriptomic analysis of P. mexicana with particular emphasis on genes involved in the l-tryptophan metabolism. We investigated how the genes of the tryptophan branch of the shikimate pathway are regulated along with genes encoding IDOs as well as an indole-3-acetaldehyde synthase. The latter was recombinantly produced and biochemically characterized to verify its activity, given that microbial indole-3-acetaldehyde synthases have not been investigated yet.

Fig. 1 Selected pathways and enzymes of the tryptophan metabolism in P. mexicana. Tryptophan catabolism occurs via the kynurenine pathway, psilocybin biosynthesis, and aromatic acetaldehyde synthesis. Indole-3-acetaldehyde was reduced to tryptophol in vitro by adding NaBH4.
Transcriptomic analysis of P. mexicana
For insight into the regulation of tryptophan biosynthetic genes, a transcriptomic study was performed. First, we needed to design a robust experimental set-up to compare psilocybin-producing and non-producing conditions. Previous investigations of dried P. mexicana sclerotia and carpophores determined psilocybin contents up to 0.65% and 0.39%, respectively [13,18]. Prior efforts to optimize media usually aimed at increased psilocybin concentrations [19]. We systematically tested various media and found FB3G medium suitable for comparison, as vegetative mycelium grown in this medium was virtually free of psilocybin, whereas BNM medium stimulated psilocybin production (Additional file 1: Figures S2 and S3; media composition described in the methods section) [19]. Consequently, comparative RNA-Seq was performed with RNA samples isolated from vegetative mycelium, grown either in FB3G or BNM medium, and from P. mexicana carpophores. Overall, 289,463,012 reads yielding over 86 Gb of sequence data were obtained, with a mean quality score of 35.57. Details of the DESeq2 analysis are shown in Additional file 1: Figures S4-S10; the numbers of up- and downregulated genes (threshold criteria: log2-fold change > |1| and adjusted p-value (padj) < 0.05) are provided in Additional file 1: Table S1.
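The threshold criteria stated above can be applied to a per-gene results table as in this minimal sketch. The p-values here are hypothetical placeholders; the log2 fold changes echo values reported in the text.

```python
# Minimal sketch of applying the differential-expression threshold
# criteria (|log2 fold change| > 1 and adjusted p-value < 0.05) to a
# per-gene results table.
results = [
    # (gene, log2_fold_change, p_adj)
    ("trpD",   3.39, 1e-12),
    ("psiD",   7.40, 1e-30),
    ("idoA",  -8.45, 1e-25),
    ("trpE2", -0.75, 0.03),   # fails the fold-change threshold
    ("geneX",  2.10, 0.20),   # hypothetical gene; fails the p-value cut
]

def passes_thresholds(log2fc, p_adj, fc_cut=1.0, p_cut=0.05):
    """True if a gene counts as differentially expressed."""
    return abs(log2fc) > fc_cut and p_adj < p_cut

de_genes = [g for g, lfc, p in results if passes_thresholds(lfc, p)]
# de_genes -> ['trpD', 'psiD', 'idoA']
```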
Differential expression of genes for l-tryptophan anabolism
We first investigated genes implicated in tryptophan anabolism, a generally well understood process in model organisms such as yeast and Aspergilli [20,21]. The conversion of chorismate to anthranilate and further to l-tryptophan is catalyzed by the combined action of four mono- or multifunctional enzymes that form a branch of the shikimate pathway (Additional file 1: Figure S1). These include (i) anthranilate synthase TrpE as the first enzyme of the branch, (ii) anthranilate phosphoribosyltransferase TrpD, (iii) TrpC, a tri-functional enzyme providing glutamine amidotransferase (G domain), phosphoribosyl anthranilate isomerase (F domain), and indole-3-glycerol phosphate synthase (C domain) activities, and finally (iv) the homodimeric tryptophan synthase TrpB featuring an α- and a β-domain per monomer [22]. Prior to investigating the transcriptional dynamics, the respective genes needed to be identified in the genome of P. mexicana. Therefore, BLAST analyses were performed with annotated fungal tryptophan pathway genes [23] (Additional file 1: Table S2). In fact, pronounced transcriptional changes were found when comparing the data of FB3G mycelium (psilocybin biosynthesis suppressed) with the carpophore samples (psilocybin biosynthesis induced; Fig. 2, Additional file 1: Figure S11 and Table S3) for the expression of the genes putatively encoding TrpE, TrpD, and TrpB. These were strongly upregulated in carpophores (trpE1: 2.7-fold; trpD: 10.5-fold; trpB: 8.8-fold; the corresponding log2-fold values are 1.45, 3.39, and 3.14). A gene putatively encoding a second anthranilate synthase, TrpE2, was only minimally downregulated (1.7-fold), which may reflect the frequently observed phenomenon of multiple (yet possibly non-functional) alleles of biosynthetic genes encoded in basidiomycete genomes [24][25][26]. With a 1.9-fold upregulation, the transcriptional activity of the trpC gene changed to a lesser degree. Still, the more strongly upregulated tryptophan biosynthesis genes trpE1, trpD, and trpB are consistent with the increasing demand for l-tryptophan in carpophores when psilocybin biosynthesis sets in.
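The linear fold changes and log2-fold values quoted here are related by fold = 2^|log2fc| (the sign of log2fc gives the direction); a quick check of the reported pairs:

```python
def log2_to_fold(log2fc):
    """Convert a log2 fold change to the magnitude of linear up- or
    downregulation quoted in the text (the sign gives direction)."""
    return 2.0 ** abs(log2fc)

# Pairs reported in the text: trpE1 (log2 1.45 -> ~2.7-fold up),
# trpD (3.39 -> ~10.5-fold up), and, in the following section,
# idoA (-8.45 -> ~350-fold down).
fold_trpE1 = log2_to_fold(1.45)   # ~2.7
fold_trpD = log2_to_fold(3.39)    # ~10.5
fold_idoA = log2_to_fold(-8.45)   # ~350
```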
Differential expression of genes for l-tryptophan-converting enzymes
Subsequently, we analyzed the genes encoding key enzymes that convert l-tryptophan (Fig. 1). Aromatic acetaldehyde synthases (AASs) draw on the l-tryptophan pool by producing indole-3-acetaldehyde in a single combined decarboxylation/deamination step. Likewise, indoleamine-2,3-dioxygenases (IDOs) degrade l-tryptophan as they catalyze the oxidative cleavage of the pyrrole ring to yield N-formylkynurenine, thereby supplying various pathways with substrate, among them one leading to 3-hydroxyanthranilic acid and nicotinamide/NAD+. In fact, the expression of putative genes for an acetaldehyde synthase (IasA) as well as for IDOs was downregulated in mushrooms (iasA: eight-fold; idoA: 350-fold; idoC: 1.7-fold). The corresponding log2-fold changes are −3.0, −8.45, and −0.76, respectively (Additional file 1: Table S3). A pathway-specific l-tryptophan decarboxylase is the gateway enzyme of psilocybin biosynthesis [7] and, thus, represents an l-tryptophan sink as well. In contrast to the downregulated genes for IDOs and IasA, the psiD gene encoding this decarboxylase [27] was 170-fold upregulated in carpophores. The latter value confirms previous findings for P. cubensis psiD, which is massively expressed in primordia and carpophores as well [8]. To confirm the RNA-Seq data, the expression of these genes was independently investigated by qRT-PCR, which yielded fully congruent results (Fig. 3). Collectively, these findings further support the notion that l-tryptophan-related genes are regulated in a fashion to supply PsiD with a maximum quantity of this aromatic amino acid upon beginning psilocybin production in carpophores. Generally, the comparison between the three conditions (carpophores, and mycelium grown in BNM and FB3G media; Additional file 1: Table S3, Figure S11) also underlines and confirms the relevance of medium composition and developmental stage for psilocybin content.
Characterization of P. mexicana IasA
Aromatic aldehyde synthases (AASs) and aromatic amino acid decarboxylases (AAADs) share common ancestry and, consequently, very similar amino acid sequences. The decision between the two catalytic activities (decarboxylation and oxidative deamination by AASs versus decarboxylation by AAADs) is primarily mediated by one signature amino acid residue located in the large loop close to the active site (phenylalanine for AASs, tyrosine for AAADs) [28][29][30]. The amino acid sequence alignment of P. mexicana IasA with previously described AASs and AAADs identified a phenylalanine residue at position 329, which points to a function as acetaldehyde synthase (Additional file 1: Figure S12). To confirm the catalytic activity, IasA was heterologously produced and assayed in vitro. The enzyme is encoded by a 2064 bp gene, which is interrupted by ten introns between 50 and 62 bp in length. The fully spliced iasA reading frame is 1503 bp long and encodes a 500 aa protein with a predicted mass of 55.9 kDa. The amino acid sequence of P. mexicana IasA is 80% identical and 85% similar to that of the P. cubensis l-3,4-dihydroxyphenylacetaldehyde synthase PcDHPAAS (AYU58583) (Additional file 1: Table S4). To produce recombinant enzyme, the P. mexicana iasA cDNA was cloned to create expression plasmid pPS66, which was used to transform E. coli KRX. IasA was produced as a 56.9 kDa C-terminally tagged hexahistidine fusion protein (Additional file 1: Figure S13) and purified by metal affinity chromatography. Size exclusion chromatography with urea-denatured IasA resulted in a single symmetrical peak at an elution volume of 13.4 mL (Additional file 1: Figure S14), which is consistent with the calculated monomeric mass (56.9 kDa). When native protein was loaded, IasA eluted as a single peak at 14.4 mL, corresponding to the size of a homodimer (Additional file 1: Figure S14). This result is consistent with previously described homodimeric AAADs and AASs [30]. When the in silico modeled structure of P. mexicana IasA was superimposed with the experimentally determined protein structure of Arabidopsis thaliana phenylacetaldehyde synthase (PDBe 6eei [30]), a high degree of structural similarity was found (Additional file 1: Figure S15). Subsequently, the enzymatic activity of IasA was assayed in PLP-containing sodium phosphate buffer (pH 7.5) and the product detected with Brady's reagent [31]. Substrates tested included l- and d-configured tryptophan, 4-hydroxy-l-tryptophan, 5-hydroxy-l-tryptophan, l-tyrosine, l-phenylalanine, l-histidine, and 3,4-dihydroxy-l-phenylalanine (l-DOPA). Reactions with heat-inactivated enzyme were used as negative controls. IasA accepted l-tryptophan and its hydroxy derivatives (Fig. 4), while d-tryptophan was only minimally turned over and l-histidine was not accepted at all. As l-tryptophan most likely represents the physiologically relevant substrate, its turnover was set to 100%. The highest turnover was found with 5-OH-l-tryptophan (132%), while l-DOPA, l-phenylalanine, and l-tyrosine were turned over to a lesser extent (68, 61, and 43%, respectively). This substrate profile distinguishes IasA from PcDHPAAS, which was previously described as an l-3,4-dihydroxyphenylacetaldehyde synthase [32]. Optimum turnover with IasA occurred at pH 9.0 in TRICIN buffer (Additional file 1: Figure S16) within a temperature plateau of 30-34 °C (Additional file 1: Figure S17). To verify indole-3-acetaldehyde as the IasA product, the reactions were treated with sodium borohydride, which reduces the aldehyde to tryptophol. In the reactions, but not in the controls, a new chromatographic signal appeared at the same retention time as the synthetic tryptophol standard (tR = 3.9 min, Fig. 5) with the matching mass-to-charge ratio (m/z 162.1 [M + H]+). Therefore, we unambiguously identified P. mexicana IasA as an indole-3-acetaldehyde synthase, which represents the first characterized microbial acetaldehyde synthase accepting l-tryptophan as main substrate.
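The relative turnover values reported here (l-tryptophan set to 100%) follow from a simple normalization; the raw activity readings below are hypothetical and chosen only to reproduce the reported percentages.

```python
def relative_turnover(raw, reference="l-Trp"):
    """Normalize raw substrate turnover readings so the value for
    the reference substrate (here l-tryptophan) equals 100%."""
    ref = raw[reference]
    return {k: round(100.0 * v / ref) for k, v in raw.items()}

# Hypothetical raw readings (arbitrary units)
raw = {"l-Trp": 0.500, "5-OH-l-Trp": 0.660,
       "l-DOPA": 0.340, "l-Phe": 0.305, "l-Tyr": 0.215}
rel = relative_turnover(raw)
# rel -> {'l-Trp': 100, '5-OH-l-Trp': 132, 'l-DOPA': 68,
#         'l-Phe': 61, 'l-Tyr': 43}
```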
Comparison of indoleamine-2,3-dioxygenases
The second gene whose transcription decreases as psilocybin is produced encodes an indoleamine-2,3-dioxygenase (IDO). Typically, the Agaricomycotina encode three types of IDOs (a-c) that share a common phylogenetic origin. However, some of the genes can be absent or duplicated, depending on the species [33], and variation occurs even within the genus Psilocybe. Both P. cubensis and P. mexicana each encode one IdoA (type a) and one IdoC (type c) enzyme. However, unlike P. cubensis, the sister species P. mexicana lacks genes for IdoB enzymes (type b, Additional file 1: Figure S18). P. mexicana IdoA and IdoC are equivalent to the P. cubensis counterparts (Additional file 1: Table S5). In contrast, P. cubensis encodes two type b IDOs, whose genes were found upregulated in carpophores. Some fungal representatives, i.e., type c IDOs, show very low catalytic activity, and their metabolic role is still unclear [33]. We suggest that it is IdoA in P. mexicana that is primarily involved in l-tryptophan metabolism, as it is downregulated up to 350-fold under psilocybin production conditions (Additional file 1: Table S3, Figures S11 and S18). This transcriptional pattern correlates with the demand for l-tryptophan when psilocybin biosynthesis begins.
Differential expression of tryptophan metabolism genes in Psilocybe spp
The transcriptional dynamics of pertinent genes in P. mexicana carpophores was compared with prior data from P. cubensis mushrooms [32]. Surprisingly, and in contrast to P. mexicana, most of the investigated P. cubensis genes (Additional file 1: Table S6) related to l-tryptophan metabolism showed only marginal up- or downregulation. The transcriptional changes of the genes coding for the tryptophan biosynthesis enzymes TrpE, TrpD, TrpC, and TrpB, the indoleamine-2,3-dioxygenases IdoA, IdoB1, and IdoC, and the aromatic acetaldehyde synthase PcDHPAAS range between −2.1-fold and +2.9-fold (log2-fold −1.1 and +1.6, Fig. 6, Additional file 1: Table S7). However, both species showed pronounced regulation of psiD (54-fold and 170-fold for P. cubensis and P. mexicana, respectively; log2-fold values: 5.8 and 7.4). Another putative indoleamine-2,3-dioxygenase gene in P. cubensis, referred to as idoB2 and for which no homolog exists in P. mexicana, was found to be 78-fold upregulated in P. cubensis carpophores (log2-fold 6.3), whereas both of the investigated ido genes of P. mexicana were downregulated. The expression pattern of the homologous genes encoding aromatic acetaldehyde synthases (PcDHPAAS in P. cubensis, log2-fold +1.6; and iasA in P. mexicana, log2-fold −3.0) also diverges between the two investigated representatives of the Psilocybe genus. The oppositely regulated enzymes PcDHPAAS in P. cubensis and IasA in P. mexicana likely reflect the respective substrate preferences. Without downregulation, the latter enzyme would compete with PsiD for its substrate, while the substrate of the former enzyme, l-DOPA, does not interfere. Hence, regulation of PcDHPAAS does not need to be adjusted relative to the l-tryptophan-requiring enzyme PsiD.
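The fold-change and log2-fold values quoted above are interconvertible; as a quick consistency check (values taken from the text):

```python
import math

# Fold changes quoted in the text; their log2 transforms should match
# the reported log2-fold values
fold_changes = {
    "psiD (P. cubensis)": 54,    # reported log2-fold 5.8
    "psiD (P. mexicana)": 170,   # reported log2-fold 7.4
    "idoB2 (P. cubensis)": 78,   # reported log2-fold 6.3
}

for gene, fc in fold_changes.items():
    print(f"{gene}: {fc}-fold = log2-fold {math.log2(fc):.1f}")
```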
Discussion
To ensure an adequate supply of building-block substrates and cofactors for enzymatic reactions, natural product pathways root closely in the cell's central metabolism. The specialized purpose of the often bioactive and highly functionalized natural products, along with the demand for substrates of the central metabolism, requires that their assembly is a genetically tightly regulated process. Previous research predominantly emphasized ascomycetes and identified various levels of regulation. These include epigenetic modification as well as pathway-specific and global transcriptional control, e.g., by the prototypical pathway-specific regulator AflR for aflatoxin biosynthesis, the global regulator LaeA, or the regulatory circuits around penicillin biosynthesis [34][35][36][37][38]. Little is known about natural product pathway regulation in basidiomycetes, yet a correlation of blue light exposure and posttranscriptional regulation by light-dependent splicing has been shown [39].
Metabolic flux is a second important aspect of how central and secondary metabolism interface and contribute to regulation. Penicillin biosynthesis is arguably among the most prominent and best investigated examples. The analysis of central and amino acid metabolism in Penicillium chrysogenum revealed that the metabolic flux toward l-cysteine and l-valine strongly increases under penicillin production conditions to supply these amino acids as pathway substrates. Furthermore, an increased flux through the citric acid cycle and the pentose phosphate pathway was observed, supplying the energy-intensive synthetase reaction with ATP and the NADPH-intensive l-cysteine biosynthesis with reduction equivalents [40]. Likewise, production of the pharmaceutically invaluable polyketide lovastatin was enhanced in a genetically engineered Aspergillus terreus [41]. By overexpressing the gene for the acetyl-CoA carboxylase in A. terreus, an increased malonyl-CoA supply was offered to the lovastatin polyketide synthases, resulting in enhanced product titers. This substantial body of research on metabolic flux toward important ascomycete products is contrasted by our only rudimentary knowledge for basidiomycetes. For these, it has remained largely obscure how natural product pathways are regulated and how the substrate supply is optimized to support a particular pathway. In the case of psilocybin, an interplay between primary metabolism and natural product biosynthesis has been reported for P. cubensis [8]. Adenosine kinase AdoK and S-adenosyl-l-homocysteine hydrolase (SahH) directly or indirectly remove the methyltransferase-inhibiting second product S-adenosyl-l-homocysteine and regenerate S-adenosyl-l-methionine (SAM), hence supporting the SAM-dependent methyl transfer as the final biosynthetic step. However, little is known about how the supply and degradation of the substrate l-tryptophan is genetically regulated, except for the gene encoding the previously characterized tryptophan synthase TrpB [22], which is six-fold upregulated in carpophores of P. cubensis compared to vegetative mycelium [8]. Furthermore, regulators that bind to the promoters of l-tryptophan pathway and catabolic genes are unknown for the genus Psilocybe. In the medicinal mushroom Ganoderma lucidum, the basic leucine zipper (bZIP) transcription factor GCN4 serves as a master regulator for amino acid biosynthesis [42], which confirms earlier findings with Saccharomyces cerevisiae and Aspergilli, where cpcA is the gene homologous to S. cerevisiae GCN4 and, e.g., controls trpB expression [20,43,44]. P. mexicana encodes three genes homologous to GCN4. Only one of these (Additional file 1: Sequence data 1) showed an increase of transcription (log2-fold value 2.1) under psilocybin-producing conditions, which might point to a function in upregulating amino acid metabolism. However, regulatory mechanisms other than those on the transcriptional level appear possible as well, for example, import into the nucleus [45,46], posttranslational modification [47], or alternative splicing [39], although our P. mexicana transcriptomic data did not indicate the presence of differently spliced mRNA populations of the investigated genes. Hence, future work needs to establish the regulatory mechanism(s) of amino acid metabolic genes in Psilocybe.
In addition to analyzing anabolism and substrate supply, our study design also covered catabolism, which revealed the role of IasA, the indole-3-acetaldehyde synthase of P. mexicana. A similar enzyme, PcDHPAAS of P. cubensis, was previously characterized but found to prefer l-DOPA over l-tryptophan as substrate [32]. This finding underscores, once more, that subtle yet relevant differences exist between these closely related species and their enzymatic repertoires. Investigation of IasA is warranted for two reasons. First, it represents the first characterized microbial indole-3-acetaldehyde synthase. Second, it may play a role in chemical ecology, as it catalyzes a key reaction toward indole-3-acetic acid. This microbial, insect, and auxin-type plant signal compound mediates interspecies interactions and insect gall formation [48,49].
In conclusion, our results help to understand the regulation of primary metabolism around tryptophan levels to optimize psilocybin-related secondary metabolic processes in P. mexicana. This knowledge will support efforts to control and increase the psilocybin content in mushrooms grown in certified facilities for legitimate purposes without any genetic manipulation. As mushrooms are notoriously difficult to modify genetically, and given the status of psilocybin as a candidate drug to potentially treat major depressive disorder, the outcome of our study may promote biotechnology with Psilocybe. Beyond this particular metabolite and genus, our current work has pilot character as it addresses, for the first time, how mushrooms match primary and secondary metabolism through a coordinated regulation of anabolic and catabolic routes.
Materials and general procedures
Chemicals, media ingredients, and solvents were purchased from Carl Roth, Sigma-Aldrich, and VWR. Oligonucleotides were synthesized by Integrated DNA Technologies and are listed in Additional file 1: Tables S8 and S9. Restriction enzymes were purchased from NEB. Procedures to handle and modify DNA (extraction from agarose gels, restriction, dephosphorylation, ligation, and plasmid isolation) followed the manufacturers' instructions (Macherey-Nagel, NEB).
Microbial strains and growth conditions
Psilocybe mexicana SF013760 was maintained on malt extract peptone (MEP) agar plates (per liter: 30 g malt extract, 3 g peptone, 18 g agar, pH 5.5). To collect biomass from liquid cultures for nucleic acid extraction, P. mexicana was cultivated for 7 days in liquid MEP medium at 25 °C and 140 rpm. To find conditions suitable for RNA-Seq analysis, P. mexicana was precultured in 450 mL FB3G medium (per liter: 10 g malt extract, 10 g glucose, 5 g yeast extract, 3 g peptone, 0.1 g KH2PO4, pH 5.5) for 7 days at 21 °C and 180 rpm. The preculture was dispersed, and 10 mL each were used to inoculate 150 mL of different media. Selected media were: FB3G, MEP, BNM (as described in [19]), FB5B (similar to BNM but with d-glucose increased to 7.5 g and 6 g d-galactose per liter as an additional carbon source), and FB3B (similar to FB5B but with yeast extract increased to 5 g per liter). The cultivation was continued for 7 days at 21 °C and 180 rpm in sextuplicate. Carpophore formation was induced as described [50]. Fungal biomass was collected, filtered through Miracloth (Merck), washed with water if harvested from a liquid culture, shock-frozen in liquid nitrogen, and lyophilized prior to nucleic acid or metabolite extraction. Escherichia coli KRX (Promega) was used for routine cloning, plasmid propagation, and heterologous production of IasA. For cultivation of E. coli, LB medium (per liter: 5 g yeast extract, 10 g tryptone, 10 g NaCl, and 18 g agar if applicable) supplemented with 50 µg mL−1 kanamycin sulfate was used. For heterologous production, 2× YT medium (per liter: 10 g yeast extract, 20 g tryptone, 5 g NaCl) was used instead of LB medium.
Nucleic acid isolation, first strand synthesis and qRT-PCR
Genomic DNA was isolated following a described protocol with a slight modification (isopropanol instead of ethanol precipitation) [51]. RNA isolation, reverse transcription, and qRT-PCR were performed as described [8,52,53]. The housekeeping reference gene enoA, encoding enolase, served as internal standard. Oligonucleotides with a primer efficiency of at least 90% were used for qRT-PCR (Additional file 1: Table S8). Gene expression levels were determined as described [54].
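The quantification method of reference [54] is not reproduced here; as one common approach, relative expression with a reference gene such as enoA can be computed via the 2^-ΔΔCt scheme (assuming ~100% primer efficiency; the Ct values below are hypothetical, purely for illustration):

```python
def log2_fold_ddct(ct_target_carp, ct_ref_carp, ct_target_myc, ct_ref_myc):
    """log2-fold change via 2^-ΔΔCt, assuming ~100% primer efficiency.
    Positive values mean upregulation in carpophores."""
    delta_carp = ct_target_carp - ct_ref_carp  # normalize to reference in carpophores
    delta_myc = ct_target_myc - ct_ref_myc     # normalize to reference in mycelium
    return -(delta_carp - delta_myc)           # log2(2^-ΔΔCt) = -ΔΔCt

# Hypothetical Ct values: target gene and enoA reference in both tissues
print(log2_fold_ddct(20.0, 18.0, 26.0, 18.5))  # → 5.5 (upregulated in carpophores)
```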
RNA-Seq of P. mexicana
RNA was isolated from three biological replicates of P. mexicana grown in BNM and FB3G liquid medium, as well as from carpophores produced in an axenic laboratory culture. RNA-Seq and parts of the bioinformatic analysis, including the differential gene expression analysis, were performed by GENEWIZ. Sequences of 2 × 150 bp paired-end reads were generated on an Illumina NovaSeq platform. Sequence fastq files were trimmed using Trimmomatic (v0.36) [55] and mapped to the respective genome (GenBank: GCA_023853805.1) using the STAR aligner (v2.5.2b) [56]. Unique gene hit counts were calculated using featureCounts [57] from the Subread package (v1.5.2) [58]. Differential gene expression analysis was performed using DESeq2 [59]. log2-fold changes and p-values were generated by applying the Wald test [60]. The Benjamini-Hochberg method [61] was used to calculate adjusted p-values. Trinity (v2.13.2) was used for RNA-Seq de novo assembly applying the standard settings [62,63].
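The Benjamini-Hochberg adjustment applied to the p-values can be sketched as a minimal step-up implementation (DESeq2 performs this internally; shown here only to illustrate the procedure):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    prev = 1.0
    # work from the largest rank down, enforcing monotonicity of adjusted values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.005]))
```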
Expression analysis of P. cubensis RNA-Seq raw reads with Geneious Prime software
The raw data published by Torrens-Spence et al. [32] (NCBI SRA: SRR7028478 and SRR7028479) were mapped to the P. cubensis genome (GenBank: GCA_017499595.2). The expression levels were calculated and compared with the Geneious method to measure differential expression. As a result, log2-fold change values and p-values were obtained (Fig. 6, Additional file 1: Table S7).
Heterologous production of IasA
The iasA coding sequence was PCR-amplified (Additional file 1: Table S10, PCR method A) from P. mexicana cDNA using oligonucleotides oPS628/629 (Additional file 1: Table S9). The agarose gel-purified fragment was ligated to the NcoI-XhoI-restricted and dephosphorylated (QuickCIP, NEB) plasmid pET28a using the NEBuilder HiFi DNA Assembly Cloning Kit (NEB) to yield expression plasmid pPS66. Correct assembly of insert and vector was verified by colony PCR (Additional file 1: Table S10, PCR method B), analytical restriction digests, and DNA sequencing (GENEWIZ Inc.). IasA was produced in E. coli KRX × pPS66 essentially as described [27]. The protein was concentrated on an Amicon Ultra-15 centrifugal filter and eluted with 50 mM sodium phosphate buffer (pH 7.5). Protein concentrations were determined using the Pierce BCA Protein Assay Kit (Thermo). Protein production was verified by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) (Additional file 1: Figure S13).
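The ~1 kDa difference between the native (55.9 kDa) and the tagged protein (56.9 kDa) is consistent with a short C-terminal extension. As a rough check, assuming the standard pET28a XhoI junction appends Leu-Glu followed by six His residues (the exact junction sequence is an assumption here, not stated in the text):

```python
# Average residue masses in Da (amino acid residues, i.e., minus water)
RES = {"L": 113.1594, "E": 129.1155, "H": 137.1411}

tag = "LE" + "H" * 6  # assumed C-terminal tag from the pET28a XhoI/His6 junction
added_mass = sum(RES[aa] for aa in tag)
print(f"added mass ≈ {added_mass / 1000:.2f} kDa")  # ≈ 1.07 kDa, matching 55.9 → 56.9
```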
In vitro aldehyde formation assays
Aldehyde formation by IasA was monitored using a photometric assay with Brady's reagent (2,4-dinitrophenylhydrazine, 2,4-DNPH) [31]. As described in [72], the freshly prepared detection solution consisted of 0.1% (w/v) 2,4-DNPH dissolved in MeOH with 1% (v/v) sulfuric acid. Following a 20 h incubation at 25 °C, each enzymatic reaction was stopped with 100 µL of ice-cold detection solution, equal to the reaction volume. Product formation was detected photometrically by measuring the absorption at λ = 500 nm (with 800 nm as reference wavelength) in a CLARIOstar plate reader (BMG LABTECH). Control reactions without substrate, without enzyme, with neither substrate nor enzyme, or with heat-inactivated enzyme were run in parallel. The assay was performed twice in triplicate in 50 mM sodium phosphate buffer (pH 7.5) with 1 mM of the respective substrate, 0.1 mM pyridoxal 5′-phosphate (PLP), and hexahistidine-tagged IasA at a final concentration of 13 µM.
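The relative turnover values reported in the results (l-tryptophan set to 100%) follow from blank-corrected absorbances (A500 − A800) with the heat-inactivated control subtracted. A sketch with hypothetical readings (the numbers below are illustrative, not measured data):

```python
# Hypothetical (A500, A800) readings for reactions and heat-inactivated controls
readings = {
    "L-tryptophan":      ((0.92, 0.05), (0.12, 0.04)),
    "5-OH-L-tryptophan": ((1.14, 0.05), (0.11, 0.04)),
    "L-tyrosine":        ((0.48, 0.05), (0.10, 0.04)),
}

def blank_corrected(pair):
    a500, a800 = pair
    return a500 - a800  # subtract the reference wavelength

# Subtract the heat-inactivated control, then normalize to L-tryptophan = 100%
signal = {s: blank_corrected(rxn) - blank_corrected(ctrl)
          for s, (rxn, ctrl) in readings.items()}
reference = signal["L-tryptophan"]
relative = {s: 100.0 * v / reference for s, v in signal.items()}
print(relative)
```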
UHPLC-MS analysis of tryptophol formation in vitro
The assays were performed in triplicate at 25 °C for 20 h in 50 mM sodium phosphate buffer (pH 7.5) with 1 mM l-tryptophan, 0.1 mM pyridoxal 5′-phosphate (PLP), and hexahistidine-tagged IasA at a final concentration of 1 µM in a final volume of 50 µL. Reactions with heat-inactivated enzyme served as negative controls. To analyze aldehydes reliably by high-performance liquid chromatography (HPLC), every reaction was stopped with 200 µL of sodium borohydride-saturated ethanol solution for reduction [29,30,73]. Formic acid (250 µL, 0.8 M) was added after 5 min of incubation at room temperature to decompose remaining borohydride and to adjust to an acidic pH (pH 4 to 5). Reactions were frozen in liquid nitrogen and subsequently lyophilized. The samples were dissolved in 200 µL methanol and centrifuged (10 min, 20,000 × g), and the supernatants were chromatographically analyzed by measuring areas under curves (AUCs) of extracted ion chromatogram (EIC) peaks. To determine optimal reaction conditions, the incubation time was shortened to 2 h and the final concentration of enzyme was increased to 2 µM. The pH was varied between 5 and 11 (5.0 to 6.5 in citrate, 6.0 to 8.0 in sodium phosphate, 7.5 to 9.0 in TRICIN, 8.5 to 10.0 in CHES, and 9.5 to 11.0 in CAPS buffers) and the temperature was varied between 14 and 50 °C (TRICIN, pH 9.0).
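The m/z 162.1 observed for the tryptophol [M + H]+ ion can be verified from the molecular formula (C10H11NO) and monoisotopic masses:

```python
# Monoisotopic atomic masses (Da) and the proton mass
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}
PROTON = 1.007276

def mz_mh(formula):
    """[M+H]+ m/z for a neutral molecule given as {element: count}."""
    neutral = sum(MASS[el] * n for el, n in formula.items())
    return neutral + PROTON

tryptophol = {"C": 10, "H": 11, "N": 1, "O": 1}
print(f"[M+H]+ = {mz_mh(tryptophol):.4f}")  # ≈ 162.09, matching the observed m/z 162.1
```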
Size exclusion chromatography
To verify that IasA is a homodimer, fast protein liquid chromatography (FPLC, Äkta Pure 25, GE Healthcare) equipped with a Superdex 200 Increase 10/300 GL column with a 24 mL bed volume was used. Binding and elution were performed at a flow rate of 0.5 mL min−1 (i) with 50 mM sodium phosphate, 150 mM NaCl, pH 7.2, or (ii) with additional 6 M urea (denaturing conditions). Chromatograms were recorded at λ = 280, 340, and 400 nm.
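Mass estimates from elution volumes rely on a column calibration, typically a linear fit of log10(mass) against elution volume (or Kav). A minimal sketch of the principle; the standards and volumes below are hypothetical, not the calibration used in this study:

```python
import math

# Hypothetical calibration standards: (elution volume in mL, mass in kDa)
standards = [(12.2, 158.0), (13.6, 75.0), (15.2, 29.0), (16.5, 13.7)]

# Least-squares fit of log10(mass) against elution volume
xs = [v for v, _ in standards]
ys = [math.log10(m) for _, m in standards]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def mass_at(ve):
    """Estimated molecular mass (kDa) at elution volume ve (mL)."""
    return 10 ** (intercept + slope * ve)

print(f"estimated mass at 14.0 mL: {mass_at(14.0):.0f} kDa")
```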
Chemical synthesis of tryptophol
The synthesis of tryptophol (2-(indol-3-yl)ethanol) was performed as described [74]. NMR spectroscopic data are listed in the supplementary material; 1H and 13C NMR spectra are shown in Additional file 1: Figures S19 and S20.
Liquid chromatography and mass spectrometry
Methanol extracts of in vitro experiments with IasA were subjected to UHPLC-MS analysis on an Agilent 1290 Infinity II instrument interfaced to an Agilent 6130 single quadrupole mass detector operated in alternating positive/negative mode. The chromatograph was fitted with an Ascentis Express F5 column (100 × 2.1 mm, 2.7 μm particle size). Separation was performed at 35 °C. Solvent A was 0.1% formic acid in water; solvent B was methanol. A linear gradient at a flow rate of 0.4 mL min−1 was applied: within 8 min from 10 to 100% B, held for 2 min at 100% B. Diode array detection was performed between λ = 200 and 600 nm. Chromatograms were extracted at λ = 205, 224, 254, 269, and 280 nm. To analyze methanolic extracts of P. mexicana mycelium, the same instrument, equipped with a Luna Omega Polar C18 column (50 × 2.1 mm, 1.6 μm particle size), was used. Solvent A was 0.1% formic acid in water; solvent B was acetonitrile. The flow rate was 1 mL min−1. The gradient was: initially 1% B, increased to 5% B within 3 min, then to 100% B within a further 1 min, held at 100% B for 2 min. Chromatograms were extracted at λ = 254 and 280 nm.
Fig. 3 Expression analysis of selected genes involved in tryptophan metabolism in P. mexicana based on qRT-PCR results. The analysis compared mycelium grown submerged in FB3G medium with carpophores. Shown values represent log2-fold changes (positive if genes are upregulated in carpophores) and standard deviations of means (n = 3). The values are normalized to the expression of enoA (encoding enolase) as a control gene. Color coding: green, tryptophan biosynthesis; orange/brown, tryptophan degradation; blue, psilocybin biosynthesis; maroon, aromatic acetaldehyde synthesis
Fig. 4 Substrate specificity of P. mexicana IasA. Photometric detection of hydrazone formation from IasA-produced aldehydes and 2,4-dinitrophenylhydrazine (2,4-DNPH). Absorption was measured at λ = 500 nm and 800 nm (reference wavelength). The value of the heat-inactivated control thus obtained was subtracted from the respective value of the reactions with native enzyme. The experiment was performed with two biological replicates and three technical replicates each. Mean values and standard deviations are shown