[Dataset record (pes2o/s2orc), id 3925817: added 2014-10-01T00:00:00.000Z; created 2012-04-19T00:00:00.000; year 2012; open access (GOLD, CC-BY), pdf_src PubMedCentral; oa_url: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0035203&type=printable; fields of study: Chemistry, Medicine, Engineering]
Development of a Tetrameric Streptavidin Mutein with Reversible Biotin Binding Capability: Engineering a Mobile Loop as an Exit Door for Biotin
A novel form of tetrameric streptavidin has been engineered to have reversible biotin binding capability. In wild-type streptavidin, loop3–4 functions as a lid for the entry and exit of biotin. When biotin is bound, interactions between biotin and key residues in loop3–4 keep this lid in the closed state. In the engineered mutein, a second biotin exit door is created by changing the amino acid sequence of loop7–8. This door is mobile even in the presence of the bound biotin and can facilitate the release of biotin from the mutein. Since loop7–8 is involved in subunit interactions, alteration of this loop in the engineered mutein results in an 11° rotation between the two dimers in reference to wild-type streptavidin. The tetrameric state of the engineered mutein is stabilized by a H127C mutation, which leads to the formation of inter-subunit disulfide bonds. The biotin binding kinetic parameters (koff of 4.28×10−4 s−1 and Kd of 1.9×10−8 M) make this engineered mutein a superb affinity agent for the purification of biotinylated biomolecules. Affinity matrices can be regenerated using gentle procedures, and regenerated matrices can be reused at least ten times without any observable reduction in binding capacity. With the combination of both the engineered mutein and wild-type streptavidin, biotinylated biomolecules can easily be affinity purified to high purity and immobilized to desirable platforms without any leakage concerns. Other potential biotechnological applications, such as development of an automated high-throughput protein purification system, are feasible.
Introduction
Wild-type streptavidin is a tetrameric protein with four identical subunits. Each subunit has a biotin binding pocket and can bind biotin tightly with a dissociation constant (Kd) of around 10−14 M [1]. This binding is considered to be irreversible and has been applied in a wide range of biomedical and biotechnological applications [2,3]. However, the tight biotin binding also makes streptavidin unsuitable for affinity purification of biotinylated molecules. It would be ideal to develop engineered streptavidin muteins with reversible biotin binding capability. These engineered muteins can be applied to purify biotinylated molecules, develop automated high-throughput protein purification systems, reusable biosensor chips and bioreactors, study protein-protein interactions and design strippable probing agents (e.g. engineered muteins conjugated to horseradish peroxidase) for blot reprobing.
To understand the strategies applied in engineering streptavidin with reversible biotin binding ability, it is vital to understand the structural features of streptavidin, its biotin binding pocket and the subunit interfaces. Each streptavidin subunit contains eight antiparallel strands that form a β-barrel structure [4,5]. Two of these subunits (A and B as well as C and D in Fig. 1b) have extensive interfacial interactions to form a relatively stable dimer. Two dimers then assemble into a tetramer via a weaker interface. Although each subunit can bind a biotin molecule, each of the four complete biotin pockets in the tetramer relies on the donation of Trp-120 [6,7,8] located in loop 7-8 from the neighboring subunit (e.g. subunit A needs Trp-120 from subunit D and vice versa, Figure 1b). Exceptionally tight biotin binding in streptavidin is mainly contributed by three sets of interactions [9]. The first set involves at least six residues (N23, S27, Y43, S88, T90 and D128) in the biotin binding pocket to form an extensive hydrogen bonding network with biotin. The second set involves strong hydrophobic interactions [7] between biotin and four tryptophan residues (79, 92, 108 and 120). Finally, residues in loop 3-4 (S45, V47, G48, N49 and A50) play a critical role in binding and trapping biotin to the biotin binding pocket [10,11,12]. In the absence of biotin, loop 3-4 has been shown to be flexible and is mainly in an open configuration. However, after biotin binding, loop 3-4 becomes immobilized and is in a closed position (Figure 1, panels a and b) because of its interactions with biotin. Furthermore, biotin binding can also strengthen subunit interactions [13], in particular, via interactions between biotin in one subunit and Trp-120 in loop 7-8 from the neighboring subunit [6]. In fact, the majority of the streptavidin-biotin complexes are in the tetrameric state in SDS-polyacrylamide gels even if the samples have been boiled before loading [14,15].
Two approaches have been taken to develop streptavidin muteins with reversible biotin binding ability. The first approach is to replace amino acid residues that are critical in hydrogen bonding to biotin. Although these changes can indeed lower the biotin binding affinities [16,17], many of the mutations also affect inter-subunit interactions in the streptavidin tetramer. Weakening inter-subunit interactions typically generates a heterogeneous population of streptavidin oligomers which leads to many practical problems. A second approach is to develop recombinant monomeric streptavidin [18] which has lower biotin binding affinity. This approach exploits the fact that an individual streptavidin subunit lacks a complete biotin binding pocket since Trp-120 from a neighboring subunit forms a key part of the binding site (Figure 1, panels a and b). Replacing Trp-120 with alanine (W120A) results in a tetrameric streptavidin mutein with a Kd of 3×10−9 M for biotin [17]. To create monomeric streptavidin, charge repulsion and steric hindrance were introduced at the subunit interface [18]. The resulting monomeric streptavidin has a biotin binding constant around 10−7 M. However, this mutein can be in the monomeric state only under certain conditions. First, the salt concentration has to be low to maximize electrostatic repulsion between streptavidin subunits. Second, the concentration of the monomers has to be low as exposure of the hydrophobic interface promotes non-specific aggregation.
In this study, a new approach was designed to develop an engineered tetrameric streptavidin with reversible biotin binding capability and other desirable features.
Rationale for the design of novel streptavidin muteins
Our novel approach relies on the effects of two loops (loop 3-4 with residues 45-52 and loop 7-8 with residues 114-121) on biotin binding (Figure 1). Loop 3-4 forms the lid and Trp-120 from loop 7-8 forms part of the wall of the biotin binding pocket. With various hydrogen bonding and hydrophobic interactions in the biotin binding pocket and the closure of the lid, biotin can hardly escape from the binding pocket.
In this study, an attempt was made to create a dynamic "back door" formed by loop 7-8 in streptavidin to allow biotin a second route to escape from the binding pocket when the "main door" primarily formed by loop 3-4 is closed. This objective can potentially be achieved by several approaches. One is to develop a ΔW120 mutein. Since Trp-120 in loop 7-8 is known to interact strongly with biotin [6,7,8], the deletion of Trp-120 may allow the modified loop 7-8 to become more flexible. A second approach is to create a mobile 8-amino-acid-loop (8-aa-loop) mutein in which loop 7-8 is engineered to have a completely different sequence but retains the same length as the original loop in wild-type streptavidin (Table 1). Asparagine and glycine were introduced at the center of the loop since they are known to introduce a turn in the loop structure [19]. The DSS (aspartate, serine and serine) and SDG (serine, aspartate and glycine) sequences were introduced to form the left and right arms of the loop, respectively (Table 1). These amino acids were selected because they have a high propensity for intrinsic disorder [20]. This design minimizes interactions between loop 7-8 and biotin, and was intended to allow the engineered loop to act as an unlocked mobile door swinging between the open and closed states even in the presence of biotin (Figure 1, panels c and d). To improve the chance of obtaining a mutein with its engineered "back door" open wide enough for the exit of the bound biotin, a third approach was applied to create a series of muteins (2-aa-loop, 4-aa-loop and 6-aa-loop muteins, Table 1) with both the sequence of loop 7-8 redesigned and the length of the loop shortened. Muteins with shorter loops were hypothesized to provide bigger openings around the biotin binding pocket, which should facilitate the release of biotin. In a preliminary study, the 4-aa-loop mutein was immobilized on an agarose matrix. Although this matrix could bind biotinylated proteins, streptavidin mutein was seen leaking from the column during washing and elution. Since the coupling condition allows on average one out of four subunits in the tetrameric streptavidin mutein to be coupled to the matrix to maximize the accessibility of the biotin binding sites, the leakage of streptavidin subunits suggests that changes in the loop 7-8 structure might result in weakening of the inter-subunit interactions in the streptavidin mutein. To avoid this complication, the H127C mutation [21,22] was introduced into two constructs (8-aa-loop mutein and ΔW120 mutein) to create the 8-aa-loop-H127C mutein and the ΔW120-H127C mutein, respectively. The H127C mutation has been reported [21,22] to allow crosslinking between subunits A and C (and also between subunits B and D) through the formation of a disulfide bond. These disulfide bonds can strengthen inter-subunit interactions. Since the matrix for the 4-aa-loop mutein had a subunit leakage problem, this mutein was not further characterized. The remaining four muteins (ΔW120-H127C, 2-aa-loop, 6-aa-loop and 8-aa-loop-H127C) were used for further analyses.
Production and purification of streptavidin muteins
To avoid the formation of inclusion bodies and the need to refold the engineered streptavidin, all streptavidin muteins were produced in their soluble state from B. subtilis via secretion [23]. The production yield of the muteins in a semi-defined medium [24] was 40-60 mg/liter. Each mutein could be affinity purified in one step using the biotin-agarose matrix. The purification of the 8-aa-loop-H127C mutein showed typical results (Figure 2a). The 8-aa-loop-H127C mutein was efficiently captured by the biotin-agarose matrix and could be eluted by buffer containing 4 mM biotin. The recovery was ~90%. After dialysis to remove biotin from the pooled elution fractions, the purified mutein could rebind the biotin-agarose matrix and could be eluted again using biotin-containing buffer (data not shown). This result demonstrates the reversible biotin binding ability of the mutein.
Determination of kinetic parameters
The kinetic parameters for biotin binding to various muteins were determined by surface plasmon resonance using the BIAcore biosensor with biotinylated IgG proteins as the ligand (Table 2). ΔW120-H127C had a binding affinity for biotin (Kd = 8.1×10−9 M) that was comparable to that of the W120A mutein (Kd ~3×10−9 M) [17]. Although muteins with smaller loops (2- and 6-aa-loop muteins) were expected to have lower biotin binding affinities, their binding affinities (~1-3×10−9 M) were actually comparable to that of the W120A mutein. Streptavidin muteins with nanomolar binding affinity tend to bind biotinylated molecules too tightly for affinity chromatography purification, leading to poor recovery of target proteins. The mutein with the lowest biotin binding affinity (1.9×10−8 M) in this study is the 8-aa-loop-H127C mutein. It was further characterized and its ability to act as an affinity agent for purifying biotinylated proteins was explored.
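As a quick consistency check on these numbers, the sketch below (an illustration only, using the koff reported in the abstract and the Kd reported above for the 8-aa-loop-H127C mutein) derives the implied association rate constant from Kd = koff/kon and the half-life of the bound biotin from t1/2 = ln 2/koff.

```python
import math

# Reported parameters for the 8-aa-loop-H127C mutein (abstract and Table 2)
k_off = 4.28e-4   # s^-1
K_d = 1.9e-8      # M

# Kd = koff / kon  =>  kon = koff / Kd
k_on = k_off / K_d
print(f"k_on ~ {k_on:.2e} M^-1 s^-1")       # ~2.3e4 M^-1 s^-1

# Half-life of the bound biotin: t1/2 = ln(2) / koff
t_half_min = math.log(2) / k_off / 60
print(f"t_1/2 ~ {t_half_min:.0f} min")      # ~27 min, within the 10-30 min window
                                            # discussed later in the Discussion
```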
Tetrameric state of 8-aa-loop-H127C mutein
Since the matrix with the 4-aa-loop mutein immobilized showed leakage of streptavidin subunits during chromatography, the oligomeric state of the 8-aa-loop-H127C mutein was examined. The migration pattern of the 8-aa-loop-H127C mutein resolved by SDS-PAGE was examined in the presence or absence of mercaptoethanol under boiled and non-boiled conditions (Figure 2b). The 8-aa-loop mutein and wild-type streptavidin were included in the study for comparison. Since the streptavidin inter-subunit interactions are weaker in the absence of biotin [6], these analyses were performed in the absence of biotin. Under all conditions, the 8-aa-loop mutein migrated like a monomer. By contrast, over 98% of wild-type streptavidin remained as tetramers if the sample was not boiled. This suggests that changes in the amino acid sequence of loop 7-8 weaken inter-subunit interactions. A significant difference in the oligomeric states of the 8-aa-loop and 8-aa-loop-H127C muteins was observed in the absence of reducing agent. Whereas the 8-aa-loop mutein remained as monomers whether the sample was boiled or not, ~80% of the 8-aa-loop-H127C mutein remained in the dimeric form even after boiling. This indicates the successful formation of disulfide bonds between subunits. With the unboiled sample, most of the 8-aa-loop-H127C mutein adopted the tetrameric state. Since this mutein exists mainly as a tetramer even in the presence of detergent in SDS-PAGE, it should be predominantly in the tetrameric state under non-denaturing conditions. Thus, the H127C mutation did strengthen inter-subunit interactions in the 8-aa-loop-H127C mutein.
The amino acid sequences of the natural and engineered loop 7-8 are listed in Table 1.

Purification of biotinylated protein G and biotinylated IgG using 8-aa-loop-H127C mutein-agarose matrix

To explore the feasibility of using the 8-aa-loop-H127C mutein for the affinity purification of biotinylated molecules, chemically biotinylated protein G and IgG were captured on the 8-aa-loop-H127C mutein-agarose matrix (Figure 3, panels a and b). After removal of nonspecifically bound proteins, the bound biotinylated proteins could be eluted from the column using 4 mM biotin. The absence of a 20-kDa streptavidin band in the flow-through, wash and elution fractions suggested that streptavidin subunit leakage was not a problem. Estimation of the amount of IgG molecules captured on the column and quantification of biotinylated IgG in the elution fractions indicated the recovery to be approximately 95%. To demonstrate binding specificity, HeLa cell extracts were applied to the column. Non-specific binding was not observed (Figure 3c). Non-biotinylated protein G and IgG also could not bind to the matrix (data not shown). When biotinylated IgG was mixed with the HeLa cell extract, biotinylated IgG could be affinity purified to homogeneity in one step (Figure 3d).
To regenerate the column for repeated rounds of purification, the matrix was simply washed with 10 column volumes of binding buffer. Using biotinylated BSA as the target protein for purification, the 8-aa-loop-H127C mutein matrix saturated with excess amounts of biotinylated BSA could be regenerated in this manner to purify biotinylated BSA for 10 rounds with no observable loss in binding capacity (data not shown).
Structural characterization of the 8-aa-loop-H127C mutein
To assess the conformation of the 8-aa-loop-H127C mutein, the protein was crystallized in the presence of biotin and its structure was solved using X-ray crystallography. Using data extending to 2.0 Å resolution, the structure clearly shows that the mutein crystallizes as a tetramer with a single subunit in the asymmetric unit (Figure S1). Biotin is bound in a manner indistinguishable from that of wild-type streptavidin, but there is very little electron density for loop 7-8 (residues 114-121) and its neighboring residue 113 (Figure 4). The lack of electron density suggests that the modified sequence of loop 7-8 confers a significant degree of dynamic disorder to the loop. The lack of interaction between loop 7-8 and the biotin molecule bound to the adjacent subunit probably allows a higher level of mobility and dynamic motion in this loop in the mutein.
Since inter-dimer interactions are critical for stabilizing the tetrameric structure and will likely affect the dynamics of the streptavidin tetramer, changes introduced to the 8-aa-loop-H127C mutein can possibly affect the inter-dimer arrangement. In fact, a significant alteration in the arrangement of subunits in the 8-aa-loop-H127C mutein was observed when compared with wild-type streptavidin. The orientation of the A/B dimer relative to the C/D dimer appears to be very similar in nearly all of the previously reported structures of biotin-bound streptavidin [4,5], partly because loop 7-8 in one dimer interacts with a bound biotin molecule and nearby residues in the other dimer. Even when biotin is not present, Trp-120 and Leu-124 form mostly nonspecific van der Waals contacts with Val-47 and Lys-121 from the opposing dimer, respectively. Thus, previously reported structures do not appear to have much difference in inter-dimer orientation or interactions. In this case, the loss of order in loop 7-8 of the 8-aa-loop-H127C mutein allows for a much larger rearrangement of the A/B dimer relative to the C/D dimer. When the A/B dimer of the mutein is superimposed onto the A/B dimer of wild-type streptavidin, the C/D dimer is rotated by 11° relative to the position of the C/D dimer in wild-type streptavidin (Figure 5). In contrast, the relative positions of the two dimers in other structures of biotin-bound streptavidin crystallized in different crystal forms differ by ~1-3°. Significantly, an inter-dimer rotation of 5.4° was reported in the first paper comparing the structure of apo-streptavidin with biotin-bound streptavidin [4]. The even larger inter-dimer rotation seen in the 8-aa-loop-H127C mutein supports the correlation of a disrupted inter-dimer interface with less order in loop 7-8 and lower biotin-binding affinity. An interesting feature also clearly seen in the crystal structure of the 8-aa-loop-H127C mutein is that a disulfide bond is formed between the Cys-127 residues of adjacent subunits at the A/B and C/D dimer interfaces (Figure 6). The Cys residue was introduced as a means of stabilizing the A/B and C/D dimers, based on the observed proximity of His-127 residues in adjacent subunits of the structure of wild-type streptavidin [4,5,21,22]. The formation of this disulfide bond in the crystallized mutein is consistent with the presence of disulfide-bonded protein in non-reducing SDS-PAGE analysis as described above (Figure 2b).
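For readers curious about how such an inter-dimer rotation can be quantified, a minimal sketch using Biopython is given below. It is an illustration only, not the authors' procedure: the file names are hypothetical, and it assumes both coordinate files contain complete A-D tetramers (for the mutein, e.g., after symmetry expansion of the single subunit in the asymmetric unit) with matched Cα atoms.

```python
import numpy as np
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
wt = parser.get_structure("wt", "wildtype_biotin_tetramer.pdb")    # hypothetical file names
mut = parser.get_structure("mut", "8aa_loop_H127C_tetramer.pdb")

def ca_atoms(structure, chains):
    # Collect CA atoms from the given chains; assumes both structures were
    # pruned to a common set of residues with identical numbering.
    return [res["CA"] for ch in chains for res in structure[0][ch] if "CA" in res]

# Step 1: superimpose the mutein onto the wild type using the A/B dimer only
sup = Superimposer()
sup.set_atoms(ca_atoms(wt, "AB"), ca_atoms(mut, "AB"))
rot, tran = sup.rotran
for atom in mut.get_atoms():
    atom.transform(rot, tran)

# Step 2: the residual rotation needed to align the C/D dimers is the
# inter-dimer rotation; its angle follows from the rotation matrix trace
sup.set_atoms(ca_atoms(wt, "CD"), ca_atoms(mut, "CD"))
R, _ = sup.rotran
angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
print(f"Inter-dimer rotation: {angle:.1f} degrees")
```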
Discussion
An idealized streptavidin mutein for affinity chromatography applications should have the following desirable properties. First, it should be tetrameric, with all four biotin binding sites capable of binding biotin. In this state, the hydrophobic interface regions will not be exposed to the surface, thus minimizing non-specific hydrophobic interactions between the engineered streptavidin mutein and proteins in the crude sample to be analyzed. Second, inter-subunit interactions should be strong. With one subunit of the tetramer immobilized, the other three subunits should stably associate with this covalently immobilized subunit so that no streptavidin subunits will be stripped off the column during the wash and elution steps. Third, the dissociation constant (Kd) for biotin should be 10−7 to 10−8 M. Fourth, the off-rate (koff) for the bound biotin in the streptavidin-biotin complex is ideally around 10−4 s−1, to allow the estimated half-life of the bound biotin to be around 10-30 minutes. With conditions 3 and 4 combined, the interaction would be both strong and specific enough to allow nonspecifically bound molecules to be washed off the matrix without leakage of the specifically bound biotinylated molecules. At the same time, there should be efficient and quantitative elution of biotinylated molecules from the column. A fine balance between affinity towards biotinylated molecules and good recovery is essential for the ideal streptavidin affinity agent. Fifth, the engineered streptavidin muteins immobilized to the matrix should be stable enough to allow the matrix to be used for multiple rounds. Sixth, the engineered streptavidin should be produced with a reasonable production yield in a soluble and functional state without the requirement of refolding via inefficient and expensive denaturation and renaturation processes. The engineered 8-aa-loop-H127C mutein produced from B. subtilis via secretion meets all of the above requirements.
Replacement of residues in loop 7-8 of wild-type streptavidin allows loop 7-8 to become flexible and changes the relative orientation of subunits in the tetrameric structure. These structural changes can lead to the lowering of the biotin binding affinity in the 8-aa-loop-H127C mutein. The H127C mutation was introduced to stabilize the mutein in the tetrameric state. To confirm that loop replacement is the major factor contributing to the observed lower biotin binding affinity in the 8-aa-loop-H127C mutein, the kinetic parameters of the 8-aa-loop mutein (without the H127C mutation) for binding biotinylated proteins were also determined (data not shown). Its dissociation constant (3.96×10−8 M) was found to be comparable to that (1.9×10−8 M) of the 8-aa-loop-H127C mutein. In contrast, the streptavidin mutein carrying solely the H127C mutation did not show a significant decrease in biotin binding affinity, since it could not be eluted from the biotin-agarose matrix using biotin-containing buffer.
The strength of biotin interactions with wild-type streptavidin and the 8-aa-loop-H127C mutein was analyzed using the ligand energy inspector function in the Molegro molecular viewer program [25] based on X-ray crystallographic data. The binding free energy is estimated by the MolDock scores shown in Table 3; more negative values indicate stronger interactions. This analysis suggests that the lower biotin binding affinity in the 8-aa-loop-H127C mutein is mainly contributed by both the absence of interactions with some residues (W120 and K121) and weaker interactions with others (D128, S45, N23 and L25) in the biotin binding pocket. A previous study also suggests that D128, S45 and N23 play important roles in the biotin exit pathway [26]. Weakening the interactions between biotin and these residues is instrumental in enhancing biotin exit, thereby increasing the biotin off-rate (koff).

(Figure 6 legend: electron density map, 2|Fo|−|Fc| coefficients contoured at 1.1 sigma, around the model of the disulfide bond formed by Cys-127 residues from adjacent subunits, A and C or B and D, in the tetramer shown in Figure 5. Coefficients and phases were calculated using Refmac and the figure was prepared using PyMOL. doi:10.1371/journal.pone.0035203.g006)
In addition to the interaction between biotin and specific residues in the biotin binding pocket of streptavidin, the strength of biotin binding appears to depend in part on the flexibility of loop 3-4 and loop 7-8. In both wild-type streptavidin [10] and the W120A mutein [7], both loops appear to be rigid in the presence of biotin. In contrast, the flexibility and mobility of loop 7-8 in the 8-aa-loop-H127C mutein creates a new exit path for the release of biotin from the biotin binding pocket even if loop 3-4 is closed. This effect contributes to the increase in the dissociation rate of the bound biotin.
During the course of this study, a new streptavidin mutein called traptavidin was reported [27]. Biotin dissociates from traptavidin at least 10 times slower than from wild-type streptavidin and binds with 10 times higher affinity. The crystal structure of traptavidin [28] indicates that loop 3-4 is well-ordered and adopts a closed conformation in both the presence and absence of biotin. By contrast, loop 3-4 is well-ordered only when biotin is bound to wild-type streptavidin. Because the conformation of the biotin-bound complex in traptavidin is virtually identical to that seen in wild-type streptavidin, the increase in biotin binding affinity is clearly not due to additional contacts between biotin and traptavidin, as supported by the analysis of binding interactions with Molegro, which assigns similar MolDock scores for biotin binding to traptavidin and wild-type streptavidin (−139.3 and −140.1, respectively). The increased rigidity of loop 3-4 thus accounts for the higher biotin binding affinity in traptavidin. Conversely, the increased mobility of the engineered loop 7-8 in the 8-aa-loop-H127C mutein may in part account for the observed lower biotin binding affinity.
Most of the streptavidin muteins in this series (ΔW120-H127C, 2- and 6-aa-loop muteins) have biotin binding affinities similar to that of the W120A mutein, in the range of 10−9 M. Why the 2- and 6-aa-loop muteins do not have lower biotin binding affinities (Kd > 10−8 M) is unclear at present, but the dynamics of loop 7-8 are likely important. Structural and molecular dynamics studies of these muteins may help to explain the relative binding affinities.
Two matrices are commercially available to purify biotinylated molecules. One is the monomeric avidin matrix [29]. Unfortunately, this approach has three drawbacks. First, the production cost is high as three avidin subunits are sacrificed for each avidin subunit immobilized to the matrix. Second, the large hydrophobic surface exposed in the generation of monomeric avidin typically introduces problematic non-specific interactions with unwanted proteins in the crude extract. Third, it is difficult to completely denature the tetrameric avidin because of the strong subunit interactions. Incomplete denaturation of avidin results in the presence of some tetrameric avidin in the matrix which can bind biotinylated molecules irreversibly. The second commercially available matrix contains the engineered streptavidin mutein [17]. The main concern with this matrix is the occasional observation of leakage of streptavidin subunits during protein loading, washing and elution. With the above-mentioned limitations, the engineered 8-aa-loop-H127C mutein has substantial advantages over the currently available matrices for affinity purification of biotinylated molecules.
Many potential biotechnological applications of this mutein can be developed. In the post-genomic era, with the discovery of many novel proteins that can be promising therapeutic targets, biomarkers for diseases, and agents with medical and biotechnological applications, the availability of a user-friendly high-throughput system to purify and immobilize these proteins for structural and functional studies is vital. However, protein purification requires reversibility in binding, while immobilization requires ultra-tight interactions. Development of the 8-aa-loop-H127C mutein provides a solution to this dilemma. Biotinylated biomolecules can be affinity purified using the 8-aa-loop-H127C mutein matrix and immobilized to wild-type streptavidin-based protein chips. Furthermore, automated high-throughput purification platforms, reusable biosensor chips, protein arrays, bioreactors and magnetic beads can be developed using the 8-aa-loop-H127C mutein. With the reversible biotin binding capability, chemical conjugation or genetic fusion of 8-aa-loop-H127C muteins with other proteins such as alkaline phosphatase can allow blots to be reprobed without concerns about masking other probing sites through steric hindrance imposed by bulky streptavidin conjugates.
Although the 8-aa-loop-H127C mutein in its present format has many desirable features for biotechnological application, two areas need further improvement. First, it would be ideal if its production yield could be increased further (>60 mg/liter). Since this mutein binds biotin reversibly, production of this mutein does not exert any toxic effect on its expression host by depleting biotin in both the cytoplasm and the culture medium. This is significantly different from the production of wild-type streptavidin, which requires the production host to have the ability to produce sufficient quantities of biotin to sustain cell growth [24]. Many other expression systems, from prokaryotes to eukaryotes, can now be explored. Second, a hybrid tag containing six histidine residues and a single cysteine residue [30] can be fused to either end of the 8-aa-loop-H127C mutein. When this mutein is produced at industrial scale, a single-step purification using a biotin-agarose column is usually not sufficient to purify the mutein to homogeneity. The added his-tag can offer another round of affinity purification of this mutein via a different mechanism. The added cysteine residue in the tag can be applied for orientation-specific coupling of the mutein to thiol-based coupling column matrices, biosensor chips or protein arrays. This approach not only improves the accessibility of the biotin binding sites of the immobilized mutein in the affinity matrix [31] but also allows chemical coupling of this mutein to various matrices in a user-friendly manner.
Materials and Methods
Construction of expression plasmids for production of streptavidin muteins in B. subtilis

Plasmid pSSAV [23], carrying a B. subtilis levansucrase signal peptide for secretion and a P43 promoter for transcription, was used as the expression vector. This vector carries a synthetic gene for wild-type full-length streptavidin. To construct the expression vectors for the loop muteins, the gene encoding the wild-type streptavidin counterpart in pSSAV was replaced by the PstI/BclI synthetic fragments encoding the loop muteins. E. coli pBluescript plasmids containing the synthetic gene fragments were ordered from Epoch Life Science, Inc. (Texas, USA). Each E. coli plasmid was digested with PstI/BclI to release the synthetic gene fragment, which was then ligated to the PstI/BclI-cut pSSAV to generate the expression vectors.
Production and affinity purification of streptavidin muteins

B. subtilis WB800 [32] cells carrying the expression vectors were cultivated in a semi-defined medium [24] at 30°C for 9-12 hours. Culture supernatant was collected by centrifugation and streptavidin muteins were affinity purified using biotin-agarose as described previously [18].
Analysis of interactions between biotin and streptavidin using Molegro viewer
The PDB files of the biotin complexes associated with the 8-aa-loop-H127C mutein, wild-type streptavidin (2IZF) and traptavidin (2Y3F) [28] were used as the input files. Each of these files was imported into Molegro Molecular Viewer (Version 2.2.0) [25]. Interaction energy analysis was performed using the ligand energy inspector module in the program. Before analysis, both the ligand and protein hydrogen bonding positions were optimized and the ligand was energy minimized using the action panel in the ligand energy inspector module. The protein-ligand interaction energy (Einter, the sum of the steric interaction energy, hydrogen bonding energy, and short- and long-range electrostatic interaction energies) was expressed in the form of the MolDock score [25] in arbitrary units. A more negative value reflects a stronger interaction.
Other methods
Determination of kinetic parameters using the BIAcore biosensor, preparation of streptavidin-agarose matrix and purification of biotinylated proteins were performed as previously described [18,39]. The affinity column had a bed volume of 0.1 ml of 8-aa-loop-H127C mutein-coupled matrix. A 200 ml sample containing 0.2 mg of biotinylated protein G or 0.44 mg of biotinylated IgG from ThermoFisher was applied to the column. To purify biotinylated IgG from a mixture, HeLa cell extract containing 0.6 mg of protein was mixed with 0.44 mg of biotinylated IgG. This mixture was then applied to the affinity column for purification. The graphic drawings of the structure of the streptavidin-biotin complex shown in Fig. 1 were generated using the Yasara program [40] (YASARA Biosciences GmbH, Austria). To model the possible positions of loop 7-8, the PDB file (1SWE) of the streptavidin-biotin complex was used as the input file. The amino acid residues (G113 and S122) at the base of loop 7-8 were fixed. The possible conformations of the loop were searched against the Protein Data Bank using the loop modeling module in Yasara. Two modeled structures with loops in the upper and lower positions were selected as the input PDB files. These two input structures define the boundary of the loop movement. A PDB file containing 10 modelled structures generated from the Morph server [41] website was analyzed by Yasara to generate the animated structures shown in Fig. 1c.

Figure S1. A complete tetramer of the 8-aa-loop-H127C mutein is drawn, with each subunit drawn in a different color. The unit cell is drawn as a black box. The two-fold axes running through the origin are drawn according to standard crystallographic conventions. Three orthogonal views are drawn: each view is parallel to one of the three crystallographic axes. For each view, the two remaining axes orthogonal to the axis being viewed down are labeled.
[End of record 1; version: v3-fos-license]

[Dataset record (pes2o/s2orc), id 235638249: added 2021-06-26T13:46:25.227Z; created 2021-06-26T00:00:00.000; year 2021; open access (HYBRID, CC-BY), pdf_src Springer; oa_url: https://link.springer.com/content/pdf/10.1007/s10784-021-09542-7.pdf; field of study: Political Science]
Domestic and international climate policies: complementarity or disparity?
Climate change is a global crisis that requires countries to act on both domestic and international levels. This paper examines how climate policies in these two arenas are related and to what extent domestic and international climate ambitions are complementary or disparate. While scholarly work has begun to assess the variation in overall climate policy ambition, only a few studies to date have tried to explain whether internationally ambitious countries are ambitious at home and vice versa. According to the common view, countries that are more ambitious at home can also be expected to be more ambitious abroad. Many scholars, however, portray the relationship instead as disparate, whereby countries need to walk a tightrope between the demands of their domestic constituencies on the one hand and international pressures on the other, while preferring the former over the latter. This study uses quantitative methods and employs data from the OECD DAC dataset on climate finance to measure international climate ambitions. Overall, the present work makes two major contributions. First, it provides evidence that international climate financing ambition is complementary to domestic climate ambition. Second, the article identifies the conditional effect of domestic ambition—with regard to responsibility, vulnerability, carbon-intensive industry and economic capacity—on international climate ambition.
Introduction
The United Nations Framework Convention on Climate Change (UNFCCC) obligates developed countries to take ambitious domestic action to keep the rise in global temperature well below 2 °C this century. Moreover, articles 4.3 and 4.4 of the 1992 UNFCCC agreement (UNFCCC 1992) and the Copenhagen Accord (UNFCCC 2009) require developed countries to contribute "new and additional" international assistance to developing countries, based on the principle of "common but differentiated responsibilities and respective capabilities" (CBDR & RC), to address climate change. Studying the relationship between domestic and international climate policies is particularly timely, given that climate change is an urgent problem with a global impact, even though it is managed primarily within countries.
Since the 2000s, most developed countries have implemented policies at the local or national level to address imminent issues such as air pollution and energy efficiency. Climate change, however, is a global crisis that requires additional efforts at the international level (2015, 394). In response, developed countries promised at the 2009 Copenhagen climate summit to mobilize $100 billion in international climate finance per year by 2020 to help developing countries cope with the impact of climate change and the transition to low-carbon development. The promise was reiterated in the Paris Agreement of 2015. Such assistance is commonly referred to as international climate finance (2014, 1238).
While funding for domestic climate policies is generally greater than for international policies (2016, 6), domestic and international climate ambition display significant variation. For instance, Burck et al. (2018, 18) find it noteworthy that "many countries, including Canada, Germany, Argentina and South Africa, are performing relatively well on the international stage, yet seem to be failing to deliver on sufficiently implementing policy measures at the national level". Some contributions to the literature on environmental politics argue that countries are more likely to take international action when domestic policies are impeded (e.g., Michaelowa and Michaelowa (2007)); the question is whether international ambition to tackle climate change is complementary to or disparate from domestic action.
Research on the variation in domestic and international climate policies is on the rise (Madden 2014; Røttereng 2018; Schmidt and Fleig 2018; Tobin 2017). Previous studies have investigated the general pattern of national climate policies (Schmidt and Fleig 2018); the effect of international support on domestic policies (Neuhoff 2009); the influence of domestic and international factors on the ratification of environmental treaties (Bernauer et al. 2010; Dolšak 2009); and the influence of bureaucratic agencies (Peterson and Skovgaard 2019). Scholars have focused on the effect of democracy on climate policy (Bättig and Bernauer 2009), and on the impact of sub- or nonstate climate governance (Andonova and Tuta 2014; Andonova et al. 2017; Roger et al. 2017). Studies have generally focused on the impact of domestic politics on international climate negotiations or vice versa (Bulkeley 2010; Cass 2007; Dolšak 2009; Skjaerseth et al. 2013; Sprinz and Weiß 2001).
Very few papers (Castro 2020; Tosun and Guy Peters 2020), however, have combined an examination of domestic politics and institutions with a focus on international environmental politics (Van Deveer and Steinberg 2013). In this study, I am interested in determining whether domestic and international climate policies are complementary. Do more climate-ambitious countries tackle both levels of governance at the same time? Or do the two levels function disparately, with countries prioritizing one over the other?
The first part of the empirical analysis tests the correlation between domestic climate ambition and international climate finance in order to find out whether countries that are ambitious domestically also engage more ambitiously in international climate policy. This is achieved by way of a bivariate regression analysis, which is not aimed at identifying a causal relationship.
The second part of the study, however, tests whether domestic climate policy leads to higher international climate financing, and reviews the factors that likely moderate the relationship between domestic and international climate policies in developed countries. More specifically, I provide evidence of how the relationship is shaped by the responsibility for causing climate change, vulnerability to climate impacts, domestic industrial opposition and economic capability.
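To make these two steps concrete, a minimal sketch of the kind of models involved is shown below. The variable names are hypothetical and the specification is simplified relative to the one described in the Methods section; it is meant only to illustrate the bivariate and the moderated (interaction) setups.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per donor country and year
df = pd.read_csv("climate_panel.csv")   # columns: country, year, log_finance_gdp,
                                        # ccpi_policy, vulnerability

# Step 1: bivariate association between domestic ambition and international
# climate finance (descriptive, not causal)
m1 = smf.ols("log_finance_gdp ~ ccpi_policy", data=df).fit()

# Step 2: moderated relationship, e.g. domestic ambition x vulnerability
m2 = smf.ols("log_finance_gdp ~ ccpi_policy * vulnerability", data=df).fit()

print(m1.summary())
print(m2.summary())
```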
In the first section, I introduce the theoretical framework based on domestic and international climate ambition, and the moderating variables responsibility, vulnerability, industrial opposition and capability. This is followed by a second section, which describes the quantitative methods and data. The third section presents the empirical results. Finally, the article concludes with a discussion of the implications of the findings.
Theoretical framework
The question of whether climate change should be addressed domestically or globally is one of the central dilemmas of modern climate policy (Platjouw 2009, 244). While countries' measures to tackle climate change are generally taken within their own borders, international climate policy has been argued to provide the most efficient solution for tackling climate change since greenhouse gas emissions can be reduced where reductions are marginally the cheapest ("low-hanging fruits") (Castro 2010).
I define domestic ambition as complementary to international climate financing when countries tackle both levels ambitiously. By contrast, when countries are ambitious domestically and less so internationally, or vice versa, I discuss this as disparity. For instance, if a country is relatively ambitious internationally but takes less climate action domestically, I interpret this as disparity.
The current paper builds on emerging scholarship in the field of comparative climate policies, with an emphasis on international climate finance. I aim to contribute to this research field in some respects. First, this study aims to provide generalizable results regarding the potential complementarity of domestic and international climate ambition.
Second, I employ quantitative methods, which are still uncommon in this research area. Bernauer (2013, 434) notes that "[l]arge-N comparisons of many countries [...] are still rare", as most of the peer-reviewed research exploring climate policy ambition has relied on qualitative methods and case studies (Compston and Bailey 2016; Genovese 2020; Ingold and Pflieger 2016; Korppoo 2020; Tobin 2017).
Third, whereas most recent studies on climate finance have focused on adaptation (Robinson and Dornan 2016; Weiler et al. 2018), this study looks mainly at mitigation. This gives me an opportunity to compare international climate mitigation policy with its domestic counterpart. Most studies have addressed this issue by comparing domestic climate action with the outcomes of UNFCCC negotiations. I find international climate finance from public sources to be a more appropriate stand-in for international climate policy ambition, due to the "words-deeds" gap in policy-making (Bättig and Bernauer 2009). What countries agree upon at climate summits ("words") does not necessarily translate into policy ("deeds").
The policy community overwhelmingly expects international climate policy to be complementary to domestic climate policy. OECD Secretary-General Angel Gurría (2017), for instance, hints that national climate policies and international climate financing create a powerful dynamic conducive to climate action. Eric Usher, head of the UNEP Finance Initiative, voiced a similar view to high-level representatives of the COP23 Finance for Climate Day in November 2017. Usher emphasized the two main gaps of the climate challenge: countries need to increase the domestic ambition of their NDCs (nationally determined contributions) and bridge the gap in global climate investment (international climate finance) (UNFCCC 2017).
Essentially, I ask the following overarching research question: 1. How are domestic and international climate policies related?
In addition to examining the relationship of complementarity and disparity, I aim in this paper to identify the factors that govern the variation in domestic and international climate policies. Consequently, I ask:
2. What factors affect the relationship between the level of ambition in domestic and international climate policy?
Thus, the second part of the analysis tests the premise that domestically ambitious countries are more likely to be active in international climate finance and that this relationship is moderated by additional factors. Many case studies argue that countries can maintain very different climate policy objectives domestically and internationally, as in the case of the Swiss climate commitment (Ingold and Pflieger 2016). Røttereng (2018, 70) claims that developed countries such as Canada and Japan conduct ambitious international climate policies, even as their own emission reduction targets are comparatively modest. This may be, as Røttereng suggests, because countries do not want to be bound by domestic mitigation targets established by the international climate regime, even though they acknowledge mitigation as an international norm that needs to be upheld. Therefore, international climate spending may diverge from domestic efforts of climate change policy.
Most notably, Putnam (1988, 434) points to an important cleavage between domestic and international policies. He describes domestic and foreign policy as a "two-level game", wherein national administrations try to cope with pressures "[a]t the international level[...]" even while seeking to maximize "[...] their own ability to satisfy domestic pressures". This is supported by the emerging literature on voter behavior, which finds that the electorate prefers domestic spending on climate change policy, and views domestic and international policies as disparate (Buntaine and Prather 2018; Neuhoff 2009).
However, according to most of the literature on climate policy, domestic ambition is complementary to international ambition for three predominant reasons. First, foreign policies strengthen domestic climate policies because climate change is a global issue where national borders have little bearing on real outcomes. Funding spent at home can essentially be supplemented with funding spent abroad, because greenhouse gas emissions add to global atmospheric concentrations regardless of where they originate.
Second, a strong donor commitment to climate finance signals a high overall level of interest and engagement in domestic environmental protection. Michaelowa and Michaelowa (2011) suggest that donor governments' "green beliefs" tend to extend to the international environmental arena. Third, countries may also exploit climate financing as an extension of domestic climate policy-making. Extensive support for international environmental financing may signal an interest in "internationalizing" domestic environmental policy (Falkner 2005, 587) and in leveling the playing field for domestic industry (Daniel and Vogel 2010). Developed countries with highly ambitious domestic climate policies may support emission reductions in developing countries due to the risk they run of losing industrial competitiveness (Castro 2010, 3).
The first step of this study is to investigate whether domestic and international climate policies are on average complementary or disparate. Due to the more common arguments in favor of complementarity, the first hypothesis may be stated as follows: Hypothesis 1: Countries with a higher domestic climate ambition display a higher ambition for international climate policy.
Hence, I assume that countries generally aim to tackle climate change but that their ambition at the international level tends to reflect their level of ambition at home.
I now proceed to the hypotheses associated with research question 2.
Responsibility
One of the most relevant factors in climate mitigation policy is the extent to which a country is responsible for causing anthropogenic climate change. This norm is internationally codified as the principle of CBDR & RC in Article 3.1 of the UNFCCC (1992), which states that the largest polluters bear the greatest responsibility for climate change and thus for providing financing to counter it. This reflects a "contribution to the problem" logic: countries that contribute more to cumulative GHG emissions (Page 2008, 557) are generally expected to shoulder a heavier burden in tackling climate change (Castro 2020). In terms of the relationship between domestic and international climate ambition, I hypothesize that countries that are more responsible tend to complement their domestic ambition with an international one. Thus, I expect countries that are already ambitious domestically to raise their international climate finance ambition to match their cumulative contribution to the problem of climate change. This yields the following hypothesis regarding responsibility and climate change ambition: Hypothesis 2: Countries with a higher domestic climate policy ambition and larger cumulative greenhouse gas emissions display a higher international climate ambition.

Vulnerability

Heggelund (2007) predicts that the importance of climate change in domestic policy-making will increase in line with vulnerability to climate change. What does this say about the relationship between domestic and international climate ambition? Sprinz and Vaahtoranta (1994) find that, in addition to reflecting the marginal cost of tackling climate change (abatement costs), international environmental efforts are conditional on a country's ecological vulnerability, i.e., on the extent to which it is vulnerable to climate change (as seen in floods, sea level rise, wildfires, etc.), as well as on "its sensitivity, and its adaptive capacity" (Neil Adger 2006; Smit et al. 2000). Vulnerability is not equally shared, and some developed countries are more vulnerable to climate change than others (Chen et al. 2015). The impact of climate change is international; it is conditional not on local but on global GHG emissions. More vulnerable countries would be compelled to prioritize international mitigation activities to protect themselves from rising sea levels and droughts. I expect vulnerability to lead domestically ambitious countries to increase their international climate ambition to safeguard against future climatic changes at home. This, in turn, means I expect domestically ambitious countries that are less vulnerable to climate change to reduce their international ambition due to a weak sense of urgency. Hence, I expect vulnerability to reinforce the complementarity of domestic and international climate ambition. This yields the following hypothesis:

Hypothesis 3: Countries with a higher domestic climate policy ambition and a greater vulnerability to climate change display a higher international climate ambition.
Industrial opposition
The relationship between domestic politics and foreign policy has been a prominent motif in political science research since the publication of "Diplomacy and Domestic Politics: The Logic of Two-level Games" by Putnam (1988), which recognized the tension between domestic lobby groups and pressure from other states. Kincaid and Timmons Roberts (2013), in their study of President Obama's climate efforts, found support for the existence of a "two-level game", in which the US administration needed to walk a tightrope between refraining from antagonizing industry with more stringent domestic climate regulations and pleasing environmental pressure groups that were calling for more climate aid. As a result, President Obama decided to elevate climate finance within the US budgetary agenda (Kincaid and Timmons Roberts 2013).
A high ambition for international climate policy can in fact reflect opposition from important interest groups to domestic policies for combating climate change (Christoff and Eckersley 2011; Madden 2014). This primarily concerns groups that are negatively affected by climate policies, such as fossil fuel and energy-intensive industries, whose profit margins depend on actively resisting ambitious domestic climate policies. Steves and Teytelboym (2013) and Rafaty (2018) confirm this expectation: they find that a strong carbon-intensive industry hinders the adoption of domestic climate policies. Michaelowa and Michaelowa (2007) note that policymakers may turn to international climate finance when a far-reaching domestic climate commitment is strongly opposed by industry lobby groups. Ingold and Pflieger (2016, 32-33) conclude that domestic interest groups will oppose restrictive domestic climate measures but remain apathetic toward international climate policies that do not affect them immediately. Daniel and Vogel (2010), however, argue that carbon-intensive industries will likely even support international efforts if restrictive measures are already in place at the domestic level. The internationalization of climate policy helps the industry "level the playing field", since it forces foreign competitors to adhere to the same strict standards as those that apply domestically.
I expect domestic climate ambition to be associated with ambition abroad, but in a way that is moderated by strong industrial interest groups. Hence, countries with a higher domestic climate ambition and strong industrial groups will invest more abroad. Conversely, countries with a lower domestic ambition and strong industries will invest less abroad. This reflects the logic identified by Daniel and Vogel (2010) to the effect that industrial groups will seek to "level the playing field abroad". This yields the following hypothesis: Hypothesis 4: Countries with a higher domestic climate ambition and a larger carbonintensive industrial sector display a higher international climate ambition.
Capability
The fourth explanatory factor is a country's economic capability, as stated in the principle of "common but differentiated responsibilities and respective capabilities". In general, much of the literature on environmental policy assumes that higher economic development and greater resources are conducive to environmental policy-making. Fordham (2011) indicates that the "capabilities-drive-intentions" model provides a persuasive explanation for the foreign policy ambition of states, since "[o]nce the state becomes able to extract sufficient resources from society, it will use them to pursue a more ambitious foreign policy" (Fordham 2011, 589). This also reflects the importance of the "ability to pay" principle, as countries with the greatest resources can reasonably be required to contribute more to tackling the problem (Page 2008, 561).
However, empirical studies on the effect of economic development on domestic and international policies have been inconclusive. Halimanjaya (2015) observes a negative relationship between GDP per capita and international climate financing, while Madden (2014) discovers that higher-income developed countries are less willing to adopt highly ambitious domestic climate policies. Both Hicks et al. (2008) and Klöck et al. (2018) find that wealthier countries contribute more to global environmental projects. Therefore, I expect climate finance to be a "luxury" that greener countries can afford due to surplus economic resources. Hence, while countries provide more climate finance once they are more ambitious at home, their international commitment increases even more when they have abundant resources. This yields the following hypothesis: Hypothesis 5: Countries with a higher domestic climate ambition and higher GDP per capita display a higher international climate ambition.

Methods

Bernauer (2013, 436) concludes that political science research has primarily emphasized the use of qualitative methods and case studies for studying variation in climate change policies (Harrison and Sundstrom 2010). In this study, I aim to provide further generalizability by employing quantitative methods. More specifically, I make use of bivariate regression and of interaction effects to capture changes in climate finance ambition. I also use country fixed effects to avoid violating the assumption that observations are independent and identically distributed. This study is focused on rich developed countries that are members of the OECD, which includes all "Annex II" group countries (except Turkey) and the Republic of Korea. These are the countries which, according to the UNFCCC, have a responsibility to reduce carbon emissions at home and to provide international climate finance abroad. The overall sample comprises 24 countries over the 2008-2017 period. The data comprise 232 observations due to missing values for some years (Table 1).
I employ international financing for climate change mitigation as the dependent variable, since it is reasonable to expect that such efforts are undertaken after domestic climate policies have been instituted. Countries in the dataset enacted domestic climate policies earlier than they engaged with international climate financing. Moreover, domestic climate policies, because of their local impacts, are more likely to be a priority for countries than international efforts (as proposed by hypothesis 1).
I measure international climate ambition on the basis of OECD DAC country-level data on the "Rio Marker" for climate change, which provide information on the amount of bilateral and multilateral climate finance that OECD countries provided to developing countries during the 2008-2017 period (expressed in constant 2014 dollars). The data are self-reported by countries, which may cause inconsistencies, such as over-coding and a lack of granularity; see Weikmans and Timmons Roberts (2019) for a fuller picture of the potential problems. Nevertheless, the OECD data are currently the most comprehensive and comparable dataset available for public climate-related finance flows. I aim to overcome at least some of these issues by introducing a control variable on overall ODA flows.
The data on countries' commitments in this study consist of grant or loan agreements made between donors and recipients. This provides a more recent overview of donor decisions (Betzold and Weiler 2017; Peterson and Skovgaard 2019) and covers more years. The level of climate mitigation finance is presented as climate-related funding to developing countries per GDP, donor and year, as in other studies (Halimanjaya 2015; Klöck et al. 2018; Peterson and Skovgaard 2019). The study accounts for the funding of all projects that have principal and significant climate objectives.3 As the dependent variable is skewed towards the higher values of the distribution, I transform it using the natural logarithm.
To test my hypotheses, I employ the conceptually most rigorous measure for domestic climate mitigation ambition currently available: the Climate Change Performance Index (CCPI)-which is published by Germanwatch, CAN International and the NewClimate Institute. More specifically, I employ the CCPI's subindicator on national climate policy, which is based on a questionnaire distributed among climate change experts at national NGOs (Burck et al. 2018, 19). The measure ranges from 0 to 20, with higher values representing higher ambition. The questionnaire covers issues on domestic climate mitigation policies, such as energy efficiency, the promotion of renewable energies and efforts to reduce emissions from electricity production, manufacturing and transport. Moreover, the subindicator rates each country's deforestation, forest degradation and national peatland protection efforts (Burck et al. 2018). In effect, the subindicator largely measures a country's climate ambition based on experts' evaluation of its domestic climate policies compared with its potential capability. As a softer test of causality, the variable is lagged by one year to check whether values from the preceding year have a bearing on the provision of mitigation finance in the following year.
This study includes a logged GDP per capita term. I follow the standard practice of the development aid literature as set out by Alesina and Dollar (2000) and Weiler et al. (2018); thus, I use GDP per capita (GDP/capita in the model) as collected by the World Bank (2018a). To account for the emission intensity of each country's economy, I include country CO2 emission intensity (CO2 emissions per GDP), which is log transformed to counter the skewness of the variable (CDIAC 2018), as GDP per capita can vary significantly even between high-income countries.
I measure responsibility for causing climate change by using the Global Carbon Project's (GCP 2019) dataset on cumulative carbon emissions as a proxy for all GHGs. The variable is transformed using the natural logarithm due to the large differences between low and high emitters caused by the variation in the size of economies.
To account for vulnerability to climate change, I incorporate the Notre Dame Global Adaptation Initiative's (ND-GAIN) vulnerability indicator (NDGAIN 2018), defined as the "[p]ropensity or predisposition of human societies to be negatively impacted by climate hazards" (Chen et al. 2015, 3). The indicator measures vulnerability through six life-supporting sectors (food, water, health, ecosystem services, human habitat and infrastructure). Each sector is represented by subindicators that account "for exposure of the sector to climate-related or climate-exacerbated hazards, the sensitivity of that sector to the impacts of the hazard and the adaptive capacity of the sector to cope or adapt to these impacts" (ibid.). The indicator ranges from −0.086 to 0.128 for the selected sample. Lower values of the indicator (Vulnerability in the model) represent lower vulnerability to climate change, and higher values signify higher vulnerability. As vulnerability tends to correlate significantly with economic resources, this study uses a version of the indicator adjusted for GDP.
This study employs an approximate proxy for industrial opposition, as no comparative data are available for the size or number of industry sector lobby groups across countries. Analogous to Steves and Teytelboym (2013), I employ the size of carbon-intensive industry (manufacturing, mining and utilities) relative to GDP from UNSD (2018) to measure industrial opposition. Capability is measured by way of GDP per capita data (PPP) in constant 2011 US dollars (World Bank 2018b).
The study does not attempt to "reinvent the wheel": rather, it includes a number of control variables that have proven fruitful in previous studies on development aid and climate finance (Alesina and Dollar 2000; Berthélemy 2006; Klöck et al. 2018). Following Halimanjaya (2015) and Klöck et al. (2018), I control for each country's institutional capacity for effective administration by using the sum of the six subindicators of the Worldwide Governance Indicators (WGI) (Kaufmann et al. 2010). I expect better governed countries to be more likely to provide more international climate finance. Next, I use World Bank (2018b) data to control for yearly country population size. I anticipate that larger countries will take less action per capita due to the sheer volume of their aid efforts. Finally, I take into account the total flows of Official Development Assistance, on the expectation that international climate financing is at least partly determined by path dependence arising from overall aid-giving, as previous researchers have found (Klöck et al. 2018, 16). All control variables are lagged by one year on the expectation that decisions on international climate finance allocation are made based on knowledge from the preceding year.
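As a concrete illustration of this one-year lag structure, the sketch below shows how the lagged variables could be constructed in a donor-year panel. The file and column names are hypothetical and not taken from the study's replication material.

```python
import pandas as pd

# Hypothetical panel: one row per donor country and year.
df = pd.read_csv("climate_panel.csv")          # assumed file name
df = df.sort_values(["country", "year"])

# Lag the CCPI policy score, the moderator and every control by one year within
# each country, so that year t outcomes are explained by year t-1 information.
lag_cols = ["ccpi_policy", "log_cum_co2", "wgi_sum", "population", "total_oda"]  # assumed names
df[[c + "_lag1" for c in lag_cols]] = df.groupby("country")[lag_cols].shift(1)
```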
First, I employ a bivariate regression model to investigate the association between domestic and international climate policy. I estimate the bivariate regression: Y_it = α + β X_i,t−1 + ε_it, where the dependent variable Y_it is the natural logarithm of climate mitigation finance per GDP by donor i in year t, X_i,t−1 corresponds to the main explanatory variable of interest, which is the domestic climate mitigation policy indicator lagged by one year, and ε_it represents the error term.
Second, I use three separate interaction models, which estimate each moderating variable-vulnerability, industry opposition and resources-independently in each model: Y_it = α + β1 X_i,t−1 + β2 Z_i,t−1 + β3 (X_i,t−1 × Z_i,t−1) + γ C_i,t−1 + ε_it. I make the argument that X (domestic climate ambition) has an effect on Y (international climate ambition). This relationship, however, is moderated by Z (vulnerability, resources or industrial opposition), which enters through the interaction term X × Z. C_i,t−1 is a vector of controls for country i, γ is the associated vector of coefficients, and ε stands for the error term.
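A minimal sketch of how one such interaction model could be estimated, continuing the hypothetical panel and column names from the lagging example above (none of which come from the replication data), is:

```python
import statsmodels.formula.api as smf

# Drop the rows lost to lagging so the cluster groups match the estimation sample.
dat = df.dropna(subset=["log_mitigation_finance_per_gdp", "ccpi_policy_lag1",
                        "log_cum_co2_lag1", "wgi_sum_lag1",
                        "population_lag1", "total_oda_lag1"])

# Domestic ambition x responsibility, with country and year fixed effects and
# country-clustered standard errors; the other moderators would be swapped in
# the same way for models (3)-(5).
model = smf.ols(
    "log_mitigation_finance_per_gdp ~ ccpi_policy_lag1 * log_cum_co2_lag1"
    " + wgi_sum_lag1 + population_lag1 + total_oda_lag1"
    " + C(country) + C(year)",
    data=dat,
).fit(cov_type="cluster", cov_kwds={"groups": dat["country"]})

print(model.summary())
```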
To capture the differences in context by industry, exposure to climate change and differences in income, I employ interaction terms, which follow the principles of correct model specification suggested by Brambor et al. (2006): I include all constitutive terms in the model specification and analyze the marginal effects of substantively meaningful interaction terms.4 Models (2)-(5) show different interaction terms based on the aforementioned hypotheses. To maintain comparability, all models include the same control variables and the same number of country-year observations (210). I also run robustness tests by including an alternative dependent variable (mitigation and adaptation combined), alternative independent variables (e.g., the full CCPI index, economic growth) and other hypotheses (budget deficits and green technology patents (OECD 2021a, b)), as presented in Table 3.
Descriptive results
Figure 1 shows average levels of domestic climate policy ambition (CCPI national climate policy indicator) and average levels of international climate ambition (natural logarithm of public finance for climate change mitigation as a share of GDP) that countries displayed during 2008-2017. The relationship appears to be visually linear, but it includes distinct outliers, such as Japan and Portugal. At least three different country strategies can be identified. Many of the countries-among them Australia, Austria, Canada, Denmark, Finland, Iceland, Ireland, Spain, Sweden, Switzerland, Luxembourg, New Zealand and others-fit the hypothesized results and stay close to the regression line, appearing to complement their domestic ambition with the same level of international ambition. Another group consists of front-runner countries that promote ambitious climate policies at home, while also spending comparatively more on climate finance abroad; these include France, Germany and Norway. The other two strategies stray from this pattern by either being more committed to climate policies at home (Portugal, South Korea and the UK) or more engaged in international climate financing (Japan). This is in line with several country case studies. The Germanwatch (2019) country report on Portugal notes that the country's high ranking on domestic climate policy is primarily due to its "government's commitment to the carbon neutrality target by 2050 [...] and to a coal phase-out recently anticipated to 2023, which is to be achieved by means of 100% renewable energy in the mid-century". By contrast, international climate financing appears to be less integrated into Portugal's overall development strategy (Camões 2015). Previous case studies also support the results for Japan. The Climate Action Tracker (2013, 3) observes a general shift in focus in the case of that country from domestic to international emissions reductions. While Japan has reduced its domestic climate ambitions, it has also concurrently increased its efforts to provide international climate finance.
Table 2 presents the results of all of the models. Model 1 aims to provide descriptive results for the main bivariate regression, while models (2)-(5) include all control variables and interaction terms for hypotheses 2-5. In general, models (2)-(5) fit the data well, as the adjusted R2 is over 80 percent in all interaction models. All interaction models include all of the control variables, and all country- and year-fixed effects. As per Keele et al. (2020), the main interaction models do not present the results for the control variables, since covariates do not carry a causal or substantive interpretation. The constitutive term on domestic ambition in models (2)-(5) captures the effect of domestic ambition on international climate commitment when the associated moderating variable is zero. As I hypothesized and as shown by Fig. 1, the descriptive bivariate model (1) in Table 2 demonstrates that a higher level of domestic climate ambition is positively associated with a higher level of international climate ambition. This result provides support for hypothesis 1-that domestic climate policy is complementary to international climate finance, as developed countries that are more ambitious at home are also more likely to be ambitious internationally. To determine why that should be, I test 4 additional hypotheses on domestic climate policy and specific moderating factors (responsibility, vulnerability, industrial opposition and capability). The following results are specific to climate mitigation finance but fairly robust to the full climate finance data (total for mitigation and adaptation) in Table 3.
Responsibility
I will focus first on model (2) in Table 2, which tests hypothesis 2. The interaction term of domestic climate ambition and responsibility is statistically significant with a positive coefficient. This result is in line with the "idealist" expectation of the UNFCCC (1992), which requires that countries commit to international climate finance based on their "common but differentiated responsibilities". Thus, I find support for hypothesis 2: the effect of domestic climate policy ambition on international climate finance increases with greater responsibility for causing climate change. This result is reinforced by the marginal effects in Fig. 2, which show that countries that are more ambitious domestically provide more climate mitigation finance at higher levels of responsibility. Countries that are less responsible, however, commit less international climate finance per unit of domestic ambition.
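To make the marginal-effects logic explicit, the sketch below computes the conditional effect of domestic ambition at different levels of the moderator from the fitted interaction model sketched in the methods section; the parameter names follow that illustrative formula and are therefore assumptions, not the study's actual variable names.

```python
import numpy as np

# Marginal effect of domestic ambition at each level of responsibility,
# following Brambor et al. (2006): dY/dX = b1 + b3 * Z, with a delta-method
# standard error built from the estimated covariance matrix.
b = model.params
V = model.cov_params()
z_grid = np.linspace(dat["log_cum_co2_lag1"].min(), dat["log_cum_co2_lag1"].max(), 50)

me = b["ccpi_policy_lag1"] + b["ccpi_policy_lag1:log_cum_co2_lag1"] * z_grid
se = np.sqrt(
    V.loc["ccpi_policy_lag1", "ccpi_policy_lag1"]
    + z_grid**2 * V.loc["ccpi_policy_lag1:log_cum_co2_lag1",
                        "ccpi_policy_lag1:log_cum_co2_lag1"]
    + 2 * z_grid * V.loc["ccpi_policy_lag1", "ccpi_policy_lag1:log_cum_co2_lag1"]
)
lower, upper = me - 1.96 * se, me + 1.96 * se   # 95% confidence band, as plotted in Fig. 2
```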
Vulnerability
I will turn next to hypothesis 3 in model (3). Unlike Klöck et al. (2018), I discover a strong negative relationship between vulnerability, domestic climate ambition and ambition for international climate finance. The results show that countries that are more ambitious at home provide less international climate finance if they are more threatened by climate change. This suggests that vulnerability to climate change does not push domestically ambitious countries to take more action abroad. Accordingly, I reject hypothesis 3. Unexpectedly, it is the domestically least ambitious countries that, ceteris paribus, are more likely to increase their climate finance ambition once their vulnerability intensifies. It appears that higher vulnerability is associated with lower domestic action relative to international ambition.
Industry opposition
The importance of carbon-intensive industry (as a share of GDP) is included in model (4) as an interaction term with domestic climate ambition. The negative effect of carbon-intensive industry in the model appears to be conditional on domestic climate ambition at the 90% confidence level. These results suggest that the effect of domestic ambition on international climate finance decreases as the carbon-intensive industrial sector increases (Fig. 2, in which the gray areas represent 95% confidence intervals). This also means that domestically ambitious countries begin to provide more climate finance as their industrial sector decreases. The results provide some support for the thesis that the industry sector drives disparity in domestic and international climate ambition. The reason may be that countries with large carbon-intensive industries, such as South Korea and Norway, support one dimension over the other, either locally or internationally. Nevertheless, this result is at variance with the theoretical expectations of the regulatory politics framework, to the effect that restrictive domestic policies will lead to a greater use of climate financing in order to support industry's efforts to "level the playing field" abroad. This leads me to reject hypothesis 4.
Capability
Model (5) shows that a greater abundance of economic resources decreases the effect of domestic climate ambition on international climate finance. The interaction term of domestic ambition and GDP per capita is statistically significant. Figure 2 shows that poorer countries (below the Annex II average) provide more funding when their domestic ambition is high. This effect decreases, however, among the wealthiest countries (Fig. 2). I conclude that the effect of domestic ambition on international climate finance decreases with income. This result suggests a basis for rejecting hypothesis 5, and it provides evidence that domestic "greenness" is more important for international ambition than additional wealth.
Conclusion
By examining international climate finance, I have provided generalizable results on the complementarity of international and domestic climate ambition. My study has included public funding for climate change mitigation and taken into account the responsibility of different countries for causing climate change, as stated in the Kyoto Protocol and the Paris Agreement. My main contribution to the literature is the finding that countries with more ambitious climate policies at home are also more likely to be more committed abroad. This effect is conditional, however, on several factors. The main contribution of the study is the disaggregation of domestic and international climate action. I find support for the first part of the "common but differentiated responsibilities and respective capabilities" principle, but not for the second. Everything considered, my analysis shows that domestically committed countries provide more climate mitigation financing and that this effect increases with cumulative carbon emissions (responsibility). However, my results do not support the argument that climate financing is a "luxury" that richer countries can afford due to excess wealth (capability). In contrast, excess resources do not matter for countries that are domestically ambitious. The theoretical implication of this study is thus that countries' international climate ambition is driven by a combination of responsibility and domestic ambition. An abundance of economic resources, by contrast, is associated with disparity.
I also find a moderating effect of industry opposition. Domestically "green" countries with a sizable carbon-intensive industrial sector provide less international climate financing. This result defies the expectation that carbon-intensive industries would be more supportive of renewable energy investments and the expansion of strict climate ambition abroad in countries that have stricter domestic policies in place. Instead, a large carbon-intensive industrial sector is more likely to encourage disparity, with domestically ambitious countries reducing their international ambition. The moderating effect of vulnerability points in the same direction and contradicts the theoretical expectation that exposure to climate change will increase the importance of international climate policy. Rather, my analysis shows that domestically ambitious countries may become even less interested in tackling climate change on a global level as their vulnerability to climate change increases.
This research has vital implications for climate change policy. I find that increased vulnerability, a strong carbon-intensive industry and a stronger economic capability are not enough to increase a country's international commitment when it is already ambitious at home. Instead, I find that domestically "green" countries are more likely to be influenced by calls for increased international responsibility. Future research can complement the present study, in particular by adopting a comparative framework for qualitative analysis and by encompassing a wider range of domestic political factors, including the role of national and transnational actors.
|
v3-fos-license
|
2024-04-26T13:11:11.987Z
|
2024-04-26T00:00:00.000
|
269363291
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "9baad5fb2f2e7e88bf5f5d657ac3f72a860c0fcc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46028",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Chemistry"
],
"sha1": "c5f53f306c20592801d04eb5eacb591f14909d9a",
"year": 2024
}
|
pes2o/s2orc
|
Methionine inducing carbohydrate esterase secretion of Trichoderma harzianum enhances the accessibility of substrate glycosidic bonds
Background The conversion of plant biomass into biochemicals is a promising way to alleviate energy shortage, which depends on efficient microbial saccharification and cellular metabolism. Trichoderma spp. have plentiful CAZymes systems that can utilize all components of lignocellulose. Acetylation of polysaccharides causes nanostructure densification and hydrophobicity enhancement, which is an obstacle for glycoside hydrolases to hydrolyze glycosidic bonds. The improvement of deacetylation ability can effectively release the potential for polysaccharide degradation. Results Ammonium sulfate addition facilitated the deacetylation of xylan by inducing the up-regulation of multiple carbohydrate esterases (CE3/CE4/CE15/CE16) of Trichoderma harzianum. In particular, the pathway by which the cellular assimilation products of ammonium sulfate induce up-regulation of the deacetylase gene (Thce3) was revealed. The intracellular metabolite changes were revealed through metabonomic analysis. Whole genome bisulfite sequencing identified a novel differentially methylated region (DMR) in the ThgsfR2 promoter, and the DMR was closely related to the lignocellulolytic response. ThGsfR2 was identified as a negative regulatory factor of Thce3, and methylation in the ThgsfR2 promoter released the expression of Thce3. The up-regulation of CEs facilitated substrate deacetylation. Conclusion Ammonium sulfate increased the polysaccharide deacetylation capacity by inducing the up-regulation of multiple carbohydrate esterases of T. harzianum, which removed the spatial barrier of the glycosidic bond and improved hydrophilicity, and ultimately increased the accessibility of glycosidic bonds to glycoside hydrolases. Supplementary Information The online version contains supplementary material available at 10.1186/s12934-024-02394-1.
Background
Plant biomass is a large-scale renewable organic carbon resource, and the application of microorganisms to degrade and convert it into biofuels and biochemicals is a promising way to solve the current energy crisis [1].Plant fiber components are complex and mechanically dense, mainly composed of cellulose (40%-50%), hemicellulose (20%-40%), and lignin (20%-30%) [2].Its whole-component degradation requires the combination of several lignocellulolytic carbohydrate active enzymes (CAZymes) [3].The CAZymes family consists of Glycoside Hydrolases (GHs), Glycosyl Transferases (GTs), Polysaccharide Lyases (PLs), Carbohydrate Esterases (CEs), and Auxiliary Activities enzymes (AAs), which are secreted by saprophytic microorganisms to act on lignocellulose and ultimately hydrolyze glycosidic bonds to form oligosaccharides. Trichoderma species are typical saprophytic fungi [4] that generally possess an affluent CAZymes system and a complete pentose (C5) and hexose (C6) utilization pathway.Meanwhile, it can regulate CAZymes secretion and hydrolysis strategy based on different lignocellulosic substrates.The major lignocellulosic response regulators are the transcriptional activator ACE2, transcriptional repressor ACE1 [5], zinc-finger transcription factor PACC, CCAAT binding complex HAP2/3/5, glucose repressor CRE1, and the GATA factor AREA, which form a regulatory network controlling the expression of lignocellulases [6,7].
Substrate variability has a significant impact on the degradation efficiency of CAZymes.Lignin degradation is relatively complicated, it is mainly an amorphous non-homogeneous phenolic polymer composed of coumarin, coniferyl alcohol, and sinapyl alcohol [8], mainly degraded by laccase and lignin peroxidase [9]; Cellulose is the most critical component of plant cell wall and structural support material, which is also regarded as the linear polysaccharide composed of glucose linked through glycosidic bonds.GHs such as Endoglucanases (EG), cellobiohydrolase (CBH), and β-glucosidases (BG) act directly on glycosidic bonds, and their catalytic function generally comes from two amino acid residues: proton donor and nucleophile/base [10].Hemicellulose is mainly composed of xylose, arabinose, and mannose polymerization.Xylanase, mannanase, and arabinase are the main hemicellulases [11,12].Hemicellulose also has diverse chemical modifications, such as methylation, acetylation, and ferulic acylation, which greatly limits the hydrolysis efficiency of hemicellulase and even other lignocellulosic components, especially the acetylation of xylose residues, which decreases the hydrophilicity of fibers [13,14] and even affect the binding efficiency between cellulase and cellulose [15].Acetylation of hemicellulose is prevalent in gramineae, which facilitates the interaction between polysaccharides [16], thus improving the resistance to physical and biological stresses [17,18].Nevertheless, the acetylation remaining in plant residues will greatly limit the degradation of hemicellulases, which is a bottleneck for plant biomass saccharification.
Trichoderma species can secrete carbohydrate esterases (CEs) such as CE16, CE5, and CE3, which can catalyze the deacetylation of polysaccharides [19].The CAZymes Database (CAZyDB) shows that the family number of CEs is only 20, whereas the GHs have the most expansive gene family in CAZymes (GH family number: 184).Although this result is not statistical for a single species, it implies that the GHs family is abundant in microorganisms.CEs are unappreciated because of the more contracted gene family compared to GHs and cannot act directly on glycosidic bonds.However, its polysaccharide deacetylation function can remove the obstacles to the hydrolysis of glycosidic bonds by GHs.The improvement of GHs hydrolysis efficiency means that cells can obtain sufficient carbon sources with less enzyme production, while the saved energy will be used for secondary metabolism and hyphae growth.Using Trichoderma species as a cell factory to achieve industrial bioconversion of straw into chemicals requires the support of huge hyphae biomass.Owing to the nutrition of straw being relatively single, a small amount of amino acids or ammonium sulfate are often added during solid fermentation [20], which can significantly stimulate the straw utilization efficiency of filamentous fungi.However, the nitrogen/sulfur supplied from these supplements is insignificant for supporting long-term microbial life activities, so it is more likely that the cellular assimilation products of added ammonium sulfate act as triggers to stimulate the lignocellulolytic response.
Here, we report that methionine, the major metabolite of ammonium sulfate, can enhance the lignocellulose degradation efficiency of Trichoderma harzianum by promoting its polysaccharide deacetylation capacity.T. harzianum is a plant residue degrader [21][22][23] with a rhizosphere-promoting function [24,25], which is capable of degrading soil polysaccharides that cannot be utilized by plants and converting them into phytohormone such as IAA through cellular metabolism [26].Therefore, T. harzianum can not only serve as a cell factory for plant biomass to biochemicals conversion but also for phytohormone synthesis [4], all of which rely on more efficient polysaccharide degradation ability.This study reveals a connection between ammonium sulfate assimilation and microbial lignocellulose degradation strategies, which helps to further explore the potential of Trichoderma species in plant biomass bioconversion.
Biomass assay
The hyphae and medium (straw) were fully mixed, and a 10.0 g sample was accurately weighed. The samples were rapidly frozen with liquid nitrogen and milled. Hyphal DNA was extracted with the PowerSoil Pro Kit (QIAGEN, Germany) according to the manufacturer's instructions and then dissolved in 20 μL ddH2O. The standard DNA solution contained 4.5 × 10^10 copies μL−1 (2000 bp, 100 ng μL−1) and was serially diluted 10-fold. The standard regression equation was Y = (34.3 − X)/3.2 (R2 = 0.999), where Y = log10(template copies) and X = average Ct. The Ct values of treatments and mutants were determined by qPCR and converted to genome copies with the standard regression equation.
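For illustration, the conversion implied by this standard curve can be written as a small helper function; the example Ct value is hypothetical.

```python
def ct_to_copies(ct, intercept=34.3, slope=3.2):
    """Convert an average Ct value to template copies using the standard curve
    reported in the text: log10(copies) = (34.3 - Ct) / 3.2 (R^2 = 0.999)."""
    return 10 ** ((intercept - ct) / slope)

# Example: a hypothetical Ct of 17.5 measured for a hyphae/straw sample.
print(f"{ct_to_copies(17.5):.2e} copies per reaction")
```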
Metabolomics analysis
The hyphae of different treatments were frozen in liquid nitrogen and then fully ground, followed by extraction with sterile distilled water. The extracts were analyzed on a quadrupole-Orbitrap mass spectrometer coupled to an LC-MS/MS system; 40,850 peaks were obtained, of which 26,383 were retained after filtering for deviations and missing values [29]. The data were logarithmically transformed and centered, and then analyzed by automatic modeling. Subsequently, UV scaling and OPLS-DA modeling were performed on the principal components. The quality and validity of the model were judged by R2X, R2Y, and Q2Y, which were obtained after cross-validation. R2X and R2Y denote the interpretability of the OPLS-DA model with respect to the X and Y matrices, respectively, and Q2Y was used to evaluate predictiveness [30]. The adjusted data were compared with the database and combined with the qualitative and quantitative results for univariate statistical analysis (UVA) and multivariate statistical analysis (MVA). Subsequently, metabolites with significant differences were screened by ROC analysis; an AUC value closer to 1 was considered to indicate a better diagnostic effect, and an AUC value higher than 0.9 indicated high accuracy [31]. Finally, a series of bioinformatics analyses were performed to visualize the biological functions of the differential metabolites.
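The ROC screening step can be sketched as follows; the intensity matrix, file name and sample naming scheme are assumptions for illustration only, not the study's actual data layout.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical layout: rows = samples (6 x T1, 6 x T3), columns = metabolite intensities.
peaks = pd.read_csv("metabolite_intensities.csv", index_col=0)
labels = peaks.index.str.startswith("T3").astype(int)   # 0 = T1, 1 = T3

# A metabolite whose intensity alone separates the two AS treatments well
# (AUC close to 1) is a candidate signature metabolite, as with Met (0.97) and 5mC (0.94).
auc = peaks.apply(lambda col: roc_auc_score(labels, col), axis=0)
print(auc.sort_values(ascending=False).head(10))
```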
WGBS assay
2 µg hyphae DNA extracted from different treatments was diluted to 10 µL, and 1.1 µL NaOH (3 M) was added, after which the system was incubated at 37 °C for 10 min; then 6 µL hydroquinone (10 mM) and 104 µL sodium bisulfite (3.6 M) were added, the mixture was sealed with mineral oil, and the system was incubated at 50 °C for 18 h. Subsequently, the DNA sample was recovered, and the bisulfite conversion was validated by methylation-specific PCR (MSP). The pretreated DNA was ultrasonically fragmented to 200-300 bp and used to construct the DNA library. The libraries were sequenced on the Illumina sequencing platform. Fastp software was used to obtain clean reads [32], and Bismark [33] software with a bisulfite conversion algorithm was used to correctly locate the reads and count the conversion rate after bisulfite treatment. MethylKit [34] software was used to detect methylation sites and count methylation at the genome-wide and gene-element levels. DMR analysis was also performed with MethylKit, and methylation level data within genome tiling windows were obtained. A logistic regression model was used to analyze the DMRs between groups and then analyze the methylation level differences. Eventually, the DMRs were annotated according to the information of protein-coding genes.
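The per-window comparison can be illustrated with a minimal Python sketch in the spirit of MethylKit's logistic-regression test; the read counts and window are entirely hypothetical, and the real analysis was done with MethylKit itself.

```python
import numpy as np
import statsmodels.api as sm

def test_window(meth_counts, total_counts, group):
    """Binomial GLM for one genomic tile: methylated vs. unmethylated read counts
    are modelled with treatment group (T1 = 0, T3 = 1) as the predictor."""
    endog = np.column_stack([meth_counts, np.asarray(total_counts) - np.asarray(meth_counts)])
    exog = sm.add_constant(np.asarray(group, dtype=float))
    fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
    return fit.params[1], fit.pvalues[1]      # group effect (log-odds) and its p-value

# Hypothetical counts for one 1-kb tile across 3 T1 and 3 T3 replicates.
effect, p = test_window(
    meth_counts=[5, 7, 6, 21, 19, 24],
    total_counts=[50, 48, 52, 51, 47, 55],
    group=[0, 0, 0, 1, 1, 1],
)
print(f"log-odds difference = {effect:.2f}, p = {p:.3g}")
```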
ChIP-seq assay
A His-tag was added to the C-terminus of ThGsfR2 and verified by sequencing and western blot. Hyphae of the modified strain were immersed in cross-linking buffer (Tris-HCl, 10 mM, pH 8.0; EDTA, 1.0 mM; PMSF, 1 mM; formaldehyde, 1%, v/v) for 25 min, and cross-linking was terminated with 10× glycine (Beyotime, China). The chromatin was ultrasonically sheared to 200-1000 bp. A 200 μL supernatant was obtained by centrifugation, 1.8 mL ChIP diffusion buffer (Beyotime, China) and 1 μg 6× His-tag ChIP-grade antibody (Abcam, UK) were added, and the mixture was incubated at 4 °C overnight. Then, 60 μL Protein A/G beads were added to the ChIP system and incubated at 4 °C for 5 h. The supernatant was removed by centrifugation, and the sediment was washed with low-salt immune complex wash buffer, high-salt wash buffer, LiCl wash buffer, and TE buffer in turn. Finally, 500 μL elution buffer (NaHCO3, 0.1 M; SDS, 1%, w/v) was added to elute the protein-DNA complexes, and the nucleic acid was recovered after de-crosslinking (5 M NaCl).
NGS: The recovered DNA was subjected to high-throughput sequencing (Illumina HiSeq 2000) after library construction to obtain raw sequenced reads. The clean data were then aligned to the reference genome using BWA (version 0.7.15) [35] to obtain BAM files; duplicate sequences were removed and only the uniquely aligned sequences were retained. Peak information was analyzed genome-wide using MACS (version 2.1.1) [36], and the screening threshold for significant peaks was q-value < 0.05. Peak detection was performed to obtain information on enriched regions, and peak distribution, nearest-gene search and motif prediction were performed. Finally, the peak distribution was counted, and GO and KEGG enrichment and transcription factor prediction were performed for the nearest peak genes.
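The alignment and peak-calling steps described here could be wired together roughly as below, assuming BWA, samtools and MACS2 are installed; file names and the effective genome size are placeholders rather than values from the study, and duplicate removal (e.g. with samtools markdup) is omitted for brevity.

```python
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

run(["bwa", "index", "genome.fa"])                      # placeholder reference
for sample in ["IP", "Input"]:
    with open(f"{sample}.sam", "w") as out:
        subprocess.run(["bwa", "mem", "-t", "8", "genome.fa",
                        f"{sample}_R1.fq.gz", f"{sample}_R2.fq.gz"],
                       stdout=out, check=True)
    run(["samtools", "sort", "-o", f"{sample}.bam", f"{sample}.sam"])
    run(["samtools", "index", f"{sample}.bam"])

# q-value threshold of 0.05, matching the peak-screening criterion in the text;
# the genome size is an assumed placeholder.
run(["macs2", "callpeak", "-t", "IP.bam", "-c", "Input.bam",
     "-f", "BAM", "-g", "4.1e7", "-q", "0.05", "-n", "ThGsfR2_ChIP"])
```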
HSQC assay
To remove pectin and lignin, 1.0 g rice straw without hyphae was incubated in 5 mL ammonium oxalate (1%, w/v) at 37 °C, and the pellets were then incubated in 3 mL 11% peracetic acid at 80 °C for 40 min. Xylan was extracted with DMSO at 70 °C and precipitated with 5 volumes of ethanol/methanol/water (7/2/1, pH 3.0). 10 mg of extracted xylan was dissolved in 0.5 mL deuterated DMSO-d6 (Sigma-Aldrich, USA) and subjected to 1H-13C HSQC [38,39]. HSQC spectra were obtained on a Bruker 600-MHz NMR spectrometer. The standard pulse sequence (gHSQCAD) was used to determine the one-bond 13C-1H correlation. The HSQC spectra were collected with a spectral width of 10 ppm in the 1H dimension and 200 ppm in the 13C dimension. The spectra were calibrated to the DMSO solvent peak (δC 39.5 ppm, δH 2.49 ppm) [40]. Acetyl-group identification in the NMR data and peak-area statistics were conducted with MestReNova (version 10.0.2) software.
WSI determination
After removing hyphae, the rice straw from each treatment was dried to a constant weight, and 1.000 g samples were weighed under an infrared lamp and placed in a sample holder, subsequently transferred to a designated position inside a magnetic suspension balance.The samples were degassed at 120 °C for 3 h under vacuum conditions and then cooled to 28 °C [41].The evaporator was then connected to the adsorption room and the mass of adsorbed water was recorded after reaching the saturation pressure.The process of vacuum drying and degassing was subsequently repeated to obtain the water adsorption isotherm by setting the vapor pressure gradient.
Low-field NMR
WT was inoculated into the rice straw medium including T1 and T3 at 28 °C for 15 days.After removing hyphae, the straw was dried to constant weight and ground through a 100-mesh sieve.1.000 g straw powder from T1 and T3 were weighed, and 1 mL deionized water was added and mixed well, then stood overnight.The mixtures were centrifuged at full speed for 10 min and the precipitates were subjected to the T 2 relaxation time determination.The NMR analyzer (MesoMR23-060H-I, Niumag) operating frequency was 18 MHz, and the operating temperature was 32 °C, while the pulse sequence was Carr-Purcell-Meiboom-Gill (CPMG) [42].The repeat sampling waiting time was 3000 ms, echo time was 0.1 ms, echo number was 1000, and radio frequency delay was 0.08 ms.Data was collected to generate T 2 relaxation spectra using Origin 2023b.
Ammonium-sulfate facilitated plant biomass utilization of T. harzianum
Different ammonium sulfate (AS) gradients, T1 (0%), T2 (0.5%), and T3 (1%), were set by adjusting the mineral medium with rice straw as carbon source (MM + straw), while the content of other mineral nutrients was kept consistent. The wild type (WT) of T. harzianum was inoculated on the treatments for 4 days (Fig. 1a). Since hyphae could not be well separated from the straw medium, we determined biomass by absolute quantitative PCR, and the result showed that the hyphal biomass of T2 (1.52 × 10^5 copies g−1) and T3 (1.76 × 10^5 copies g−1) was significantly greater than that of T1 (0.93 × 10^5 copies g−1), indicating that biomass increased gradually with the AS gradient (Fig. 1b). Extracellular proteins from each treatment were extracted with the same volume of PBS buffer (10 mL) and subjected to SDS-PAGE (Fig. 1c), which showed that the extracellular protein content increased with the AS gradient (Additional file 1: Fig. S1a, b). In addition, the FPA, EG, and xylanase activities of T2 and T3 were also significantly higher than those of T1 (Fig. 1d; Additional file 1: Fig. S6a). All these results indicated that small amounts of ammonium sulfate could significantly increase the lignocellulose utilization capacity of T. harzianum. To exclude the physical interference of exogenous addition, we overexpressed the key enzymes of the ammonium sulfate assimilation pathway, ATP sulfatase (Thatps) and alanine transaminase (Thalt, KEGG ID: K00814, GENE ID: A1A111846.1, NCBI ID: OPB36402.1), which catalyze sulfate reduction and ammonia transfer to pyruvate, and obtained the strains OE-Thatps (Thatps 44-fold up-regulated) and OE-Thalt (Thalt 26-fold up-regulated) (Fig. 1f). The mutants and WT were cultured on MM + straw (with the same AS content) for 4 days (Fig. 1e). The biomass and CAZyme activities (FPA, EG, and xylanase) of OE-Thatps and OE-Thalt were also significantly greater than those of WT (Fig. 1g, h; Additional file 1: Fig. S6b), suggesting that an enhanced AS assimilation capacity was conducive to lignocellulose utilization by T. harzianum and hinting at a potential linkage between the AS assimilation pathway and the lignocellulolytic response.
Ammonium sulfate induced a significant increase in intracellular methionine and 5-methylcytosine
AS needs to be assimilated by cells to generate relevant metabolites to perform its regulatory function, therefore, we performed the metabolomic analysis on T1 and T3 to reveal the changes in intracellular metabolites.PCA exhibited a significant difference between T1 and T3, and good reproducibility of 6 biological repeats (Fig. 2a).The correlation between metabolites and sample categories was modeled by OPLS-DA.The model validity was judged by using cross-validation, and result indicated that the model with high interpretability for categorical variables (R 2 Y = 0.995, P < 0.05) and high predictability (Q 2 = 0.807, P < 0.05) (Additional file 1: Fig. S2a).The volcano plot showed the metabolites content Foldchange in T3 relative to T1, with 3303 metabolites up-regulated and 1523 metabolites down-regulated in T3 (Fig. 2d).Signal intensity changes of metabolites were identified by mass spectra (LC-MS/MS) by comparing the 6 biological repeats of T3 and T1, and the metabolites from database comparison (score > 0.9) were then subjected to clustering analysis.The results showed that amino acids, especially for sulfur-containing amino acids, were the major up-regulated metabolites, while organic acid metabolites were the major down-regulated substances.Notably, a large number of methylation-modified metabolites were upregulated (Fig. 2c).Matchstick plot showed the 10 most significantly up/down-regulated metabolites in volcano plot, and statistical significance analysis was performed.Methionine was the most significantly upregulated metabolite, while muconic acid was the most significantly down-regulated (Fig. 2d).KEGG enrichment showed that mainly various amino acid metabolic pathways responded to AS addition, among which sulfur metabolism and methionine synthesis pathway were the most drastically affected by AS addition (Fig. 2e).
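The up/down classification behind such a volcano-style comparison can be sketched as follows; the intensity matrix, file name, replicate naming and thresholds are illustrative assumptions rather than details taken from the study's pipeline.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical layout: rows = metabolites, columns = 6 T1 and 6 T3 replicate intensities.
peaks_matrix = pd.read_csv("metabolite_matrix.csv", index_col=0)
t1 = peaks_matrix[[f"T1_{i}" for i in range(1, 7)]]
t3 = peaks_matrix[[f"T3_{i}" for i in range(1, 7)]]

log2_fc = np.log2(t3.mean(axis=1) / t1.mean(axis=1))       # T3 relative to T1
pvals = stats.ttest_ind(t3, t1, axis=1).pvalue

volcano = pd.DataFrame({"log2FC": log2_fc, "p": pvals})
volcano["status"] = np.where(
    (volcano["p"] < 0.05) & (volcano["log2FC"] > 1), "up",
    np.where((volcano["p"] < 0.05) & (volcano["log2FC"] < -1), "down", "ns"),
)
print(volcano["status"].value_counts())
```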
To screen out the signature metabolite that can serve as a marker capable of distinguishing different AS additions (T1 and T3), the receiver operating characteristic (ROC) analysis was performed by using the previously constructed regression model.The area under curve (AUC) was applied to evaluate the ability of a specific metabolite to differentiate test treatments, with an AUC value closer to 1 indicating that this metabolite was more reliable in diagnosing and differentiating treatments (AS gradients).Methionine (Met) was screened as the best signature metabolite for differentiating the AS gradient and its AUC value was 0.97, while 5-methylcytosine (5mC), with an AUC value of 0.94, could also be applied as a signature metabolite (Additional file 1: Fig. S2c), indicating that Met and 5mC responded most dramatically to AS addition.Met was readily activated by ATP to produce S-adenosylmethionine (AdoMet), which was the main methyl donor.5mC and 3mA implied that a change in DNA methylation level may have occurred.The MS signal intensity could characterize intracellular metabolite content.The relative signal intensity of Met and AdoMet were significantly up-regulated in T3 relative to T1 (Fig. 2f ), which corresponded to the ROC analysis results.1-Methylthymine (1mT), 5-methylcytosine (5mC), 3-methyladenine (3mA), and 7-methylguanine (7mG) could serve as DNA methylation markers, where 5mC and 3mA were significantly up-regulated in T3 relative to T1, and 7mG was relatively up-regulated as well, while 1mT was not detected (Fig. 2g, Additional file 2: Dataset 1).It appeared that the increase in Met and AdoMet might be the trigger for the increase in 5mC and 3mA.The above results implied that AS-induced up-regulation of the lignocellulolytic response might be mainly through assimilation to Met, and this process might include changes in DNA methylation.
To investigate whether the increase in intracellular Met favored straw utilization, we increased the intracellular Met content by overexpressing the key enzyme in Met synthesis, the 5-methyltetrahydrofolate-homocysteine methyltransferase gene (ThmetH, KEGG ID: K00549, GENE ID: A1A104551.1, NCBI ID: KKP03877.1), and obtained the strain OE-ThmetH. OE-ThmetH was cultured on MM + straw for 4 days (Fig. 2h), and its intracellular Met content (26.3 μmol g−1) was significantly higher than that of WT (21.2 μmol g−1), while the biomass and FPA of OE-ThmetH (2.26 × 10^5 copies g−1, 1.89 U g−1) were also significantly up-regulated relative to WT (1.88 × 10^5 copies g−1, 1.44 U g−1) (Fig. 2i, Additional file 1: Fig. S2d). The trend of EG and xylanase activities was the same as that of FPA (Additional file 1: Fig. S6c). The above results indicated that up-regulation of intracellular Met facilitated the lignocellulolytic response of T. harzianum. Thus, the promotional effect of AS might be achieved mainly through assimilation to Met, and the process of Met promoting the lignocellulolytic response may involve inducing changes in DNA methylation levels.
WGBS revealed that ammonium-sulfate addition induced methylation of ThgsfR2 promoter
Based on the metabolomics analysis results, multiple methylated metabolites were significantly up-regulated with AS addition. Notably, 5mC was also screened as a signature metabolite, implying that AS addition might induce a change in DNA methylation levels. Whole genome bisulfite sequencing (WGBS) was performed to evaluate the DNA methylation changes after AS addition. PCA demonstrated good intra-group reproducibility and inter-group variability between T1 and T3 (Fig. 3a). In combination with NGS, the conversion rate of cytosine (C) to uracil (U) for each sample was counted, and the result indicated that T3 had higher DNA methylation levels in the promoter, gene body, and terminator regions than T1 (Fig. 3b), which validated the previous metabolomics results that 5mC, 3mA, and 7mG were up-regulated in T3. The average methylation levels of gene elements, including exon, intron, TSS, intergenic, etc., on the DMRs of CpG, CHG, and CHH types were determined, which could show the effect of DMRs on gene expression (Fig. 3c). The numerical distributions of CpG, CHG, and CHH methylation levels of the samples were counted and plotted as violins, in which the vertical coordinate indicates the methylation level and the width of each violin represents how many points were at that methylation level (Fig. 3d). By analyzing the changes in DMR methylation levels of the CpG, CHG, and CHH types, we found that 4 CpG-type DMRs were not present in T1 but were identified in T3, while 2 CHH-type DMRs and 9 CHG-type DMRs were similarly not present in T1 but were identified in T3. The Venn diagram showed that the DMRs in the chromosome fragments LVVK02.1 (2,050,001-2,051,000 bp) and LVVK42.1 (1001-2000 bp) existed in both the CpG and CHG types (Fig. 4e, Additional file 3: Dataset 2). By comparing the sequences of these 13 newly appeared DMRs in T3, we found that most DMRs were located in the beginning regions of chromosome fragments, such as LVVK31.1 (3001-4000 bp), LVVK56.1 (1001-2000 bp), and LVVK63.1 (1-1000 bp), which mostly were non-coding sequences. We screened the DMRs located in gene transcriptional functional regions and compared them to their corresponding genes; the DMR located in LVVK02.1 (2,050,001-2,051,000 bp) lay in the functional region of a zinc finger transcription factor gene (ThgsfR2, GENE ID: A1A109863.1, NCBI ID: OPB38861.1), which encodes a protein with high homology to the griseofulvin synthesis regulator GsfR2. The subtilase family protein gene (Thsfp, GENE ID: A1A100688.1, NCBI ID: OPB45967.1), the glycosyl hydrolase family 92 protein gene (Thgh92, GENE ID: A1A100345.1, NCBI ID: OPB47137.1), and the AMP-binding enzyme gene (Thabe, GENE ID: A1A100950.1, NCBI ID: OPB45998.1) were also included (Fig. 3f). Since DNA methylation in an open reading frame (ORF) or promoter commonly affects transcription, we performed qPCR on the 4 genes, and the result showed that the transcription levels of ThgsfR2 (9.4-fold) and Thsfp (5.4-fold) were down-regulated in T3 relative to T1, with no significant fold change for Thgh92 (1.5-fold) and Thabe (2.1-fold) (Fig. 3g). To reveal the effect of down-regulation of these genes on the lignocellulolytic response of T. harzianum, the 4 genes were knocked out to obtain the mutants KO-ThgsfR2, KO-Thsfp, KO-Thgh92, and KO-Thabe. The growth of KO-ThgsfR2 on MM + straw was better than that of WT, while KO-Thgh92 and KO-Thabe grew worse than WT, and the difference between KO-Thsfp and WT was not significant (Fig. 3h).
The biomass and FPA were increased in KO-ThgsfR2 (2.52 × 10^5 copies g−1, 1.71 U g−1) and decreased in KO-Thgh92 (1.22 × 10^5 copies g−1, 1.11 U g−1) and KO-Thabe (0.45 × 10^5 copies g−1, 0.62 U g−1) relative to WT, while these parameters of KO-Thsfp (1.59 × 10^5 copies g−1, 1.19 U g−1) were not significantly different from those of WT (1.81 × 10^5 copies g−1, 1.35 U g−1) (Fig. 3i, Additional file 1: Fig. S3b). The trend of EG and xylanase activities was the same as that of FPA (Additional file 1: Fig. S6d). Since the growth of KO-Thabe on PDA was also affected, the drastic decrease in straw utilization capacity could be due to the absence of Thabe affecting basal metabolism (Additional file 1: Fig. S3a). These results suggested that the mechanism by which AS promoted lignocellulose utilization of T. harzianum involved indirectly inducing methylation of the ThgsfR2 promoter. Therefore, ThGsfR2 might be a negative regulator of the lignocellulolytic response in addition to regulating griseofulvin expression, and its methylation contributed to the release of the lignocellulolytic response while inhibiting secondary metabolism. Similarly, the regulator LaeA plays a critical role in regulating the biosynthesis of aflatoxin, penicillin, and lovastatin and in controlling cellulase synthesis [43,44]. Ypr1 regulates the biosynthesis of the yellow pigment sorbicillin in T. reesei, and the ypr1 knockout strain (KO-Trypr1) showed a decrease in secondary metabolites and an increase in biomass; notably, the cellulase genes (Trcbh1, Trbgl, Trbxl1) and the xylanase gene Trxyn1 were up-regulated [45]. Our results also indicated that the inhibition of griseofulvin synthesis was beneficial for the up-regulation of biomass and total enzyme activity in T. harzianum, suggesting that inhibition of secondary metabolism favored the up-regulation of CAZymes.
Chemical or biological factors can induce an increase in DNA methylation levels [46,47], but how the components of DNA methylation are recruited to genome-specific sites remains to be investigated. In fact, AdoMet is one of the intracellular inducers of DNA methylation [48,49], and it was significantly up-regulated when AS was added, which might be the inducer of methylation of the ThgsfR2 promoter. Exogenous AdoMet addition has been reported to promote cellulase synthesis in Penicillium oxalicum [50]. Notably, most of the novel DMRs induced by AS occurred in intergenic regions. Intergenic regions can also regulate cellular behaviors by encoding microRNAs [51][52][53]. Dicer-dependent microRNAs (miRNAs) and various small interfering RNAs (siRNAs), such as exo-siRNAs, endo-siRNAs, and nat-siRNAs [54], can regulate the expression of plant cell wall degrading enzyme (PCWDE) genes or secondary metabolism [55]. Thus, whether the AS-induced methylation in intergenic regions is equally involved in the regulation of the lignocellulolytic response by affecting microRNA expression deserves further investigation.
ChIP-seq uncovered the downstream genes regulated by ThGsfR2
To reveal which gene was negatively regulated by ThGsfR2, we performed Chromatin immunoprecipitation (ChIP).ChIP was an in-situ and in-vivo assay that can reveal the downstream genes regulated by ThGsfR2.The strain ThGsfR2-His was constructed by adding Histag to C-terminus of ThGsfR2 and validated by western blotting (Additional file 1: Fig. S4a).ThGsfR2-His was incubated at 28 °C for 4 days, after formaldehyde cross-linking and ultrasonic fragmentation of chromatin (Additional file 1: Fig. S4b), ThGsfR2 and DNA complexes were specifically recognized and precipitated by ChIP-class His-tag antibody and protein A/G beads, and then decross-linked and recovered to yield pure DNA fragments.The products of IP and Mock (IgG) were sequenced and mapped with the T. harzianum genome.PCA showed a significant difference between IP and Mock (IgG), suggesting that IP treatment has effective nucleic acid precipitation (Fig. 4a).The transcription start site (TSS) proximal (0-2 kb) was associated with a specific transcriptional regulatory function, and the reads distributed within the TSS proximal were counted; the peak plot showed the distribution and enrichment of all peaks near the TSS proximal (Fig. 4b).
The number of peaks on the annotated genomic structural elements (intergenic, promoter, 5'UTR, exon, intron, 3'UTR) was counted, and their enrichment and distribution characteristics on each element were also counted.Peaks were most distributed on promoter, accounting for 47.84%, followed by exon (22.01%), and 30.12% in non-functional areas (intergenic and intron) (Fig. 4c).IGV visualization showed the distribution on genome of the 10 peaks with the highest enrichment folds (Fig. 4d).
After sequencing and genomic alignment of the IP and Input products, 44 peaks with differential enrichment (fold enrichment > 2) were obtained (Additional file 4: Dataset 3). After mapping the peaks to gene elements, the peaks located in intergenic regions and introns were filtered out, and 14 genes were confirmed to be potentially regulated by ThGsfR2. The mutants OE-ThgsfR2 (ThgsfR2 56.8-fold up-regulated) and KO-ThgsfR2 were constructed by overexpressing and knocking out ThgsfR2 to confirm the downstream genes regulated by ThGsfR2. The transcription level changes of the 14 genes in OE-ThgsfR2 and KO-ThgsfR2 relative to WT were quantified by qPCR. The peak-mapped genes with low fold enrichment (folds < 10) did not show a significant fold change upon up-regulation (OE) or down-regulation (KO) of ThgsfR2. The transcription levels of the nucleotide-binding hypothetical protein THAR02_03409 gene (HP5723, GENE ID: A1A105723.1, NCBI ID: OPB42089.1) and the hypothetical protein TRIVIDRAFT_215543 gene (HP5320, GENE ID: A1A105320.1, NCBI ID: OPB42540.1) displayed a significant linear correlation with the ThgsfR2 expression levels. The carbohydrate esterase family 3 protein gene (Thce3, GENE ID: A1A109676.1, NCBI ID: OPB38703.1) was 34.9-fold down-regulated in OE-ThgsfR2 and 14.2-fold up-regulated in KO-ThgsfR2 (Fig. 4e), which suggested a negative regulation of Thce3 by ThGsfR2.
Fig. 4 ChIP-seq identified the downstream genes regulated by the zinc finger transcription factor ThGsfR2: (a) PCA of read distributions for IP and Input; (b) peak plot of IP and Input peak reads; (c) distribution of peak reads over genome functional elements; (d) IGV visualization of the 10 most enriched ThGsfR2 peaks; (e) expression fold changes of peak-associated genes in OE-ThgsfR2 and KO-ThgsfR2 relative to WT; (f-h) growth, biomass and FPA of OE-HP5723, OE-HP5320, OE-Thce3 and WT on MM + straw; (i) Homer motif analysis yielding 10 ThGsfR2 binding motifs with JASPAR transcription factor predictions; (j) the conserved motif1 sequence "TCT CTC TCTC" in the Thce3 promoter with 50-fold enrichment; (k) Y1H verification of the DNA-protein interactions between ThGsfR2 and motif1 and the Thce3 promoter regions R1 (−1000 to −1 bp) and R2 (−2000 to −1001 bp) on SD-Ura-Leu medium containing Aureobasidin A (600 ng mL−1).
Up-regulation of Thce3 facilitated the lignocellulosic response of T. harzianum
Since methylation or knockout of ThgsfR2 favored lignocellulolytic response, this suggested that ThGsfR2 repressed some genes involved in lignocellulolytic response, and HP5723, HP5320, and Thce3 might be the repressed genes.The effect of these genes on the lignocellulose utilization capacity of T. harzianum was investigated by overexpression and obtained the mutants OE-HP5723, OE-HP5320, and OE-Thce3.These mutants were cultured at 28 ℃ for 4 days (Fig. 4f ) and the biomass and FPA were also evaluated.The biomass and FPA of OE-Thce3 (1.58 × 10 5 copy g −1 , 2.61 U g −1 ) was significantly greater than that of WT (1.18 × 10 5 copy g −1 , 1.98 U•g −1 ), and these parameters of OE-HP5723 (1.15 × 10 5 copy g −1 , 2.01 U g −1 ) and OE-HP5320 (0.97 × 10 5 copy•g −1 , 1.81 U g −1 ) were not significantly different from WT (Fig. 4g, h).The same trends were obtained in EG and Xylanase activities (Additional file 1: Fig. S6e).These results suggested that of all the genes that changed significantly with ThGsfR2 expression level, only Thce3 was directly implicated in lignocellulose utilization of T. harzianum, which was reasonable because Thce3 could act directly on polysaccharides and belonged to the CAZymes family.Overexpression of the two hypothetical proteins (HP5723 and HP5320) did not promote the growth of T. harzianum on straw, suggesting that they were not lignocellulose utilization related genes, and since their transcription levels linearly correlated with the expression level of ThGsfR2, HP5723, and HP5320 might have belonged to the griseofulvin synthesis pathway.
The binding sequence preference of ThGsfR2 was obtained by motif analysis of all peaks. The table ranks the motif sequence characteristics of the transcription factor ThGsfR2 based on Homer analysis and scoring, and these motifs were matched to reported transcription factors through the JASPAR transcription factor database (Fig. 4i). Notably, the conserved sequence "TCT CTC TCTC" of motif1, which had the highest confidence, was found in the Thce3 promoter region (−1632 to −1619 bp); correspondingly, peak_36 was located at approximately −1632 to −1619 bp upstream of Thce3 (A1A109676.1) (Fig. 4j). The protein-DNA interaction between ThGsfR2 and motif1 was verified by yeast one-hybrid assay (Y1H), and the interactions between ThGsfR2 and the Thce3 promoter region 1 (R1: −1000 to −1 bp) and region 2 (R2: −2000 to −1001 bp) (Additional file 1: Fig. S4c) were also tested. Notably, R2 contained motif1. The results indicated that ThGsfR2 interacted with motif1, interacted strongly with R2, and showed almost no interaction with R1 (Fig. 4k). Combined with the qPCR results, it could be extrapolated that ThGsfR2 negatively regulates Thce3 through competitive binding with RNA polymerase. Therefore, methylation of the ThgsfR2 gene relieved its repression of Thce3, thereby releasing the expression potential of ThCE3.
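As a quick illustration of how such a conserved motif can be located within a promoter, the minimal sketch below scans a promoter sequence for motif1 and reports hit positions relative to the start codon. The FASTA file name and the coordinate convention (a promoter string covering the region immediately upstream of the ATG) are assumptions for illustration only.

```python
# Hedged sketch: locate the conserved motif1 core within a promoter sequence.
# File name and coordinate convention are illustrative, not the study's data.
MOTIF = "TCTCTCTCTC"  # conserved core of motif1, spacing removed

# Read a single-record FASTA of the promoter (hypothetical file name)
with open("Thce3_promoter.fasta") as fh:
    promoter = "".join(
        line.strip() for line in fh if not line.startswith(">")
    ).upper()

upstream_len = len(promoter)  # assume the string spans -upstream_len .. -1 bp

hits = []
start = promoter.find(MOTIF)
while start != -1:
    rel_start = start - upstream_len          # position relative to the ATG
    rel_end = rel_start + len(MOTIF) - 1      # inclusive end position
    hits.append((rel_start, rel_end))
    start = promoter.find(MOTIF, start + 1)

print("motif1 hits (relative to ATG):", hits)
```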
AS induced up-regulation of multiple CEs significantly enhanced substrate deacetylation of T. harzianum
Previous experiments showed that AS addition ultimately released the expression of Thce3 by inducing methylation of the ThgsfR2 promoter, which in turn increased the lignocellulose utilization efficiency of T. harzianum. The main function of CEs is to deacetylate polysaccharides [56], and the removed acetyl groups are detectable [40]. KO-ThgsfR2 and WT were subjected to liquid fermentation for 7 days, and 1 mL of centrifuged supernatant was used to detect acetate in the fermentation broth by high-performance liquid chromatography (HPLC). The results showed that the acetate peak signal intensity of the KO-ThgsfR2 treatment was significantly greater than that of WT, with good reproducibility (Fig. 5a). From the peak areas, the acetate content of each treatment was obtained using a standard regression equation (Additional file 1: Fig. S5a, b). The acetate content of the KO-ThgsfR2 treatment was significantly greater than that of the WT treatment (Fig. 5b), which further suggested negative regulation of Thce3 by ThGsfR2. The AS-induced up-regulation of Thce3 prompted us to quantify the expression levels of the other CE family genes annotated in T. harzianum. After extracting RNA from hyphae grown under the AS gradient (T1, T2, and T3) and performing reverse transcription, the expression of the CE family genes was quantified by qPCR. Excitingly, except for Thce1 (GENE ID: A1A101843.1, NCBI ID: OPB45480.1), all of the CE genes, including Thce3, Thce3-2 (GENE ID: A1A102051.1, NCBI ID: OPB45653.1), Thce4 (GENE ID: A1A101972.1, NCBI ID: OPB45589.1), Thce4-2 (GENE ID: A1A108380.1, NCBI ID: OPB39263.1), Thce15 (GENE ID: A1A103020.1, NCBI ID: OPB43780.1), and Thce16 (GENE ID: A1A106549.1, NCBI ID: OPB27301.1), showed significant up-regulation that scaled linearly with the AS gradient, and the fold changes of Thce15 and Thce16 were even greater than that of Thce3 (Fig. 5c). CE3 and CE4, defined as acetyl xylan esterases, mainly catalyze O-2 and O-3 deacetylation of xylose residues, whereas CE15 mainly deacetylates glucuronic acid. CE16 is a non-specific acetylesterase that can deacetylate oligomeric xylan [57].
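For reference, converting an HPLC peak area to an acetate concentration via a standard regression is a one-line calculation once the calibration line has been fitted. The sketch below shows the idea with hypothetical calibration points; the actual standards and sample peak areas are not reported here.

```python
# Hedged sketch: fit a linear acetate standard curve (peak area vs. known
# concentration) and convert a sample peak area to a concentration.
# The calibration values below are placeholders, not the study's data.
import numpy as np

std_conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # mM, hypothetical standards
std_area = np.array([1.1e4, 2.2e4, 4.3e4, 8.5e4, 1.7e5])   # detector counts

slope, intercept = np.polyfit(std_conc, std_area, 1)        # area = slope*conc + intercept

def area_to_conc(peak_area):
    """Invert the calibration line to obtain concentration (mM)."""
    return (peak_area - intercept) / slope

print(round(area_to_conc(6.0e4), 2), "mM")  # example sample peak area
```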
It follows that AS addition induced the up-regulation of multiple CEs, which facilitated deacetylation of the substrate. Gramineae such as rice have a high degree of polysaccharide acetylation, which prevents GHs from effectively binding the glycosidic bonds and greatly limits the efficiency of straw saccharification [15, 58]. CEs can assist GHs in efficiently hydrolyzing polysaccharides through deacetylation. Because acetylation in rice polysaccharides occurs mainly on xylan [59], the change in the xylan acetylation level of T. harzianum-treated straw was examined by two-dimensional nuclear magnetic resonance (2D NMR). T. harzianum was inoculated in the straw medium under T1 and T3 (AS addition) conditions at 28 °C for 15 days, and xylan was extracted from the straw after removing the hyphae. The extracted xylan was dissolved in DMSO-d6 for heteronuclear single quantum coherence (HSQC) assays. The HSQC spectra showed that the signal peak of 2-O-acetyl-xylosyl residues (Xyl2Ac: 99.71 ppm, 4.46 ppm) was reduced in the T3 spectrum compared with T1, and 3-O-acetyl-xylosyl (Xyl3Ac: 102.11 ppm, 4.43 ppm) was undetectable in the T3 spectrum. Other chemically modified residues (XylR: 92.89 ppm, 4.91 ppm; 98.11 ppm, 4.26 ppm) were also undetectable in the T3 spectrum (Fig. 5d, e). The peak areas of Xyl2Ac, Xyl3Ac, and XylR in T1 and T3 were quantified, and the results indicated that AS was able to enhance xylan deacetylation by inducing the up-regulation of CEs, with deacetylation of 3-O-acetyl-xylosyl being more efficient (Fig. 5f).
Lower acetylation level favored the hydrophilicity of polysaccharides
Generally, the relaxation time T2 characterizes the moisture affinity of polysaccharides, with a larger T2 indicating a higher degree of water freedom [60]. The hyphae-treated straw powder was washed and centrifuged, and the T1 and T3 samples were aliquoted and analyzed by low-field nuclear magnetic resonance (low-field NMR). The T2 relaxation spectra showed that the T2 time of T3 was smaller than that of T1 (Fig. 5g; Additional file 5: Dataset 4), indicating a lower degree of water freedom in T3 and further suggesting that the straw decomposed by hyphae under T3 (AS addition) conditions had better hydrophilicity than that under T1.
Deacetylation not only removed the spatial barrier around glycosidic bonds but also exposed hydroxyl groups, thereby increasing the hydrophilicity of the polysaccharides. Water sorption isotherms (WSI) reflect the hydrophilicity of a material. After drying, the water uptake of the straw powder at different relative pressures (P/P0) was determined, and the isotherms were obtained by Peleg modeling. The WSI of both treatments (T1 and T3) were of type II. Notably, the WSI of T3 lay above that of T1 (at the same relative pressure), indicating that the straw powder degraded by T. harzianum grown under T3 (AS addition) had better hydrophilicity than that of T1 (Fig. 5h). This result indicated that deacetylation of xylan has the advantage of improving the hydrophilicity of straw polysaccharides.
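The Peleg modeling mentioned above can be reproduced with a standard nonlinear least-squares fit. The sketch below assumes Peleg's two-term, four-parameter sorption model M = k1·aw^n1 + k2·aw^n2, a common choice for type II isotherms; the data points are placeholders, and the exact model form used in the study is an assumption here.

```python
# Hedged sketch: fit Peleg's four-parameter sorption model to water-uptake
# data measured at several relative pressures (water activities).
# The example data points are illustrative, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def peleg(aw, k1, n1, k2, n2):
    """Peleg two-term sorption isotherm: M = k1*aw**n1 + k2*aw**n2."""
    return k1 * aw**n1 + k2 * aw**n2

aw = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])   # P/P0
M  = np.array([1.8, 2.6, 3.2, 3.8, 4.5, 5.4, 6.8, 9.1, 13.0])  # g water / 100 g dry matter

params, _ = curve_fit(peleg, aw, M, p0=[5.0, 0.5, 10.0, 5.0], maxfev=10000)
k1, n1, k2, n2 = params
print(f"k1={k1:.2f}, n1={n1:.2f}, k2={k2:.2f}, n2={n2:.2f}")

# Predicted uptake at an arbitrary relative pressure, e.g., P/P0 = 0.75
print("M(0.75) =", round(peleg(0.75, *params), 2))
```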
Glycoside hydrolases (GHs) need to be soluble in water to hydrolyze glycosidic bonds, so an increase in substrate hydrophilicity might improve the accessibility of glycosidic bonds to GHs and thereby increase hydrolysis efficiency. Therefore, we used commercial GHs (N7, A50, XS) to hydrolyze the hyphae-treated straw powder, and the production of reducing sugars was used to characterize the digestibility of the substrate. The results showed that at 40 °C, GH hydrolysis of straw powder from hyphae treated under T3 conditions produced more reducing sugars than under T1, but the difference was not significant. The same trend was obtained at 50 °C, where the difference between T1 and T3 reached a significant level (P < 0.05) for A50 and XS (Fig. 5i). These results suggested that the increased hydrophilicity of the polysaccharides favored the hydrolysis efficiency of GHs. Taken together, the experimental results indicated that AS addition increased the hydrophilicity of the polysaccharides and reduced the spatial barrier of glycosidic bonds by inducing the up-regulation of CEs, which in turn improved the accessibility of glycosidic bonds to glycoside hydrolases.
Conclusions
The promotional effect of ammonium sulfate on lignocellulose decomposition was mainly achieved through cellular assimilation to Met, which was activated by ATP into an active methyl donor (AdoMet), leading in turn to methylation of the ThgsfR2 promoter and an increase in the DNA methylation level. Inhibition of ThgsfR2 by methylation released the expression of ThCE3, while multiple ThCEs were significantly up-regulated in response to AS induction, thereby increasing the efficiency of substrate deacetylation. Deacetylation of polysaccharides improved their hydrophilicity and removed the spatial barrier to glycosidic bond hydrolysis, i.e., improved the accessibility of glycosidic bonds (Fig. 6). The higher efficiency of glycoside hydrolases in hydrolyzing polysaccharides facilitated plant residue utilization and biochemical conversion by T. harzianum. These results contribute to further understanding of the complex CAZyme-producing regulatory network in fungi and the importance of carbohydrate esterases in plant residue biodegradation, providing insights to improve fermentation efficiency and novel targets for metabolic engineering.
Fig. 2
Fig. 2 Metabolomics revealed the changes in intracellular metabolites induced by ammonium sulfate. a PCA demonstrated intra-group reproducibility and inter-group variability for T1 and T3. b The volcano plot displays the log2 FoldChange of the differential metabolites between T3 and T1 against the log10 P-value, with each point representing a differential metabolite. c Heatmap of the significantly (P < 0.05) up/down-regulated differential metabolites identified by secondary mass spectrometry. d Match plot of the significantly up/down-regulated differential metabolites (VIP-value > 1.5). e KEGG enrichment of differential metabolic pathways. f MS signal intensity of Met and AdoMet in T1 and T3; notably, these values were all significantly greater in T3 than in T1. g MS signal intensity of 5mC, 3mA, and 7mG in T1 and T3. h Growth comparison of OE-ThmetH and WT on MM + straw. i Quantification of intracellular Met content and FPA; intracellular Met content and FPA of OE-ThmetH were significantly higher than those of WT. All results were obtained from hyphae samples grown on T1 and T3, both with 6 biological replicates; red dots represent values from individual experiments. Student's t-testing was conducted in (f, g, i); *significant difference to T1 at two-tailed P = 0.022 (g, T3: 5mC), *significant difference to WT at two-tailed P = 0.011 (i, OE-ThmetH: intracellular Met); **significant difference to WT at two-tailed P = 0.004 (i, OE-ThmetH: FPA); ***significant difference to T1 at two-tailed P = 0.000 (f, T3: Met), 0.000 (f, T3: AdoMet), 0.000 (g, T3: 3mA); ns = no statistical difference to T1 at two-tailed P = 0.454 (g, T3: 7mG)
Fig. 3
Fig. 3 WGBS revealed that ammonium-sulfate assimilates induced up-regulation of the DNA methylation level. a PCA demonstrated intra-group reproducibility and inter-group variability for T1 and T3. b Profiling analysis divided each region into 20 bins, and the methylation level in each bin reflected the trend of the methylation level across the genomic region. c Mean methylation levels of CG, CHG, and CHH type sites in exon, intron, UTR, and intergenic regions for each sample, with the horizontal coordinate being the genomic element type and the vertical coordinate indicating the methylation level; the colors (red, green, and blue) indicate the mean methylation levels of the three types of sites. d Violin plots showing the numerical distribution of CpG, CHG, and CHH methylation levels for T1 and T3, with the vertical coordinate indicating the methylation level; the width of each violin represents the number of points at that methylation level. e Venn diagram showing the distribution of the novel DMRs of T3 and their numbers in the CpG, CHG, and CHH types. f The location of the novel DMRs of T3 in chromosome fragments and their gene functions. g FoldChange of the genes associated with the novel DMRs in T3 relative to T1. h Growth comparison of the mutants and WT on MM + straw. i FPA of the mutants and WT grown on MM + straw at 28 °C for 4 days. Bars represent mean ± SEM, with n = 3/4 biological repeats; red dots represent values from individual experiments. Student's t-testing was conducted in (i). *significant difference to WT at two-tailed P = 0.017 (i, KO-Thgh92); **significant difference to WT at two-tailed P = 0.002 (i, KO-ThgsfR2); ***significant difference to WT at two-tailed P = 0.000 (i, KO-Thabe); ns = no statistical difference to WT at two-tailed
Fig. 4
Fig. 4 ChIP-seq identified the downstream genes regulated by the zinc finger transcription factor ThGsfR2. a PCA showed the difference in read distribution between IP and Input; notably, after dimensionality reduction, PC1 explained 100% of the eigenvalue and cumulative variability. b The peak plot showed the distribution of peak reads for IP and Input. c Venn diagram showing the distribution of peak reads over the genome functional elements. d IGV visualization of the genomic locations of the 10 peaks with the highest ThGsfR2 enrichment; the red column is IP, the blue column is Input, column height indicates signal intensity, the data range is shown inside the "[]" on the left, and the proximal genes for the peaks are shown at the bottom. e Expression-level FoldChange of the genes corresponding to peaks in OE-ThgsfR2 and KO-ThgsfR2 compared with WT; note that peaks located in non-gene functional regions were excluded. f Growth of OE-HP5723, OE-HP5320, OE-Thce3, and WT on MM + straw. g Biomass of OE-HP5723, OE-HP5320, OE-Thce3, and WT grown at 28 °C for 4 days. h FPA of OE-HP5723, OE-HP5320, OE-Thce3, and WT. i Motif analysis of all peaks using Homer yielded 10 ThGsfR2 binding motifs, ranked according to score; the last column of the table shows the transcription factors predicted for the motifs using JASPAR. j The conserved sequence "TCT CTC TCTC" of motif1 was present in the Thce3 promoter with 50-fold enrichment. k DNA-protein interactions between ThGsfR2 and motif1, Thce3 promoter region R1 (−1000 to −1 bp), and R2 (−2000 to −1001 bp) were verified by Y1H assay. Bait-reporters (pAbAi::motif1, pAbAi::R1, and pAbAi::R2) could not grow on SD medium without uracil (SD-Ura) containing Aureobasidin A (AbA, 600 ng mL⁻¹); the pAbAi::motif1 + pGADT7::ThGsfR2 and pAbAi::R2 + pGADT7::ThGsfR2 co-transformants could grow on SD-Ura-Leu containing AbA (600 ng mL⁻¹), whereas the pAbAi::R1 + pGADT7::ThGsfR2 co-transformant could not grow on that medium. Bars represent mean ± SEM, with n = 3 biological repeats; red dots represent values from individual experiments. Student's t-testing was conducted in (g, h); **significant difference to WT at two-tailed P = 0.002 (g, OE-Thce3), 0.004 (h, OE-Thce3); ns = no statistical difference to WT at two-tailed P = 0.789 (g, OE-ThHP5723), 0.265 (g, OE-ThHP5320), 0.807 (h, OE-ThHP5723), 0.077 (h, OE-ThHP5320)
Fig. 5
Fig. 5 Ammonium sulfate induced the up-regulation of multiple CEs, which in turn improved glycosidic bond accessibility through xylan deacetylation. a HPLC results for acetate after 10-day liquid fermentation of WT and KO-ThgsfR2; the red line is the WT treatment and the blue line is the KO-ThgsfR2 treatment; note that the acetate mainly comes from acetyl groups. b Peak area and concentration of acetate for the WT and KO-ThgsfR2 treatments; concentrations were obtained from the peak areas and the standard regression equation. c Expression-level quantification of CE genes along the AS gradient (T1, T2, T3); qPCR was performed on the 7 identified CE genes, and multiple CE genes were up-regulated with the AS gradient. d HSQC spectrum of xylan extracted from WT-treated straw under T1 conditions for 15 days; signals from Xyl2Ac and Xyl3Ac could be detected. e HSQC spectrum of xylan extracted from WT-treated straw under T3 conditions for 15 days; the Xyl2Ac peak area was reduced and Xyl3Ac had no detectable signal. f Quantification of the Xyl2Ac, Xyl3Ac, and XylR peak areas in the T1 and T3 NMR spectra, indicating the change in acetyl content with AS addition; the peak areas of Xyl2Ac, Xyl3Ac, and XylR all decreased in T3 relative to T1. g T2 relaxation spectra of T1 and T3; low-field NMR determined the spin-spin relaxation time (T2) of straw after hyphal treatment under T1 and T3 conditions for 15 days. h WSI of hyphae-treated (15 days) straw under T1 and T3 conditions; note that the Peleg-modeled WSI belonged to the type II isotherm. i Degradability comparison of hyphae-treated (15 days) straw under T1 and T3 conditions; GHs (N7, A50, and XS) were used to hydrolyze the hyphae-treated straw, and the affinity of the GHs for polysaccharide glycosidic bonds was compared by determining the reducing sugars (reactions at 40 °C and 50 °C for 20 min), allowing comparison of the effects of different acetylation levels on GH binding to glycosidic bonds. Bars represent mean ± SEM, with n = 3 or 4 biological repeats; red dots represent values from individual experiments. Student's t-testing was conducted in (b, c, i); *significant difference to T1 at two-tailed P = 0.002 (i, T3: A50-50 °C); ***significant difference to WT at two-tailed P = 0.000 (b, KO-ThgsfR2: peak area), 0.000 (b, KO-ThgsfR2: acetate); ***significant difference to T1 at two-tailed P = 0.000 (i, T3: XS-50 °C); ns = no statistical difference to T1 at two-tailed P = 0.335 (i, T3: N7-40 °C), 0.495 (i, T3: A50-40 °C), 0.113 (i, T3: XS-40 °C), 0.267 (i, T3: N7-50 °C)
Fig. 6
Fig. 6 Schematic diagram of ammonium-sulfate-induced up-regulation of multiple CEs and the resulting increase in glycosidic bond accessibility. Normally, the transcription factor ThGsfR2 inhibits ThCE3 expression by competitively binding the functional region of the Thce3 promoter. After ammonium-sulfate addition, AS is transported into the cell by SULTR and AMT; ammonium ions are converted to NH3 by deprotonation, while SO4²⁻ is reduced to S²⁻ via ATPS. Pyruvate produced by glycolysis combines with NH3 to produce alanine (Ala). Ala is converted to O-acetyl homoserine (OAHS) by multi-enzyme catalysis. Subsequently, OAHS is converted to homocysteine (HCY) by O-acetyl homoserine sulfhydrylase and further catalyzed by 5-methyltetrahydrofolate-homocysteine methyltransferase (metH) to produce the terminal assimilate methionine (Met). Met is converted to AdoMet by ATP activation and induces methylation of the ThgsfR2 promoter, leading to transcriptional repression of ThgsfR2. This releases Thce3 from the repression of ThGsfR2, producing a significant up-regulation of its transcription level. In addition, multiple CEs are induced and up-regulated by unknown inducers generated by AS assimilation. Up-regulation of CEs enhances polysaccharide deacetylation, which in turn increases hydrophilicity and removes the spatial barrier of glycosidic bonds. 1. Acetylation of xylose residues prevents glycoside hydrolases from accessing glycosidic bonds. 2. CEs catalyze deacetylation of the xylose acetyl group. 3. Deacetylation removes the spatial barrier of glycosidic bonds and increases the accessibility of the glycosidic bond to glycoside hydrolases
|
v3-fos-license
|
2018-04-03T03:25:14.790Z
|
2013-10-22T00:00:00.000
|
3007075
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/crim/2013/264189.pdf",
"pdf_hash": "b461a776957754b73e1e97b62baaf42e5252d6f0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46030",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0c060188d4c398ca6081ca7b2e64455ef9b8017d",
"year": 2013
}
|
pes2o/s2orc
|
Gastroenterology Cases of Cutaneous Leukocytoclastic Vasculitis
Rarely, leukocytoclastic vasculitis can result from ischemic colitis, inflammatory bowel disease, and cryoglobulinemia. There is no established standard for the treatment of leukocytoclastic vasculitis associated with gastroenterologic diseases. This paper presents three cases of leukocytoclastic vasculitis, each associated with a different gastroenterologic condition: ischemic colitis, Crohn's disease, and chronic hepatitis C. Each condition went into remission upon treatment of the leukocytoclastic vasculitis, regardless of the underlying disease.
Introduction
Vasculitis is an uncommon disease caused by destruction, necrosis, and inflammation of vessel walls of all types and sizes, especially small vessels such as postcapillary venules. Among the small-vessel vasculitides, cutaneous leukocytoclastic vasculitis (LV) is the most common [1]. LV may be idiopathic, or caused by viral, bacterial, and parasitic infections, or by vaccines, insect bites, drugs, chemicals, toxins, rheumatologic diseases, or systemic diseases such as cancer [2,3]. Infections, drugs, and malignant diseases are the most common causes of LV [4], but even with additional testing, identification of the particular etiologic agent can be difficult. LV may, rarely, result from inflammatory bowel disease and cryoglobulinemia. An association between ischemic colitis and LV has not been reported in the literature. For the rare cases of LV associated with gastroenterologic diseases, no standard treatment has been established. This paper presents three cases of cutaneous LV, each associated with a different gastroenterologic condition: ischemic colitis, Crohn's disease, and chronic hepatitis C. Treatment of the LV led to remission in all of them, regardless of the underlying disease.
Case 1.
A 73-year-old male patient suffering from bloody diarrhea that had begun 3 weeks earlier was referred to us. He also had a purpura-like rash on both lower extremities. His history included diagnoses of hypertension and pulmonary thromboembolism one year before. Physical examination revealed widespread abdominal tenderness without guarding or rebound. Stool microscopy showed an abundance of leukocytes and erythrocytes; his body temperature was 37.3 °C. Laboratory test results were normal except for his C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) (Table 1). The patient's stool culture proved negative, so a colonoscopy was performed. It revealed severe colitis beginning from the distal sigmoid colon and reaching to the mid-transverse colon, suggestive of ischemic colitis (Figure 1). The biopsy taken from the colon showed widespread ulceration, hemorrhage, and necrosis of the ulcer floor, plus intense fibrinopurulent inflammation in the tissue and the lumen. Computed tomography (CT) angiography showed splenic and portal vein thrombosis, wall thickening of the colon segments from the level of the splenic flexure to the rectum, and increased density in the pericolonic adipose tissue. The patient was diagnosed with ischemic colitis, and low-molecular-weight heparin treatment was started: enoxaparin sodium (120 mg/day).
Vasculitis was considered in this patient because of increasing CRP (to 17 mg/dL), neutropenia, continuous fever, and purpura on the bilateral lower extremities. The skin biopsy results were compatible with leukocytoclastic vasculitis. As a secondary cause of the vasculitis, the coexistence of ischemic colitis and LV was investigated. Treatment began with methyl prednisolone (40 mg/day). On the third day of steroid therapy, the patient's general condition improved, the rashes disappeared, and he became afebrile; on the fifth day of steroid therapy, his diarrhea resolved.
The patient was discharged on reduced steroid doses to be followed as an outpatient.
Case 2.
A 28-year-old male patient had complaints of pain in his stomach and wrists for 2 weeks. His history included no complaints except for periodic abdominal pain. Physical examination revealed widespread abdominal tenderness; tenderness, swelling, and limited range of motion on palpation of the elbows; and a petechial rash on the bilateral lower extremities from ankle to kneecap. Selected laboratory test results were as follows: CRP was 12.34 mg/dL (<0.5 mg/dL), the white blood cell (WBC) count was 25 × 10³/µL, and stool microscopy showed an abundance of leukocytes and erythrocytes. Other laboratory tests were normal (Table 1). An abdominal CT scan showed marked wall thickening along long segments of the jejunal loops, multiple lymphadenopathies (the largest was 2 cm), and bilateral chronic sacroiliitis. Since the colonoscopy and biopsy results were compatible with Crohn's disease, treatment was started with methyl prednisolone (60 mg/day), ciprofloxacin (1000 mg/day), and metronidazole (1500 mg/day). During follow-up, both WBC (15 × 10³/µL) and CRP (7 mg/dL) decreased, but the skin rashes on the lower extremities did not disappear, and the skin biopsy of the patient showed LV. Secondary causes of LV such as drug use, infection, and additional diseases were investigated, but none was found, so the LV was considered to be secondary to inflammatory bowel disease (Crohn's disease). When the existing methyl prednisolone treatment did not reduce the leukocytoclastic vasculitis-related complaints, pulse steroids (methyl prednisolone 1 g/day) were added. After three days, the rashes had regressed, so the patient's treatment was continued with oral steroids.
Case 3.
A 59-year-old female patient was admitted with complaints of rashes on her legs for 2 months and nosebleeds that had started 2 days before. In her history, the patient had been diagnosed with chronic hepatitis C (HCV) but had gone untreated for 2.5 years, as there had been no indication requiring treatment. Physical examination was unremarkable except for petechial rashes on her bilateral lower extremities. Laboratory tests detected the following: platelets, 18 × 10³/µL; anti-HCV (+); cryoglobulin (+); and complement C4 < 1.47 mg/dL. According to the clinical and laboratory results, cryoglobulinemia due to chronic hepatitis C was diagnosed (Table 1). Because of the rashes on her legs, vasculitis was considered, and a punch biopsy helped to definitively diagnose it (Figure 2). Without treating the HCV, steroid therapy was initiated for the LV. Clinical examination and laboratory findings for the LV improved. After three days of methyl prednisolone therapy (60 mg/day), platelet values had increased and the rashes had decreased. With this improvement, the patient was discharged to be followed up in the gastroenterology clinic, and after one month, methyl prednisolone therapy was interrupted. When the rash returned on her legs and the platelet values had again decreased, the patient was readmitted to the hospital and put on methyl prednisolone (32 mg/day) with the addition of azathioprine (50 mg/day). When the patient's complaints declined, she was discharged to be followed up in the outpatient clinic.
Discussion
Leukocytoclastic vasculitis is a pathological condition first defined in 1950 by Pearl Zeek as vasculitis of small vessels after drug intake. LV is characterized by neutrophil-rich exudates, endothelial damage, fibrin deposition, and nuclear fragments (leukocytoclasis) in the postcapillary venules of small vessels. Patients diagnosed with LV who have isolated skin involvement but no internal organ involvement are considered to have cutaneous LV [5,6]. Approximately 23% of cases of cutaneous LV are associated with infections, 20% with drugs, 12% with connective tissue diseases, and 4% with malignancies. Cutaneous LV is not common in primary systemic vasculitides such as Wegener's granulomatosis, polyarteritis nodosa, microscopic polyangiitis, and Churg-Strauss syndrome, which together generate only 4% of all cases of cutaneous LV. In the literature, the cause of cutaneous LV has been reported as idiopathic in 3-72% of cases [7,8]. Rarely, inflammatory bowel disease, cryoglobulinemia, and bowel bypass syndrome may be the cause of cutaneous LV.
An association between ischemic colitis and LV has not previously been reported in the literature. We present Case 1 because ischemic colitis was a definitive diagnosis and other causes of LV were excluded. However, we could find no underlying cause of the ischemic colitis. Whether the LV developed due to ischemic colitis or the ischemic colitis developed because of underlying LV is unclear, but in either case anticoagulant therapy produced no response. Clinical and laboratory improvement of the ischemic colitis was achieved by treatment of the LV. Perhaps steroid therapy was more effective because the ischemic colitis developed secondary to LV.
Treatment of cutaneous LV is based on the degree of systemic involvement and should be appropriate to the underlying disease. Most patients have only scattered purpuric lesions and, clinically, no systemic involvement. The rashes are usually self-limiting. In the treatment of LV, if the offending drug or antigen is eliminated, symptoms will disappear without treatment within days or weeks. Symptomatic treatment is given, and bed rest is recommended. Any underlying infection should be treated. Patients who have longstanding skin manifestations, severe cutaneous involvement, and/or systemic disease should be treated with oral or parenteral corticosteroids. Prednisolone (20-60 mg daily in divided doses) will control the disease. The dosage should be gradually reduced to the lowest possible amount, and then the treatment must be terminated [9,10]. In Case 2, however, steroid therapy, the major treatment for both Crohn's disease and LV, was not effective, so pulse steroid therapy was started, after which the rashes vanished.
Low levels of serum complement, positive serum cryoglobulins, and a high ESR may suggest a diagnosis of cryoglobulinemia secondary to chronic hepatitis C [11], as in Case 3, who had LV, cryoglobulinemia, and HCV. The LV was treated with steroid alone, without simultaneously addressing the HCV. Steroid therapy resulted in remission of the LV, but when the steroid dose was reduced, the LV recurred. Because of the risk of activating the hepatitis C infection, we kept to the lowest possible steroid dosage; therefore, azathioprine therapy was added to the initial course of treatment. By this means, remission was achieved again. The patient is still in remission, and her hepatitis C has not been activated.
In summary, cases of LV due to ischemic colitis, Crohn's disease, and chronic hepatitis C are rarely seen in practice and have no standard treatment. Each case may require a separate treatment protocol, as did our three. Regardless of the underlying disease, however, these patients' clinical and laboratory abnormalities resolved completely with treatment directed only at the leukocytoclastic vasculitis.
|
v3-fos-license
|
2020-02-27T09:18:27.008Z
|
2020-02-26T00:00:00.000
|
213518668
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://threatenedtaxa.org/index.php/JoTT/article/download/5526/6668",
"pdf_hash": "e761d48ce82900f1fdfc6685b3c9a059b8b677a6",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46031",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "a87ebe61a0671890c3f01f3ffb423ea965c149b5",
"year": 2020
}
|
pes2o/s2orc
|
Diet ecology of tigers and leopards in Chhattisgarh, central India
Wild prey base is a potential regulatory parameter that supports the successful propagation and secure long-term survival of large predators in their natural habitats. Low wild prey availability combined with high livestock availability in or around forest areas often catalyzes livestock depredation by predators, which eventually creates situations adverse to conservation initiatives. Understanding the diet ecology of large predators is therefore significant for their conservation in areas with a low prey base. The present study reports the diet ecology of tiger and leopard in Udanti Sitanadi Tiger Reserve and Bhoramdeo Wildlife Sanctuary in central India, to assess the effect of wild prey availability on the prey-predator relationship. We walked line transects to estimate prey abundance in the study areas, where we found langur and rhesus macaque to be the most abundant species. Scat analysis showed that, despite the scarcity of large and medium ungulates, tigers used wild ungulates including chital and wild pig along with high livestock utilization (39%). Leopards used langur heavily (43-50%) as a prime prey species but were also observed to exploit livestock as prey (7-9%) in both study areas. The scarcity of wild ungulates and the continuous livestock predation by tiger and leopard indicate that the study areas are unable to sustain healthy large-predator populations. Developing a strong protection framework and careful implementation of ungulate augmentation could yield fruitful results, helping to hold viable populations of tiger and leopard and secure their long-term survival in the present study areas in Chhattisgarh, central India.
INTRODUCTION
Investigating diet composition of a predator is vital to indicate the adequacy of prey base and understand prey requirements. Fluctuations in prey abundance may induce changes in dietary selection and the rate of prey consumption by predators (Korpimäki 1992;Dale et al. 1994). Prey selection by large carnivores is a vital strategy to maintain their population growth and their distribution in space and hence, it becomes essential to understand the life history strategies of carnivores for better management practices (Miquelle et al. 1996).
Generally, the tiger Panthera tigris, as a large solitary predator, requires >8 kg of meat daily to maintain its body condition (Schaller 1967; Sunquist 1981). It hunts a varied range of prey species based on their availability in a particular landscape; this may include large bovids such as the Indian Gaur (Karanth & Sunquist 1995) as well as small animals like hares, fish, and crabs (Johnsingh 1983; Mukherjee & Sarkar 2013). Tigers, however, prefer prey species that weigh 60-250 kg, which indicates the conservation significance of large-sized prey species in the maintenance of viable tiger populations (Hayward et al. 2012). In contrast, the plasticity of leopard Panthera pardus behavior (Daniel 1996) enables them to exploit a broad spectrum of prey species, which makes them more adaptable to a varied range of habitats. Large carnivores show high morphological variation (Mills & Harvey 2001) across their distribution ranges, which in turn regulates their dietary requirements. The number of prey items in a leopard's diet can go up to 30 (Le Roux & Skinner 1989) or even 40 species (Schaller 1972). In the Indian subcontinent, leopards consume prey items ranging from small birds and rodents, to medium and large-sized prey such as Chital Axis axis, Wild Boar, Nilgai, and Sambar, to domestic prey like young buffalo and domestic dogs (Eisenberg & Lockhart 1972; Santiapillai et al. 1982; Johnsingh 1983; Rabinowitz 1989; Seidensticker et al. 1990; Bailey 1993; Karanth & Sunquist 1995; Daniel 1996; Edgaonkar & Chellam 1998; Sankar & Johnsingh 2002; Qureshi & Edgaonkar 2006; Edgaonkar 2008; Mondal et al. 2011; Sidhu et al. 2017). Hayward et al. (2012) categorized the Leopard as a predator that exploits over one hundred prey species but prefers to kill prey within a body weight of 10-50 kg, which may deviate to 15-80 kg (Stander et al. 1997) depending on hunger level, hunting effort, and sex (Bothma & Le Riche 1990; Mondal et al. 2011).
Apart from the natural prey-predator relationship, tigers and leopards are reported to consume domestic ungulates as a large proportion of their diet during scarcity of wild prey. Hunting and habitat destruction are the major reasons behind the decline of wild prey availability. The distribution ranges of tigers and leopards are mostly interspersed with and overlapped by human habitations. In such situations, there are abundant records of carnivores hunting livestock, which in turn frequently leads to retaliatory killing of the predators or escalates human-tiger or human-leopard conflict. This has become a serious issue and can be considered one of the toughest hurdles to resolve in large carnivore conservation and management. In India, these large carnivores are gradually being confined within fragmented forest habitats that share sharp boundaries with areas holding dense human populations. Such areas experience intensive grazing by domestic and feral cattle, and the simultaneous utilization of forest resources by local people has been degrading tiger habitats in terms of retarded growth of vegetation, increased abundance of weeds and, ultimately, depletion of the natural prey base (Madhusudan 2000). As a consequence of the increase in livestock and the depletion of the natural prey base, carnivores are compelled to prey on domestic livestock (Kolipaka et al. 2017).
Studies have already been conducted to understand the feeding ecology of tiger and leopard in many parts of the Indian subcontinent, but only a few studies are available in which the diets of both top predators have been studied together (Sankar & Johnsingh 2002; Ramesh et al. 2009; Majumder et al. 2013; Mondal et al. 2013). To gather knowledge on the complex diet ecology and prey-predator relationship of tiger and leopard, the present study was conducted in two different protected areas in Chhattisgarh, central India, with the objectives of understanding the food habits of leopard in the absence of tiger (in Bhoramdeo Wildlife Sanctuary) and in the presence of tigers but with low prey abundance (Udanti Sitanadi Tiger Reserve). The present study was conducted in Bhoramdeo Wildlife Sanctuary (BWS) from March 2016 to June 2016 and in Udanti Sitanadi Tiger Reserve (USTR) from December 2016 to June 2017. Studying large predator diet is useful for park managers because it provides relevant information on prey species utilization by large carnivores. The present study will eventually contribute to such important aspects of resource management for the large carnivore populations in both study areas.
Study areas
BWS is spread over 351.25 km² and is situated in the Maikal Range of central India (Figure 1). It provides an extension to the Kanha Tiger Reserve and serves as a corridor for wildlife dispersing between the Kanha and Achanakmar Tiger Reserves (Qureshi et al. 2014). USTR is spread over 1842.54 km² of the Gariyaband and Dhamtari districts of Chhattisgarh, central India (Figure 1). It is constituted of the Udanti and Sitanadi Wildlife Sanctuaries as cores and the Taurenga, Indagaon, and Kulhadighat Ranges as buffer. The topography of the area includes hill ranges interspersed with strips of plains. The forest types are chiefly dry tropical peninsular sal forest and southern tropical dry deciduous mixed forest (Champion & Seth 1968). USTR is contiguous with Sonabeda Wildlife Sanctuary (a proposed tiger reserve) in Odisha on the eastern side and forms the Udanti-Sitanadi-Sonabeda Landscape. This connectivity has a good future if the entire tiger landscape complex (Chhattisgarh-Odisha Tiger Conservation Unit) can be brought under significant wildlife conservation efforts.
Prey abundance estimation
The line transect method under the distance sampling technique was followed to estimate prey abundance in both study areas (Anderson et al. 1979; Burnham et al. 1980; Buckland et al. 1993, 2001). In total, 29 transects in BWS and 108 transects in USTR were laid out according to their areas and surveyed during the study period (Figure 1). Each transect was 2 km in length and was walked three times in BWS and 5-6 times in USTR between 06.30 and 08.30 h on different days. The total transect sampling effort was 174 km for BWS and 974 km for USTR. Data were recorded for six ungulate species, viz., Chital, Sambar, Gaur, Wild Boar, Barking Deer, and Nilgai, in both study areas. The other species recorded during the transect walks were Northern Plains Gray Langur and Rhesus Macaque. On each sighting of these species, the following parameters were recorded: a) group size, b) animal bearing, and c) radial distance (Mondal et al. 2011). Radial distance and animal bearing were measured using a range finder (HAWKE LRF 400 Professional) and a compass (Suunto KB 20/360), respectively.
The key to distance sampling analyses is to fit a detection function, g(x), to the perpendicular distances from the transect line and use it to estimate the proportion of animals missed by the survey (Buckland et al. 2001), assuming that all animals on the line itself are detected (i.e., g(0) = 1). The assumptions of distance sampling are discussed by Buckland et al. (2001). Program DISTANCE ver. 6 was used to estimate prey density. The best model was selected using the Akaike information criterion (AIC; Akaike 2011). Population density (D), cluster size, group encounter rate, and biomass (body weight of prey species × density) were calculated for each species in the present study.
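As a simplified illustration of what Program DISTANCE does internally, the sketch below fits a half-normal detection function to perpendicular distances and derives a density estimate. It ignores truncation, covariates, model comparison, and variance estimation, so it is only a conceptual approximation of the actual analysis, and the input distances and cluster sizes are placeholders.

```python
# Hedged sketch: half-normal detection function g(x) = exp(-x^2 / (2*sigma^2))
# fitted to perpendicular sighting distances, then used to estimate density.
# Example inputs are illustrative, not the study's field data.
import numpy as np

perp_dist_m = np.array([5, 12, 18, 25, 7, 30, 44, 15, 22, 9, 3, 27])  # m
cluster_sizes = np.array([4, 2, 6, 3, 5, 1, 2, 7, 3, 4, 2, 5])
total_effort_km = 174.0  # e.g., total BWS transect effort

# Maximum-likelihood estimate of sigma for the half-normal model
sigma = np.sqrt(np.mean(perp_dist_m**2))

# Effective strip half-width (m): integral of g(x) from 0 to infinity
esw_m = sigma * np.sqrt(np.pi / 2)

# Cluster density (clusters per km^2): n / (2 * L * ESW)
n = len(perp_dist_m)
cluster_density = n / (2 * total_effort_km * (esw_m / 1000.0))

# Individual density = cluster density * mean cluster size
density = cluster_density * cluster_sizes.mean()
print(f"ESW = {esw_m:.1f} m, density = {density:.1f} individuals/km^2")
```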
Food habits estimation
The food habits of leopards and tigers were estimated following scat analysis methods (Sankar & Johnsingh 2002; Link & Karanth 1994; Mondal et al. 2011; Basak et al. 2018). Tiger and leopard scat samples were collected during sign surveys along trails in the study areas. Scats were collected opportunistically whenever encountered, irrespective of fresh or old condition, to increase sample size. Scat samples were collected from the entire BWS and from the North Udanti, South Udanti, Taurenga, and Kulhadighat ranges of USTR. In total, 100 leopard scats were collected from BWS, and 30 tiger scats and 121 leopard scats were collected from USTR for diet analysis. Tiger and leopard scats were differentiated on the basis of the lesser degree of coiling and the larger gap between two constrictions in a piece of tiger scat (Biswas & Sankar 2002). Scat analysis was performed to derive the frequency of occurrence of consumed prey items in the scats of tiger and leopard (Schaller 1967; Sunquist 1981; Johnsingh 1983; Karanth & Sunquist 1995; Biswas & Sankar 2002).
Scats were first sun-dried and then washed using sieves, and collectible hairs, bones, and feathers were filtered out. The hair samples were dried and collected in zip-lock polythene bags for further laboratory analysis. In the laboratory, hairs were washed in xylene and later mounted in xylene (Bahuguna et al. 2010), and slides were studied under 10-40× magnification using a compound light microscope. For each sample, at least twenty hairs (n=20 hairs/sample) were selected randomly for diet identification, and species-level identification was done based on the species-specific hair medulla pattern of prey items as described by Bahuguna et al. (2010). To evaluate the effect of sample size on the results of scat analysis (Mukherjee et al. 1994a,b), five scats were chosen at random and their contents analyzed. This was continued until n=100, n=30, and n=121 scat samples were analyzed, and the cumulative frequency of occurrence for each prey species was calculated to infer the effect of sample size on the final result (Mondal et al. 2011). Quantification of prey biomass consumed from scat was computed using the asymptotic, allometric relationship equation: biomass consumed per collectable scat / predator weight = 0.033 − 0.025 × exp[−4.284 × (prey weight/predator weight)] (Chakrabarti et al. 2016). Prey selection by tigers and leopards was estimated for each species by comparing the proportion of prey species utilized from scats with the expected number of scats available in the environment for each consumed prey species (Karanth & Sunquist 1995) in SCATMAN (Link & Karanth 1994). Prey selection was also determined using Ivlev's index (Ivlev 1961), where E = (U − A)/(U + A), U = relative frequency of occurrence of a prey species in the predator's scats, and A = expected scat proportion in the environment.
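To make the two calculations above concrete, the sketch below computes the Chakrabarti et al. (2016) biomass-correction factor and Ivlev's electivity index from frequency-of-occurrence data. The predator weight, prey weight, and frequencies used here are placeholders for illustration, not the study's estimates.

```python
# Hedged sketch: biomass correction (Chakrabarti et al. 2016) and Ivlev's
# electivity index E = (U - A) / (U + A). Numbers are illustrative only.
import math

PREDATOR_WT = 55.0  # kg, e.g., an adult leopard (assumed value)

def biomass_per_scat(prey_wt, predator_wt=PREDATOR_WT):
    """Biomass consumed per collectable scat (kg), from the allometric model
    Y/predator_wt = 0.033 - 0.025 * exp(-4.284 * prey_wt / predator_wt)."""
    ratio = prey_wt / predator_wt
    return predator_wt * (0.033 - 0.025 * math.exp(-4.284 * ratio))

def ivlev(U, A):
    """Ivlev's electivity: U = observed scat frequency, A = expected proportion."""
    return (U - A) / (U + A)

# Example: a 45 kg chital found in 20% of scats but expected in 35% of scats
print(round(biomass_per_scat(45.0), 2), "kg per scat")
print(round(ivlev(0.20, 0.35), 2))  # negative => used less than availability
```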
Food habits
In BWS, nine different prey items were identified from the collected leopard scats (n=100). No new prey species were found after analyzing 50-60 scats, as shown by the diet stabilization curve (Figure 2A). The relationship between the contributions of all nine prey species in the diet of leopards showed that a minimum of 50-60 scats should be analyzed annually to understand the food habits of leopard, and the sample size (n=100) in the present study was adequate (Figure 3A). Among all the prey species, langur contributed the most (43.65%) to the diet of leopard, whereas wild ungulates contributed only 29.35% and livestock separately contributed 6.34% of the total consumption. In BWS, Sambar and
Four-horned Antelope were recorded but were never represented in leopard scats. Hare and other rodents were found to contribute frequently (11.9%, 7.14%) to the leopard diet (Table 3), but porcupine was negligible, found in only 1.58% of all leopard scats. All the wild ungulates together represented 42.89% of the total biomass consumption by leopard, whereas langur alone contributed the highest share at 43%. Livestock represented 9.93% of the biomass consumed by leopard, which was higher than the contribution made by any other wild ungulate in BWS (Table 3). Ivlev's index of prey selection indicated that Chital, Wild Boar, and Nilgai were not utilized in proportion to their availability, whereas Barking Deer, Indian Hare, and Common Langur were the prey species selected by leopard in the area (Figure 4). Similarly, in the diet of leopard in USTR, nine prey items were identified from the scats (n=121). It was also found that after analyzing 40-50 scats, no new species were identified (Figure 2B), and from the relationship between the contributions of the nine prey species in the diet of leopard in USTR, it was understood that analysis of more than 50 scats is enough to understand the food habits of leopards (Figure 3B). Among all the prey species, Common Langur contributed the most (50.92%) to the diet of leopard, followed by rodents, livestock, Chital, Wild Boar, Barking Deer, Four-horned Antelope, Sambar, and birds (Table 4). Common Langur also contributed the most (57.79%) to the leopard's diet in terms of biomass consumption. All the wild ungulates together contributed 26.71% of the total biomass consumed by leopards, whereas livestock alone contributed 15.50% (Table 4). Ivlev's selection index indicated only Common Langur as a selected species for leopard in USTR, and all other species were utilized less than their availability in the sampling area of USTR (Figure 5). Five different prey items were identified in the diet of tiger as analyzed through scats (n=30) in USTR. After analyzing 20 scats, no new prey species was found in the tiger's diet (Figures 2C and 3C), which signifies that our sample size was adequate to understand the tiger's diet. It was found that 47.37% of the tiger's diet was contributed by wild ungulates, 39.47% by livestock, and 13.16% by Common Langur in terms of percentage frequency of occurrence (Table 5). Livestock, however, contributed 47.33% of the total biomass consumed by tiger in USTR (Table 5). Ivlev's selection index expectedly indicated that tiger selected Chital and Wild Boar significantly (p > 0.05), whereas langur was highly avoided by tiger during the study period (Figure 6). Sambar was found only two times in scats despite their low availability in the study area.
DISCUSSION
The population densities of prey species, specifically ungulates, were found to be significantly low in both study areas, BWS and USTR. Primates, including Rhesus Macaque (24.03/km² and 22.94/km² in BWS and USTR, respectively) and Common Langur (21.82/km² and 35.06/km² in BWS and USTR, respectively), were found to be the most abundant prey species, which evidently supported the leopard population in the areas but were not preferred by tiger. Various studies on the diet ecology of tiger indicate that they mostly prefer large- to medium-sized prey species such as Sambar, Chital, and Wild Boar, whereas in Chhattisgarh large- to medium-sized prey species were found to be scarcer than in other protected areas in central India (Table 6). Despite this low abundance, however, tiger was found to prey mostly upon wild prey species including Chital and Wild Boar in USTR. Leopard was found to prefer mostly small- to medium-sized prey species including Barking Deer and Common Langur in both study areas.
It can be assumed that the low abundance of small- to large-sized wild ungulates in both study areas has triggered livestock utilization by the large cats (Tables 3-5). In USTR, livestock contributed 50% of the overall biomass consumed by tiger and 15% in the case of leopard. Similarly, in BWS livestock contributed more than 9% of the overall biomass consumed by leopard. The low abundance of wild ungulates and the high utilization of livestock by tiger and leopard indicate that neither protected area is in a condition to sustain healthy large-predator populations, and the conditions appear challenging for future large carnivore conservation efforts.
The study areas have resident populations of hunting human communities such as the Baiga, Kamar, and Bhunjiya, who still practice traditional hunting in these areas of Chhattisgarh. USTR even faces pressure from external hunters who illegally exploit the region as their hunting ground. These uncontrolled practices are serious threats to the wild ungulate populations and consequently affect the food resources of carnivore populations in the study areas. Therefore, prey depletion by these illegal hunting practices compels large mammalian predators to prey upon livestock, which brings forward an even bigger conservation threat, i.e., negative human-wildlife (tiger/leopard) interaction. Athreya et al. (2016) also supported the fact that in situations where large prey availability is low, the chances of livestock predation are automatically elevated.
Both study areas have villages inside the core areas and consequently hold thousands of livestock, which roam mostly unguarded within the protected areas and become easy prey for large predators. BWS has 29 villages inside the protected area boundary with approximately 4,000 domestic and feral cattle, whereas USTR has settlements of 99 villages with a livestock population of 26,689. In the eight ranges of USTR, livestock density varied from 4.776 to 33.581/km², and even the overall livestock density for the entire USTR (14.489/km²) was higher than that of any wild ungulate population in this area (a quick arithmetic check of this figure is shown below). Consequently, cattle killing by both tiger and leopard has become common in these areas and may provoke severe negative human-carnivore interaction situations in both protected areas in the near future. The present study indicates the urgency of wild ungulate population recovery programs in both BWS and USTR and also supports initiating the framework of a recovery plan by providing evident facts of low wild ungulate abundance and high livestock utilization by large predators in these areas. Earlier studies showed that increasing the availability of a wider variety of ungulate prey species and checking grazing activities in a protected forest system may decrease livestock predation by large predators in those areas and eventually decrease the chances of negative human-large predator interactions (Basak et al. 2018; Sankar et al. 2009). A feasibility framework for recovery, however, is required, involving multi-step conservation-friendly control measures. Village-level mass sensitization to change local perceptions is vital to build support for the ungulate recovery program and to maintain viable populations of large cats. Simultaneously, a strong protection framework is needed to safeguard the captive breeding and re-stocking of wild ungulate populations to build a sufficient prey base for both tiger and leopard.
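As the quick check referenced above, dividing the reported USTR livestock count by the reserve area reproduces the stated overall density to within rounding:

```python
# Worked check: overall livestock density in USTR from the reported numbers.
livestock_count = 26689      # head of livestock across the 99 villages
ustr_area_km2 = 1842.54      # total area of USTR (km^2)

density = livestock_count / ustr_area_km2
print(f"{density:.3f} livestock per km^2")   # ~14.48, close to the reported 14.489/km^2
```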
Careful effort and a strong scientific basis behind the implementation of the ungulate augmentation plan can bring fruitful results and secure the long-term survival of large cats and other carnivores in Bhoramdeo Wildlife Sanctuary and Udanti-Sitanadi Tiger Reserve in Chhattisgarh, central India.
|
v3-fos-license
|
2019-03-05T14:12:22.582Z
|
2019-03-04T00:00:00.000
|
67870128
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.00293/pdf",
"pdf_hash": "f5341f23491a5ebc970c422ec7942d0062ec75dc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46033",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "f5341f23491a5ebc970c422ec7942d0062ec75dc",
"year": 2019
}
|
pes2o/s2orc
|
Natural Extracellular Electron Transfer Between Semiconducting Minerals and Electroactive Bacterial Communities Occurred on the Rock Varnish
Rock varnish is a thin coating enriched with manganese (Mn) and iron (Fe) oxides. The mineral composition and formation of rock varnish have elicited considerable attention from geologists and microbiologists. However, limited research has been devoted to the semiconducting properties of the Fe/Mn oxides in varnish, and relatively little attention has been paid to mineral-microbe interactions under sunlight. In this study, the mineral composition and the bacterial communities of varnish from the Gobi Desert in Xinjiang, China were analyzed. Results of principal components analysis and t-tests indicated that electroactive genera such as Acinetobacter, Staphylococcus, Dietzia, and Pseudomonas were more enriched in varnish bacterial communities than in those of the substrate rock and surrounding soils. We then cultured the varnish, substrate, and soil samples in media and, for the first time, examined the extracellular electron transfer (EET) between bacterial communities and mineral electrodes under light/dark conditions. Orthogonal electrochemical experiments demonstrated that the most remarkable photocurrent density, 6.1 ± 0.4 μA/cm², was observed between the varnish electrode and the varnish microflora. Finally, based on Raman and 16S rRNA gene-sequencing results, a coculture system of birnessite and Pseudomonas (the major Mn oxide and a common electroactive bacterium in varnish) was established to study the underlying mechanism. A steadily growing photocurrent (205 μA at 100 h) under light was observed, with the birnessite remaining stable after 110 h. However, only 47 μA was generated in the dark control, and the birnessite was reduced to Mn²⁺ within 13 h, suggesting that under light birnessite helped deliver electrons instead of serving as an electron acceptor. Our study demonstrated that electroactive bacterial communities were positively correlated with Fe/Mn semiconducting minerals in varnish, and that diversified EET processes occurred on varnish under sunlight. Overall, these phenomena may influence bacterial community structure in natural environments over time.
INTRODUCTION
Rock varnish, also known as "desert varnish," is a dark-colored Fe/Mn-rich film that forms on rock surfaces in almost every terrestrial weathering environment on Earth. On average, rock varnish thickness may range from 100 µm to several hundred micrometers, with an accumulation rate of 1-15 µm per 1000 years (Perry and Adams, 1978; Dorn et al., 1992; Liu and Broecker, 2000; Goldsmith et al., 2012). The elemental composition of rock varnish varies among different rocks, but it commonly comprises clay minerals (70%), amorphous silica, and Fe/Mn oxides (about 10-30%) (Potter and Rossman, 1979; Dorn, 2007; Garvie et al., 2008). Rock varnish has been studied by geologists and microbiologists for many years, and several theories have been put forward to explain its origin, including abiotic origin, biotic processes, or a combination of different mechanisms (Dorn and Oberlander, 1981; Kuhlman et al., 2006; Goldsmith et al., 2014). Notably, varnish is attracting increased attention in the field of astrobiology owing to the recent detection of varnish-like geological structures on Mars (Krinsley et al., 2009).
Over the past decades, the microbial diversity of rock varnish in different geographical settings worldwide has gained considerable attention. The diverse microbial ecology of varnish has been analyzed by culture-independent molecular methods at sites such as Death Valley, the Mojave Desert, the Whipple Mountains, and Black Canyon (Krumbein and Jens, 1981; Perry et al., 2002; Schelble et al., 2005; Kuhlman et al., 2006, 2008; Northup et al., 2010; Marnocha and Dixon, 2014; Esposito et al., 2015). Despite many published works on the mineral composition and microbial diversity of rock varnish around the world, only a few studies have focused on microbial diversity in Xinjiang, China. Further study is required to understand the microorganisms in these special environments.
Previous studies have focused on the microbial biodiversity within varnish, Fe/Mn minerals have reportedly been concentrated by bacterial activity, and some Mn-oxidizing bacteria have been isolated from varnish (Krumbein and Jens, 1981; Kuhlman et al., 2005; Goldsmith et al., 2014). Although Fe/Mn minerals are used by bacteria as electron acceptors in several metabolic pathways (Lovley, 2006; Weber et al., 2006; Summers et al., 2010; Byrne et al., 2015; Shi et al., 2016), little attention has been given to their semiconducting properties or their influence on bacterial communities. Electron transfer is one of the most fundamental life processes, and extracellular electron transfer (EET) in microorganisms is associated with organic matter and elemental cycling. Recent research has indicated an unusual interaction between microorganisms and semiconducting minerals under light irradiation. Lu et al. (2012) demonstrated that photoelectrons were produced by the photocatalysis of semiconducting minerals (i.e., rutile, sphalerite, and goethite) and supported the growth of non-phototrophic microorganisms, including Acidithiobacillus ferrooxidans, heterotrophic Alcaligenes faecalis, and a natural soil microbial community. In addition, the non-photosynthetic bacterium Moorella thermoacetica was shown to assimilate carbon dioxide into acetate in cooperation with cadmium sulfide (CdS) nanoparticles under light illumination (Sakimoto et al., 2016). Although photo-enhanced electrochemical interactions have been observed between semiconducting minerals and microorganisms (Feng et al., 2016; Ren et al., 2017, 2018; Zhu et al., 2017), surprisingly little attention has been given to the interaction between Fe/Mn semiconducting minerals and bacterial communities on varnish in natural environments.
To the best of our knowledge, only a few studies have focused on the diverse microbial community composition of varnish in the Gobi Desert in Xinjiang, China. Thus far, almost no research has analyzed the semiconducting properties of varnish or explored its interaction processes under sunlight. The aims of the present study were as follows: (i) to analyze the mineral composition through synchrotron radiation X-ray diffraction (SR-XRD) and Raman spectroscopy and to describe the semiconducting characteristics of varnish by electrochemical measurements; (ii) to identify the diversity of bacterial communities from varnish, non-varnish rock (named substrate), and soil in the surrounding environments; and (iii) to explore the relationship between semiconducting minerals and microorganisms under visible light and subsequently investigate the complex EET between them. Our results may extend knowledge on mineral-microbe interactions and help elucidate how minerals influence the microbial world, especially under sunlight, in natural environments.
Site Description and Sample Collection
The study area was located in the Hami Desert, a section of the Gobi Desert in Xinjiang, China. Rocks and soil were sampled on June 26, 2017, and the location information is summarized in Table 1. At each site, quintuplicate samples of varnish-coated rock, non-varnish-coated rock (named substrate) and surrounding soil (the topmost surface soil exposed to sunlight) were collected with a five-point sampling method. All rock and soil samples were divided into two parts for microbiological and mineralogical analyses. To ensure sterile conditions for molecular analysis, all three kinds of samples were obtained using flame-sterilized tweezers and immediately placed into
Varnish Mineral Observation and Analysis
To study the morphological features of varnish, rock samples were cut with a diamond saw blade and smoothed with silicon carbide powder. After being impregnated and solidified with polyester resin, the samples were cut into 100-150 µm slices with a wafer cutter and polished into 30 µm-thick thin sections. These thin sections were observed under an optical microscope (Supplementary Figure S1) and a scanning electron microscope (SEM), and Fe/Mn elemental maps were acquired by energy-dispersive X-ray spectroscopy (EDS). For SEM imaging, the sections were coated with Cr, and all of the above analyses were performed on an FEI Quanta 650 FEG system. The varnish mineral composition was then investigated by synchrotron radiation X-ray diffraction (SR-XRD) at beamline BL14B1 of the Shanghai Synchrotron Radiation Facility (SSRF) at a wavelength of 0.6887 Å; this bending-magnet beamline employed a Si (111) double-crystal monochromator to monochromatize the beam, with a focal spot of 0.5 mm (Yang et al., 2015). Furthermore, the Fe/Mn mineral samples were measured using a Renishaw inVia Reflex system (Wotton-under-Edge, Gloucestershire, United Kingdom) equipped with a 785 nm laser and a long working-distance 50× objective with a focal spot of 1-2 µm and a spectral resolution of 1 cm−1. The frequency stability and accuracy of the apparatus were checked by recording the Raman spectrum of Si. The concentration of Mn2+ in solution was determined via inductively coupled plasma-optical emission spectrometry (ICP-OES, Spectro Blue Sop).
DNA Extraction, 16S rRNA Amplification and Phylogenetic Analysis
DNA was extracted from the samples with the PowerSoil DNA Isolation Kit, and DNA concentration was determined with a UV-Vis spectrophotometer (Nanodrop ND-1000, United States). The V3-V4 hypervariable regions of the bacterial 16S rRNA gene were amplified with the primers 357F and 806R (Huber et al., 2007). For each sample, a 10-digit barcode sequence (provided by Allwegene Company, Beijing, China) was added to the 5′ end of the forward and reverse primers.
Electrode Fabrication and Electrochemical Measurement
Mineral electrodes were fabricated using the varnish, substrate and soil samples brought back from the field. First, the three different kinds of samples were ground in a bowl chopper and sieved through a 400-mesh sieve. To prevent shedding, the mineral powder (20 mg) was mixed with anhydrous ethanol (400 µL) and 5% Nafion solution (10 µL). The mixture was then added dropwise onto the conductive side of a fluorine-doped tin oxide (FTO) electrode. The blank control FTO electrode was coated with only anhydrous ethanol and Nafion solution.
A conventional three-electrode system was used (Grätzel, 2001; Hsu et al., 2012), consisting of a mineral electrode, a Pt sheet and a saturated calomel electrode (SCE, 0.244 V vs. the normal hydrogen electrode), which served as the working, auxiliary and reference electrodes, respectively. Dark and light conditions were realized with an external light-emitting diode (LED) with a working wavelength of 400 to 700 nm (Supplementary Figure S2). The light illumination intensity was 100 mW/cm2, as measured by an FGH-1 photosynthetic radiometer (Beijing Normal University Photoelectric Instrument Factory, Beijing, China). Linear sweep voltammetry (LSV) was performed in 0.1 M Na2SO4 within the potential range from 0 to 1.0 V at a scan rate of 2 mV s−1. The photocurrent-time response of the mineral electrodes was determined with an electrochemical workstation (CHI 760E, Shanghai Chenhua Instrument, Shanghai, China) at a constant potential of 0.6 V. The measurements were carried out in 0.1 M Na2SO4 and in 0.1 M Na2SO4 + 1.0 M ethyl alcohol (EA) solution as electrolytes. All potentials are referenced to the SCE unless otherwise stated throughout this paper.
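As a rough illustration of how the photocurrent-time recordings described above can be reduced to the dark/light current values reported later, the following Python sketch averages the current over alternating dark and light intervals and converts it to a current density. The trace, interval timing and electrode area used here are hypothetical placeholders rather than values from this study.

```python
import numpy as np

def light_dark_currents(t, i, period_s=100.0, area_cm2=1.0):
    """Average current density (uA/cm^2) over alternating dark/light intervals.

    Assumes the recording starts with a dark interval and that dark and light
    intervals alternate every `period_s` seconds (hypothetical timing).
    """
    dark, light = [], []
    for k in range(int(t[-1] // period_s)):
        mask = (t >= k * period_s) & (t < (k + 1) * period_s)
        mean_density = np.mean(i[mask]) / area_cm2  # uA -> uA/cm^2
        (dark if k % 2 == 0 else light).append(mean_density)
    return np.mean(dark), np.mean(light)

# Synthetic example trace: 5 uA dark baseline plus a 3 uA photocurrent step
t = np.arange(0, 600, 0.1)                                            # s
i = 5.0 + 3.0 * (((t // 100) % 2) == 1) + np.random.normal(0, 0.2, t.size)  # uA
i_dark, i_light = light_dark_currents(t, i, period_s=100.0, area_cm2=1.0)
print(f"dark: {i_dark:.2f}, light: {i_light:.2f}, photocurrent: {i_light - i_dark:.2f} uA/cm^2")
```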
Bacterial Communities Culture and EET Process Analysis
To gain a better understanding of the EET process occurring in the different kinds of samples, we took 5 mg of varnish, substrate and soil from each of the four sampling sites and then separately pooled the four varnish, substrate and soil samples. The microbial communities from the three kinds of samples were grown in 1/10-diluted Luria-Bertani (LB) medium at 35 °C with shaking at 200 rpm for 48 h under light irradiation. Subsequently, the cell suspension was inoculated at 2% (v/v) into 1/10-diluted LB (35 °C, 200 rpm) with agitation until the optical density at 600 nm (OD600) reached approximately 1.0. The EET between minerals and microorganisms was investigated through photocurrent-time curves recorded with an electrochemical workstation. The photoresponse of the three mineral electrodes and the EET process were measured in a quartz cube cell (10 × 10 × 10 cm) with a conventional three-electrode configuration (mineral electrode, Pt sheet and SCE used as the working, auxiliary and reference electrodes, respectively).
Owing to the turbidity of the microflora media, the actual illumination intensity reaching the electrode surface was approximately 80 mW cm−2. Orthogonal experiments pairing the different media and mineral electrodes were compared with each other, and the electron-transfer process was inferred from the light/dark current values.
Construction of Light-Birnessite-Pseudomonas System
Based on the mineral composition and 16S rRNA gene-sequencing results, birnessite (the major Mn oxide) and Pseudomonas (a common electroactive bacterial genus present in varnish) were combined into a light-birnessite-Pseudomonas system for the mechanism study. Considering that no pure bacterial strains were isolated from varnish and that Pseudomonas aeruginosa is ubiquitously found in natural environments, the pure culture of P. aeruginosa PAO1 was selected for the system. Birnessite-type manganese oxide electrodes were prepared by cathodic electrodeposition as reported previously (Ren et al., 2018). P. aeruginosa PAO1 was cultured in Luria-Bertani no-sodium (LBNS) medium (tryptone 10 g L−1, yeast extract 5 g L−1) at 35 °C with shaking at 200 rpm for 24 h and then transferred to the reactor. The reaction system (liquid volume of 120 mL) included a birnessite-film-coated FTO photoanode, a platinum plate electrode and an SCE. The distance between the working electrode and the counter electrode was 1 cm. Dark/light cycles were provided by an external LED, with an actual illumination intensity reaching the birnessite electrode surface of approximately 60 mW cm−2. The electron-transfer process between semiconducting birnessite and P. aeruginosa was recorded by a multi-potentiostat (CHI 1000C, CH Instruments Inc., China) at a potential of 0.6 V. The reactors were placed in a temperature-controlled biochemical incubator (LRH-250, Shanghai, China) at a constant temperature of 35 ± 1 °C.
Mineral Composition of Varnish Samples
In arid and semiarid China, the surface of the Gobi is covered by dense gravel, and rock varnish is ubiquitously distributed (Supplementary Figure S1a). Under optical microscopy, a black-to-brown coating was observed; this coating was varnish, with a thickness varying from several to hundreds of micrometers. Moreover, EDS mapping revealed the distribution of Fe/Mn elements (Supplementary Figure S1b).
The EDS data revealed that the dominant elements in rock varnish were O, Mn, Fe, Si, and Al. The Mn and Fe contents were 12.42-17.07 and 8.85-11.28 wt%, respectively, and the O, Si, and Al contents were 47.36-50.43, 11.23-16.72, and 6.64-8.78 wt%, respectively. The concentrated Fe/Mn in varnish agreed with previous results showing that this thin layer is a manganese- and iron-rich coating, with concentrations up to one hundred times higher than those in substrate rocks (Perry and Adams, 1978; Dorn et al., 1992; Thiagarajan and Lee, 2004; Goldsmith et al., 2012). To further explore the mineral composition of varnish, SR-XRD was employed, which has a high signal-to-noise ratio and is suitable for the characterization of nanomaterials and nano-minerals. Figure 1A shows the XRD patterns; both clay and Fe/Mn minerals were confirmed, including quartz, montmorillonite, birnessite, hollandite, hematite, and goethite. Because the Fe/Mn oxides occurred together with clay minerals and were poorly crystallized, their signal was much weaker than that of quartz. To better identify the main Fe/Mn oxide phases in varnish, confocal Raman spectroscopy was utilized. As shown in Figure 1B, the Raman bands at 591 and 641 cm−1 were attributed to the Mn-O stretching vibration along the chains in the MnO6 octahedra and the symmetric stretching vibration of MnO6 groups, respectively (Julien et al., 2003). These results indicated that the primary Mn oxide was birnessite, in agreement with previous studies stating that birnessite and birnessite-like minerals are the major phases occurring in a wide variety of geological settings, including soils, desert varnishes, Mn-rich ore deposits and even oceanic Mn nodules (Post, 1999). In addition, the primary iron oxide was hematite, which can be identified by two A1g modes (226 and 494 cm−1) and three Eg modes (298, 409, and 614 cm−1) (De Faria et al., 1997). Based on these results, we concluded that Fe/Mn is enriched in varnish and that the major mineral phases are hematite and birnessite.
Photo-Response of Varnish and Semiconducting Properties
Hundreds of semiconducting minerals are found on Earth, such as Fe/Mn oxides (e.g., hematite, pyrolusite), Ti/Ti-Fe oxides (e.g., rutile and ilmenite), and sulfides (e.g., sphalerite, pyrite) (Xu and Schoonen, 2000; Lu et al., 2012). The electronic structure of semiconducting minerals can be characterized by a filled valence band (VB) and an empty conduction band (CB). When energy is absorbed, electrons in the VB can be excited to the CB. This process leads to the separation of electrons and holes, which can induce redox reactions (Grätzel, 2001). To demonstrate the semiconducting properties of varnish and examine its response to sunlight, mineral electrodes were fabricated with real field varnish, substrate and soil samples, and their photocurrent outputs under light irradiation (illumination intensity of 100 mW/cm2) were compared. The dark/light linear sweep voltammetry (LSV) curves (Figure 2A) showed that both the soil and FTO electrodes produced negligible dark currents, whereas the varnish and substrate electrodes had average dark currents of 24.5 ± 4.2 and 6.2 ± 1.5 µA, respectively. Upon light illumination, all currents increased to varying degrees. Notably, the photocurrent of the varnish electrode was as high as 32.7 µA at 0.6 V and even reached 82.4 µA at 1.0 V, whereas the photocurrents of the other electrodes were lower than 20 µA at 1.0 V. These results indicated that varnish responded well to solar light and that its remarkable photocurrent was attributable to photocatalytic reactions. Furthermore, the repeatability of the light response of the mineral electrodes was investigated using photocurrent-time curves. As shown in Figure 2B, the photocurrent density of the varnish electrode was 3.1 ± 0.4 µA/cm2 under light illumination, almost six times higher than that of the control substrate samples (0.5 ± 0.2 µA/cm2) under the same conditions. Notably, the photocurrent response of the varnish electrode was further enhanced to 5.1 ± 0.2 µA/cm2 when EA was used as a hole scavenger in the electrolyte, owing to the suppression of direct charge-carrier recombination. The anodic photocurrents, corresponding to a photo-oxidation process, indicated that varnish has an n-type semiconducting nature (most charge carriers are free electrons), in accord with the n-type character of hematite and birnessite (Xu and Schoonen, 2000; Hsu et al., 2012; Ren et al., 2017). The roughly six-fold higher photocurrents of varnish relative to substrate minerals demonstrated that rock varnish exhibits photoelectrochemical activity in response to light; its semiconducting properties should be associated with a photogenerated electron-transfer process and may influence biocatalytic metabolism under natural conditions.
General Characteristics of Bacterial Communities in Different Samples
FIGURE 4 | (A) PCA based on relative OTU abundance, indicating that the three kinds of samples with different coverings formed distinct communities (red, varnish; blue, substrate; orange, soil); (B) statistical analysis of the microbiology on the surface (yellow points) and substrate (black points); the p-value was less than 0.01.

The bacterial communities in the varnish, substrate and soil samples were investigated via high-throughput Illumina sequencing. The flat extent of the rarefaction curves indicated that the major parts of the bacterial communities in all samples were covered (Supplementary Figure S3). After quality and chimera evaluation, a total of 378,367 final tags and 5,262 operational taxonomic units (OTUs) were obtained from the 12 samples. Of these, 1,249, 1,629, and 2,384 OTUs were identified in the varnish, substrate and soil samples, respectively. The soil samples exhibited higher variety than the other two kinds of samples, in accordance with the observed-species and Good's coverage results (Supplementary Table S1). The varnish samples had lower diversity than the substrate and soil samples, as indicated by their lower Shannon and Chao1 indices. In addition, 15 phyla were present across the 12 samples, of which 10 phyla accounted for almost 95%. Actinobacteria dominated all samples, with average percentages of 35 to 94% (Supplementary Figure S4). The phyla present in the substrate and soil samples were similar to one another, and both differed from the varnish samples (Figure 3). At the genus level, Rubrobacter accounted for a large proportion (more than 30%) in each sample; this genus belongs to the class Actinobacteria and is thought to be radiation resistant (Terato et al., 1999). It has been isolated from various extreme environments, such as hot springs and the Gobi Desert (Suzuki et al., 1988; Zhang et al., 2012). Previous studies indicated that some isolated bacteria are capable of oxidizing or reducing Mn (Dorn and Oberlander, 1981). Mn-oxidizing/reducing bacteria were identified in our study, such as Acinetobacter, Pseudomonas, Bacillus, Rhizobium, and Brevundimonas. The relationship between these bacteria and Fe/Mn minerals is well documented; it should be pointed out, however, that these oxidation and reduction processes are closely related to the EET process of microorganisms.
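To make the diversity comparisons concrete, the sketch below computes the Shannon index and the Chao1 richness estimator from a single OTU count vector using their standard formulas; the counts shown are invented for illustration, and the choice of logarithm base for Shannon diversity varies between analysis pipelines.

```python
import numpy as np

def shannon(counts, base=np.e):
    """Shannon diversity H' = -sum(p_i * log(p_i)); log base varies by convention."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * (np.log(p) / np.log(base))).sum()

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1 - 1) / (2*(F2 + 1))."""
    counts = np.asarray(counts)
    s_obs = np.count_nonzero(counts)
    f1 = np.count_nonzero(counts == 1)   # singleton OTUs
    f2 = np.count_nonzero(counts == 2)   # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical OTU count vector for one sample
otu_counts = [120, 55, 31, 8, 3, 2, 1, 1, 1, 0, 0]
print(f"Shannon H' = {shannon(otu_counts):.3f}, Chao1 = {chao1(otu_counts):.1f}")
```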
Microbial Community Structure Characteristics and Cluster Analysis
To better understand the microbial-community structure and its relationship with minerals, principal component analysis (PCA) was performed on the relative abundance of OTUs; PCA simplifies complex, multivariate data and is commonly used to identify the major factors underlying such datasets. As shown in Figure 4A, the PCA results indicated that the varnish, substrate and soil samples, with their different coverings, formed distinct communities. Samples of each kind clustered together, and the distance between the soil and substrate samples was smaller than their distance from the varnish samples, suggesting that the bacterial communities in varnish were highly different from the others. We then examined the contributions of individual OTUs to the principal components to identify the major microorganisms driving the PCA results. An interesting phenomenon was observed: 11 electroactive microorganisms appeared in varnish and contributed significantly to the PCA results (Table 2). A t-test was performed to further explore the distribution of these 11 electroactive microorganisms, and the results again showed that these electroactive microorganisms gathered on varnish (Figure 4B).
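A minimal sketch of the ordination and group comparison described above, assuming an OTU relative-abundance matrix with samples as rows and scikit-learn/SciPy available; the abundance matrix and the set of "electroactive" OTU indices are random placeholders, not the data analyzed in this study.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical relative-abundance matrix: 12 samples x 50 OTUs
# (rows 0-3 varnish, 4-7 substrate, 8-11 soil)
abundance = rng.dirichlet(np.ones(50), size=12)

# Ordination of samples in OTU space
pca = PCA(n_components=2)
scores = pca.fit_transform(abundance)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Two-sample t-test on summed relative abundance of a hypothetical set of
# electroactive OTUs, varnish vs. substrate
electroactive_idx = [0, 3, 7, 11]
varnish = abundance[:4, electroactive_idx].sum(axis=1)
substrate = abundance[4:8, electroactive_idx].sum(axis=1)
t_stat, p_val = stats.ttest_ind(varnish, substrate)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```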
To gain more insight, we calculated the total number of electroactive genera in the varnish, substrate and soil samples. All four sampling sites showed that the electroactive microorganisms gathered on varnish, with fewer appearing in the other two kinds of samples (Table 3). A key question raised by this observation is why electroactive microorganisms gather on varnish. Based on the mineral analysis and photoelectrochemical measurements, we concluded that varnish is a thin coating enriched in semiconducting Fe/Mn oxides. Upon absorption of light energy, negatively charged electrons and positively charged holes are generated in the CB and VB, respectively. These electron-hole pairs can then induce redox reactions, with the semiconducting minerals serving as electron conduits for different redox reactions. Previous studies have demonstrated that photo-holes can combine with electrons from microorganisms and that photoenhanced electrochemical interactions occur between hematite and Shewanella, Geobacter, or even whole bacterial communities under light (Feng et al., 2016; Ren et al., 2017; Zhu et al., 2017). Thus, further questions emerge. First, do these electroactive microorganisms have a relationship with the semiconducting Fe/Mn minerals on varnish? Second, can the semiconducting minerals in varnish participate in the EET process? The semiconducting Fe/Mn oxide minerals in varnish may provide a long-standing pathway that participates in the EET process and influences bacterial-community structure in local environments, but they have received little attention.
EET Process Between Semiconducting Minerals and Bacterial Communities
The in situ currents between electroactive microorganisms and semiconducting minerals on varnish are difficult to measure; however, such currents should be observable after culturing the bacterial communities. Undoubtedly, the original microbial communities change once cultured, but different currents may still be obtained between mineral electrodes and the cultured microorganisms owing to the numerous electroactive genera that gathered on varnish. This made it possible to observe the EET process between semiconducting minerals and the "electroactive genera" from varnish under the same culture conditions. Accordingly, the bacterial communities from the varnish, substrate and soil samples were cultured in 1/10 LB under light irradiation at the same time, and the EET between the mineral electrodes (varnish, substrate, and soil) and these microfloras was explored in detail under light and dark conditions. As shown in Figure 5, significant currents were observed only when the varnish electrode was employed. When electrodes made from the substrate or soil samples were used, the photocurrents were negligible, with average values lower than 1 and 0.2 µA/cm2, respectively. These slight currents can be ascribed to the lower concentration of Fe/Mn semiconducting minerals in the substrate and soil than in varnish, consistent with the LSV results in Figure 2.
Comparison of the currents between the varnish electrode and the four kinds of media (bacterial communities cultured from varnish, substrate and soil, and blank 1/10 LB) showed that they differed significantly, which can be attributed to the microflora media. Notably, the most remarkable photocurrent was observed between the varnish electrode and the varnish microflora, with an average value reaching 6.1 ± 0.4 µA/cm2, whereas the photocurrents in the substrate and soil microflora were 3.9 ± 0.2 and 2.9 ± 0.1 µA/cm2, respectively, all higher than that in 1/10 LB medium alone (2.4 ± 0.1 µA/cm2) (Figure 5). In addition, the bacterial-community analysis indicated that the percentages of electroactive genera in the varnish, substrate and soil microflora were 62, 43, and 38%, respectively (Supplementary Figure S5). The marked growth of the photocurrents indicated that a high number of electroactive components appeared in the varnish culture, which is likely associated with the electroactive genera in varnish. Moreover, when a substrate electrode or a soil electrode was used, the photocurrents decreased in the order varnish microflora > substrate microflora > soil microflora (red > green > black). These results showed a good electron-transfer process between mineral electrodes and varnish-cultured microflora under light irradiation, indicating that EET between semiconducting minerals and electroactive bacterial communities may occur on varnish under sunlight in natural environments.
Mechanism Study Based on "Light-Birnessite-Pseudomonas aeruginosa"
To clarify the detailed EET process and understand the interaction mechanism between semiconducting minerals and microorganisms in varnish under light, we further built a pure-culture system and explored its performance. Based on the mineral and bacterial-community analyses of varnish, birnessite, the major Mn oxide phase in desert varnish, and P. aeruginosa, an electrochemically active bacterium ubiquitously found in the environment (actual average relative abundance in the varnish, substrate and soil samples: 1.85, 0.02, and 0.02%, respectively), were chosen. The photocurrent results for the "Light-Birnessite-Pseudomonas" system under repeated 1200-s dark/light illumination cycles (laboratory-simulated day-night cycles) are presented in Figure 6. In the negative control without Pseudomonas, the photocurrents were stable, with an average value of 25 µA. Notably, in the "Light (on/off)-Birnessite-Pseudomonas" system, a steadily growing current was generated; both light and dark currents increased more rapidly after 40 h and reached 205 and 137 µA at 100 h, respectively. The enhanced photocurrents were ascribed to Pseudomonas, indicating that semiconducting birnessite transferred out more electrons from P. aeruginosa PAO1 under light conditions. This result demonstrated that an enhanced EET process was realized through the cooperation of P. aeruginosa and semiconducting birnessite under light illumination.
To further understand the interactions in the "light-mineral-microbe" system, a "Dark-Birnessite-Pseudomonas" control was conducted. Notably, the current was only 47 µA at 100 h, considerably lower than that in the light system. The birnessite film was reduced to Mn2+ (0.21 µM) after 13 h, as shown in the inset of Figure 6, suggesting that birnessite acted as an electron acceptor in the dark. In contrast, the birnessite electrode remained stable even after 110 h in the "Light (on/off)-Birnessite-Pseudomonas" system, suggesting that birnessite did not act as an electron acceptor there. Under light, the semiconducting property of birnessite was activated and photoelectron-hole pairs were generated, leading to mineral photocatalysis. The photoexcited holes had a more positive potential and could readily combine with the electrons produced by microbial metabolism. High currents were maintained in the light on/off system, indicating that an efficient electron-transfer process occurred on the surface of birnessite and that the mineral structure was "protected" after harvesting light energy. Hence, the electron-transfer rate changed drastically, and the photocurrents steadily increased over time.
EET Possibly Occurred on Varnish Under Natural Light Conditions
Manganese and iron are common variable-valence elements in nature, and Fe/Mn oxides and oxyhydroxides are widely distributed throughout Earth's environments (Koschinsky and Halbach, 1995; Boston et al., 2008). Previous studies have indicated that Fe/Mn oxides are closely related to the metabolic processes of microorganisms, especially Fe/Mn-oxidizing bacteria (Emerson et al., 2010), and that these Fe/Mn oxides can protect microorganisms from ultraviolet radiation (Friedmann, 1982; Hughes and Lawley, 2003). However, limited attention has been devoted to the semiconducting properties of these Fe/Mn oxides in nature, and little attention has focused on their interaction with bacteria after activation by sunlight. In recent years, research on electron transfer between electricigens and semiconducting materials under light illumination has progressed notably (Lu et al., 2012; Sakimoto et al., 2016). Our research demonstrated that both electroactive microorganisms and semiconducting Fe/Mn oxides gather on varnish. When these minerals are exposed to sunlight, photoelectron-hole pairs can be produced, and the EET process has likely existed on varnish over a long geological history.
Fe/Mn oxides are abundant on the Earth's surface and serve as the most common natural electron acceptors for EET (Shi et al., 2016). Under sunlight irradiation in the daytime, however, semiconducting minerals should be stimulated, generating photoelectron-hole pairs, and a rapid electron-transfer process should occur on the varnish, as shown in Figure 7. Owing to the semiconducting properties, photoexcited holes combine with electrons from microorganisms; Fe/Mn oxides therefore no longer act as electron acceptors but instead take part in the EET process, delivering electrons to the surrounding environment. At night, or in the absence of sunlight, the oxidation of organic matter is realized through microbial metabolism, in which Fe/Mn oxides serve as electron acceptors (Weber et al., 2006; Shi et al., 2012). From an ecological perspective, EET is an efficient way for electroactive microorganisms to cope with the limitations of their habitat (Klüpfel et al., 2014; Koch and Harnisch, 2016). With the cooperation of semiconducting minerals and sunlight, a local field effect is generated, resulting in a more efficient electron-transfer process; this phenomenon affects the structure of bacterial communities and facilitates the accumulation of electroactive microorganisms on varnish over time. Notably, both the electroactive genera and the Fe/Mn semiconducting minerals are positively correlated with the EET process under light. Further studies are warranted to understand EET in natural environments, especially under sunlight irradiation.
CONCLUSION
Rock varnish is commonly found on rock surfaces throughout the arid regions of the world. In this study, we analyzed the mineral composition and bacterial communities of varnish in Xinjiang. Electrochemical measurements demonstrated that varnish has photoelectrochemical activity in response to visible light owing to its semiconducting Fe/Mn oxides. The bacterial communities of varnish, substrate and surrounding soil were analyzed, and the PCA results indicated that electroactive microorganisms gathered on varnish. Orthogonal experiments on EET between mineral electrodes and cultured microflora, together with a pure co-culture system of birnessite and Pseudomonas, demonstrated that an efficient EET process can occur on varnish under light. The electroactive bacterial communities were positively correlated with the Fe/Mn oxides in varnish, and the bacterial-community structure was influenced by the semiconducting minerals. Together, these findings point to a greater possibility of electron flow under sunlight in natural environments.
AUTHOR CONTRIBUTIONS
GR and HD designed the experiments. GR and YY carried out the experiments. GR, YY, YN, and HD analyzed the experimental results. GR wrote the manuscript. HD, XW, and AL performed revisions. GR, HD, YL, and CW analyzed the data in the Supplementary Materials and revised them. AL, YL, and CW funded the study. All authors agreed to submit the work to Frontiers in Microbiology and approved it for publication.
Spatial information transfer in hippocampal place cells depends on trial-to-trial variability, symmetry of place-field firing, and biophysical heterogeneities
The relationship between the feature-tuning curve and information transfer profile of individual neurons provides vital insights about neural encoding. However, the relationship between the spatial tuning curve and spatial information transfer of hippocampal place cells remains unexplored. Here, employing a stochastic search procedure spanning thousands of models, we arrived at 127 conductance-based place-cell models that exhibited signature electrophysiological characteristics and sharp spatial tuning, with parametric values that exhibited neither clustering nor strong pairwise correlations. We introduced trial-to-trial variability in responses and computed model tuning curves and information transfer profiles, using stimulus-specific (SSI) and mutual (MI) information metrics, across locations within the place field. We found spatial information transfer to be heterogeneous across models, but to reduce consistently with increasing levels of variability. Importantly, whereas reliable low-variability responses implied that maximal information transfer occurred at high-slope regions of the tuning curve, increase in variability resulted in maximal transfer occurring at the peak-firing location in a subset of models. Moreover, experience-dependent asymmetry in place-field firing introduced asymmetries in the information transfer computed through MI, but not SSI, and the impact of activity-dependent variability on information transfer was minimal compared to activity-independent variability. We unveiled ion-channel degeneracy in the regulation of spatial information transfer, and demonstrated critical roles for N-methyl-d-aspartate receptors, transient potassium and dendritic sodium channels in regulating information transfer. Our results demonstrate that trial-to-trial variability, tuning-curve shape and biological heterogeneities critically regulate the relationship between the spatial tuning curve and spatial information transfer in hippocampal place cells.
Introduction
Biological organisms rely on information about their surroundings through different senses for survival. They receive, encode and process information about their surroundings in eliciting robust responses to challenges posed by the external environment. From an ethological perspective, it is essential that sensory information is efficiently encoded by neural circuits to ensure effective responses to environmental challenges. A dominant theme of neural circuit organization is the ability of individual neurons to encode specific features associated with the external environment, with different neurons responding maximally to distinct feature values. For instance, neurons in the primary visual cortex respond maximally to a specific visual orientation (Hubel & Wiesel, 1959), neurons in the cochlea respond maximally to specific tones (von Békésy & Wever, 1960) and place cells in the hippocampus act as spatial sensors by responding maximally to specific locations of an animal in its environment (O'Keefe, 1976). Central to this overarching design principle is the concept of tuning curves, whereby neurons that respond maximally to a given feature value also respond to nearby feature values, with the response intensity typically falling sharply with increasing feature distance from the peak-response feature. The concept of "tuning curves" and efficient information transfer involving stimulus distributions have been effectively employed to assess biological systems from the sensory coding perspective (Attneave, 1954;Barlow, 1961;Bell & Sejnowski, 1997;Brenner, Bialek, & de Ruyter van Steveninck, 2000;Fairhall, Lewen, Bialek, & de Ruyter Van Steveninck, 2001;Laughlin, 1981;Lewicki, 2002;Simoncelli, 2003;Simoncelli & Olshausen, 2001), from a single neuron perspective (Andrews & Iglesias, 2007;Lundstrom, Higgs, Spain, & Fairhall, 2008;Narayanan & Johnston, 2012;Stemmler & Koch, 1999) and in understanding biochemical signaling cascades (Brennan, Cheong, & Levchenko, 2012;Cheong, Rhee, Wang, Nemenman, & Levchenko, 2011;Mehta, Goyal, Long, Bassler, & Wingreen, 2009;Selimkhanov et al., 2014;Tkacik, Callan, & Bialek, 2008;Waltermann & Klipp, 2011;Yu et al., 2008).
A fundamental question on neurons endowed with such tuning curves relates to the relationship between the tuning curve and the information transfer profile of the neuron across feature values. Although this relationship has been explored in neural responses across different sensory modalities (Bezzi, Samengo, Leutgeb, & Mizumori, 2002;Butts, 2003;Butts & Goldman, 2006;DeWeese & Meister, 1999;Montgomery & Wehr, 2010), the question on the relationship between spatial information transfer and spatial tuning curve within the place field of hippocampal place cells has not been quantitatively assessed. Neurons in the hippocampus receive spatial information about a given arena and a substantial fraction of them respond to different spatial locations in the same arena (Andersen, Morris, Amaral, Bliss, & O'Keefe, 2006;Moser, Kropff, & Moser, 2008;Moser, Moser, & McNaughton, 2017;Moser, Rowland, & Moser, 2015;O'Keefe, 1976;O'Keefe & Dostrovsky, 1971). In a one-dimensional arena, hippocampal place cells exhibit bell-shaped firing within their place-field firing, representing a tuning curve of the external space (Ahmed & Mehta, 2009;Bittner et al., 2015;Dombeck, Harvey, Tian, Looger, & Tank, 2010;Dragoi & Buzsaki, 2006;Geisler et al., 2010;Harvey, Collman, Dombeck, & Tank, 2009;Huxter, Burgess, & O'Keefe, 2003;Lee, Lin, & Lee, 2012;Mehta, Barnes, & McNaughton, 1997;Mehta, Lee, & Wilson, 2002;Mehta, Quirk, & Wilson, 2000). The specific question we pose here is on the relationship between this tuning curve and the spatial information transfer with reference to synaptic inputs received by the place cell (that contains spatial information from the external world) and a specific output characteristic (rate of firing). In this scenario, spatial information transfer is computed with reference to a variable associated with the external world, the spatial location within the place field, and the firing of the neuron. These definitions of tuning curves and information transfer are analogous to the assessment of information transfer in cortical neurons receiving sensory inputs that traverse through multiple synapses. As an example, for neurons in the visual cortex (which are several synapses away from the eyes), orientation-selective tuning curves and visual information transfer questions are posed with reference to the synaptic inputs received by the neuron (containing visual information from the external world) and a specific output characteristic (e.g., spikes, rate of firing) (Belitski et al., 2008;Bell & Sejnowski, 1997;Hubel & Wiesel, 1959). Spatial tuning curves, by definition, are dependent on specific spatial locations within the place field. As our principal goal in this study is to assess the relationship between spatial tuning curves and spatial information transfer, it is essential that the information transfer measure also is specific to particular spatial locations. An ideal information metric that fulfills this requirement is the stimulus-specific information (SSI), a measure that was specifically defined to convey the amount of information that the responses of a neuron convey about a particular stimulus. SSI is defined as the average specific information across all the neural firing rates that are elicited when the animal traverses a particular spatial location, with specific information referring to the information that a particular firing rate response provides about which spatial location was being traversed (Butts, 2003;Butts & Goldman, 2006;DeWeese & Meister, 1999;Montgomery & Wehr, 2010). 
We employed SSI as the principal metric to assess the relationship between rate-based spatial tuning curves and spatial information transfer. We also computed the Shannon's mutual information (MI) at different segments within the place field as an additional location-dependent information metric. Whereas the SSI offers a weighted average of specific information, which is a metric that accounts for all spatial locations within the place field, the location-dependent MI that we computed solely accounts for firing rate responses within a small segment of the entire place field.
In assessing the relationship between spatial information transfer and spatial tuning curves, it was essential to account for three characteristics in our experimental design:
1. Biological neurons are heterogeneous. Neurons of the same cell type from the same subregion show very distinct ion channel distributions, even if they maintain signature electrophysiological properties. These observations pose important questions about whether and how such biophysical and physiological heterogeneities regulate spatial information transfer.
2. Neuronal responses show trial-to-trial variability. The type (activity-dependent or activity-independent) and level of this variability could alter the information conveyed by place-cell firing.
3. Place-field firing undergoes behavior- and experience-dependent changes, including the emergence of asymmetric firing profiles. We therefore also assessed the impact of such asymmetry on spatial tuning curves, spatial information transfer, and the relationship between the two measurements.
Our analyses show that each of these three characteristics -biophysical and physiological heterogeneities, the type and level of trial-to-trial variability, and behavior-dependent alterations to the tuning curve -critically regulated the relationship between the spatial tuning curve and spatial information transfer. We demonstrate that when hippocampal neurons exhibit low trial-to-trial response variability, they transfer peak spatial information at the high-slope locations (and not at peak firing location) of the spatial tuning curve within their place field. Importantly, we show that our model population manifested parametric degeneracy in the expression of similar tuning curves and similar information transfer metrics. As a consequence of the expression of degeneracy, we found heterogeneities in spatial information transfer and in the impact of knocking out individual ion channels on spatial information metrics, together pointing to a many-to-one relationship between different ion channel subtypes and spatial information transfer. Finally, our analyses also unveil a potent reduction in information transfer consequent to the elimination of transient potassium channels, NMDA receptors or dendritic sodium channels, thereby providing direct experimentally testable predictions.
Methods
The computational model of the place cell was constructed as a morphologically realistic CA1 pyramidal neuron of the rat hippocampus. A morphologically reconstructed model (n123; Fig. 1A) was obtained from Neuromorpho.org (Ascoli, Donohue, & Halavi, 2007). Several active and passive mechanisms were incorporated into the model to mimic the intrinsic functional properties of a CA1 pyramidal neuron. The passive properties arising from the lipid bilayer were modeled as a capacitive current, and a resistive current was included to represent leak channels. The three parameters that regulated the passive electrical properties of the neuron were the axial resistivity (R_a), specific membrane resistivity (R_m) and specific membrane capacitance (C_m). In the base model, R_a was set to 120 Ωcm and the specific membrane capacitance was set to 1 μF/cm2 for the entire neuron (Table 1, Fig. 1B). The specific membrane resistivity was non-uniform and varied in a sigmoidal manner (Basak & Narayanan, 2018; Golding, Mickus, Katz, Kath, & Spruston, 2005; Narayanan & Johnston, 2007; Rathour & Narayanan, 2014) as a function of the radial distance x of the point from the soma (Eq. (1); Fig. 1B), with the parameters and their base-model values provided in Table 1. The neuron was compartmentalized using the d_λ rule (Carnevale & Hines, 2006), such that the length of each compartment was less than one-tenth of λ_100, the space constant at 100 Hz. In the base model, this resulted in the compartmentalization of the neuron into 879 distinct compartments.
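Because Eq. (1) and the Table 1 values are not reproduced above, the snippet below only illustrates one commonly used sigmoidal form for a somato-apical R_m gradient; both the functional form and the parameter values are assumptions for illustration, not the exact base-model specification.

```python
import numpy as np

def rm_sigmoid(x, rm_soma=125e3, rm_end=85e3, x_half=300.0, slope=50.0):
    """Hypothetical sigmoidal gradient of specific membrane resistivity (ohm.cm^2)
    as a function of radial distance x (um) from the soma; all parameter values
    are illustrative placeholders."""
    return rm_soma + (rm_end - rm_soma) / (1.0 + np.exp((x_half - x) / slope))

for d in (0.0, 150.0, 300.0, 450.0):
    print(f"x = {d:5.0f} um  ->  R_m ~ {rm_sigmoid(d):8.0f} ohm.cm^2")
```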
Intrinsic physiological measurements
To measure input resistance (R in ) of a somatodendritic compartment, a hyperpolarizing current step of 100 pA was injected for 500 ms into the compartment. The local change in the membrane potential as a result of the step current was measured and the ratio of the local voltage deflection to the step current amplitude was taken to be the input resistance (Fig. 1C). For measuring the back propagating action potential (bAP) amplitude, a step current of 1 nA was given at the soma for 2 ms. This generated a single action potential at the soma which actively back propagated along the dendrites. The amplitude of the bAP was measured at different locations along the somato-apical trunk (Fig. 1D).
To quantify the frequency dependence of neuronal responses, we used impedance-based physiological measurements across the somatodendritic arbor (Basak & Narayanan, 2018, 2020; Narayanan, Dougherty, & Johnston, 2010; Narayanan & Johnston, 2007, 2008; Rathour & Narayanan, 2014): resonance frequency (f_R), maximum impedance amplitude (|Z|_max), strength of resonance (Q) and total inductive phase (Φ_L). To measure these, a chirp stimulus, defined as a current stimulus with constant amplitude (100 pA peak-to-peak) and linearly increasing frequency with time (0-15 Hz in 15 s), was injected into the compartment where the measurement was required, and the local voltage response was recorded. To compute the impedance as a function of frequency, the Fourier spectrum of the voltage response was divided by the Fourier spectrum of the current, yielding the impedance profile Z(f) as a complex quantity. The magnitude of the impedance as a function of frequency was calculated as |Z(f)| = √([Re(Z(f))]² + [Im(Z(f))]²) (Eq. (5)), where Re(Z(f)) and Im(Z(f)) are the real and imaginary parts of the impedance profile, respectively. The maximum impedance amplitude was measured, and the frequency at which it occurred was taken to be the resonance frequency. The strength of resonance was measured as the ratio of the maximum impedance amplitude to the impedance amplitude at 0.5 Hz. For the phase-related measures, the impedance phase profile was computed as φ(f) = tan⁻¹[Im(Z(f))/Re(Z(f))] (Eq. (6)), where φ(f) is the phase as a function of frequency. The total inductive phase was measured as the area under the positive portion of the phase profile, Φ_L = ∫ φ(f) df computed over frequencies where φ(f) > 0 (Eq. (7)).
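The impedance-based measurements of Eqs. (5)-(7) can be obtained from the chirp current and the recorded voltage along the lines sketched below. This is a generic illustration driven by a synthetic passive (RC-like) response rather than the NEURON-simulated traces, so the printed values are not representative of the model neuron.

```python
import numpy as np

def impedance_measures(i_t, v_t, dt, fmax=15.0):
    """Compute f_R, |Z|_max, Q and total inductive phase from a chirp current
    i_t and the recorded voltage response v_t (sampled at interval dt)."""
    n = len(i_t)
    freqs = np.fft.rfftfreq(n, d=dt)
    Z = np.fft.rfft(v_t) / np.fft.rfft(i_t)            # complex impedance profile
    band = (freqs > 0.5) & (freqs <= fmax)
    Zmag, phase = np.abs(Z), np.arctan2(Z.imag, Z.real)
    f_R = freqs[band][np.argmax(Zmag[band])]           # resonance frequency
    Zmax = Zmag[band].max()                            # |Z|_max
    Q = Zmax / Zmag[np.argmin(np.abs(freqs - 0.5))]    # strength of resonance
    df = freqs[1] - freqs[0]
    phi_L = phase[band & (phase > 0)].sum() * df       # total inductive phase (rad.Hz)
    return f_R, Zmax, Q, phi_L

# Synthetic example: chirp current (0-15 Hz in 15 s, 50 pA amplitude) driving a
# passive RC compartment (R = 100 MOhm, tau = 50 ms); a purely passive response
# shows no resonance, so f_R sits near the low end of the band and phi_L is ~0.
dt = 1e-3
t = np.arange(0, 15, dt)
i_t = 50e-12 * np.sin(np.pi * t ** 2)                  # linear chirp, 0-15 Hz over 15 s
v_t = np.zeros_like(i_t)
for k in range(1, len(t)):
    v_t[k] = v_t[k - 1] + dt / 0.05 * (i_t[k] * 100e6 - v_t[k - 1])
print(impedance_measures(i_t, v_t, dt))
```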
Synapses and normalization of somatic unitary synaptic potential
The model contained excitatory synapses with colocalized NMDARs and AMPARs, with an NMDAR-to-AMPAR ratio of 1.5 and with 80 such synapses randomly dispersed across the apical dendritic arbor (Basak & Narayanan, 2018). These 80 synapses correspond to the number of active synapses when the animal traverses the place field of the postsynaptic neuron. The number of synapses was based on sensitivity analyses spanning different synapse numbers (Basak & Narayanan, 2018). Broadly, neural firing rate was directly related to the number of synapses, but a depolarization-induced block resulted if the number of synapses increased beyond a certain threshold (Basak & Narayanan, 2018). The current through the NMDAR was divided into currents carried by three ions: Na+, K+ and Ca2+. The dependence of the current carried by each of these ions on voltage and time was modeled using the GHK formulation (Anirudhan & Narayanan, 2015; Ashhad & Narayanan, 2013; Basak & Narayanan, 2018), where P̄_NMDAR defined the maximum permeability of NMDA receptors. The relative permeability ratios were set to P_Ca = 10.6, P_Na = 1 and P_K = 1.
The NMDAR gating variable s(t) was modeled as a normalized difference of exponentials, s(t) = a (exp(−t/τ_d) − exp(−t/τ_r)), where a is a normalization constant such that 0 ≤ s(t) ≤ 1, τ_d is the decay constant and τ_r is the rise time, with τ_r = 5 ms and default τ_d = 50 ms (Ashhad & Narayanan, 2013).
The current through the AMPA receptor was mediated by two ions, Na+ and K+, and was also modeled with the GHK formulation (Eqs. (15)-(16)), with each ionic component scaled by P̄_AMPAR, the maximum permeability of AMPA receptors, the corresponding relative permeability (P_Na = 1, P_K = 1) and the gating variable s(t). Here, s(t) was modeled in a manner similar to that of the NMDAR, with τ_r = 2 ms and τ_d = 10 ms. To normalize the unitary EPSP values associated with each synapse, we ensured that attenuation along the dendritic cable did not affect the unitary somatic EPSP amplitude. Hence, the AMPAR permeabilities along the somato-apical trunk were tuned such that each synapse produced a unitary somatic response of ~0.2 mV irrespective of its location (Andrasfalvy & Magee, 2001; Magee & Cook, 2000).
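Since the synaptic current equations (Eqs. (8)-(16)) are not reproduced above, the sketch below shows a generic GHK current-density expression and a double-exponential gating variable of the kind described in the text; the permeability, ionic concentrations and temperature used here are illustrative assumptions rather than the model's exact parameters.

```python
import numpy as np

F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)
T = 307.0        # K (~34 degC, assumed)

def ghk_current(v, p, z, conc_in, conc_out):
    """Generic GHK current density (A/m^2) for one ionic species.
    v in volts, permeability p in m/s, concentrations in mol/m^3."""
    xi = z * v * F / (R * T)
    if abs(xi) < 1e-6:                       # v -> 0 limit to avoid 0/0
        return p * z * F * (conc_in - conc_out)
    return p * z * F * xi * (conc_in - conc_out * np.exp(-xi)) / (1.0 - np.exp(-xi))

def s_gate(t, tau_r=5e-3, tau_d=50e-3):
    """Double-exponential gating variable normalized to a peak of 1."""
    t_peak = (tau_d * tau_r / (tau_d - tau_r)) * np.log(tau_d / tau_r)
    a = 1.0 / (np.exp(-t_peak / tau_d) - np.exp(-t_peak / tau_r))
    return a * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

# Illustrative Na+ component at -65 mV, 5 ms after receptor activation
# (hypothetical permeability and Na+ concentrations)
v = -65e-3
i_na = ghk_current(v, p=1e-8, z=1, conc_in=18.0, conc_out=140.0) * s_gate(5e-3)
print(f"illustrative Na+ current density: {i_na:.3e} A/m^2")
```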
Place cell inputs and synaptic localization
The input to this neuron was fed through colocalized AMPAR-NMDAR synapses. As the virtual animal traversed the place field, the presynaptic neurons fired action potentials. Their firing rates were modeled in a stochastic manner, driven by a Gaussian-modulated cosinusoidal function mimicking place-cell inputs to the neuron (Basak & Narayanan, 2018; Seenivasan & Narayanan, 2020). The presynaptic firing drove the opening of the colocalized synaptic NMDARs and AMPARs, resulting in synaptic currents (Eqs. (8)-(16)) flowing into the model neuron. The Gaussian-modulated cosinusoidal function that governed the probability of occurrence of a presynaptic spike at each synapse in the neuron was computed as in Eq. (17) (Basak & Narayanan, 2018; Seenivasan & Narayanan, 2020), where T (5 s) defined the center of the place field, f_0 the frequency of the cosine (8 Hz), F_pre_max the maximal input firing rate, and σ the standard deviation of the Gaussian (1 s). In our analyses, the virtual animal was assumed to traverse a linear arena at constant velocity, implying the equivalence of time and space as the independent variable in Eq. (17). The input current resulting from synaptic activation produced postsynaptic action potentials and caused place-cell-like firing in the model neuron.
In introducing experience-dependent asymmetry in place-field firing (Harvey et al., 2009; Mehta et al., 1997, 2000, 2002), we replaced the symmetric Gaussian profile in Eq. (17) with a horizontally reflected Erlang distribution to construct an asymmetric place-field envelope (Seenivasan & Narayanan, 2020). In this scenario, the Erlang-modulated cosinusoidal function that governed the probability of occurrence of a presynaptic spike at each synapse in the neuron was computed as in Eq. (18), where the parameters λ (=5) and k (=25) governed the extent of asymmetry (Seenivasan & Narayanan, 2020).
Although each of the 80 synapses was driven by the Gaussian- or Erlang-modulated cosinusoidal functions for the probabilistic generation of its presynaptic spike train, the trains were independently generated, thereby ensuring that the input spikes were not temporally synchronous. Specifically, for a given synapse, at each integration time step (dt = 25 μs), a random number was drawn from a uniform distribution spanning (0,1). An event corresponding to a presynaptic spike for this synapse was generated if this random number was less than dt × F_pre(t) at the given time t. This process was independently repeated for each dt across each of the 80 synapses impinging on the postsynaptic neuron.
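The spike-generation rule described above amounts to a Bernoulli approximation of an inhomogeneous Poisson process. The sketch below implements that rule with a hypothetical Gaussian-modulated cosinusoidal rate, since the exact envelope/carrier combination of Eq. (17) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_pre(t, f_max=80.0, T=5.0, sigma=1.0, f0=8.0):
    """One plausible Gaussian-modulated cosinusoidal rate (Hz); the exact
    envelope/carrier combination of Eq. (17) is an assumption here."""
    envelope = f_max * np.exp(-((t - T) ** 2) / (2.0 * sigma ** 2))
    return envelope * 0.5 * (1.0 + np.cos(2.0 * np.pi * f0 * (t - T)))

def presynaptic_spikes(n_syn=80, t_stop=10.0, dt=25e-6):
    """Independently draw spikes for each synapse: a spike occurs at time t
    whenever a uniform random number is below dt * f_pre(t)."""
    t = np.arange(0.0, t_stop, dt)
    rates = f_pre(t)
    spike_trains = []
    for _ in range(n_syn):
        hits = rng.random(t.size) < dt * rates
        spike_trains.append(t[hits])
    return spike_trains

trains = presynaptic_spikes()
print("mean spike count per synapse:", np.mean([len(s) for s in trains]))
```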
Trial-to-trial variability in place-cell responses
For simulating trial-to-trial variability in the place-cell firing profile with different levels of variability, noise was introduced into the presynaptic firing-rate profile (Eq. (17)) associated with each synapse. Simulations were performed with Gaussian white noise (GWN), which was introduced either additively (AGWN; Eq. (19)) or multiplicatively (MGWN; Eq. (20)). In Eqs. (19)-(20), [F]_+ = max(F, 0) represents rectification to avoid negative firing rates, and ξ(t) defined a GWN with zero mean and standard deviation σ_noise. As the rectification governs the overall firing rate and not the noise term, this formulation allows for both negative and positive modulation of F_pre(t). The value of σ_noise was increased to enhance the level of trial-to-trial variability, with F_pre(t) defined by a Gaussian (Eq. (17)) or an Erlang envelope (Eq. (18)) to assess the impact of trial-to-trial variability on symmetric or asymmetric place-field firing profiles, respectively. As AGWN (Eq. (19)) introduced trial-to-trial variability across stimulus locations irrespective of the strength of afferent synaptic activity, this form of variability is activity-independent. On the other hand, the level of trial-to-trial variability introduced by MGWN is progressively higher with increasing strength of afferent synaptic activity (Eq. (20)), thereby manifesting as activity-dependent trial-to-trial variability.
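Because Eqs. (19)-(20) themselves are not shown above, the following sketch implements one plausible reading of the additive and multiplicative GWN formulations described in the text (zero-mean noise with rectification through [.]_+); the exact scaling of the multiplicative term is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def agwn_rate(f_pre, sigma_noise):
    """Additive GWN: F(t) = [f_pre(t) + xi(t)]_+  (activity-independent variability)."""
    xi = rng.normal(0.0, sigma_noise, size=f_pre.shape)
    return np.maximum(f_pre + xi, 0.0)

def mgwn_rate(f_pre, sigma_noise):
    """Multiplicative GWN: F(t) = [f_pre(t) * (1 + xi(t))]_+  (activity-dependent)."""
    xi = rng.normal(0.0, sigma_noise, size=f_pre.shape)
    return np.maximum(f_pre * (1.0 + xi), 0.0)

# Illustration with a made-up Gaussian envelope as the noiseless rate
t = np.linspace(0.0, 10.0, 1001)
f_pre = 40.0 * np.exp(-((t - 5.0) ** 2) / 2.0)
print("AGWN std near peak vs. tail:",
      agwn_rate(f_pre, 4.0)[490:510].std(), agwn_rate(f_pre, 4.0)[:20].std())
print("MGWN std near peak vs. tail:",
      mgwn_rate(f_pre, 0.1)[490:510].std(), mgwn_rate(f_pre, 0.1)[:20].std())
```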
Neuronal voltage response during place-field traversal
Spikes were detected from the place-cell voltage response to afferent synaptic stimuli (Eqs. (17)-(20)) by setting a voltage threshold on the rising phase of the voltage trace. These spike timings were then converted to the firing rate of the place cell as a function of time, F(t), through convolution with a Gaussian kernel (σ = 200 ms). The maximum (F_max) and the full-width at half maximum (FWHM) of the place-cell firing profile were employed as relative measures of place-field tuning sharpness. Specifically, a high F_max and low FWHM (Table 2) were indicative of sharply tuned place-cell responses (Basak & Narayanan, 2018). We took this relative approach of using high F_max and low FWHM for assessing tuning sharpness to ensure that our comparisons of the model remained focused on synaptic and channel localization profiles. Specifically, we resorted to these relative metrics to circumvent heterogeneities in the spatial extent of place-cell populations, especially along the dorso-ventral axis (Kjelstrup et al., 2008; Strange, Witter, Lein, & Moser, 2014). Our experimental design involves assessing the responses of the model cell to a Gaussian- (Eq. (17)) or Erlang-modulated (Eq. (18)) cosinusoidal waveform with a fixed width. With the input distribution fixed, this design allowed us to focus specifically on the roles of the neuron's intrinsic properties and of synaptic localization on the output tuning profiles and spatial information transfer (Basak & Narayanan, 2018). As animals traverse the place field of a given hippocampal place cell, these neurons are known to produce characteristic sub-threshold voltage ramps (Harvey et al., 2009). To assess such ramps, we filtered the voltage traces using a 0.75-s-wide median filter, which removed the spikes and exposed the sub-threshold structure of the voltage response during place-field traversal. The maximum value of these ramps was taken as the peak ramp voltage (V_ramp). Since the firing rate of the presynaptic neurons was modulated by a sinusoid of theta frequency (8 Hz, Eqs. (17)-(18)), we analyzed whether the postsynaptic voltage traces reflected this temporal modulation. The voltage trace at the soma was filtered using a 50-ms-wide median filter, to eliminate spikes but retain theta-frequency temporal modulation, and the Fourier spectrum of the filtered signal was computed. The power at 8 Hz of this spectrum represented theta power (Basak & Narayanan, 2018; Seenivasan & Narayanan, 2020).
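The conversion from spike times to a smooth firing-rate profile, and the two tuning-sharpness measures, can be sketched as follows; the 200-ms kernel width follows the text, whereas the spike times used in the example are arbitrary.

```python
import numpy as np

def rate_from_spikes(spike_times, t, sigma=0.2):
    """Instantaneous firing rate (Hz) by convolving spikes with a Gaussian kernel."""
    rate = np.zeros_like(t)
    for ts in spike_times:
        rate += np.exp(-((t - ts) ** 2) / (2.0 * sigma ** 2))
    return rate / (sigma * np.sqrt(2.0 * np.pi))

def fmax_and_fwhm(rate, t):
    """Peak firing rate and full width at half maximum (assumes a unimodal profile)."""
    f_max = rate.max()
    above = t[rate >= 0.5 * f_max]
    return f_max, (above[-1] - above[0]) if above.size else 0.0

t = np.arange(0.0, 10.0, 1e-3)
spikes = np.sort(np.random.default_rng(3).normal(5.0, 0.8, size=200))  # arbitrary spike times
rate = rate_from_spikes(spikes, t)
f_max, fwhm = fmax_and_fwhm(rate, t)
print(f"F_max = {f_max:.1f} Hz, FWHM = {fwhm:.2f} s")
```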
Spatial information transfer within a place field: Mutual information metrics
To quantify the information transmitted through the firing pattern of a place cell, we employed two sets of information metrics. The first set involved the computation of mutual information (MI), with space within the place field considered as the stimulus and the neuronal firing-rate considered the response. The aforementioned equivalence of time and space as the independent variable in Eqs. (17)-(20) allowed us to compute spatial information transfer from the firing rate response.
To obtain location-dependent spatial information transfer, we computed mutual information in a piece-wise manner at N_loc = 20 different locations from the instantaneous firing-rate profiles obtained across 30 different trials. To compute MI at these 20 locations, each location was subdivided into 4 spatial bins, and the associated firing-rate response was quantized into 20 bins. Mutual information between the spatial stimulus (S) and firing-rate response (F) was calculated at each location as I_i(F; S) = H_i(F) − H_i(F|S) (Eq. (21)), where I_i(F; S) denotes the mutual information between the response and the spatial stimulus at the i-th location (i = 1 … N_loc), and F defines the firing rate for S. The response entropy H_i(F) was calculated as H_i(F) = −Σ_j p_i(F_j) log₂ p_i(F_j) (Eq. (22)), where p_i(F_j) represents the probability of the firing rate lying in the j-th response bin within the i-th spatial location, and was computed as p_i(F_j) = Σ_k p_i(F_j|S_k) p_i(S_k) (Eq. (23)). In Eq. (23), p_i(F_j|S_k) represents the conditional probability that the response was in the j-th firing-rate bin, given that the stimulus was in the k-th spatial bin within the i-th spatial location. p_i(S_k) denotes the probability that the virtual animal was in the k-th spatial bin within the i-th spatial location, which was considered to follow a uniform distribution given the constant-velocity assumption.
The noise entropy term H_i(F|S) in Eq. (21) was computed as H_i(F|S) = Σ_k p_i(S_k) H_i(F|S_k) (Eq. (24)), where H_i(F|S_k) represents the conditional noise entropy for the k-th spatial bin within the i-th spatial location, calculated as H_i(F|S_k) = −Σ_j p_i(F_j|S_k) log₂ p_i(F_j|S_k) (Eq. (25)), with p_i(F_j|S_k) denoting the conditional probability of the firing rate being in the j-th bin given that the stimulus was in the k-th spatial bin within the i-th location.
Together, this methodology of computing MI at several locations along the place field allowed us to assess spatial information transfer from all possible neural responses at each specific location. Note that I_i(F; S), the mutual information computed for the i-th spatial location, is different from I(F; S), the location-independent mutual information that could be computed for the entire place field (spanning all firing rates and all spatial locations within the place field). We employed the location-dependent formulation I_i(F; S) to compare it with stimulus-specific information metrics.
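A compact sketch of the piece-wise MI computation of Eqs. (21)-(25), using histogram-based probability estimates; the binning (4 spatial bins and 20 firing-rate bins per location) follows the text, while the firing-rate data are synthetic.

```python
import numpy as np

def location_mi(rates, n_rate_bins=20):
    """Mutual information (bits) between spatial bin and firing rate at one location.

    `rates` has shape (n_trials, n_spatial_bins): the firing rate observed on each
    trial in each of the spatial bins that make up this location.
    """
    n_trials, n_sbins = rates.shape
    edges = np.linspace(rates.min(), rates.max() + 1e-9, n_rate_bins + 1)
    p_s = np.full(n_sbins, 1.0 / n_sbins)           # uniform occupancy (constant velocity)

    # conditional distributions p(F_j | S_k), one histogram per spatial bin
    p_f_given_s = np.stack([
        np.histogram(rates[:, k], bins=edges)[0] / n_trials for k in range(n_sbins)
    ])                                               # shape (n_sbins, n_rate_bins)
    p_f = p_s @ p_f_given_s                          # marginal p(F_j), Eq. (23)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    h_f = entropy(p_f)                               # response entropy, Eq. (22)
    h_f_given_s = sum(p_s[k] * entropy(p_f_given_s[k]) for k in range(n_sbins))
    return h_f - h_f_given_s                         # I_i(F; S), Eq. (21)

# Synthetic example: 30 trials, 4 spatial bins with rates that increase across bins
rng = np.random.default_rng(4)
rates = rng.normal(loc=[10.0, 20.0, 30.0, 40.0], scale=3.0, size=(30, 4))
print(f"location MI = {location_mi(rates):.2f} bits")
```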
Spatial information transfer within a place field: Stimulus-specific information metrics
The second set of metrics that we used to compute spatial information transfer was derived from the stimulus-specific information (SSI), obtained from 30 different trials of the entire traversal spanning all spatial locations. SSI has been proposed as a measure of the information in a neuronal response about a particular stimulus, and conveys the average specific information spanning all responses to that stimulus. To calculate the SSI, the spatial stimulus and the firing-rate response were segregated into 80 and 40 bins, respectively. The SSI was calculated as SSI(S_i) = Σ_j p(F_j|S_i) I_sp(F_j) (Eq. (26)) (Butts, 2003; Butts & Goldman, 2006; Montgomery & Wehr, 2010), where p(F_j|S_i) is the conditional probability of the firing rate being in the j-th response bin given that the i-th stimulus location was presented, and the specific information I_sp(F_j) (DeWeese & Meister, 1999) was computed as I_sp(F_j) = H(S) − H(S|F_j) (Eq. (27)). The first term in Eq. (27), H(S) = −Σ_i p(S_i) log₂ p(S_i), represents the entropy of the stimulus ensemble, and the second term, H(S|F_j) = −Σ_i p(S_i|F_j) log₂ p(S_i|F_j), represents the entropy of the stimulus distribution conditional on a particular firing-rate response (Butts, 2003; Butts & Goldman, 2006; Montgomery & Wehr, 2010). Here, p(F_j) is the probability of the firing rate being in the j-th response bin and p(S_i|F_j) defines the conditional probability of the stimulus being in the i-th bin given that the firing rate was in the j-th response bin. Thus, specific information defines the reduction in uncertainty about the spatial location gained by a particular firing-rate response (F_j), and SSI constitutes the average reduction of uncertainty gained from all firing-rate responses given a particular spatial location (S_i). As I_sp(F_j) equals I(S; F_j), the information gained from observing a specific output F_j about the range of possible spatial inputs S, the MI across the entire place field, I(S; F), would be defined as I(S; F) = Σ_j p(F_j) I(S; F_j) (DeWeese & Meister, 1999), where p(F_j) represents the probability of the firing rate lying in the j-th response bin across the entire place field. As our focus in this study was on information metrics that are location-dependent (stimulus-specific), we did not employ I(S; F), but have included the definition to illustrate the relationships and differences between I_sp(F_j), SSI(S_i), I_i(F; S), and I(S; F).
Bias in the I_sp(F_j) calculation was corrected using the Treves-Panzeri correction procedure (Bezzi et al., 2002; Montgomery & Wehr, 2010; Panzeri, Senatore, Montemurro, & Petersen, 2007; Panzeri & Treves, 1996; Treves & Panzeri, 1995), which subtracts the finite-sampling bias of the information estimate, with N_S representing the total number of stimulus bins, N_R denoting the total number of response bins and N_SRP depicting the total number of stimulus-response pairs.
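The authors' exact correction expression is not reproduced above; a minimal sketch of the commonly used first-order Treves-Panzeri bias term, stated here as an assumption rather than the authors' precise formula, is:

```python
import numpy as np

def treves_panzeri_correction(i_naive, n_s, n_r, n_srp):
    """Subtract the first-order finite-sampling bias from a naive
    (plug-in) information estimate.

    i_naive : naive information estimate in bits
    n_s     : number of stimulus bins (N_S)
    n_r     : number of response bins (N_R)
    n_srp   : number of stimulus-response pairs (N_SRP)
    """
    bias = (n_s - 1) * (n_r - 1) / (2.0 * n_srp * np.log(2.0))
    return i_naive - bias
```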
Spatial information transfer as a function of space within a place field was found to be bimodal or trimodal in several scenarios. To quantify and compare information transfer across models and across different levels of trial-to-trial variability, several MI-based and SSI-based information metrics were developed (listed in Table 3).
Exploring parametric dependencies in spatial information transfer
A single hand-tuned model does not account for the numerous biophysical heterogeneities inherent to neural structures, and results obtained with a single model could be biased by the specific choice of parametric values. A simple methodology to account for biophysical heterogeneities while retaining the signature electrophysiological properties of the specific neuronal subtype under consideration is to build a population of models. We employed a multi-parametric multi-objective stochastic search (MPMOSS) algorithm to arrive at a population of models that accounted for biophysical heterogeneities (by allowing multiple parameters to span their experimental ranges, shown in Table 1) and matched bounds on several electrophysiological measurements (Table 2). Since this procedure involves uniform random sampling of parameter values, it is unbiased and provides an effective strategy to search for interdependencies among parametric combinations that yield signature electrophysiological characteristics.
To match physiological outcomes, these models were then validated on the basis of the sharpness of their place-cell firing properties (F_max > 56 Hz and FWHM < 2.5 s; 2 measurements), six signature intraneuronal functional maps (Basak & Narayanan, 2018; Narayanan & Johnston, 2012) of backpropagating action potential amplitude (bAP), input resistance (R_in), resonance frequency (f_R), maximum impedance amplitude (|Z|_max), strength of resonance (Q) and total inductive phase (Φ_L), each validated at three locations (soma, ~150 μm and ~300 μm from the soma on the apical trunk; 18 measurements in total), and the firing rate at the soma resulting from step current injections of 100 pA, 150 pA, 200 pA and 250 pA (4 measurements). Only the models that matched the bounds on these 24 measurements (Table 2) were declared valid. To explore interdependencies among the parameters that resulted in valid models, which showed sharp place-field tuning and manifested signature intrinsic electrophysiological properties, pairwise Pearson's correlation coefficients spanning the parameters of all valid models were computed. To assess the impact of individual channels on spatial information transfer, we removed each channel individually from the model (by setting the conductance value associated with that channel to zero) and assessed how the information measures changed upon removal of that ion channel.
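The overall sample-validate workflow of the MPMOSS procedure can be outlined as below. The parameter names, ranges and the placeholder measurement/validation functions are illustrative stand-ins, not the authors' code; the real measurements come from NEURON simulations and the bounds in Tables 1-2:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical parameter ranges (stand-ins for Table 1 entries)
PARAM_RANGES = {
    "g_NaF": (10e-3, 20e-3),
    "g_KDR": (5e-4, 20e-4),
    "g_KA":  (1e-3, 5e-3),
    "g_h":   (10e-6, 40e-6),
}

def sample_model():
    """Draw one model by uniform random sampling of each parameter."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def measure(model):
    """Placeholder for running the NEURON simulations and returning the
    measurements (intrinsic maps, f-I points, F_max, FWHM)."""
    return {"F_max": rng.uniform(30, 80), "FWHM": rng.uniform(1.5, 3.5)}

def is_valid(meas):
    """Placeholder validation against Table 2 bounds (only the two
    tuning-sharpness bounds are shown here)."""
    return meas["F_max"] > 56.0 and meas["FWHM"] < 2.5

valid_models = []
for _ in range(12000):
    model = sample_model()
    if is_valid(measure(model)):
        valid_models.append(model)
print(f"{len(valid_models)} valid models")
```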
Computational details
All simulations were performed using custom-written software in the NEURON simulation environment (Carnevale & Hines, 2006) at 34 °C, with an integration time step of 25 μs. Unless otherwise stated, all simulations were performed with a resting potential of −65 mV. Analyses were performed using custom-written software in the Igor Pro programming environment (Wavemetrics). Statistical tests were performed using the statistical computing language R (www.R-project.org), and p values are reported alongside the results, in the respective figure panels, or in the associated captions. In qualitatively defining weak and strong correlations, we followed the nomenclature introduced by Evans (1996). To avoid misinterpretations that could arise from applying summary statistics to heterogeneous populations (Marder & Taylor, 2011; Rathour & Narayanan, 2019), all data points from the population of neural models are depicted as beeswarm or scatter plots.
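For illustration, the stated simulation settings map onto NEURON's Python interface roughly as follows (a minimal sketch; the authors used custom code, and the run duration here is arbitrary):

```python
from neuron import h

h.load_file("stdrun.hoc")   # standard run system

h.celsius = 34.0            # simulation temperature (deg C)
h.dt = 0.025                # integration time step: 25 us, expressed in ms
h.v_init = -65.0            # resting/initialization potential (mV)

h.finitialize(h.v_init)
h.continuerun(100.0)        # run for 100 ms (duration is illustrative)
```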
Results
We built a morphologically realistic, conductance-based model of a CA1 pyramidal cell, incorporating electrophysiologically characterized passive and active mechanisms (Fig. 1A). The model contained 10 distinct biophysically constrained ion channel subtypes that were distributed along the somatodendritic arbor to match experimental findings (Fig. 1B). We hand-tuned the base model parameters (Table 1) to match several intrinsic somatodendritic electrophysiological properties (Table 2) of rat CA1 pyramidal neurons (Fig. 1C-H). We tuned the strength of synaptic connections such that the somatic unitary AMPAR EPSP was ~0.2 mV (Fig. 1I), irrespective of synaptic location within the stratum radiatum of the CA1 pyramidal neuron (spanning ~350 μm of apical dendrites from the soma).
Ion-channel degeneracy in the concomitant emergence of sharply tuned spatial firing profile and intrinsic physiological properties of the neuron
As a first step in evaluating the impact of heterogeneous ion channel combinations on sharp tuning of place-cell responses, we generated 12,000 random models by independent selection of parameter values from their respective uniform distributions (Table 1). We randomly dispersed 80 distinct synaptic locations (of the 428 possible locations) across the stratum radiatum where presynaptic afferent inputs impinged. These 80 synapses received independent presynaptic inputs governed by Eq. (17), and the somatic voltage response of the neuron was recorded to compute the place-field firing rate profile.
We validated the firing-rate profiles of these randomly generated neuronal models for sharpness of place-field tuning by placing thresholds on the maximum firing rate within the place field (>56 Hz) and the width of the firing-rate profile (<2.5 s), and found 1024 of the 12,000 models (~8.5%) to satisfy these constraints (Fig. S1). We picked five models among these 1024 with similar place-field firing profiles, reflected as similar values of F_max and FWHM, and asked if similar place-field tuning required similar parametric combinations (Fig. S1A-B). Consistent with prior findings from models endowed with fewer ion channels (Basak & Narayanan, 2018), we found evidence for ion-channel degeneracy in the expression of sharp place-field tuning (Fig. S1C). Across all 1024 sharply tuned models, whose F_max and FWHM are depicted in Fig. S1D-E, the parameters spanned their entire valid ranges, pointing to the absence of any parametric clustering in arriving at sharp spatial tuning (Fig. S1F). We explored pairwise correlations of the parameters underlying these place-cell models with sharply tuned firing profiles, and found most of the correlation coefficients to be weak (Fig. S1F).
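The two tuning-sharpness measures used for this screening can be computed as in the following sketch (assuming a trial-averaged firing-rate profile on a uniform time base; thresholds follow the values quoted above):

```python
import numpy as np

def tuning_sharpness(rate, t):
    """Return (F_max, FWHM) of a place-field firing-rate profile.

    rate : 1D array of firing rate (Hz) as a function of time
    t    : 1D array of time points (s), uniformly sampled
    """
    f_max = rate.max()
    half = f_max / 2.0
    above = np.where(rate >= half)[0]
    fwhm = t[above[-1]] - t[above[0]]   # temporal width at half maximum
    return f_max, fwhm

def is_sharply_tuned(rate, t, f_max_min=56.0, fwhm_max=2.5):
    f_max, fwhm = tuning_sharpness(rate, t)
    return (f_max > f_max_min) and (fwhm < fwhm_max)
```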
Whereas place-field tuning constitutes one aspect of CA1 pyramidal neuron physiology, well-characterized signature somatodendritic intrinsic properties form their defining electrophysiological attributes. To match our model population with these signatures, we validated the 1024 sharply tuned models against 22 distinct electrophysiological measurements (Table 2): input resistance, backpropagating action potential amplitude, maximal impedance amplitude, resonance frequency, resonance strength and total inductive phase, each at 3 different somatodendritic locations, and the action potential firing rate in response to somatic pulse-current injections at 4 different current values. Of the total 12,000 models generated, we found 127 (~1.06%) that matched all 24 measurement bounds (the 22 intrinsic measurements together with the 2 tuning-sharpness measurements; Table 2) and were declared valid. We picked five models among these 127 valid models, with similar place-field firing profiles (Fig. S2A) and similar intrinsic measurements across the somatodendritic axis. We assessed the parameters associated with these five models and found evidence for ion-channel degeneracy in the concomitant expression of sharp place-field tuning and signature intrinsic properties (Fig. S2G). Across all 127 models that were intrinsically valid (Fig. 2A-G) and sharply tuned, the parameters spanned their entire valid ranges, pointing to the absence of any parametric clustering in these models (Fig. 3). We explored pairwise correlations of the parameters underlying these models, and found most of the correlation coefficients to be weak (Fig. 3).
Together, the unbiased stochastic search procedure provided us with a population of place cell models that exhibited several signature electrophysiological properties, and manifested sharp place-field tuning in their firing rate profiles. We employed this population of place cell models for assessing the impact of several biophysical and physiological characteristics on spatial information transfer within the place field.
Heterogeneities in the regulation of spatial information transfer by trial-to-trial variability in place-cell responses
The firing profile of a place cell within its place field represents a spatial tuning curve. For instance, in a symmetric firing profile (e.g., Fig. 4A-B), the spatial location at the center of the place-field elicits the peak firing response and the response progressively reduces for spatial stimuli on either side of this peak. Within the place field of this neuron, does maximal spatial information transfer occur at the peak of this tuning curve or at the high-slope regions of the tuning curve? Prior studies in other brain regions have shown that the answer to this question depends on several factors, with trial-to-trial variability playing a prominent role in regulating the relationship between the tuning curve and information transfer (Butts & Goldman, 2006;Montgomery & Wehr, 2010). To address this question for spatial information within the place field of individual place cells, we incorporated trial-to-trial variability in neural responses by introducing noise into the afferent input rate (Eq. (19)).
The introduction of input noise as additive Gaussian white noise (AGWN) manifested as trial-to-trial variability in the firing-rate responses, enhanced the peak firing rate (Fig. 4C) and reduced the width (Fig. 4D) of place-cell responses. Across all 127 valid models, a progressive increase in trial-to-trial variability, introduced by increasing σ_noise (Eq. (19)), resulted in a progressive increase in the peak firing rate (Fig. 4C), and progressive reductions in the FWHM (Fig. 4D), theta power (Fig. 4E-F) and the voltage ramp (Fig. 4G-H) of the place-field response profile. We performed 30 trial simulations for each of the 127 valid place-cell models, obtained their firing-rate profiles for 3 different levels of noise (Fig. 5A-C; designated as low, medium and high) and computed stimulus-specific information (SSI; Fig. 5D-F) and mutual information (MI; Fig. 5G-I) for all 127 models.
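A minimal sketch of imposing such additive, location-independent variability on a Gaussian place-field input rate is shown below; because Eqs. (17) and (19) are not reproduced here, the functional form and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_input_rate(t, f_peak=40.0, t_center=5.0, sigma=1.0):
    """Illustrative Gaussian place-field modulation of the afferent rate (Hz)."""
    return f_peak * np.exp(-((t - t_center) ** 2) / (2.0 * sigma ** 2))

def add_agwn(rate, sigma_noise_sq):
    """Additive Gaussian white noise with variance sigma_noise_sq (Hz^2),
    independent of location and of the instantaneous rate."""
    noisy = rate + rng.normal(0.0, np.sqrt(sigma_noise_sq), size=rate.shape)
    return np.clip(noisy, 0.0, None)   # rates cannot be negative

t = np.linspace(0.0, 10.0, 10001)
rate_clean = gaussian_input_rate(t)
rate_noisy = add_agwn(rate_clean, sigma_noise_sq=5e-3)
```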
We noted marked heterogeneity in spatial information, assessed with the SSI and MI profiles, across models (Fig. 5D-I). Importantly, at low levels of trial-to-trial variability, the SSI (Fig. 5D) and the MI (Fig. 5G) showed maximal spatial information transfer at the high-slope locations of the corresponding spatial tuning curves (Fig. 5A). Consequently, both the SSI and the MI profiles were bimodal when low levels of trial-to-trial variability were introduced, although the values of SSI at high-firing locations were higher than the MI values at these locations. With increased trial-to-trial variability, introduced as AGWN, the out-of-field firing rates increased. Progressively enhancing trial-to-trial variability by increasing σ_noise resulted in a marked reduction in spatial information across models, while still manifesting heterogeneity in spatial information transfer across the model population (Fig. 5E-F; Fig. 5H-I). Whereas the MI profile maintained bimodality despite the reduction in transferred information at higher levels of trial-to-trial variability (Fig. 5H-I), there was a progressive transition from a bimodal (Fig. 5D) to a trimodal (Fig. 5E-F) distribution of the SSI profiles. The transition in the SSI profile was a consequence of the suppression of spatial information transfer at the high-slope locations of the tuning curve, with relatively small changes to spatial information transfer at the high-firing locations (Fig. 5D-F).
To further assess this transition in the SSI profile with enhanced trial-to-trial variability, we increased σ_noise to larger values and computed the values of the SSI at the high-slope locations (SSI_slope, the average of the two peak SSI values, computed for the symmetric firing profile; Fig. 6A) and at the peak-firing location (SSI_peak; Fig. 6A). We computed the ratio SSI_peak/SSI_slope and plotted this as a function of σ_noise (Fig. 6A). A value of this ratio below unity indicates that maximal stimulus-specific spatial information was transferred at the high-slope regions, whereas a value above unity reflects maximal SSI at the peak-firing location. Whereas SSI_peak/SSI_slope was less than unity for low values of σ_noise across all models (Fig. 5D, Fig. 6A), two subpopulations of models emerged at higher values of σ_noise. In one subpopulation (N = 87), SSI_peak/SSI_slope remained lower than unity even with higher levels of trial-to-trial variability (teal and orange plots in Fig. 6A, bottom panel; example SSI profiles in Fig. 6B); in a second, smaller subpopulation (N = 27), this ratio was less than unity for low levels of trial-to-trial variability but transitioned to values higher than unity for higher levels of trial-to-trial variability (black and purple plots in Fig. 6A, bottom panel; example SSI profiles in Fig. 6C). Thus, whereas a large proportion of models transferred maximal spatial information at the high-slope locations irrespective of the level of trial-to-trial variability, a subpopulation of models switched to transferring maximal information at the peak-firing location with higher levels of trial-to-trial variability.
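The SSI_peak/SSI_slope ratio can be extracted from an SSI profile and its corresponding symmetric firing-rate profile as in this sketch (array names are assumptions):

```python
import numpy as np

def ssi_peak_to_slope_ratio(ssi, rate, t):
    """Compute SSI_peak / SSI_slope from an SSI profile and the
    corresponding (symmetric) firing-rate profile.

    ssi, rate : 1D arrays over the same uniformly sampled time base t
    """
    # SSI at the location of peak firing
    i_peak = np.argmax(rate)
    ssi_peak = ssi[i_peak]

    # SSI at the two locations of maximal |dF/dt|, one on each side of the peak
    slope = np.gradient(rate, t)
    i_rise = np.argmax(np.abs(slope[:i_peak]))            # before the peak
    i_fall = i_peak + np.argmax(np.abs(slope[i_peak:]))   # after the peak
    ssi_slope = 0.5 * (ssi[i_rise] + ssi[i_fall])

    return ssi_peak / ssi_slope
```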
We found that there were no significant differences in the peak firing rate or the width of the place-field firing profiles of models within the two subpopulations, namely those showing higher SSI at high-slope vs. high-firing locations with high levels of trial-to-trial variability (Fig. 6D). Were there systematic differences in the parameters that defined models within these two subpopulations? To answer this question, we performed principal component analysis (PCA) on the parameters that governed the models within the two subpopulations (Fig. 6E-H). We asked if there were distinct clusters representative of the two subpopulations in the reduced-dimensional space, which would point to structured parametric differences between these populations. We found that the three principal dimensions explained merely 24% of the total variance, and there was considerable overlap in the coefficients associated with the two subpopulations, suggesting the absence of systematic parametric differences between them (Fig. 6E-H).
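A sketch of the PCA-based comparison of the two subpopulations is given below (using scikit-learn; standardization of parameters with disparate units is our assumption, not a stated step in the paper):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def compare_subpopulations(params_a, params_b, n_components=3):
    """PCA over the pooled parameters of two model subpopulations.

    params_a, params_b : arrays of shape (n_models, n_parameters)
    Returns explained-variance ratios and the projections of each group.
    """
    pooled = np.vstack([params_a, params_b])
    z = StandardScaler().fit_transform(pooled)   # normalize disparate units
    pca = PCA(n_components=n_components).fit(z)
    proj = pca.transform(z)
    return (pca.explained_variance_ratio_,
            proj[: len(params_a)],
            proj[len(params_a):])
```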
We developed 12 distinct profile-specific metrics for quantifying the SSI (Fig. 7A) and MI (Fig. 7H) profiles of the 127 models at the three levels of noise. These quantitative metrics confirmed the considerable heterogeneities in spatial information transfer across the model population (Fig. 7). These results showed that, across models, the transferred information reduced with increases in trial-to-trial variability, with symmetry in spatial information transfer at the two high-slope regions (Fig. 7B-C, Fig. 7I-J). These quantitative metrics also corroborated the emergence of the two subpopulations (Fig. 6) at high values of σ_noise; specifically, the value of SSI_dip (Fig. 7F) was greater than zero in a small subpopulation of models, indicating that these models transferred maximal information at the peak-firing location rather than at the high-slope locations (Fig. 7A). The value of MI_dip (Fig. 7M), however, was always negative across all measured values of σ_noise.
Spatial information transfer in neurons with multiple presynaptic place-field inputs onto the CA1 pyramidal neuron with white or pink noise
The formulations in Eqs. (17)-(18) for presynaptic spike-train generation within a single place field of the postsynaptic neuron implemented probabilistic activation of the presynaptic neurons within a single postsynaptic place field. These formulations did not explicitly account for different presynaptic neurons, each endowed with heterogeneous place-field locations and differential synaptic weights in connecting to the postsynaptic neuron (Bittner et al., 2015; Grienberger et al., 2017). However, the summation of the probabilities of firing of each presynaptic neuron, weighted by their respective synaptic strengths (which mimics a Gaussian centered at the place-field center of the postsynaptic neuron), would result in a probability distribution that is approximated by a Gaussian with an appropriate scaling factor and standard deviation (Seenivasan & Narayanan, 2020; Fig. 8A). Thus, the probabilistic formulation of presynaptic firing should be interpreted as that of a population of presynaptic neurons, each with differential synaptic strengths and heterogeneous place-field locations, converging on the postsynaptic structure (Seenivasan & Narayanan, 2020).
The equivalence of our probabilistic formulation of synaptic inputs within a single place field to heterogeneous presynaptic inputs from multiple CA3 pyramidal neurons (with appropriate synaptic weights) is exact in a single-compartmental model (Seenivasan & Narayanan, 2020). In a multicompartmental model, however, owing to the spatial distribution of synapses and the presence of dendritic nonlinearities, this equivalence could be hampered.
To address this, we simulated spatially modulated spike trains from 15 different CA3 pyramidal neurons with heterogeneous place fields impinging on the postsynaptic neuron (Fig. 8A). Each of these 15 presynaptic neurons made 80 randomly dispersed synaptic contacts (AMPAR-NMDAR synapses) on the stratum radiatum of the CA1 pyramidal neuron, for a total of 80 × 15 = 1200 synapses. Although we had incorporated Gaussian white noise in our simulations to model trial-to-trial variability, biological noise typically manifests 1/f characteristics (pink noise) in the frequency domain (Buzsaki, 2006; Gilden, 2001; Gisiger, 2001; Hausdorff & Peng, 1996; Ward, 2001). To account for this, we also modeled trial-to-trial variability as pink noise, generated as a low-pass filtered version of the Gaussian white noise. Although there were minor differences in the exact values of the firing-rate profiles (Fig. 8H-J) and the information transfer profiles (Fig. 8K-L), our conclusions about the SSI and MI profiles were broadly similar with white or pink noise (Fig. 8).
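One way to generate such a noise trace, following the description of pink noise as a low-pass filtered version of Gaussian white noise (the filter type and cutoff frequency here are assumptions, not values from the paper), is:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)

def lowpass_filtered_noise(n_samples, dt, sigma_noise_sq, f_cut=100.0, order=1):
    """Gaussian white noise passed through a low-pass Butterworth filter,
    yielding a noise trace with power concentrated at low frequencies.

    n_samples      : number of time samples
    dt             : sampling interval (s)
    sigma_noise_sq : variance of the underlying white noise (Hz^2)
    f_cut          : filter cutoff frequency (Hz); an illustrative value
    """
    white = rng.normal(0.0, np.sqrt(sigma_noise_sq), size=n_samples)
    b, a = butter(order, f_cut, btype="low", fs=1.0 / dt)
    return filtfilt(b, a, white)

noise = lowpass_filtered_noise(n_samples=10001, dt=1e-3, sigma_noise_sq=5e-3)
```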
Degeneracy in the emergence of place cells manifesting similar rate-based spatial information transfer profiles
We computed the SSI and MI profiles for the five similar models shown in Fig. S2, and found that they possessed similar SSI and MI metrics as well (Table S1). The parametric values of these similar models, however, were distributed over the entire span of their respective parametric ranges (Fig. S2G). These observations point to the expression of degeneracy in concomitantly achieving similar intrinsic properties and similar rate-based spatial information transfer in place cells.
In further exploring the dependencies of spatial information transfer on model parameters, we asked if any of the model parameter values would predict spatial information transfer at different levels of trial-to-trial variability. To answer this, we computed pairwise correlations between the 20 physiological measurements (3 somatodendritic measurements each of R_in, |Z|_max, f_R, Q, Φ_L and bAP; F_max and FWHM for place-field profiles in the absence of noise) that defined the 127 valid models and the 12 information transfer measurements (Table 3) that were obtained from the place-field responses of these models with low, medium and high levels of trial-to-trial variability. These pairwise correlations were mostly weak, indicating that none of these physiological measurements was a strong individual predictor of spatial information transfer.

Our outcomes thus far froze synaptic locations at one specific randomized localization and varied ion channel conductances to explore parametric dependencies of spatial information transfer. In another set of simulations, we varied the localization of the 80 distinct synapses along the dendritic arbor in the base model (Table 1; Fig. 1). Specifically, we randomly dispersed the 80 synapses across the apical dendritic arbor into 400 combinations of distinct locations, computed the firing-rate profile and the information transfer profiles, and plotted the associated measurements (Fig. S6). We found that the introduction of heterogeneities in synaptic localization profiles introduced heterogeneities in spatial firing profiles (Fig. S6A-B) and in spatial information transfer measured through SSI (Fig. S6C-H) or MI metrics (Fig. S6I-N). However, we also noted that spatial firing profiles endowed with similar firing rate and information transfer metrics could be obtained with distinct combinations of synaptic localization profiles. Together, these results demonstrated the ability of several disparate ion-channel parametric combinations and different synaptic localization profiles to elicit similar place-cell firing profiles endowed with similar information transfer profiles.
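The pairwise correlation analysis between the 20 physiological measurements and the 12 information metrics described above can be sketched as follows (array shapes are assumptions based on the counts quoted in the text):

```python
import numpy as np
from scipy.stats import pearsonr

def correlation_matrix(measurements, info_metrics):
    """Pairwise Pearson correlations between physiological measurements
    and information-transfer metrics across a model population.

    measurements : array (n_models, n_measurements), e.g. 127 x 20
    info_metrics : array (n_models, n_metrics), e.g. 127 x 12
    Returns (R, P): correlation coefficients and p values.
    """
    n_meas = measurements.shape[1]
    n_info = info_metrics.shape[1]
    R = np.zeros((n_meas, n_info))
    P = np.zeros((n_meas, n_info))
    for i in range(n_meas):
        for j in range(n_info):
            R[i, j], P[i, j] = pearsonr(measurements[:, i], info_metrics[:, j])
    return R, P
```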
Regulation of spatial information transfer by experience-dependent asymmetry in place-field response profiles
Our simulations thus far resulted in symmetric place-field firing profiles (e.g., Fig. 4B) with a symmetric subthreshold voltage ramp (e.g., Fig. 4G), consequent to the symmetric input structure defined by a Gaussian (Eq. (17)). However, electrophysiological lines of evidence from behavioral experiments point to an experience-dependent asymmetric expansion of hippocampal place fields in the direction opposite to the movement of the animal (Harvey et al., 2009; Mehta et al., 1997, 2000, 2002). What is the impact of such experience-dependent asymmetry on spatial information transfer within a single place field through the place-cell rate code? To address this, we first altered the input structure to a horizontally reflected Erlang distribution (Eq. (19)), which yielded an asymmetric place-field firing profile (Fig. S7A-B) (Seenivasan & Narayanan, 2020). Consistent with our observations with the symmetric place-field firing profile (Fig. 4), enhanced trial-to-trial variability resulted in an increase in F_max (Fig. S7C) accompanied by reductions in the FWHM (Fig. S7D), theta power (Fig. S7E-F) and subthreshold ramp voltage (Fig. S7G-H). The subthreshold voltage ramp profile was asymmetric (Fig. S7G), reflecting the asymmetric firing-rate profile (Seenivasan & Narayanan, 2020).
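A horizontally reflected Erlang envelope of the kind described above can be constructed as in the sketch below (shape, scale and peak-rate values are illustrative assumptions; only the qualitative slow-rise/steep-fall asymmetry matters here):

```python
import numpy as np
from scipy.stats import erlang

def asymmetric_input_rate(t, f_peak=40.0, k=4, scale=1.0, t_end=10.0):
    """Horizontally reflected Erlang envelope: the rate rises slowly and
    falls steeply toward the end of the traversal (t_end), mimicking
    experience-dependent asymmetric place-field expansion."""
    pdf = erlang.pdf(t_end - t, a=k, scale=scale)   # reflect about t_end
    return f_peak * pdf / pdf.max()

t = np.linspace(0.0, 10.0, 10001)
rate = asymmetric_input_rate(t)
```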
We computed the asymmetric firing-rate profiles for all valid models with low (Fig. 9A), medium (Fig. 9B) and high (Fig. 9C) levels of trial-to-trial variability introduced as AGWN to the input structure (Eq. (18)). We found the baseline and the peak firing rates to shift with increased σ_noise, manifesting heterogeneities across models in the population (Fig. 9A-C). Strikingly, the stimulus-specific information transfer profiles were relatively insensitive to the asymmetry in the firing-rate profile (Fig. 10G-H), even with increased trial-to-trial variability. With low levels of trial-to-trial variability, we observed that the highest information transfer occurred at the high-slope regions of the firing-rate profile, computed either through SSI (Fig. 10E) or MI (Fig. 10K). With increasing levels of trial-to-trial variability, in a manner similar to our findings with symmetric firing profiles (Figs. 6-7), a subpopulation of models switched to transferring maximal SSI at the peak of the firing-rate profile (Fig. 10E; high σ_noise; subpopulation with SSI_dip > 0), but no such transition occurred in the MI profiles (Fig. 10K). Pairwise correlations between model physiological measurements and information metrics were mostly weak, irrespective of the level of trial-to-trial variability (Figs. S8-S10). Together, these results showed that the introduction of asymmetry in the place-field firing profile introduced asymmetries in the spatial information transfer profiles computed through MI, but not through SSI.
The impact of activity-dependent trial-to-trial variability on spatial information transfer was minimal
We had introduced trial-to-trial variability as an AGWN, whereby the variability was independent of spatial location and synaptic activity (Eq. (19)). To understand the impact of trial-to-trial variability that was dependent on synaptic activity, we introduced trial-to-trial variability as a multiplicative GWN (MGWN; Eq. (20)) and repeated our analyses of spatial information transfer for the population of valid models, both with symmetric as well as asymmetric firing profiles (Fig. 11, Figs. S11-S20). Although we observed heterogeneity in firing profiles and information transfer, and found models expressing similar information transfer despite being governed by disparate parametric combinations, we found the impact of trial-to-trial variability with the higher range of σ_noise (compared to σ_noise for AGWN) to be minimal on place-cell properties (Fig. S11), SSI and MI profiles (Fig. 11, Fig. S12, Figs. S16-S17) and pairwise correlations between intrinsic and information metrics (Figs. S13-S15; Figs. S18-S20). The value of σ_noise employed for achieving the "high" level of trial-to-trial variability (0.5 Hz²) was the highest possible, as increases beyond that resulted in depolarization-induced block of action potential firing in several models. Experience-dependent asymmetry in firing profiles introduced asymmetry in the MI profiles, but not the SSI profiles, even with MGWN-based trial-to-trial variability (Figs. S16-S17). In summary, our results showed that the impact of activity-dependent trial-to-trial variability was minimal compared to activity-independent variability in trial-to-trial responses, across different levels of noise and with symmetric or asymmetric place-field firing profiles.
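For contrast with the additive case, activity-dependent variability can be sketched as noise that scales with the instantaneous input rate (an illustrative multiplicative form; Eq. (20) itself is not reproduced here, so this is an assumption about the general structure rather than the authors' exact expression):

```python
import numpy as np

rng = np.random.default_rng(2)

def add_mgwn(rate, sigma_noise_sq):
    """Multiplicative Gaussian white noise: the perturbation scales with
    the instantaneous rate, so variability is larger where afferent
    activity is higher (i.e., within the place field)."""
    noise = rng.normal(0.0, np.sqrt(sigma_noise_sq), size=rate.shape)
    return np.clip(rate * (1.0 + noise), 0.0, None)
```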
Regulation of spatial information transfer by ion channel conductances and synaptic receptors
Our results established degeneracy in the emergence of place cells with similar spatial information transfer profiles, and also showed an absence of strong correlations with any physiological measurement. What contributes to such degeneracy? Are there specific ion channels that play critical regulatory roles in spatial information transfer within a place field?
We took advantage of our conductance-based modeling framework and applied the virtual knockout approach (Basak & Narayanan, 2018; Jain & Narayanan, 2020; Mittal & Narayanan, 2018; Mukunda & Narayanan, 2017; Rathour & Narayanan, 2014; Seenivasan & Narayanan, 2020) to assess the contribution of individual ion channels to spatial information transfer. Specifically, we systematically assessed information transfer profiles in each of the valid models after virtually knocking out individual ion channels by setting their conductance value to zero (Fig. S21). We computed the SSI and MI metrics of the virtual knockout models (VKMs) for each of the 8 active ion channels (Fig. 12). Virtual knockout of the spike-generating conductances (NaF and KDR) was infeasible because the neuron ceases spiking when these conductance values are set to zero.
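The virtual knockout workflow amounts to zeroing one conductance at a time and recomputing the information metrics; a schematic sketch is given below, with placeholder scoring functions standing in for the NEURON simulations and channel names taken from the model's active conductances:

```python
import copy

# The eight knockout-eligible active conductances (NaF and KDR excluded)
ACTIVE_CHANNELS = ["g_h", "g_KA", "g_KM", "g_SK",
                   "g_CaN", "g_CaL", "g_CaR", "g_CaT"]

def simulate_and_score(model):
    """Placeholder: run the place-field simulation for `model` and return
    a dict of information metrics (SSI/MI-based, as in Table 3)."""
    return {"SSI1": 0.0, "MI1": 0.0}

def virtual_knockouts(valid_models):
    """For every valid model, zero one channel at a time and compare the
    information metrics of the knockout against the intact model."""
    results = {}
    for ch in ACTIVE_CHANNELS:
        deltas = []
        for model in valid_models:
            intact = simulate_and_score(model)
            vkm = copy.deepcopy(model)
            vkm[ch] = 0.0                     # virtual knockout of channel `ch`
            knocked = simulate_and_score(vkm)
            deltas.append({k: knocked[k] - intact[k] for k in intact})
        results[ch] = deltas
    return results
```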
In terms of information transfer, we found that the impact of knocking out individual channels was heterogeneous across the model population. There were models where the SSI (Fig. 12A-B) or MI (Fig. 12G-H) values increased after knocking out a channel, but there were also models where these values decreased upon knockout. Among the channels assessed, we found the A-type potassium channel to have the maximal impact on spatial information transfer. Specifically, virtual knockout of the A-type potassium channel resulted in reductions in SSI (Fig. 12A-B) and MI (Fig. 12G-H). These observations offer a clear testable prediction that A-type potassium channels play a critical role in regulating spatial information transfer in hippocampal place cells. These results also establish a many-to-one mapping between the different ion channels and the efficacy of spatial information transfer, whereby different ion channels could contribute towards maintaining efficacious information transfer, with heterogeneous contributions across neurons in the population. This many-to-one mapping provides a substrate for the expression of degeneracy, where different combinations of ion channels could maintain similar functional outcomes in terms of spatial information transfer efficacy.
Finally, as the roles of NMDA receptors and of dendritic spikes mediated by dendritically expressed sodium channels have been considered critical in place-cell physiology (Basak & Narayanan, 2018; Nakazawa, McHugh, Wilson, & Tonegawa, 2004; Sheffield, Adoff, & Dombeck, 2017; Sheffield & Dombeck, 2015), we explored the roles of these NMDARs and dendritic NaF channels in regulating spatial information transfer in our heterogeneous model population. To evaluate the role of dendritic fast sodium channels, we recomputed place-field firing rate and spatial information transfer profiles after setting the value of ḡ_NaF to zero in apical dendritic compartments (Fig. S22A-B). Although there were heterogeneities in the impact of deleting dendritic sodium channels, we found a significant reduction in spatial information transfer computed either as SSI (Fig. 13A-B) or as MI (Fig. 13G-H). To assess the role of NMDARs, we recomputed place-field firing rate and spatial information transfer profiles after setting the value of P̄_NMDAR in Eqs. (9)-(11) to zero (Fig. S22C-D). Deletion of NMDARs resulted in a significant reduction in spatial information transfer (SSI: Fig. 13A-B; MI: Fig. 13G-H).
Together, these results unveiled a many-to-one relationship between the different ion channels and spatial information transfer, while also providing testable predictions on the roles of A-type potassium channels, NMDARs and dendritic sodium channels in regulating spatial information transfer within a single place field of hippocampal place cells.
Conclusions
We demonstrated that hippocampal neurons, when they act as reliable (i.e., low trial-to-trial response variability) sensors of animal location by spatially modulating their firing rate, transfer peak spatial information at the high-slope locations (and not at the peak-firing location) of the firing-rate tuning curve within their place field. Importantly, we showed that there was significant heterogeneity across a population of models that received identical distributions of afferent synaptic patterns, owing to differences in the ion channel composition of these models. The heterogeneity manifested quantitatively in terms of the amount of information transferred, and qualitatively in terms of how the models responded to increases in the level of trial-to-trial variability. Specifically, with increases in trial-to-trial variability, whereas one subpopulation of models switched to transferring peak stimulus-specific spatial information at the peak-firing locations, another subpopulation continued to transfer peak information at the high-slope locations. These heterogeneities in spatial information transfer did not bear strong relationships to heterogeneities in the intrinsic or tuning properties of the models. We also demonstrated the dependence of the spatial information transfer profile on the type of trial-to-trial variability, whereby activity-dependent variability had little impact on spatial information transfer compared to the significant reduction introduced by activity-independent variability.
To further delineate the relationship of spatial information transfer with place-cell characteristics and its components, we assessed the impact of experience-dependent asymmetry in the place-field firing-rate profile. We found that the mutual information metrics showed a dependence on the asymmetric nature of the firing profile, with information transfer maximal in the second half of the place field, where the firing rate dropped at a higher rate. However, the peak values of the stimulus-specific information metrics were largely invariant to the asymmetric slopes of the firing-rate profile on either side of the peak-firing location. Finally, we asked if there were specific ion channels that played critical roles in regulating spatial information transfer by recomputing information metrics in models that lacked each of 8 different ion channels. We found heterogeneity in the impact of knocking out individual ion channels on these information metrics, pointing to a many-to-one relationship between different ion channel subtypes and spatial information transfer. Our analyses unveiled a potent reduction in information transfer consequent to knocking out transient (A-type) potassium channels, NMDA receptors or dendritic sodium channels, providing direct experimentally testable predictions.
Trial-to-trial variability and spatial information transfer
Our results show that trial-to-trial variability in neural responses results in a marked reduction in spatial information transfer within a single place field, in a manner that is dependent on how the noise was introduced. In demonstrating this, we introduced trial-to-trial variability either as an additive or as a multiplicative GWN. The incorporation of additive synaptic noise is physiologically similar to a scenario where there is either a location-independent increase in afferent excitation or a reduction in tonic or spatially uniform inhibition (Duguid, Branco, London, Chadderton, & Hausser, 2012; Grienberger et al., 2017). Such a scenario, which could be a result of physiological plasticity or pathological synaptopathies, would enhance response variability in a location-independent manner. Our results demonstrate that the presence of such location- and activity-independent enhancement in trial-to-trial variability critically reduces spatial information transfer within a place field, irrespective of whether the place-field profiles are symmetric (Fig. 5, Fig. 7) or asymmetric. With enhanced trial-to-trial variability of this form, our results show that the location of maximal SSI transitions from the high-slope regions to the peak-firing location in a subpopulation of models (Fig. 6).
In striking contrast, incorporation of trial-to-trial variability as a multiplicative noise had little impact on spatial information transfer for a wide range of noise variance values, and the location of maximal SSI was always tuned to the high-slope regions of the tuning curve (Fig. 11). Multiplicative noise, which introduces activity-dependent trial-to-trial variability, is physiologically similar to noise consequent to variability in synaptic release and receptor kinetics. In such a scenario, the amount of variability is dependent on the extent of synaptic activation, and is therefore activity-dependent. In place cells, as excitatory afferent activity is higher within the place field of the neuron (highest at the center of the place field), such multiplicative noise translates to location-dependent variability in neural responses. Our results show that the ability of such activity-dependent noise, especially with the strong excitatory drives observed during place-field traversal, to alter spatial information transfer is minimal.
These results emphasize the importance of assessing the source of trial-to-trial variability and asking whether the variability is dependent on or independent of activity, and caution against generalizing all types of trial-to-trial variability to yield similar outcomes. Further explorations of the dependence of spatial information transfer on the specific types and sources of variability should account for several experimental details, some of which are listed below. First, although we consider two mutually exclusive versions of trial-to-trial variability (dependent on or independent of activity), variability in neuronal responses under awake, behaving conditions is conceivably a mixture of both. Second, there are theoretical and electrophysiological lines of evidence for a critical role for asynchronous synaptic release, induced by active reverberation in recurrent circuits (such as the CA3, a presynaptic counterpart to the CA1 neurons studied here), in information transfer (Lau & Bi, 2005; Volman & Levine, 2009). Third, there are lines of evidence for stimulus-independent noise improving the detection of subthreshold stimuli (Stacey & Durand, 2000). Fourth, although we incorporated white noise sources in our analyses, it has been demonstrated that the color of the noise is a critical determinant of how information transfer is affected (Gingl, Kiss, & Moss, 1995). Finally, in our analyses the trial-to-trial variability was introduced solely as noise to the synaptic inputs. However, other factors such as thermal noise, noisy biochemical processes and stochasticity of ion channels could also contribute to trial-to-trial variability, with different noise colors and different modes of interaction with the inputs (Faisal, Selen, & Wolpert, 2008; Gingl et al., 1995; Li, Luo, & Xue, 2020; Wang, Wang, & Zheng, 2014). It is essential that future studies incorporate these additional layers of mechanisms into the model and examine how different sources of variability, each with potentially different characteristics, synergistically affect stimulus-specific information content. It is possible that one or the other version dominates under specific physiological or pathological conditions, and therefore it is important that the variability-inducing mechanisms are delineated before the impact of such variability is assessed.
Place-cell characteristics and spatial information transfer
An important insight obtained from our study pertains to parametric degeneracy in effectuating spatial information transfer in place cells, with reference to ion channels and parameters that govern place-cell biophysics and physiology (Figs. S1-S2; Fig. 3). Ion-channel degeneracy in the hippocampal formation is ubiquitous, and expresses across different scales of analyses (Mishra & Narayanan, 2019; Mittal & Narayanan, 2018; Rathour & Narayanan, 2019). In hippocampal CA1 pyramidal neurons, the expression of degeneracy has been demonstrated with reference to the concomitant emergence of several somatodendritic intrinsic properties (Migliore et al., 2018; Rathour et al., 2016; Rathour & Narayanan, 2012; Srikanth & Narayanan, 2015), spike-triggered average (Das & Narayanan, 2014; Jain & Narayanan, 2020), and short- (Mukunda & Narayanan, 2017) as well as long-term (Anirudhan & Narayanan, 2015) plasticity profiles. Degeneracy has also been shown to express in the sharpness of place-field firing properties with reference to biophysical as well as morphological parameters (Basak & Narayanan, 2018), which has been confirmed in this study with a larger set of ion channels incorporated into the model. Finally, an earlier study quantitatively defined the efficiency of phase coding in hippocampal place cells and showed that similar spatial information transfer could be achieved with disparate ion channel combinations (Seenivasan & Narayanan, 2020). The findings of this study, demonstrating ion-channel degeneracy with reference to spatial information transfer through the rate code within a single place field, further strengthen the expression of degeneracy in encoding systems such as the hippocampus (Rathour & Narayanan, 2019).
In encoding systems, it is essential that encoding of information occurs concurrently with the maintenance of homeostasis of intrinsic neuronal properties, including neuronal firing rate (Rathour & Narayanan, 2019). In our study, we showed that similar amounts of spatial information transfer and similar firing rates (both with reference to place-field firing and responses to pulse currents) could concomitantly occur with disparate combinations of ion channel conductances and the parameters that govern their expression (Table S1, Fig. S2). It has been shown that the balance between excitation, inhibition and intrinsic excitability (E-I-IE balance) is essential for achieving concomitant efficient phase coding as well as activity homeostasis. In our study, we fixed the excitatory synaptic weights to account for synaptic democracy (Fig. 1I) and did not incorporate spatially uniform inhibition, as this would have translated to merely a negative bias term across locations (Basak & Narayanan, 2018). We also found that there were no strong correlations between information measurements and other intrinsic measurements (e.g., Figs. S3-S5). Future studies could alter the excitatory synaptic weights associated with place-field inputs and explore the balance between excitation, location-dependent inhibition and the heterogeneous intrinsic excitability properties of hippocampal pyramidal neurons to assess the role of E-I-IE balance in the emergence of efficient information transfer through rate codes as well. Specifically, such studies could validate models based on their ability to transfer maximal spatial information through the rate code (i.e., efficient rate coding) and concomitantly maintain intrinsic homeostasis, and ask if E-I-IE balance was essential to achieve these when the search space involves excitatory/inhibitory synaptic weights and ion channel conductances (Seenivasan & Narayanan, 2020). Importantly, such models could maximize the joint spatial information transfer occurring through the rate as well as the phase codes (Mehta et al., 2002; O'Keefe & Burgess, 2005) within a place field, and explore the constraints required for such efficient encoding to occur simultaneously with the expression of intrinsic homeostasis.
Degeneracy in the emergence of similar spatial information transfer and signature intrinsic properties emerged as a consequence of a many-to-one relationship between ion channels and spatial information transfer. These observations were feasible only because we employed a heterogeneous population of models, derived from an unbiased stochastic search that covered heterogeneities in the underlying parameters (Marder & Taylor, 2011). If we had instead resorted to the use of a single hand-tuned model to arrive at our conclusions, that single model and its specific composition would have biased our results. In such a scenario, the identification of the aforementioned many-to-one relationship and the consequent heterogeneities on the impact of individual ion channels on information transfer would not have been feasible. These results emphasize the critical role of synergistic interactions among different ion channels in effectuating behavior, and underscore that the impact of any ion channel subtype is dependent on the relative expression profiles of other channels and receptors in the specific model under consideration.
Degenerate systems show dominance of specific underlying parameters in regulating specific physiological measurements (Basak & Narayanan, 2018; Drion, O'Leary, & Marder, 2015; Mishra & Narayanan, 2019; Mittal & Narayanan, 2018; Mukunda & Narayanan, 2017; Rathour et al., 2016; Rathour & Narayanan, 2014). In our analyses, although we found that all ion channels had the ability to reduce or increase spatial information transfer in a model-dependent manner (Figs. 12-13), certain parameters played a crucial role in regulating information transfer. Specifically, our analyses provide specific experimentally testable predictions on the critical roles of dendritic sodium channels, NMDA receptors and A-type potassium channels in regulating spatial information transfer. Interestingly, these three components play critical roles in regulating the prevalence of dendritic spikes and the sharpness of place-cell tuning profiles (Basak & Narayanan, 2018; Gasparini, Migliore, & Magee, 2004; Golding, Jung, Mickus, & Spruston, 1999; Golding & Spruston, 1998; Losonczy & Magee, 2006), and form strong candidates for regulating spatial information transfer. Further studies could test the roles of these channels in regulating information transfer in hippocampal pyramidal neurons employing electrophysiological recordings during place-field traversal in the presence of pharmacological agents. As these components alter dendritic spiking in opposite directions (suppressing NMDA receptors or sodium channels suppresses dendritic spiking, whereas suppression of A-type potassium channels enhances dendritic spiking), such studies could also potentially assess the requirement of an intricate balance between mechanisms that promote and those that prevent dendritic spike initiation in maintaining efficient spatial information transfer.
Our results proffer a testable prediction that experience-dependent asymmetry in place-field profiles does not markedly alter the SSI. As experience-dependent asymmetry is considered to be predictive, a reduction in spatial information transfer during the early parts of place-field firing would have rendered this predictive capability ineffectual. Our observations demonstrate that although the low values of slope during the early parts of the firing profile reduce mutual information as a consequence of the asymmetry, stimulus-specific information remains high. Further explorations could test this prediction on electrophysiologically recorded individual place cells transitioning with experience (Mehta et al., 1997).
Finally, the question of how spatial information transfer is regulated by activity-dependent plasticity and behavioral state-dependent neuromodulation of ion channels and receptors is critical to understanding the emergence of spatial information transfer in the context of novel place-field formation (Basak & Narayanan, 2018; Bittner et al., 2015, 2017; Cohen, Bolstad, & Lee, 2017; Kim & Lim, 2020; McKenzie et al., 2021; Robinson et al., 2020; Sheffield et al., 2017; Zhao, Wang, Spruston, & Magee, 2020). Future studies should therefore assess the impact of novel spatial environments, place-cell remapping, and different forms of neural plasticity on spatial information transfer. In this context, as with many other studies on the neurophysiology of place cells and their formation (Ahmed & Mehta, 2009; Basak & Narayanan, 2018; Bittner et al., 2015, 2017; Dombeck et al., 2010; Dragoi & Buzsaki, 2006; Geisler et al., 2010; Grienberger et al., 2017; Harvey et al., 2009; Huxter et al., 2003; Lee et al., 2012; Mehta et al., 1997, 2000, 2002; Seenivasan & Narayanan, 2020), our study analyzes animal traversal in a one-dimensional arena. Although one-dimensional arenas have proven to be useful approximations and have provided several important insights about place-cell physiology and plasticity, it is critical to recognize that external space is not one-dimensional. There are emergent features of place cells in two and three dimensions that are not captured by one-dimensional arenas (Aghajan et al., 2015; Finkelstein, Las, & Ulanovsky, 2016; Geva-Sagiv, Las, Yovel, & Ulanovsky, 2015; Huxter, Senior, Allen, & Csicsvari, 2008; Lee, Briguglio, Cohen, Romani, & Lee, 2020; Moser et al., 2015, 2017; Rich, Liaw, & Lee, 2014; Wang, Xu, & Wang, 2018; Yartsev & Ulanovsky, 2013). As animals interact with the real world, from an ethological perspective, it is essential that analyses of the impact of neural heterogeneities and trial-to-trial variability on spatial information transfer are expanded to two- and three-dimensional place-field inputs. Future studies should therefore extend our conductance-based, morphologically realistic analysis of the cellular neurophysiology of spatial information transfer to two- as well as three-dimensional virtual arenas.
From a broader perspective, our analyses here focused only on the relationship between spatial information transfer and spatially modulated neuronal firing rate. However, the hippocampal formation has been implicated in other functions, such as recognition, completion and separation of patterns, associative memory, and engram formation (Andersen et al., 2006; Josselyn & Tonegawa, 2020). Future studies should therefore consider the possibility that other molecular and cellular constraints define the hippocampal architecture towards satisfying these additional functions, apart from accounting for energy considerations associated with neuronal and network physiology (Attwell & Laughlin, 2001; Laughlin, 2001; Laughlin, de Ruyter van Steveninck, & Anderson, 1998; Wang, Wang, & Zhu, 2017; Wang, Xu, & Wang, 2019; Zhu, Wang, & Zhu, 2018).

Figure legends

Fig. 1. (A) Two-dimensional reconstruction of the 3D morphologically realistic model employed in this study. (B) Distribution of the parameters governing the passive properties (g_leak and R_a) and ten different active ion channels (g_h, g_NaF, g_KDR, g_KA, g_KM, g_SK, g_CaN, g_CaL, g_CaR and g_CaT) along the somato-apical span, set to match multiple intrinsic measurements at the soma and along the apical dendrites, including input resistance (C), backpropagating action potential amplitude (D), maximum impedance amplitude (E), strength of resonance (F), resonance frequency (G), total inductive phase (H) and the maximum AMPAR permeability (I), all as functions of radial distance from the soma. The distance-dependent profile of maximum AMPAR permeability, P_AMPA (I, right vertical axis), was set such that the somatic unitary excitatory postsynaptic potentials (uEPSPs) were around 0.2 mV, irrespective of synaptic location (I, left vertical axis).

Fig. 2. Out of 12,000 randomly generated models, 127 satisfied 20 intrinsic somatodendritic measurements and manifested sharply tuned place-field firing. (A-G) The intrinsic measurements for the 127 valid models: input resistance (R_in, A), maximum impedance amplitude (|Z|_max, B), resonance frequency (f_R, C), strength of resonance (Q, D), total inductive phase (Φ_L, E) and backpropagating action potential (bAP) amplitude (F), each at three locations (soma, ~150 μm from soma and ~300 μm from soma) on the apical trunk; and the firing rate for step currents of 100 pA, 150 pA, 200 pA and 250 pA at the soma (G). (H) A typical place-field firing profile illustrating the measurement of the maximum firing rate (F_max) and the temporal distance between the places with half the maximum value of firing rate (FWHM). A relative criterion on tuning sharpness, involving high F_max (>56 Hz) and low FWHM (<2.5 s), was applied to obtain the 127 valid place-cell models (out of the 12,000 randomly generated models). (I-J) Place-field firing measurements F_max and FWHM at the soma for the 127 models.

Fig. 6. (A) Top, Illustration of the measurements SSI_peak and SSI_slope. SSI_peak depicts the SSI value at the location where the place-field firing profile (F) is at its peak, and SSI_slope represents the SSI value at the location where the absolute slope of the place-field firing profile, |dF/dt|, is at its peak. Bottom, Traces from four representative models showing the heterogeneity in the evolution of SSI_peak/SSI_slope as a function of enhanced trial-to-trial variability. (B-C) There were broadly two classes of models, one where SSI_peak remained low even at high noise levels (B; several representative examples shown in red), and another where SSI_peak was the highest SSI when the noise level was high (C; several representative examples shown in blue). (D) Peak firing rate (left) and FWHM (right) of the two classes of model subpopulations. The rectangles beside each plot represent the respective median value. σ_noise = 5×10⁻³ Hz². p values correspond to the Wilcoxon rank-sum test. (E-H) Principal component analyses on the parameters underlying the two classes of models shown in B (red) and C (blue). Shown are the coefficients associated with these model parameters with reference to the first three principal components. The percentage variance explained is also shown.

Fig. 7. (A) Idealized representation of stimulus-specific information (SSI) as a function of time, illustrating the various metrics developed here for quantifying spatial information transfer in place-cell models. (B-G) SSI metrics for the population of valid models depicting the impact of three levels of noise on the first (B, SSI1) and second (C, SSI2) peaks of SSI, the full width at half maximum of the SSI profile (D, SSI_FWHM), the ratio of the first peak-to-center distance to the center-to-second peak distance (E, SSI_dRatio), the difference between the SSI value at the place-field center and the peak SSI value (F, SSI_dip), and the distance between the locations of SSI1 and SSI2 (G, SSI_d).

Figure legend (SSI and MI metrics with MGWN-based trial-to-trial variability). (A-F) SSI metrics for the population of valid models depicting the impact of three levels of noise on the first (SSI1) and second (SSI2) peaks of SSI, the full width at half maximum of the SSI profile (SSI_FWHM), the ratio of the first peak-to-center distance to the center-to-second peak distance (SSI_dRatio), the difference between the SSI value at the place-field center and the peak SSI value (SSI_dip), and the distance between the locations of SSI1 and SSI2 (SSI_d). (G-L) Same as (A-F) for the mutual information profiles of the valid model population. MGWN variance values: low, 0.01 Hz²; medium, 0.1 Hz²; high, 0.5 Hz².

Table 2. Intrinsic somatodendritic measurements of CA1 pyramidal neurons and their electrophysiological bounds for validating models. Bounds on intrinsic somatodendritic functional maps and firing-rate measurements were derived from electrophysiological recordings reported in Malik, Dougherty, Parikh, Byrne, and Johnston (2016), Narayanan and Johnston (2007, 2008) and Spruston et al. (1995). Bounds on place-cell tuning sharpness are relative in nature, where cells with high firing rate and low FWHM were selected (Basak & Narayanan, 2018).

Table 3. Quantitative metrics of information transfer.
Measurement name: Symbol

SSI-based information metrics (Fig. 7A):
1st peak of the SSI curve: SSI1
2nd peak of the SSI curve: SSI2
Full width at half maximum of the SSI curve: SSI_FWHM
Ratio of the distance between the middle peak and the 1st peak to the distance between the middle peak and the 2nd peak of the SSI curve: SSI_dRatio
SSI middle-peak value minus the average of the SSI peak values at the slopes: SSI_dip
Temporal distance between the two peaks of the SSI curve: SSI_d

MI-based information metrics (Fig. 7H):
1st peak of the MI curve: MI1
2nd peak of the MI curve: MI2
Full width at half maximum of the MI curve: MI_FWHM
Ratio of the distance between the middle peak and the 1st peak to the distance between the middle peak and the 2nd peak of the MI curve: MI_dRatio
MI middle-peak value minus the average of the MI peak values at the slopes: MI_dip
Temporal distance between the two peaks of the MI curve: MI_d
|
v3-fos-license
|
2023-01-20T15:26:56.957Z
|
2016-10-01T00:00:00.000
|
256001949
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP10(2016)069.pdf",
"pdf_hash": "84eb3e56fa6aad3f41b773e5b4589a7e4b6fb63d",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46036",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "84eb3e56fa6aad3f41b773e5b4589a7e4b6fb63d",
"year": 2016
}
|
pes2o/s2orc
|
Bounding the space of holographic CFTs with chaos
Thermal states of quantum systems with many degrees of freedom are subject to a bound on the rate of onset of chaos, including a bound on the Lyapunov exponent, λL ≤ 2π/β. We harness this bound to constrain the space of putative holographic CFTs and their would-be dual theories of AdS gravity. First, by studying out-of-time-order four-point functions, we discuss how λL = 2π/β in ordinary two-dimensional holographic CFTs is related to properties of the OPE at strong coupling. We then rule out the existence of unitary, sparse two-dimensional CFTs with large central charge and a set of higher spin currents of bounded spin; this implies the inconsistency of weakly coupled AdS3 higher spin gravities without infinite towers of gauge fields, such as the SL(N) theories. This fits naturally with the structure of higher-dimensional gravity, where finite towers of higher spin fields lead to acausality. On the other hand, unitary CFTs with classical W∞[λ] symmetry, dual to 3D Vasiliev or hs[λ] higher spin gravities, do not violate the chaos bound, instead exhibiting no chaos: λL = 0. Independently, we show that such theories violate unitarity for |λ| > 2. These results encourage a tensionless string theory interpretation of the 3D Vasiliev theory.
1 Introduction and summary

The study of quantum chaos has lent new perspectives on thermal physics of conformal field theories and gravity [1,2]. Geometric structure in the bulk may be destroyed by small perturbations whose effects grow in time and spread in space, otherwise known as the butterfly effect. This accounts for scrambling by black holes, destroys entanglement, and, via holography, gives a view into Lorentzian dynamics of conformal field theories at large central charge [1-18]. Inspired by the classical picture [19], a quantity that has been identified as a sharp diagnostic of quantum chaos is the out-of-time-order (OTO) four-point correlation function between pairs of local operators,

⟨V W(t) V W(t)⟩_β .    (1.1)

We use a common notation: V sits at t = 0, and the operators are separated in space. The onset of chaos is seen as an exponential decay in time of this correlator, controlled by exp(λ_L t). The rate of onset is set by λ_L, the Lyapunov exponent. Under certain conditions that are easily satisfied by many reasonable thermal systems, λ_L is bounded above by [2]

λ_L ≤ 2π/β .    (1.2)

The bound is saturated by Einstein gravity, nature's fastest scrambler [20]. Understanding how exactly this bound fits into the broader picture of CFT constraints and their relation to the emergence of bulk spacetime, and studying the range of chaotic behaviors of CFTs more generally, are the general goals of this paper. This work touches on various themes in the recent study of conformal field theory and the AdS/CFT correspondence. The first is the delineation of the space of CFTs. An abstract CFT is (perturbatively) specified by the spectrum of local operators and their OPE coefficients: {∆_i, C_ijk}. As evidenced by the conformal bootstrap, imposing crossing symmetry and unitarity leads to powerful constraints on this data. It is not yet known what the precise relation is between {∆_i, C_ijk} and the chaotic properties of a generic CFT, say, λ_L. One would like to use OTO correlators to constrain the CFT landscape: given the existence of a bound on chaos, a natural goal is to exclude certain putative CFTs which violate it. This tack would provide a Lorentzian approach to the classification of CFTs.
The strong form of the AdS/CFT correspondence posits that every CFT is dual to a theory of quantum gravity in AdS. At the least, a subspace of all CFTs can be mapped via holography to the space of weakly coupled theories of gravity or string/M-theory in AdS. Given that string and M-theory are tightly constrained by their symmetries, this suggests that any consistent CFT possesses a level of substructure over and above the manifest requirements of conformal symmetry. One might hope to enlist chaos in the quest to "see" the structure of AdS string or M-theory compactifications from CFT.
At large central charge c and with a sufficiently sparse spectrum of light operators ∆_i ≪ c, a universality emerges: such CFTs appear to be dual to weakly coupled theories of AdS gravity, which in the simplest cases contain Einstein gravity. These CFTs obey certain other unobvious constraints: for example, corrections to a − c in four-dimensional CFTs are controlled by the higher spin spectrum [21]. Identifying the set of sufficient conditions for the emergence of a local bulk dual is an open problem. There is already evidence that λ_L = 2π/β is at least a necessary criterion, but one would like to make a sharper statement. Explicitly connecting the value of λ_L with the strong coupling OPE data would permit a direct derivation of λ_L = 2π/β from CFT, which is presently lacking in d > 2 and has been done under certain conditions on the operators V and W in d = 2 [7].
Not all weakly coupled theories of gravity are local: one can, for instance, add higher spin fields. In AdS D>3 , there are no-go results: namely, one cannot add a finite number of either massive or massless higher spin fields, for reasons of causality [21] and -in the case of massless fields -symmetry [22,23]. In AdS 3 , the constraints are less strict. For one, the graviton is non-propagating. Moreover, higher spin algebras, i.e. W-algebras, with a finite number of currents do exist.
Consider theories which augment the metric with an infinite tower of higher spin gauge fields. Other than string theory, these include the Vasiliev theories [24-26]; see [27,28] for recent reviews. These are famously dual to O(N) vector models in d ≥ 3 CFT dimensions (and, in d = 3, Chern-Simons deformations thereof [29,30]). One widely held motivation for studying the Vasiliev theories in d dimensions is that they morally capture the leading Regge trajectory of tensionless strings in AdS [31]. For the supersymmetric AdS_3 Vasiliev theory with so-called shs_2[λ] symmetry, this is now shown to be literally true [32-34]: CFT arguments imply that this super-Vasiliev theory forms a closed subsector of type IIB string theory on AdS_3 × S^3 × T^4 in the tensionless limit, α′ → ∞. More generally, it is unclear whether other, e.g. non-supersymmetric, Vasiliev theories are UV complete, or whether they can always be viewed as a consistent subsector of a bona fide string theory.
In AdS_3, there seem to be other consistent theories of higher spin gravity: to every W-algebra arising as the Drinfeld-Sokolov construction of a Lie algebra G, one can associate a pure higher spin gravity in AdS_3 cast as a G × G Chern-Simons theory. This builds on the original observation that general relativity in AdS_3 can be written in this fashion with G = SL(2, R) [35,36]. Such pure Chern-Simons theories have been studied in the context of AdS/CFT, especially for G = SL(N, R) and G = hs[λ]. The former contains a single higher spin gauge field at every integer spin 2 ≤ s ≤ N which generate an asymptotic W_N symmetry [37]. The latter, a one-parameter family labeled by λ, contains one higher spin gauge field at every integer spin s ≥ 2 which generate an asymptotic W_∞[λ] symmetry [38-40]. The 3D Vasiliev theory [26] contains the hs[λ] theory as a closed subsector. All of these theories should be viewed as capturing the universal dynamics of their respective W-algebras at large central charge. These theories have been studied on the level of the construction of higher spin black holes and their partition functions (e.g. [41-44]), entanglement and Rényi entropies and Wilson line probes (e.g. [45-51]), conformal blocks [51,52], and flat space limits [53], among other things.
To introduce dynamics, one would like to consistently couple these pure higher spin theories to matter, or embed them into string theory. However, it is far from clear that these SL(N )-type theories, with finite towers of higher spin gauge fields, are not pathological. The notion of a finite tower of higher spin fields feels quite unnatural, and is highly unlikely to descend from string theory. A heuristic argument is that in a tensionless limit α → ∞, all operators on the lowest Regge trajectory would become massless, not only a finite set; then if we are guided by the principle that every CFT is dual to (some limit of) a string theory in AdS, or by some milder notion of string universality [54,55], the notion of a holographic, unitary 2d CFT with a finite number of higher spin currents seems suspicious. As an empirical matter, the only known construction of a fully nonlinear AdS 3 higher spin gravity coupled to matter that is consistent with unitarity is the Vasiliev theory, which has an infinite tower of higher spin currents; likewise, there are no known W N CFTs with the aforementioned properties.
In this paper, we will initiate a systematic treatment of chaotic OTO correlators in CFTs with weakly coupled holographic duals. We will realize some of the goals mentioned above. Our results are in the spirit of the conformal bootstrap program: we exclude regions of the CFT landscape by imposing consistency properties on correlation functions. In our setting, we are working with Lorentzian, out-of-time-order correlators, relating dynamical statements about the development of quantum chaos and scrambling in thermal systems [1,2] to the question of UV completeness. Our work has a similar flavor to [56,57], which uses the Lorentzian bootstrap to enforce causality in shock wave backgrounds.
Summary of results
Our basic philosophy is, following [7], to study OTO four-point functions of the form (1.1) in d-dimensional CFTs by computing vacuum four-point functions and performing a conformal transformation to a thermal state. In d > 2, this yields the Rindler thermal state. In d = 2, this yields the thermal state of the CFT on a line with arbitrary β. In the large c limit, we diagnose chaos by looking at planar correlators; in particular, we study their Regge limits. The conformal transformation expresses the OTO correlator as an analytic continuation of the vacuum four-point function, evaluated on the second sheet of the cross-ratio plane. It follows that in these thermal states, the chaotic properties of the CFT can in principle be inferred from OPE data at O(1/c). In this paper, we make this concrete. (This last statement assumes that V and W are light operators, with conformal dimensions parametrically less than c, but we will also treat the case of V and W being heavy in d = 2, with similar results.)
Chaotic correlators in holographic CFTs. In section 3, we consider chaos in CFTs with weakly coupled local gravity duals, with no higher spin currents. 1 We take V and W to be arbitrary light scalar primaries. By recalling properties of the strongly coupled OPE at O(1/c), we argue that for times β ≪ t ≪ (β/2π) log c the correlator takes the form (1.4), schematically

⟨V W(t) V W(t)⟩_β ≈ 1 − (#/(ε*_{12} ε_{34} c)) e^{2πt/β} f(x) + … ,   (1.4)

where the ε_ij parameterize the Euclidean times of the operators, in a notation explained below and borrowed from [7], and f(x) is a function of the spatial separation whose general form we determine; see equation (3.16), where η = exp(−4πx/β). (1.4) implies λ_L = 2π/β and t_* = λ_L^{-1} log c, matching the Einstein gravity behavior. The arguments used here are based on necessary conditions a prototypical CFT must satisfy for the existence of an emergent local bulk theory, in the spirit of [58,59], and as such are rather general. This analysis is analogous to section 6 of [60], where the emergence of bulk-point singularities is derived from properties of the OPE at strong coupling. We do not, however, rigorously apply conformal Regge theory techniques [83] to derive sufficient conditions for (1.4) using CFT arguments alone.
One way to phrase (1.4) is that λ_L may be read off from the stress tensor exchange alone. A corollary of our result is a derivation of the butterfly velocity in Rindler space,

v_B = 1/(d − 1) ,

which is determined by the exchange of the lowest-twist spin-2 operator, namely the stress tensor. In appendix B, we give an example of (1.4) in strongly coupled N = 4 super-Yang-Mills (SYM), where we take V = W = O_{20′}. Our analysis also clarifies the relationship between the sparseness condition and λ_L = 2π/β: in particular, this result is somewhat insensitive to the density of scalar and vector primary operators, and does not require the strictest definition of sparseness.
Chaotic destruction of higher spin theories. In section 4 we focus on d = 2, and upgrade the previous analysis to include higher spin currents of bounded spin s ≤ N, where N > 2 is finite. The same principles imply that for generic V and W, the OTO correlator decays exponentially at a rate set by the spin-N exchange. The Lyapunov exponent is

λ_L = (2π/β)(N − 1) .

This violates the chaos bound. It follows from our assumptions that unitary, holographic 2d CFTs with finite towers of higher spin currents do not exist. Not only would such CFTs violate the chaos bound, but as we review, the results of [56] imply that they would be acausal: that is, these higher spin CFTs would be too-fast scramblers.
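A compact way to see where this exponent comes from, sketched here using only the Regge scaling of a spin-s exchange quoted in section 3 (the overall normalization is schematic and not taken from the text):

\[
G^{\rm Regge}_{\Delta,s}(z)\;\propto\;z^{1-s}\;\sim\;e^{\frac{2\pi}{\beta}(s-1)(t-x)}
\quad\Longrightarrow\quad
\lambda_L^{(s)}=\frac{2\pi}{\beta}\,(s-1)\,,
\]

so a tower of currents bounded by spin N is dominated by the spin-N exchange, giving λ_L = (2π/β)(N − 1) > 2π/β whenever N > 2.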
Figure 1. A weakly coupled theory of higher spin gravity in AdS_3 may be viewed as matter coupled to a G × G Chern-Simons theory for some Lie algebra G. The boundary gravitons of G generate an asymptotic W-symmetry, W_G. When rank(G) is finite, such theories are inconsistent.
To bolster these claims, we rigorously study the case where V and W have higher spin charges that scale with large c. This was inspired by the analogous calculation in the Virasoro context [7]. In sparse CFTs, correlators of such operators are computed exactly, to leading order in large c, by the semiclassical vacuum conformal block of the W-algebra [51,52,61-63]. Taking W = W_N, the semiclassical W_N vacuum block is now known in closed form for any N [52], so we can compute its Regge limit explicitly. The resulting function again violates the chaos bound. (See e.g. equation (4.35).) The bulk dual statement is that weakly coupled higher spin gravities with finite towers of higher spin gauge fields are inconsistent. This rules out the SL(N) higher spin gravities. As we explain in section 4.3, our CFT calculations of the OTO correlators map directly to the way one would calculate the same quantity in the bulk, via certain bulk Wilson line operators studied in [45,46,51,52,64]; they therefore constitute a direct bulk calculation as well. In purely bulk language, the problem can be equivalently phrased as an acausality of a "higher spin shock wave" induced by a higher spin-charged perturbation of the planar BTZ black hole. Alternatively, the Regge limit of AdS_3 Mellin amplitudes grows too fast. A corollary of our result is that SL(N)-type higher spin gravities cannot be coupled to string or M-theory.
This result fits very nicely with known features of higher-dimensional gravity. Weakly coupled theories of gravity in AdS D>3 with a finite number of higher spin fields suffer from violations of causality [21]. In AdS 3 , this conclusion does not hold, as the bulk graviton is non-propagating. However, upon introducing matter, the physics is the same. It is useful to compare the status of SL(N )-type theories in AdS 3 with Gauss-Bonnet theory in AdS D>3 . Whereas both require infinite towers of higher spin degrees of freedom to be completed (though see [65]), Gauss-Bonnet comes with a coupling λ GB ∼ M 2 GB , which determines the energy scale E ∼ M GB at which massive higher spin fields must appear to restore causality; on the other hand, SL(N ) gravity has no scale besides L AdS , and cannot be viewed as an effective field theory.
Altogether, this reduces various computations in SL(N ) higher spin theories -including entanglement entropies as Wilson lines, efforts to find gauge-invariant causal structure, and models of higher spin black hole formation -to algebraic statements about W-algebra representation theory, rather than dynamical statements about actual unitary, causal CFTs. 2 For non-dynamical questions, the SL(N ) theories may still be useful: for example, they remain approachable toy models of more complicated theories of higher spin gravity and of stringy geometry; some of their observables may be analytically continued to derive results in hs[λ] higher spin gravity (e.g. [52,[66][67][68][69]); and W-algebras do appear widely in CFT. 3 This begs the question of what happens when an infinite tower of massless higher spin fields is introduced.
Regge behavior in W_∞[λ] CFTs and 3D Vasiliev theory. In section 5, we consider chaos in 2d CFTs with W_∞[λ] symmetry. We continue to apply the principle that the W_∞[λ] vacuum block at O(1/c) can be used to derive λ_L. Doing so requires deriving its Regge limit. This in turn requires performing the infinite sum over single higher spin current exchanges; see figure 5. The result is highly sensitive to the relations among OPE coefficients, i.e. the higher spin charges of V and W. Taking both V and W to sit in the simplest representation of W_∞[λ], i.e. the fundamental representation (which obeys unitarity for λ ≥ −1), we find a remarkably simple, closed-form result for the W_∞[λ] vacuum block, given in (1.8). In the Regge limit, the O(1/c) term goes like a constant, which implies

λ_L = 0 .

There is no chaos. The result λ_L = 0 is non-trivial, unlike higher spin CFTs in d > 2, because CFTs with W_∞[λ] symmetry are not necessarily free. We expect that (1.8) will find other applications. This has intriguing implications for the status of non-supersymmetric Vasiliev theory. The quantum numbers of V and W chosen above are those of the Vasiliev scalar field. We believe that our result encourages the tensionless string theory interpretation described earlier. One especially relevant feature of string theory for our purposes is the phenomenon of Regge-ization of amplitudes, in which infinite towers of massive string states sum up to give soft high-energy behavior [74-76]. It is sometimes said that string theory is the unique theory with a consistent sum over higher spin states. Our calculation suggests that this is not strictly true: the non-supersymmetric 3D Vasiliev theory provides another, simpler example. This suggests that the non-supersymmetric Vasiliev theory may be shown to be a limit or subsector of string theory, as in the supersymmetric AdS_3 × S^3 × T^4 case described above.
As an aside, using arguments independent of chaos, we show that unitary CFTs with a classical (that is, large c) W ∞ [λ] chiral algebra with λ > 2 do not exist. Correspondingly, 3D Vasiliev and pure hs[λ] higher spin gravities with λ > 2 have imaginary gauge field scattering amplitudes. This follows from the fact that, as we show, the classical W ∞ [λ] algebra is actually complex for λ > 2.
AdS/CFT sans chaos. In section 6, we give a selection of chaotic computations in familiar CFTs that are relevant to AdS/CFT: namely, chiral 2d CFTs, symmetric orbifold CFTs, and CFTs with slightly broken higher spin symmetry. Chiral CFTs are non-chaotic. We perform an explicit computation of an OTO correlator in the D1-D5 CFT at its orbifold point, Sym N (T 4 ), again finding an absence of chaos. Finally, we argue that in slightly broken higher spin CFTs [77] in arbitrary dimension, the Lyapunov exponent in thermal states on S 1 × R d−1 should vanish to leading order in 1/c. This gives a physical motivation to study λ L to higher orders in 1/c. The sections outlined above are bookended by a short section 2, in which we set up the calculations and briefly review the Regge limit and the chaos bound; and by a discussion in section 7. Finally, we include a handful of appendices with supplementary calculations.
Chaotic correlators
We will study OTO four-point functions of pairs of local primary operators in thermal states of d-dimensional CFTs, with the operators time-ordered as written. We achieve this by a conformal transformation from the vacuum. We focus mostly on d = 2 CFTs on the cylinder with inverse temperature β, so we set up the problem in those variables; the result for d-dimensional Rindler space can be read off at the end by setting β = 2π. Consider local scalar primary operators V, W with respective conformal weights h_v = h̄_v and h_w = h̄_w, where more generally h = (∆ + s)/2 and h̄ = (∆ − s)/2. Conformal invariance constrains the vacuum four-point function of V and W to take the form of a standard prefactor of two-point functions times a function A(z, z̄) of the conformally invariant cross-ratios, where z_ij ≡ z_i − z_j as usual. We refer to A(z, z̄) as a reduced amplitude. It is invariant under the conformal map to the cylinder, and the thermal, Lorentzian correlator relevant for chaos is obtained by evaluating it at the Lorentzian operator positions and continuing to the second sheet; V sits at t = 0. We will often refer to A_Regge(z, η) when making statements about chaos, with the identifications in (2.11) understood. Precisely the same formula applies for chaos in d > 2 CFTs in Rindler space, setting β = 2π. The relation of z and z̄ to the familiar higher-dimensional cross ratios u and v is recorded in the conventions collected below. In the rest of this section and the next, we leave the † implicit.
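For orientation, the conventions assumed in the rest of this section are collected here; the cross-ratio definitions and the u, v relation are standard, while the exponential map is the common 2d thermal choice of [7] and should be read as a sketch of the identifications in (2.11) rather than a verbatim reconstruction of them:

\[
z=\frac{z_{12}z_{34}}{z_{13}z_{24}},\qquad
\bar z=\frac{\bar z_{12}\bar z_{34}}{\bar z_{13}\bar z_{24}},\qquad
z_i=e^{\frac{2\pi}{\beta}(x_i+t_i)},\qquad
\bar z_i=e^{\frac{2\pi}{\beta}(x_i-t_i)},\qquad
u=z\bar z,\qquad v=(1-z)(1-\bar z),
\]

with small Euclidean times appended to the t_i to encode the operator ordering.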
A bound on chaos
An especially useful, and fairly general, choice for operator positions around the thermal circle is to place them diametrically opposite in pairs: τ_2 = τ_1 + β/2 and τ_4 = τ_3 + β/2. Fixing τ_1 = 0 without loss of generality, we define the angular displacement between the pairs as θ ≡ 2πτ_3/β. In this arrangement,

ε*_{12} ε_{34} = 4e^{iθ} , where 0 ≤ θ ≤ π ,   (2.14)

which implies the behavior z ≈ −4e^{iθ} e^{(2π/β)(x−t)} for t − x ≫ β. The range of θ is bounded as indicated in order to preserve the ordering V W V W. Note that Im(z) ≤ 0. When θ = π/2, the operators are spaced equally. 6 We also note that an equivalent imaginary-time parameterization is used in [2]. This pairwise arrangement of operators leads to a bound on the rate of chaotic time evolution [2]. The authors prove a general statement about analytic functions bounded on the half strip, which they then apply to OTO correlators. Consider a function of complex time, f(t + iτ), which obeys the following conditions: i) f(t + iτ) is analytic in the half-strip |τ| ≤ β/4, i.e. for 0 ≤ θ ≤ π, ii) f(t) is real, and iii) |f(t + iτ)| ≤ 1 throughout the strip. Then f(t) obeys

(1/(1 − f)) |df/dt| ≤ 2π/β + O(e^{−4πt/β}) .

We are ignoring possible sources of error in this bound that are carefully discussed in [2] and reviewed in [17]; these are not important for the large c theories we will discuss. Actually, f(t) need not be real; the statement generalizes to complex f(t), which is necessary when f(t + iτ) is an OTO correlator of non-Hermitian operators. The bound may be applied to functions of the form

f(t) ≈ 1 − ε e^{λ_L t} + O(ε^2) ,   (2.18)

where 0 < ε ≪ 1 is a small parameter. Its sign ensures that f(t) decays rather than grows as t increases. (2.18) implies λ_L ≤ 2π/β. This expression is valid for λ_L^{-1} ≲ t ≲ λ_L^{-1} log(1/ε). The upper bound defines the scrambling time t_*, at which the ε-expansion breaks down; resummation of higher-order effects in ε ensures a smooth descent towards zero. For complex ε = ε_1 + iε_2,

f(t) ≈ 1 − (ε_1 + iε_2) e^{λ_L t} + O(ε^2) .   (2.20)

The bound requires ε_1 > 0 and λ_L ≤ 2π/β, while ε_2 is unconstrained. In a large c CFT, ε ∝ 1/c. A corollary of the chaos bound in large c theories is a bound on t_*,

t_* ≥ (β/2π) log c .

This is the (updated version of the) fast scrambling conjecture [20].
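To make explicit how (2.18) interacts with the differential form of the bound (a short sketch; the identification ε ∝ 1/c is the one stated above):

\[
f(t)\approx 1-\epsilon\,e^{\lambda_L t}
\;\;\Longrightarrow\;\;
\frac{1}{1-f}\left|\frac{df}{dt}\right|=\lambda_L\le \frac{2\pi}{\beta},
\qquad
\epsilon\,e^{\lambda_L t_*}\sim 1
\;\;\Longrightarrow\;\;
t_*=\lambda_L^{-1}\log\frac{1}{\epsilon}\;\longrightarrow\;\frac{\beta}{2\pi}\log c \quad (\epsilon\propto 1/c)\,.
\]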
The origins of the bound. It is important to emphasize the fundamentality of the physical inputs leading to the chaos bound. The analyticity requirement is the statement that for operators separated along the thermal circle -that is, non-coincident in Euclidean time -the correlator must not have any singularities. The boundedness requirement is equivalent to the statement that the OTO correlator decays, rather than grows, due to chaos. Moreover, there is a close connection between chaos and causality bounds [56,57].
In the language of those papers, the correlator ⟨V V W W⟩ is the two-point function ⟨V V⟩ in the "shockwave state," |W⟩ ≡ W|0⟩. "Causal" means that for any choice of V and W, the commutator ⟨W|[V, V]|W⟩ vanishes for spacelike separated V operators. This happens if and only if the second-sheet correlator is analytic and bounded above by 1 in the half-strip, which are the same inputs as for the chaos bound. 7
A toy model for violation of the chaos bound
A simple function that illustrates what the chaos bound is all about, and will be central to our later analysis, is the following:

f(z) ≈ 1 + ε/z^n + … ,   (2.23)

where |ε| ≪ 1 is a real small parameter of either sign, and n ∈ Z_+ for simplicity. The … can denote terms of O(ε^2), and/or higher powers of z. Viewing this function as a chaotic correlator, the exponential map (2.15) implies a Lyapunov exponent

λ_L = 2πn/β .

To get a feel for this function, take the relation between z and complex time to be that in (2.15). Rescaling ε by positive t-independent coefficients, the correction term behaves as (−1)^n e^{−inθ} e^{2πn(t−x)/β}. Recall that 0 ≤ θ ≤ π. When does this grow with t? Equivalently, when is |f(t + iτ)| ≥ 1? For n = 1, one finds that when ε > 0 the result is bounded from above by 1 for all admissible θ. But for general n > 1, there are sub-strips, roughly n/2 of them, within the full strip 0 ≤ θ ≤ π in which the correlator grows exponentially with t. This is true for either sign of ε. 7 One concludes that the function (2.23) cannot describe the OTO correlator in a consistent chaotic system. This is equivalent to saying that λ_L = 2πn/β violates the bound on chaos: for functions of the form (2.23), analyticity and boundedness of f(t + iτ) follow from λ_L ≤ 2π/β, and vice versa. 8 The same function was recently discussed in [56] in the context of causality violation in CFT, where f(t + iτ) was a correlator in the lightcone limit, z̄ = η → 0 for fixed z.

7 To get from [56] to here, take z_there = (1 − z)_here. Then σ_there = −z_here = 4e^{iθ} e^{(2π/β)(x−t)}, and the region Im(σ_there) ≥ 0 is Im(z_here) ≤ 0, which is the half-strip. The semicircle of [56] has radius R = 4e^{(2π/β)(x−t)}.
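A quick way to see the clash with the bound of the previous subsection, independent of the phase assignments on the strip (a sketch using only |1/z| ∼ e^{2π(t−x)/β}):

\[
|f(t)-1|\;\propto\;\Big|\frac{\epsilon}{z^{\,n}}\Big|\;\sim\;|\epsilon|\,e^{\frac{2\pi n}{\beta}(t-x)}
\quad\Longrightarrow\quad
\frac{1}{|1-f|}\left|\frac{d f}{dt}\right|=\frac{2\pi n}{\beta}\,,
\]

which exceeds the allowed rate 2π/β for any n ≥ 2, in line with the statement that λ_L = 2πn/β violates the chaos bound.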
Chaotic correlators in holographic CFTs
In general, the chaotic behavior of correlation functions is sensitive to the OPE data of the CFT, the choice of thermal state, and to some extent on the choice of operators W, V . However, calculations in classical Einstein gravity and in d = 2 CFT, performed for heavy operators with ∆ w ∆ v 1, suggest that in typical holographic CFTs, chaotic correlators of arbitrary local operators take a universal form, including a Lyapunov exponent λ L = 2π/β, and scrambling time t * = λ −1 L log c. The goal of this section is to connect this to knowledge of the OPE data of general holographic CFTs. For general scalar primaries V and W in the light spectrum -that is, with ∆ w , ∆ v ∼ O(1) -one would like to show not only that λ L = 2π/β, but that the dependence on the spatial separation of V and W takes a universal form. This is, in general, a difficult problem to analyze purely in CFT: it amounts to understanding sufficient conditions, currently unknown, on strongly coupled OPE data that give rise to Regge scaling A Regge (z, η) ∼ z −1 . A framework for this problem was put forth in [83]. For the present setting, we will simply discuss the Regge behavior of holographic correlators, translated into the language of chaos. This will also serve as a stepping stone to the next sections, where we apply this analysis to 2d CFT's with higher spin currents.
We will also introduce a toy model which captures the essence of the physics, and uses only general features of prototypical holographic CFTs with local bulk duals.
Known properties of holographic CFTs with Einstein gravity duals support the following picture of OTO correlators, to be elaborated upon below: Take V and W to be arbitrary local primary operators in a CFT_d with large central charge c, a sparse spectrum of light operators, and no parametrically light single-trace operators of spin s > 2. This is the characteristic spectrum of a CFT_d with a weakly coupled Einstein gravity dual. Then for such a CFT_d in Rindler space, or a CFT_2 on R × S^1 with any β, the reduced amplitude takes the form (3.1), schematically

A_Regge(z, η) ≈ 1 − (#/c) f(η)/z + … ,

where … includes terms subleading in the Regge limit and in the 1/c expansion. In terms of x and t, this is (3.2),

⟨V W(t) V W(t)⟩_β ≈ 1 − (#/c) e^{(2π/β)(t−x)} f(η) + … .

It follows that λ_L = 2π/β. If we further define t_* as the time at which the 1/c expansion breaks down, then (3.2) also implies t_* = λ_L^{-1} log c. To obey the chaos bound, we must also have f(η) > 0 for 0 ≤ η < 1.
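The identification of the scrambling time quoted here follows directly from the 1/c prefactor (a one-line sketch, with # an order-one constant as above):

\[
\frac{\#}{c}\,e^{\frac{2\pi}{\beta}(t-x)}=\#\,e^{\frac{2\pi}{\beta}\left(t-x-t_*\right)},
\qquad t_*\equiv\frac{\beta}{2\pi}\log c\,,
\]

so the O(1/c) term becomes O(1) precisely at t ≈ t_* + x, where the expansion in 1/c breaks down.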
The main physical point of the result (3.1) is that z^{-1} = z^{1−2}, where 2 is the spin of the stress tensor, which is the highest-spin current in the theory.
In what follows, we will be able to somewhat constrain the functional form of f (η): see (3.16). Moreover, (3.2) makes a prediction for the evaluation of V W V W as a bulk wave function overlap integral [9], for light fields V and W . We will say more about this in the Discussion.
This result comes from studying the Regge limit of vacuum four-point functions ⟨V V W W⟩. We take V and W to be scalar operators. In a general CFT, the reduced amplitude A(z, z̄) can be expanded in s-channel conformal blocks of SO(d+1,1) for symmetric tensor exchange, G_{∆,s}(z, z̄); the coefficient a_p multiplying each block is the product of OPE coefficients for exchange of the symmetric tensor primary O_p with conformal dimension ∆_p and spin s_p (see the expansion written out below). In a unitary CFT, a_p is real, but can have either sign. This sum is infinite, but convergent for |z| < 1, |z̄| < 1 independently [56,79]. We consider CFTs in which both a_p and ∆_p admit expansions in 1/c, where zeroth order quantities may be computed in mean field theory. G_{∆,s} is independent of c.
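The expansion referred to here is the standard s-channel decomposition; writing it out (the 1/c structure below is a paraphrase of the assumptions stated in this paragraph, not a verbatim reconstruction of the original equations):

\[
A(z,\bar z)=\sum_{p} a_p\, G_{\Delta_p,s_p}(z,\bar z),\qquad a_p = C_{VV\mathcal O_p}\,C_{WW\mathcal O_p},
\]
\[
a_p = a_p^{(0)}+\frac{a_p^{(1)}}{c}+\ldots,\qquad
\Delta_p=\Delta_p^{(0)}+\frac{\gamma_p^{(1)}}{c}+\ldots,
\]

with the zeroth-order data computable in mean field theory, as stated above.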
In general, one can only take the Regge limit of A(z, z) if it is known in closed form, which is rarely the case at strong coupling. Passage to the second sheet term-by-term in the conformal block expansion of A(z, z) generically requires a resummation: in particular, higher spin operators contribute more strongly in the Regge limit, and all CFTs contain operators with arbitrarily large spin. These include descendant operators, like ∂ µ 1 . . . ∂ µs V , or the "double-trace" operators appearing in the lightcone bootstrap. This is the essential challenge for which the tools of [83] were developed.
However, it is somewhat useful to introduce a toy model for prototypical holographic CFTs, inspired by [58], in which one can derive the above result directly. Before doing so, let us establish some useful facts.
First, the Regge limit of G ∆,s behaves like a spin-s exchange, even though G ∆,s includes descendant contributions of unbounded spin. In general d, the conformal Casimir equation for G ∆,s simplifies, admitting a closed-form hypergeometric solution [80] with C(∆, s) a positive prefactor given in appendix A. Note the z 1−s behavior as advertised. This can be easily checked against the Regge limit of the closed-form blocks in even d. See appendix A for details. The following two additional properties of G ∆,s (η) are significant. One, for all d ≥ 2 and s > 0, assuming ∆ satisfies the unitarity bound ∆ ≥ d − 2 + s. And two, the expansion around η = 0 is organized by the twists of the operators living in the conformal family (∆, s). The result (3.6) implies that whenever the conformal block sum is restricted to a sum over primaries of bounded spin, its Regge limit can be taken block-by-block. Recall that to compute λ L , we are interested in the O(1/c) part of the amplitude. This takes the form Now we note that holographic CFT spectra have a generalized free field structure. Let us enumerate their light operators 9 appearing in (3. In prototypical holographic CFTs with local bulk duals that obey causality, L = 2; more generally, such CFTs may also have L = ∞, but a Regge limit that corresponds to an exchange of effective spin L eff = 2. For holographic CFTs with higher spin currents of spins 3 ≤ s ≤ N , the above picture holds with 2 replaced by N . ii) Double-trace operators [V V ] n,s and [W W ] n,s . These take the schematic form and likewise for [W W ] n,s . These are indexed by a spin s = 0, 1, . . . L, where L is a maximum spin, and n = 0, 1, . . . , ∞. Their dimensions are To proceed, we introduce a toy model prototype of a holographic CFT, of the sort studied in [58]: in particular, consider a bulk theory of a graviton interacting with two scalar fields φ v and φ w , dual to V and W , respectively, with quartic interactions of the This theory is local and causal [21,56]. Its spectrum takes a generalized free field structure above, but the only single-trace operator appearing in the OPE is the stress tensor.
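For concreteness, the schematic form and tree-level dimensions of the double-trace operators [V V]_{n,s} discussed above are the standard generalized free field expressions (the anomalous-dimension coefficient γ^{(1)}_{n,s} is notation introduced here for illustration, not taken from the text):

\[
[VV]_{n,s} \;\sim\; V\,\partial_{\mu_1}\cdots\partial_{\mu_s}(\partial^2)^n V ,
\qquad
\Delta_{[VV]_{n,s}} \;=\; 2\Delta_v + 2n + s + \frac{\gamma^{(1)}_{n,s}}{c} + \ldots ,
\]

and likewise with V → W. At zeroth order in 1/c these are the mean field theory dimensions; the O(1/c) anomalous dimensions are what feed into the log η behavior of f(η) discussed below.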
First, let us temporarily ignore the graviton. Computing the tree-level four-point amplitude holographically as a sum of quartic contact Witten diagrams, its connected piece has only double-trace exchanges; this is a known fact about contact diagrams. By construction, L = 2: thus, the total spin sum in (3.9) is bounded from above, and is dominated in the Regge limit by spin-2 exchanges. Using the form of G Regge ∆,2 , we see that where f (η) is determined by the sum over single-and double-trace spin-2 primary exchanges, and . . . includes the subleading spin-0,1 exchanges, and terms suppressed by powers of 1/c. What changes when we turn on gravity? For distinct operators V = W in our fourpoint function, only a single new bulk diagram contributes to A(z, z) at tree-level, namely,
the graviton exchange in the φ v φ v −φ w φ w channel. In the dual CFT conformal block decomposition in the V V − W W channel, this is known to add the exchange of the stress tensor, and to contribute to the exchange of the double-trace operators [V V ] n,s and [W W ] n,s of spins s ≤ 2 only (e.g. [80,81]). Therefore, at O(1/c) we still exchange only operators of spin s ≤ 2, and A Regge (z, η) is dominated by the spin-2 exchanges whose contributions we can compute block-by-block. Said another way, the graviton exchange gives the universal dominant contribution. This completes the argument for (3.1).
This toy model is an oversimplification of full-fledged holographic CFTs, in which the sum over spins will not generically truncate (L = ∞). These high-spin double-trace exchanges come from crossed-channel exchange diagrams in the bulk. As noted above, in these cases A Regge (z, η) has an effective spin L eff = 2 at strong coupling, and more refined methods must be employed to compute it [80,82,83]. Nevertheless, this emergence of "graviton dominance" at strong coupling makes the above toy model somewhat useful.
The form of f (η). We can say more about f (η). The first term in (3.13) runs over singleand double-trace primaries. This includes the stress tensor. For single-trace primaries, a (0) p = 0 due to large c factorization. Whether the anomalous dimension terms turn on depends on the relative values of ∆ v , ∆ w [84]: The coefficients a [V V ] n,2 were determined in [85]. Noting that the double-trace anomalous dimensions lead to a log η term in f (η) if (3.14) is satisfied.
Altogether, then, f (η) can be written in the form We have pulled out the leading twist stress tensor contribution. Both f 1 (η) and f 2 (η) are analytic near η = 0, obeying f 1 (0) = 0 and f 2 (0) = 0. f 2 (η) reflects the double-trace anomalous dimensions, and vanishes unless (3.14) holds. While we derived (3.16) by focusing on spin-2 exchanges only, it may also apply to cases in which L = ∞ but L eff = 2. Appendix B presents one such computation in strongly coupled N = 4 SYM.
Positivity.
As for positivity of f (η), proving this in full generality would take us somewhat astray from the main thread of this paper. However, we make the following observations. Any single-trace operator with a (1) p > 0 contributes positively to (3.16), given (3.8). The stress tensor clearly contributes positively: where #(d) is a positive d-dependent coefficient that depends on whether we parameterize c in terms of C T , N , or a trace anomaly coefficient.
The double-trace operators also contribute positively. This essentially follows from results of [58]: their contributions sum up into D-functions, which are associated with tree-level contact diagrams in AdS with four derivatives. For example, one can easily show using the method of [58] that a λ(∂φ v ) 2 (∂φ w ) 2 term in the bulk Lagrangian contributes to the reduced amplitude as whereD ∆v+1∆v+1∆w+1∆w+1 is the reduced D-function. For integer ∆ v , ∆ w , one may check the sign-definiteness of the Regge limit of this object using the expression forD 1111 (z, z) given in appendix B, together with D-function identities (see e.g. the appendix of [86]). (Essentially this same point was made in [60]; see also [87] for a similar constraint on the sign of λ.) But, contact diagrams do not account for the full spin-2 double-trace contributions: graviton exchange diagrams also contribute at O(1/c). One can show using the results of section 5.7 of [81] that these contributions to the amplitude take the same sign as the contact terms. 11 Example: N = 4 SYM. In appendix B, we perform an explicit computation of A Regge (z, η) in N = 4 SYM at large λ. We take V = W both to be the 1/2-BPS scalar operator in the 20' of the SU(4) R-symmetry. The features described above are all visible there, and may be cleanly interpreted in terms of the 20 × 20 OPE at large λ.
Lessons and implications
Before moving on, let us extract some key points from the above.
λ L from the vacuum block alone
A main message of (3.1) is that the stress tensor exchange is sufficient to read off λ L = 2π/β and t * = λ −1 L log c, while the remaining exchanges simply modify the x-dependence of V W V W . In the language of Regge theory, this is just the statement that in a strongly coupled CFT dual to Einstein gravity, the Reggeon spin j = 2, which is also the spin of the stress tensor.
It is important to note that, in general, the lightcone limit of A(z, z) cannot be used to read off λ L . In a CFT dual to string theory, for example, where one must sum over infinite towers of higher spin operators dual to massive string states in the bulk, λ L < 2π/β [9]. We have shown that the η 1 expansion of A Regge (z, η) at O(1/c) is a lightcone expansion, a feature which is special to holographic CFTs with local bulk duals.
Butterfly velocity in Rindler space
Taking V and W to have large spatial separation x ≫ 1, but still obeying x ≲ t, defines the butterfly velocity, v_B, as the velocity appearing in the exponential growth e^{λ_L(t − x/v_B)}. (In particular, equation 4.18 there also holds for arbitrary spin exchanges. For s > 0 exchanges, the right-hand side of 4.18 is positive due to the unitarity bound.)
v_B parameterizes the spatial growth of chaotic effects under time evolution. Since η = exp(−4πx/β), this is the η → 0 limit of A_Regge(z, η). So in any holographic CFT, v_B is determined by the spin-2 operator of lowest twist, which is of course the stress tensor. Its contribution is, ignoring constants, the leading-twist term η^{(d−2)/2}/z of (3.16), as spelled out in the display below. The result can be derived by an Einstein gravity calculation using shock wave techniques in a hyperbolic black hole background [88].
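As a consistency check (a sketch assuming only the definitions already introduced, η = e^{−4πx/β} and |1/z| ∼ e^{(2π/β)(t−x)}, and that the stress tensor has twist d − 2):

\[
\frac{1}{c}\,\frac{\eta^{(d-2)/2}}{z}\;\sim\;\frac{1}{c}\,e^{\frac{2\pi}{\beta}(t-x)}\,e^{-\frac{2\pi}{\beta}(d-2)x}\;=\;\frac{1}{c}\,e^{\frac{2\pi}{\beta}\left(t-(d-1)x\right)}\,,
\]

so the chaotic growth reaches spatial separation x only after t ≳ (d − 1)x, i.e. v_B = 1/(d − 1) in Rindler space, in agreement with the stress-tensor-dominance statement above.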
In CFTs with Einstein gravity duals, the butterfly velocity on the plane is

v_B = sqrt(d/(2(d − 1))) .

The planar and Rindler velocities need not, and do not, agree. The Rindler result is robust under local higher-derivative corrections to the Einstein action. Rindler space is conformal to the hyperbolic cylinder H^{d−1} × S^1 with β = 2π. If we consider chaos in hyperbolic space for β ≠ 2π, we have no right to use vacuum correlators, and our derivation does not apply. In planar geometries, v_B does change as a function of higher derivative couplings [6]. Thus we expect that v_B in hyperbolic space is actually temperature-dependent in higher-derivative gravity. It should be possible to check this using shock waves in hyperbolic black hole backgrounds; to verify this in CFT, one would need to compute OTO correlators not in the vacuum, but on H^{d−1} × S^1 with generic β. This is similar to the difference between computing entanglement entropy and Rényi entropy across a sphere.
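For a concrete comparison of the two velocities quoted above, take d = 4 (the arithmetic below is only an illustration of the formulas as written):

\[
v_B^{\rm planar}=\sqrt{\frac{4}{2\cdot 3}}=\sqrt{\tfrac{2}{3}}\approx 0.82,
\qquad
v_B^{\rm Rindler}=\frac{1}{4-1}=\frac{1}{3}\,,
\]

illustrating that the planar and Rindler values indeed differ substantially.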
On sparseness
The value of λ_L is sensitive to the spectrum of spins, not conformal dimensions, present in the CFT. Since sparseness refers to the latter, it is not directly related to the value of λ_L. First, sparseness is not necessary for λ_L = 2π/β. Consider adding, say, 10^100 scalar operators of fixed ∆ ≪ c to an otherwise sparse CFT. This leaves λ_L = 2π/β intact, but spoils sparseness. 12 Admittedly, this is a weak violation of sparseness: near ∆ ≈ c, the density of states is still sub-Hagedorn, so this modification does not ruin the validity of the 1/c expansion, and remains consistent with the existence of a bulk dual with Einstein gravity thermodynamics. 13

12 On the other hand, it is logically possible that imposing a gap for s > 2 operators implies sparseness; in other words, that all non-sparse CFTs must have an infinite tower of light higher spin operators. This is the operating assumption in [58]. 13 We thank Ethan Dyer for discussions on this point.
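To make "spoils sparseness" quantitative in one convention (assuming, for illustration, a d = 2 light-state criterion of the form ρ(∆) ≲ e^{2π∆}, which is not restated in this passage):

\[
\log\rho(\Delta)\;\gtrsim\;\log\!\left(10^{100}\right)\;\approx\;230\;>\;2\pi\Delta
\qquad\text{for}\qquad \Delta\;\lesssim\;37\,,
\]

so placing the extra scalars at any fixed ∆ below this value violates the light-spectrum criterion while leaving the spin content, and hence λ_L, untouched.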
Conversely, and more definitively, sparseness does not imply λ_L = 2π/β. For example, the presence of higher-spin operators in the light spectrum will change the value of λ_L. We will see explicit examples of sparse CFTs with λ_L ≠ 2π/β in section 6, where we consider symmetric orbifold CFTs, and in the next section, where we consider sparse 2d CFTs with higher spin currents.
3.1.4 Are pure theories of AdS 3 gravity chaotic?
V and W are primary operators dual to bulk fields carrying local degrees of freedom. In D > 3 bulk dimensions, such processes do not require the introduction of matter: gravitons can create geometry, and destroy entanglement. In D = 3, unlike in higher dimensions [89], there are no gravitational waves: gravitons in AdS 3 live at the boundary, and the dynamics of the stress tensor alone are not chaotic. This is to say that we should not consider λ L = 2π/β as a feature of pure AdS 3 gravity: only when mediating interactions between matter fields do the gravitons behave chaotically. It may well be that pure semiclassical AdS 3 gravity does not exist [90]. In any case, coupling the theory to matter, as in string or M-theory embeddings, is the only way to introduce non-trivial dynamics, including chaos. This is the sense in which we consider λ L = 2π/β to be a property of weakly coupled theories of 3D gravity.
Chaotic destruction of higher spin theories
We now add higher spin currents to the CFT. In d > 2, higher spin CFTs have correlation functions that coincide with those of free theories [23] so we take d = 2. We will first consider correlators V † W † V W β of generic V and W , generalizing the analysis of the previous section, and then take V and W to have charges scaling like c in a semiclassical large c limit. The latter will be more rigorous, as it allows us to compute some correlators exactly, at leading order in large c. The upshot is simple to state: in putative large c 2d CFTs with currents of spins s ≤ N for some finite integer N > 2, OTO correlators of local scalar primary operators violate the chaos bound.
The holographic dual of our conclusion is that would-be dual theories of AdS 3 higher spin gravity with finite towers of higher spin currents are pathological.
Chaotic correlators in higher spin 2d CFTs
Consider a set of holomorphic single-trace currents {J s (z)}, where s ≤ N for some N ∈ Z. We normalize our currents as for some N s . Altogether, the currents generate a W-algebra. There may be multiple currents of a given spin, but we leave this implicit. All but J 2 (z) = T (z) are Virasoro primaries.
As in previous section, we will use vacuum four-point functions to diagnose chaos in the thermal state on the cylinder with arbitrary β. V and W are W-primaries carrying charges q (s) under the higher spin zero modes, In this notation, q (2) = h, the holomorphic conformal weight. Generically, q (s) = 0 for all s.
What is the spectrum of a putative holographic higher spin 2d CFT? Like all holographic CFTs, the spectrum is inferred from the properties of a weakly coupled theory of gravity in AdS. In the present case, the bulk theory would be a higher spin gravity in AdS 3 which has G N 1, a set of higher spin gauge fields {ϕ s } whose boundary modes give rise to an asymptotic W symmetry algebra generated by {J s }, and some perturbative matter fields whose density is fixed as a function of G N .
At O(1/c), we may repeat the analysis of section 3. The prototypical higher spin CFT now has higher spin currents, but is otherwise structurally unchanged. In our toy model, decomposing A(z, z̄) in the V V − W W channel yields the same class of operator exchanges as in the non-higher spin CFT, only now including operators up to a maximum spin L = N (see figure 4). 14 This captures the fact that, generalizing the "graviton dominance" of non-higher spin theories, the universal contributions come from the exchange Witten diagrams of {ϕ_s}, whose conformal block decomposition leads only to exchanges of spin s ≤ N. 15 Given this spectral data, the argument of the previous section immediately implies that for generic V and W, the reduced amplitude takes the form (4.3), schematically

A_Regge(z, η) ≈ 1 − (#/c) f(η)/z^{N−1} + … .

The function f(η) is real and smooth, but need not be positive. In terms of x and t, this reads

⟨V W(t) V W(t)⟩_β ≈ 1 − (#/c) e^{(2π/β)(N−1)(t−x)} f(η) + … .

This function is precisely of the form of our toy function (2.23). Therefore, it violates the chaos bound:

λ_L = (2π/β)(N − 1) > 2π/β for N > 2 .

Likewise, the correlator implies a scrambling time

t_* = (β/(2π(N − 1))) log c .

14 Note that when W is finitely generated (i.e. the set {J_s} is finite), a primary under W branches into exp(2π√(rank(W)∆/6)) Virasoro primaries in the Cardy regime ∆ ≫ c. This gives an upper bound on the scaling near ∆ ≈ c. Since the d = 2 sparseness condition on the density of light states is exponential in ∆, the distinction between Virasoro primary and W-primary is irrelevant. 15 The bulk may also contain contact interactions; their contributions to A(z, z̄) would only modify the double-trace contributions, leaving the single-trace current exchanges alone. In particular, they could only serve to increase the effective spin L_eff in the Regge limit; so as to retain universality, we will ignore them.
As N increases, the correlator grows in an increasing number of sub-strips of the half-strip, arrayed in regular intervals with spacing linear in N . For N = 3, we drew this behavior in figure 3. It follows that: Unitary, holographic 2d CFTs with finite towers of higher spin currents do not exist.
In other words, the only finitely generated W-algebra consistent with unitarity in a large c CFT is the Virasoro algebra. This also rules out non-sparse CFTs of the sort discussed in section 3.1, which violate the sparseness condition with only low-spin operators. If infinitely-generated W-algebras with currents of bounded spin exist, then CFTs with these symmetries, too, are ruled out. Recalling the discussion in section 2.1, we may phrase this chaos bound violation in another way: holographic higher spin CFTs are non-unitary and acausal.
Are there exceptions? One family of large c CFTs with W N symmetry is the W N minimal models at negative level k = −N − 1, the so-called "semiclassical limit" [67,72]; but these are non-unitary. A large N symmetric orbifold Sym N (X), where the seed CFT X has W-symmetry W X and finite central charge, has a much larger chiral algebra, (W X ) N /S N , which becomes infinitely generated in the large N limit; it also has "massive" higher spin operators besides the currents.
There is one hypothetical class of CFTs that escape our conclusion: a CFT with no nonchiral light operators V and W . This would be dual to pure higher spin gravity, for example, which has only W-gravitons and black hole states. Given the difficulty in constructing duals of pure AdS 3 Einstein gravity, the viability of these theories is nevertheless dubious; we will return to this in the AdS/CFT context in section 4.3.
Chaos for heavy operators in W N CFTs
Given the extended conformal symmetry W, one may form conformal blocks with respect to W, rather than SO(3, 1). Instead of (3.3), we could have expanded in the s-channel as where F p,W (z) are the holomorphic blocks for exchange of a W-primary operator O p . A nice feature of F vac,W (z) is that at O(1/c), the only exchanges are the simple current exchanges: To gather more data on what goes wrong in holographic higher spin CFTs, we now consider operators V and W whose charges scale with c in the large c limit: In this limit, (4.8) is insufficient, because higher orders in 1/c come with positive powers of the charges. However, the point of computing these heavy correlators is that the sum over blocks simplifies: in particular, (4.7) is dominated by the semiclassical vacuum block, F vac,W , up to exponential corrections in c: The vacuum dominance follows from the definition of a sparse CFT, and has been supported by many computations [51,52,61,63].
To be concrete, we now take W = W N . The semiclassical vacuum block, which we call F vac,N , is known in closed-form for any N [52]. Then in the Regge limit, Compared to the case where V and W are light, this "re-sums" an infinite set of global blocks for multi-J s exchange. The resulting expressions for A Regge (z, η), exact to leading order in large c, are more intricate than our result (4.3). Nevertheless, they still violate the chaos bound. Our calculations are directly inspired by those of Roberts and Stanford [7] in the Virasoro case. We begin by reproducing, then reinterpreting, their calculation. 16
Warmup: Virasoro
We want to compute F Regge vac , the Regge limit of the semiclassical Virasoro vacuum block F vac ≡ F vac,2 . More precisely, we choose operators V and W whose holomorphic conformal JHEP10(2016)069 dimensions scale as in (4.10): In this semiclassical limit, the Virasoro vacuum block is [91] F (4.14) where Actually, the same result holds even in the "heavy-light limit" of [92], in which h v is held fixed as c becomes large. (And even in this regime, the vacuum dominance of the correlator is believed to hold [63].) [7] studies this object at small h w /c, whereby Imposing this limit on F vac and keeping only the terms inside the parenthesis that would contribute to linear order in , [7] write In the Regge limit, In the Lorentzian variables, this reads The initial decrease in time is exponential, with λ L = 2π/β. Note that the sign in the denominator is crucial: it ensures that the magnitude of the correlator decreases in time, for any choice of the ij . At even later times, the correlator has lost an order one fraction of its original value; this happens at t ≈ t * + x, where t * = β 2π log(c/h w ). As noted in [7], there is some subtlety in this interpretation. One point regards the scrambling time. In using the semiclassical conformal block, one holds h w /c fixed. Then strictly speaking, t * as defined above is parametrically smaller than the scrambling time one expects from Einstein gravity, t * = β 2π log c. This is presumably an artifact of the semiclassical limit. To wit, if one analytically continues (4.19) to a regime in which h w is held fixed but large in the large c limit, it exactly matches a shock wave calculation in 3D gravity in the same regime of dimensions. This suggests that (4.19) captures the correct physics even when W and V are not parametrically heavy, and that we may extrapolate (4.19) to with t * = β 2π log c. Then at early times t t * + x, the correlator decreases as exp(2πt/β), and at late times t ≈ t * + x decays to zero as exp(−4πh v t/β).
A more obvious point is that [7] only expanded α to linear order in ∝ h w /c, but don't fully expand the block to linear order. Doing so, one finds Expanding near 1 commutes with going to the second sheet, so the term of O( ) should simply be the Regge limit of the stress tensor exchange. Indeed, where we use the hypergeometric monodromy around z = 1, where s ∈ Z. We also note that expanding F vac (z) to all orders in and keeping only the leading term in small z at each order, the result re-sums to (4.18); this gives a partial justification for the method of [7].
W N
We now perform the calculation of (4.12) for W N with charges (4.10). F vac,N was derived in [51,64] for N = 3, and for arbitrary N in [52]. Its bulk interpretation is of a "heavy" field W generating a classical background with higher spin charge, in which the "light" operator V moves. Moreover, there is evidence that the W N vacuum blocks given below are also valid for q (s) v held fixed in the large c limit, and that even in that case, F vac,N is still the dominant saddle point of the correlation function [52]. As we discussed earlier, this is the case for Virasoro.
We will first do the computation for N = 3, where a single spin-3 current is added to the CFT. This case demonstrates all of the essential physics present at general N , an assertion we support with computations at N = 4 and at arbitrary N in appendix C.
In W 3 , there is only one higher spin charge, so we drop the superscript on q w . The semiclassical W 3 vacuum block is [51,52] where m 1 = 2 n 12 n 23 n 31 n 12 (1 − z) n 3 + cyclic , m 2 = 2 n 12 n 23 n 31 n 12 (1 − z) −n 3 + cyclic (4.25) 17 We ignore irrelevant factors of the UV cutoff of the CFT. Also, in this subsection, we use the normalization of [51], in which N3 = 5/6.
with n ij ≡ n i − n j . The n i are roots of the cubic equation where we have defined a rescaled charge, Note the absence of a quadratic term in (4.26), which implies i n i = 0. Note also that under q → −q, one root has odd parity while the product of the other two roots has even parity. At small q, When q = 0, one recovers the Virasoro block (4.14). We want to take (4.24) to the chaos regime. We study each piece of (4.24) in turn, starting with the term ((1 − z) 2 m 1 m 2 ) −hv/2 . This is the only surviving term for an uncharged probe, q v = 0. To first non-trivial order in q, Taking the Regge limit, we find It is straightforward to proceed to higher orders. Perturbation theory through O(q 16 ) is consistent with the following result: Turning now to the (m 2 /m 1 ) 3qv/2 term in (4.24), the first few orders read
Perturbation theory through O(q^16) is consistent with a closed-form resummation of this series. Putting this all together, restoring the c-dependence and plugging the block into (4.12), we find the OTO correlator in closed form, with z given in (2.9). Not unexpectedly, this violates the bound on chaos. The essential point is that every q_w appears with a factor of 1/(c z^2). This implies that the decay rate is exponential, controlled by a "spin-3 Lyapunov exponent"

λ_L^{(3)} = 4π/β .

As in the Virasoro case, the calculation breaks down at late enough times, signifying the decrease of the correlator. Introducing the "spin-3 scrambling time"

t_*^{(3)} = (β/4π) log(c/q_w) ,

the OTO correlator takes the form (4.38), with decay exponent set by h_v and q_v. Scrambling sets in when t ≈ t_*^{(3)} + x (long before t = t_* + x), at which point the correlator decays to zero as exp(−4πh_v t/β).
As we reviewed in section 2, because λ_L > 2π/β, either the correlator is not analytic in the entire half-strip, and/or it grows in time rather than decaying. Either is a fatal outcome for a theory. To put the problem in sharpest relief, we take t ≈ t_*^{(3)} + x (or h_w → 0) and q_v → 0, and place the operators in the arrangement (2.15), with a displacement angle θ = π/4. Then (ε*_{12} ε_{34})^4 = −4^4, and the correlator grows in time. Indeed, it diverges for t ≈ x + t_*^{(3)}. More generally, for operators diametrically opposite on the thermal circle, the correlator will diverge for any θ ∈ [0, π] such that Re(e^{4iθ}) < 0. This carves out two substrips, θ ∈ [π/8, 3π/8] and θ ∈ [5π/8, 7π/8],
of the full strip in which the correlator is non-analytic. Turning on q_v ≠ 0 does not evade this conclusion. One can also check that to linear order in small q_w/c, the result 18 matches the Regge limit of the spin-3 current exchange block, g_3(z) = z^3 {}_2F_1(3, 3, 6, z), using (4.23) and N_3 = 5/6.
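As a quick check of the phases quoted here (using only ε*_{12}ε_{34} = 4e^{iθ} from (2.14)):

\[
(\epsilon^*_{12}\epsilon_{34})^4 = 4^4 e^{4i\theta}\Big|_{\theta=\pi/4} = 4^4 e^{i\pi} = -4^4 ,\qquad
\mathrm{Re}\,(e^{4i\theta})<0 \;\Longleftrightarrow\; \theta\in\left(\tfrac{\pi}{8},\tfrac{3\pi}{8}\right)\cup\left(\tfrac{5\pi}{8},\tfrac{7\pi}{8}\right),
\]

reproducing the two sub-strips of non-analyticity quoted above.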
General spins
These pathologies of the W_3 result only get worse as we add higher spins. For explicit calculations at N = 4 and at arbitrary N, see appendix C. The results are consistent with general expectations: when V and W are charged under a spin-s primary, its contribution to the onset of chaos is characterized by the "higher spin Lyapunov exponent"

λ_L^{(s)} = (2π/β)(s − 1)

and to the onset of scrambling by the "higher spin scrambling time"

t_*^{(s)} = (β/(2π(s − 1))) log(c/q_w^{(s)}) .

When V and W carry charges of spins s ≤ s_max, the leading chaotic behavior is controlled by s_max.
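As a sanity check of these expressions (a sketch using the spin-2 and spin-3 cases already encountered, with q_w^{(2)} = h_w held of order one so that log(c/h_w) ≈ log c):

\[
\lambda_L^{(2)}=\frac{2\pi}{\beta},\qquad \lambda_L^{(3)}=\frac{4\pi}{\beta},\qquad
t_*^{(2)}\sim\frac{\beta}{2\pi}\log c,\qquad t_*^{(3)}\sim\frac{\beta}{4\pi}\log\frac{c}{q_w}\,,
\]

reproducing the Einstein gravity values for s = 2 and the W_3 results of the previous subsection for s = 3.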
In the bulk: ruling out AdS 3 higher spin gravities
Let us now invoke AdS/CFT. The dual higher spin gravities would contain the gauge fields {ϕ s } coupled to some matter. The gauge sector may be succinctly packaged as a G × G Chern-Simons theory for some Lie group G ⊃ SL(2, R). With AdS 3 boundary conditions, the boundary "gravitons" of G generate a W-algebra, call it W G , which is the Drinfeld-Sokolov reduction of G [93]. See figure 1. The bulk dual of our CFT conclusion is: Weakly coupled higher spin gravities with finite towers of higher spin fields are inconsistent.
This includes the oft-studied G = SL(N, R) theories [37], where W_{SL(N,R)} = W_N. In fact, our W_N calculations of section 4.2 double as direct bulk calculations. This follows from a series of recent works [45,46,51,52,64]. It has been firmly established that the semiclassical vacuum block is computed in the bulk by an appropriate Wilson line operator in the Chern-Simons description. Importantly for us, this relation is believed to hold for any potential theory of SL(N) gauge fields coupled to matter, as a matter of gauge invariance [45]. Given this, the CFT and bulk calculations are identical, and the inconsistency of SL(N) higher spin gravity is explicitly established by our calculations. An analogous statement holds for any bulk algebra G.
Higher spin shock waves. An equivalent way of computing the OTO correlator in the bulk is not via analytic continuation of a Euclidean correlator, but by directly probing a backreacted shock wave solution [1]. At early times, consider perturbing a planar BTZ black hole by an operator (W ) carrying higher spin charge. Our calculation shows that the infalling higher spin quanta generate a "higher spin shock wave" that, in the absence of an infinite tower of higher spin gauge fields, acts acausally. For the two-sided BTZ black hole, the shock wave destroys the entanglement of the thermofield double state too fast. Said another way, these higher spin gravities are too-fast scramblers.
Note that, like a higher spin black hole [94], a higher spin shock wave is no longer purely geometric: the higher spin fields ϕ_s are sourced by W. Just as the shock wave line element picks up a g_uu component along the null direction u, the spin-s tensor fields ϕ_s = ϕ_{µ_1 µ_2 ... µ_s} should acquire components of the schematic form ϕ_{uu...u} ∼ h^{(s)}(u, x), with spin-dependent profiles h^{(s)}(u, x) determined by the field equations. These couple to null spin-s currents J_{uu...u} ∝ q_w^{(s)} induced by the motion of W. In a higher spin shock of a two-sided BTZ black hole, acausality will be manifest as a causal connectivity between the left and right CFTs in the perturbed thermofield double state, similar to the effect of a time-advance in higher derivative gravity. 19 This would be visible in the analytic structure of a two-sided correlator of a probe dual to V, in the shock wave background. 20 The properties of scattering through shock waves may also be phrased using amplitudes. 21 The effect of a particle traveling through a shock wave is determined by the high-energy behavior of the four-point, tree-level scattering amplitude, call it A_tree (e.g. [80]). In flat space or AdS gravity in d > 3 dimensions, finite numbers of higher spin fields, massive or massless, lead to unacceptably fast growth of A_tree in the high energy regime of large s and fixed t [21]. The tree-level amplitude for exchange of a spin-J field grows at large s like G_N s^J, whereas the total amplitude A_tree must not grow faster than

A_tree(s, t) ≲ G_N s^2 .   (4.46)

In AdS_{d>3}, the s^J behavior holds for spin-J exchange between external fields of any spin, including pure graviton scattering. In AdS_3, pure gauge field scattering is trivial. However, spin-J exchange between external matter fields still behaves as A_tree ∼ G_N s^J. In other words, AdS_3 Witten diagrammatics with external matter have the same high-energy scaling as in higher dimensions. In the Regge limit, the Mellin amplitude for massless spin-J exchange between pairs of scalars V and W in AdS_3 may be read off from the results of [97].

19 See e.g. [21], section 6. We thank Aitor Lewkowycz for discussions on this point. 20 A more tractable construction of the higher spin shock wave would use the Chern-Simons description. In that language, a two-sided correlator would be computed as a two-sided Wilson line. In carrying out such a calculation, one must choose an appropriate gauge for the shock wave connections; [95] motivates a specific gauge choice for the pure thermofield double state, which would also be useful in the shock wave context. 21 Note also the recent work in massive 3D gravity, [96].
JHEP10(2016)069
v,w and N J were defined in (4.1)-(4.2). (Note that for J = 2, the coefficient of G N s 2 is positive when t < 0 for all ∆ v , ∆ w ≥ 0, consistent with causality.) A theory with fields of spin J ≤ N with finite N > 2 violates (4.46), see figure 6. This growth with s is the AdS 3 manifestation of the CFT violation of the chaos bound in (4.3).
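The growth comparison in (4.46) can be illustrated with a minimal numerical sketch. This is not from the paper: the value of G_N, the list of spins, and the sampled energies below are placeholders, and only the power-law scaling is meaningful.

```python
# Schematic illustration of eq. (4.46): a spin-J exchange grows like G_N * s**J at large s,
# while the full tree amplitude is bounded by ~ G_N * s**2.
# All normalizations here are placeholders; only the power-law growth matters.

G_N = 1e-6          # placeholder coupling
spins = [2, 3, 4]   # a finite tower whose top spin exceeds 2

for s in [1e2, 1e4, 1e6]:                 # large center-of-mass energy squared, fixed t
    bound = G_N * s**2
    for J in spins:
        exchange = G_N * s**J             # schematic spin-J exchange contribution
        print(f"s={s:8.0e}  J={J}  exchange/bound = {exchange / bound:9.3e}")

# The J = 3, 4 ratios grow without bound as s increases: a finite tower with top spin > 2
# cannot respect A_tree(s,t) <~ G_N s^2, mirroring the chaos-bound violation in the CFT.
```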
Higher spin gravitational actions from CFT. An open question in the higher spin community has been whether one can consistently couple matter to SL(N) higher spin gravity. The only example we know is SL(N) Vasiliev theory - that is, Vasiliev theory at λ = ±N with the gauge fields of spin s > N truncated - which contains a scalar field. This theory is holographically dual to the "semiclassical limit" of the W_N minimal models, as recently reviewed in [52]; this is a non-unitary limit. Our result shows that it is impossible to construct other, non-Vasiliev theories of SL(N) gauge fields coupled to matter that are actually consistent with CFT unitarity.
Pure higher spin gravity. Pure AdS 3 higher spin gravity, with no matter, is not ruled out by our arguments. The discussion of section 3.1 could be applied essentially verbatim to this case. While pure higher spin gravity, like pure Einstein gravity, is a conceptually interesting theory to consider, it has no dynamics. Based partly on interpretational difficulties in pure gravity, we suspect that weakly coupled pure higher spin gravities, and their would-be CFT duals, do not exist. 22 An obvious question is what happens when there is an infinite tower of massless higher spin fields. This will be the subject of the next section.
Regge behavior in W ∞ [λ] CFTs and 3D Vasiliev theory
We turn now to the most interesting case of a higher spin theory: one with an infinite tower of currents. In particular, we consider a 2d CFT with a current at each spin s = 2, 3, . . . , ∞, that altogether furnish a classical W ∞ [λ] symmetry. This is the asymptotic symmetry algebra of hs[λ] Chern-Simons theory in AdS 3 , which also forms the gauge sector of 3D Vasiliev theory [39,40].
With the same justification as in earlier sections, we assume that the W ∞ [λ] vacuum block at O(1/c) in the Regge limit is sufficient to read off λ L . In fact, we will be able to derive the vacuum block for all z, not only in the Regge limit. Due to the infinite tower of spins, we need to perform a resummation. The result will therefore be highly sensitive to the interrelations among the coefficients of the different terms in the sum, which are fixed by the higher spin charges of V and W . These are constrained to furnish a representation of W ∞ [λ]. Happily, we will find that the sum over spins "Regge-izes" to give a result consistent with the chaos bound: λ L = 0.
Resumming higher spins in W ∞ [λ]
With normalization (4.1), the block reads as the sum over current exchanges in (5.2). To evaluate this, we need to fix q_w. V and W are primary operators, which means that they furnish a highest-weight representation of W_∞[λ]. Highest-weight representations of W_∞[λ] can be specified by Young tableaux; these may be thought of as SU(N) Young tableaux, analytically continued to non-integer N. The simplest choice is to take V = W in the so-called "minimal" representation, or fundamental representation, which we denote V = W = f. This is an especially pertinent choice: the single-particle states of the scalar field in the Vasiliev theory carry these quantum numbers.^23 For the representation f, the higher spin charges for arbitrary s were derived in section 5 of [99]; the ratio of charges that enters the block is given in (5.3). Note that in a normalization in which N_2 = 1/2, as is typical for the Virasoro algebra, the conformal dimension is h(f) = q^(2)(f) = (1 + λ)/2. So it only makes sense to consider λ > -1 in this calculation, otherwise V and W have negative norm.^24

We now plug (5.3) into (5.2) and perform the sum. We do this by using the integral representation of the hypergeometric function, then exchanging the order of the sum and integral. For various rational values of λ, the sum can be done. Upon integrating, we infer an elegant closed-form result, (5.6), valid for general λ. This formula admits a nifty proof. Labeling our two representations of F^(1)_vac,∞(z|λ) as A and B, where A is the sum (5.2) and B is the closed form (5.6), we want to prove that A = B. Writing out the series expansion of the hypergeometric functions in A, collecting terms of a given power of z, and performing the sum over s, one finds the coefficient, call it A_p, of z^p. On the other hand, the series expansion of B yields coefficients B_p.

^22 For a somewhat different perspective, see [98], which claims to compute the exact SL(N, C) higher spin gravitational path integral over manifolds with solid torus topology. We only note that the result is strongly constrained by several assumptions about the allowed saddle points in the path integration; also, the result is only determined up to an additive constant which is critical for distinguishing among possible dual CFTs.
^23 There are also conjugate representations with charges obtained by taking λ → -λ. In Vasiliev language, this is the scalar in alternate quantization.
^24 Note, though, that setting λ = -N for N ∈ Z, all charges q^(s>N) vanish and we recover our previous result λ_L = 2π(N - 1)/β for W_N. This is to be expected from the fact that W_∞[±N] ≅ W_N after modding out generators of spins s > N.
To show that A_p = B_p, we use a welcome identity for a closely related 4F3 (see e.g. [100], p. 561). Note that all of its parameters besides the 1 are shifted by 1 relative to the 4F3 of interest in (5.8). In fact, the two 4F3's share a simple relation, which is clear by series expansion. With the closed form of F^(1)_vac,∞(z|λ) in hand, we can now take its Regge limit with ease. For λ ∉ Z, the monodromies under (1 - z) → e^{-2πi}(1 - z) can be written explicitly. Taking the Regge limit z → 0, and recalling that we restrict to λ > -1, the leading term is the constant coming from the log. The sum over spins gives a softer behavior than any term in the sum; indeed, the result does not grow at all! Thus, we conclude that the OTO correlator ⟨V†W†VW⟩_β is characterized by a vanishing Lyapunov exponent: λ_L = 0. (5.15)
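The way a purely logarithmic monodromy translates into λ_L = 0 can be spelled out in a short worked form. The coefficient c_λ below is a stand-in for the precise λ-dependent constant of (5.14), and the identification of the cross-ratio with an exponential of time is the standard Regge-limit dictionary (position dependence suppressed); this is a sketch, not a new computation.

```latex
% A log-type monodromy produces only a z-independent shift on the second sheet:
\[
F^{(1)}_{\text{vac},\infty}(z|\lambda) \;\supset\; c_\lambda \log(1-z)
\;\xrightarrow{\;(1-z)\,\to\,e^{-2\pi i}(1-z)\;}\;
c_\lambda \log(1-z) \;-\; 2\pi i\, c_\lambda .
\]
\[
\text{Regge limit: } z \sim e^{-\frac{2\pi}{\beta}t} \to 0
\;\;\Longrightarrow\;\;
F^{(1)}_{\text{vac},\infty}\Big|_{\text{2nd sheet}} = -2\pi i\, c_\lambda + O(z),
\]
so the $1/c$ correction to $\langle V^\dagger W^\dagger V W\rangle_\beta$ contains no exponentially
growing piece, i.e. $\lambda_L = 0$.
```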
Redux: conformal Regge theory
It was somewhat remarkable that the infinite sum over higher spin exchanges defining F (1) vac,∞ (z|λ) could be performed, yielding the simple expression (5.6) whose Regge limit was trivial to extract. We now perform a different computation: we instead take the Regge limit of each global conformal block appearing in the sum (5.2), before performing the sum, then keep the leading term near z = 0. This is essentially a realization of conformal Regge theory [83].
We start from (5.2) with charges (5.3). The monodromy of 2F1(s, s, 2s, z) around z = 1 is given in (4.23). We have s ≥ 2, so the second term dominates as z → 0. Keeping only the leading order term at each spin and plugging into (5.2), the sum we want to perform is the one appearing in (5.16).
Each term is more divergent than the last. Performing the sum, the right-hand side of (5.16) can be evaluated in closed form. Expanding at small z and recalling that λ > -1, the leading term precisely agrees with (5.14).
In conformal Regge theory, knowledge of the spectrum of exchanges and their couplings to the external operators is sufficient to compute the Regge limit of the correlation function, in the form of an effective "Regge pole" of spin j living in the complex spin plane. We have traded an infinite sum over higher spin current exchanges for a single effective exchange of j = 1. Presumably the same result could have been reached by directly employing the techniques developed in [83], although we did not use them here.
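In equations, the trade described above can be summarized as follows. This is a heuristic restatement using the standard relation between an effective Regge spin j and the Lyapunov exponent, not an independent derivation.

```latex
\[
z^{\,1-j} \;\sim\; e^{\frac{2\pi}{\beta}(j-1)\,t}
\quad\Longrightarrow\quad
\lambda_L = \frac{2\pi}{\beta}\,(j-1).
\]
\[
\text{Individual current exchange: } j = s \;\Rightarrow\; \lambda_L^{(s)} = \frac{2\pi}{\beta}(s-1);
\qquad
\text{resummed } W_\infty[\lambda] \text{ block: } j_{\text{eff}} = 1 \;\Rightarrow\; \lambda_L = 0 .
\]
```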
The analogous computation in CFTs with string theory duals -a sum over higher spin states dual to Regge trajectories of the closed string -is a hallmark of their UV finiteness. It is tantalizing to see a similar structure operating here.
Comments
Holographic interpretation. Unlike the case of SL(N )-type higher spin gravities, a weakly coupled hs[λ] higher spin theory is causal. It obeys the chaos bound, with λ L = 0 for all λ. Despite the fact that a dual CFT need not be free, these bulk theories behave similarly to Vasiliev theories in d > 2: their dynamics is non-chaotic, and thus, in a specific sense, integrable.
The 3D Vasiliev theory is the only known theory of hs[λ] higher spins that couples to matter. Our results suggest that the higher spin black holes of [101], with an infinite tower of higher spin charges, cannot be formed in Vasiliev theory by throwing higher spin quanta into a BTZ black hole. Perhaps they cannot form at all.
In our calculation, an infinite sum over higher spin exchanges yields a result with a causal Regge limit. The Vasiliev theory has far fewer fields than string theory -indeed, it has no massive higher spin states at all -but nevertheless exhibits a stringy structure, as discussed in the introduction. It would be fascinating to try to find a specific string theory in which the non-supersymmetric Vasiliev theory embeds.
Other representations of W_∞[λ]. λ_L is supposed to be independent of the choice of V, W. However, since our calculation is sensitive to the precise choice of charges, we repeat the derivation for a different choice. In appendix D, we take V and W to be distinct operators, with V = f in the minimal representation as before, and W now in the antisymmetric two-box representation, asym_2. With these charges, the result for F^(1)_vac,∞(z|λ) on the first sheet is again inferred in closed form. Taking its Regge limit, the constant term from the log again dominates, giving λ_L = 0.

λ = 1 and free bosons. The case λ = 1 is special: it has trivial monodromy, going like z^2, not z^0, in the Regge limit. This is related to the fact that at λ = 1, the algebra linearizes. The linear algebra, often known as W^PRS_∞, has a simple monodromy, and the connected part leads to a negative Lyapunov exponent, λ_L = -4π/β. Note also that if one performed the Regge summation in (5.16) at fixed λ = 1, one would find a leading term of O(z^{3/2}), as opposed to the correct scaling O(z^2). This is secretly because the constant term of (5.14) vanishes at λ = 1, and in the Regge analysis, the subleading terms are not to be trusted. This example highlights the fact that the Regge technique is not always applicable: in particular, the same mismatch happens for the free O(N) bosons in all d [104]. This may be true of free theories in general.
An upper bound on λ? Note that the sign of (5.14) depends on whether λ > 1. Because there is no z-dependence, and because it is imaginary, this term is not constrained by the chaos bound to be sign-definite. Nevertheless, in section 5.3 we do derive a bound on λ, without using chaos. Namely, we prove that unitary, large c CFTs with W ∞ [λ] symmetry can only exist for λ ≤ 2.
W N minimal models. One family of known, unitary CFTs with large c and W ∞ [λ] symmetry is the 't Hooft limit, introduced by Gaberdiel and Gopakumar, of the W N minimal models [105,106] . The limit CFTs have W ∞ [λ] symmetry with 0 ≤ λ ≤ 1. As this is a large N limit of a soluble CFT, it is unsurprising that it would have λ L = 0. What we have shown is that this is a feature of the W ∞ [λ] algebra, independent of any particular CFT realization.
5.2 λ_L = 0 at finite c

One might worry that in the presence of an infinite tower of higher spin currents, using the vacuum block alone to diagnose chaos misses something. We now provide evidence to the contrary, in the specific context of the 't Hooft limit of the W_N minimal models. These may be defined via the coset construction su(N)_k ⊕ su(N)_1 / su(N)_{k+1}. Correlation functions in the 3D Vasiliev theory may be computed using the W_N minimal models in the 't Hooft limit, where λ is identified with the λ of the bulk. The bulk scalar field, which has m^2 = -1 + λ^2, is taken in standard quantization, and is dual to the minimal model primary (f, 0).^25 Note that 0 ≤ λ ≤ 1. Euclidean correlators in the W_N minimal models are known for all values of N and k (e.g. [108-110]). We can analytically continue these to compute the OTO correlators of interest, which contain all exchanges, vacuum and otherwise. The calculation to follow supports many of the statements in this paper: namely, that we can use the 1/c vacuum block alone to diagnose λ_L; and that higher orders in 1/c do not compete with λ_L so derived. As in section 5.1, we take V and W both in the minimal representation. This identifies them with the minimal model primaries of type (f, 0). The (f, 0) operator, a scalar, has a conformal dimension which becomes ∆ ≈ 1 + λ in the limit. In [108], the four-point function ⟨V V† W W†⟩ was computed in closed form, (5.27), written in terms of two functions h_1 and h_2, with λ now defined at finite N and k.

^25 That λ is defined this way, and not as λ = N/(N + k), is required by the choice of standard quantization. This ensures the consistency of the bulk 1-loop free energy with the O(N^0) CFT central charge [107].

We want to take the Regge limit of (5.27) normalized by the two-point functions, i.e. of the reduced amplitude.^26 To begin, consider the h_2 terms. In the Regge limit, these vanish: up to monodromy coefficients, they scale as a power of the cross-ratios whose exponent is positive for all physical N, k; in the 't Hooft limit the relevant combination is (k + 2N)/(k + N). Turning now to the h_1 terms: using (k + 2N)/(k + N) - ∆ = 1/(N - λ), in the large N limit the h_1 contribution organizes into a leading piece plus terms, denoted by . . . , of higher order in 1/N and in z, z̄. Noting that c = N(1 - λ^2) + O(N^0) in the 't Hooft limit, the resulting 1/c expansion of the chaotic correlator precisely matches our result from F_vac,∞ alone. Moreover, we learn something important: the leading term (5.32) has an expansion to all orders in 1/c. This strongly supports the notion that λ_L = 0 can indeed be read off by taking large c first.
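The statement c = N(1 - λ^2) + O(N^0) used above can be spot-checked symbolically. The central charge formula below is the usual W_N minimal model coset expression, quoted here as an assumption, and λ is taken as N/(N + k) purely for this large-N check (the finite-(N, k) subtleties discussed in footnote 25 do not affect the leading term).

```python
import sympy as sp

N, lam = sp.symbols('N lambda_', positive=True)
k = N * (1 - lam) / lam                     # solve lambda = N/(N+k) for k ('t Hooft limit)

# Standard W_N minimal model (coset) central charge, assumed here:
c = (N - 1) * (1 - N * (N + 1) / ((N + k) * (N + k + 1)))

leading = sp.limit(c / N, N, sp.oo)
print(sp.simplify(leading))                 # -> 1 - lambda_**2, i.e. c = N(1 - lambda^2) + O(N^0)
```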
Furthermore, if we expand in s-channel conformal blocks, we can trace (5.34) back to the W_∞[λ] vacuum block. Consider again the reduced amplitude.
In the VV-WW channel, this decomposes into W_∞[λ] conformal blocks F_p,∞(z|λ), as in (4.7), in the 't Hooft limit. Focusing on the holomorphic pieces to save space, we find the decomposition (5.36), where ∂^(i) acts on the i'th parameter. The last line of (5.36) encodes the contribution of double-trace operators [VV]_{n,s} and [WW]_{n,s}, which have holomorphic weights h = 1 + λ + n + s. Together with its anti-holomorphic part, this is subleading in the Regge limit, cf. (5.31), so we focus on the first lines. The derivatives can be simplified easily, using the series representation of the hypergeometric function. Plugging the simplified form back in, we find that the 1/c piece is precisely F_vac,∞(z|λ). This example provides evidence that the 1/c vacuum block is sufficient to determine λ_L in general holographic CFTs.
5.3 No unitary W ∞ [λ] CFTs for λ > 2
This section has nothing to do with chaos. However, we include it because it provides a complementary constraint on the space of large c higher spin theories.
We now prove the following claim: Unitary, large c 2d CFTs with a W ∞ [λ] chiral algebra with λ > 2 do not exist.
The proof is simple. We observe that while the classical (i.e. large c) W_∞[λ] algebra is defined for any λ, it is actually complex for λ > 2.^27 Mathematically, this is perfectly acceptable. However, if a chiral algebra describes the current sector of a CFT, its structure constants are identified with three-point coefficients of currents. In a diagonal basis of two-point functions with real and positive norms, unitarity forces these coefficients to be real. CFT three-point functions are holographically computed as cubic scattering amplitudes in AdS. So the dual claim is that 3D Vasiliev and pure hs[λ] higher spin gravities with λ > 2 have imaginary gauge field scattering amplitudes.

Figure 7. In hs[λ] higher spin gravity, which also forms the pure gauge sector of 3D Vasiliev theory, an infinite set of three-point scattering amplitudes is imaginary for any value λ > 2. This follows from the existence of a classical W_∞[λ > 2] asymptotic symmetry algebra, which is complex.
This statement is independent of the matter sector, and extends to any bulk theory, known or unknown, containing a hs[λ] subsector of higher spin gauge fields.
Onto the proof. In [67], the structure of the quantum (i.e. finite c) W_∞[λ] algebra was studied, and several structure constants C^k_ij - that is, the three-point coefficients of higher spin currents of spins i, j and k - were determined. It was convincingly argued that all C^k_ij are determined in terms of two free parameters, which can be taken to be c and C^4_33. In the diagonal basis (4.1), normalizing the currents as N_s = 1/s,^28 C^4_33, which we denote γ, is related to λ and c by a closed-form expression given in [67]. For fixed c, there is a range of values for which γ^2 < 0. Taking c → ∞ yields the value of γ in the classical algebra, which we denote W^cl_∞[λ]: up to the positive prefactor 64/5, lim_{c→∞} γ^2 is proportional to (λ^2 - 9)/(λ^2 - 4). Clearly, γ^2 < 0 when 2 < λ < 3. (5.41) As explained above, this rules out W^cl_∞[2 < λ < 3] as the chiral algebra of a unitary CFT. We can now start climbing our way up the spin ladder.

^27 At λ = ±N, W^cl_∞[±N]/χ_N ≅ W_N [67], where χ_N is the ideal consisting of generators of spins s > N. The latter algebra obviously exists, and is real. Our statement applies to the W^cl_∞[±N] algebra before truncating, when it is still an infinite-dimensional algebra.
^28 Note that the choice N_s = 1/s is the same normalization as in [67], but their use of the symbol N_s is different.
The pattern at higher spins is clear. In appendix E we exclude the entire range λ > 2, as OPE coefficients involving successively higher spins become imaginary for successively higher λ. In particular, we show, using only general properties of the algebra, that (C^{s+1}_{3s})^2 < 0 when 2 < λ < s. Taking s → ∞ proves our claim. When λ → ∞, one should be more careful. For example, if one takes the limit λ → ∞, c → ∞ with c/λ^3 fixed, the structure constants are all real (see e.g. section 4.2 of [111]). Indeed, there is a well-known unitary theory which realizes these structure constants as three-point couplings: namely, the d = 6, (2,0) superconformal field theories with Lie algebra su(N), where N = λ → ∞. The aforementioned limit of W^cl_∞[λ] is the chiral algebra of the protected sector of the (2,0) theory at large N, and thus determines its OPE. It seems likely that CFTs with vector-like growth c ∝ λ → ∞ cannot exist.
AdS/CFT sans chaos
So far, we have studied theories with varying amounts of chaos. Vasiliev theory aside, the landscape of AdS/CFT contains other interesting and more familiar examples of theories with λ L far below its upper bound. We present some cases here.
Chiral CFTs
Chiral CFTs are d = 2 CFTs with c_R = 0 and c_L = 24k where k ∈ Z. These are perhaps the most symmetric of all CFTs: every operator is either a current or a descendant of a current. In other words, the CFT consists solely of the vacuum module of an exotic W-algebra with c = 24k. Due to the high degree of symmetry, such theories should not be chaotic, for any value of k. Indeed, given some holomorphic primary current J_s(z) with conformal weight h = s ∈ Z, its four-point function is constrained to be a rational function of z, with constants c_n determined by the OPE [112,113]. This has trivial monodromy around z = 1, hence λ_L = 0. The same conclusion obviously holds for holomorphically factorized CFTs. It has been suggested that the CFT dual to pure AdS_3 quantum gravity holomorphically factorizes [114]. This would imply a hidden infinite-dimensional symmetry among the tower of BTZ black hole states. The result λ_L = 0 for factorized CFTs would seem to be in tension with a value λ_L = 2π/β associated to chaotic evolution in Einstein gravity. Likewise for the behavior of two-interval mutual information after a global quench [115]: whereas chiral and factorized CFTs exhibit a "dip" in their entanglement entropy after the quench, classical AdS_3 gravity shows no such effect, as the entanglement "scrambles" maximally.^29 As explained in section 3.1, this tension is illusory, for the same reason that factorization of the dual CFT is not patently false: pure quantum gravity is topological. Analogous comments apply to chiral gravity.

^29 The presence of strong chaos and entanglement scrambling have recently been argued to be different manifestations of the same underlying physics [12].
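To make the monodromy argument explicit, here is a schematic version; the precise finite sum and its coefficients are those of [112,113], and the integer exponents a_n, b_n below are placeholders.

```latex
\[
\langle J_s(\infty)\,J_s(1)\,J_s(z)\,J_s(0)\rangle \;=\; \sum_{n} c_n\,\frac{z^{a_n}}{(1-z)^{b_n}},
\qquad a_n,\,b_n \in \mathbb{Z},
\]
\[
(1-z)\;\to\;e^{-2\pi i}(1-z):\qquad
\frac{z^{a_n}}{(1-z)^{b_n}}\;\mapsto\;\frac{z^{a_n}}{(1-z)^{b_n}}
\;\;\Longrightarrow\;\; \text{no second-sheet growth, i.e. } \lambda_L = 0 .
\]
```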
The D1-D5 CFT
Consider a symmetric orbifold CFT, Sym N (X) CFT, for some seed CFT X. All such CFTs have a large chiral algebra that becomes infinitely generated at large N . We again expect these theories to be non-chaotic for generic choices of V and W . We note that all Sym N CFTs are sparse in the precise sense of [62].
Technology for correlators in Sym_N CFTs has been developed in e.g. [116-118]. We now perform an OTO correlator calculation in a CFT especially relevant for holography, namely, the D1-D5 CFT at the symmetric orbifold point, Sym_N(T^4). This is an N = (4, 4) SCFT with a global SO(4)_I ≅ SU(2)_I1 × SU(2)_I2 symmetry, and central charge c = 6N. In terms of the dual string description, N = N_1 N_5, where N_1 and N_5 count D1- and D5-branes, respectively. Each of the N copies is a c = 6 theory of four real bosons X^i and their fermionic superpartners, where i = 1 . . . 4.
In [118], a four-point function of two twist operators and two non-twist operators (among others) was computed in this theory. In particular, consider two Virasoro primary operators, O_d^{AB} and Φ_dil. O_d is an exactly marginal scalar primary, which is a superconformal descendant of a certain twist operator. A, B = 1, 2 are indices of the SU(2)_I1. Φ_dil is the BPS operator dual to the dilaton in the bulk; the index κ denotes the copy of (T^4)^N. See [118] for further details. The four-point function of these operators was computed in [118]; we now take its Regge limit. Except for the constant terms, every term picks up a minus sign as we cross the cut. Expanding near z, z̄ = 0 on the second sheet, there is no divergence, and hence no chaos. This is consistent on general grounds with the existence of a higher spin symmetry enhancement of the tensionless type IIB string in AdS_3 × S^3 × T^4.
Slightly broken higher spin theories and 1/c corrections
An obvious question is whether λ_L receives 1/c corrections, and whether this idea is even sensible. (A first pass in d = 2 CFT was recently taken in [17].) One affirmative argument comes from considering slightly broken higher spin CFTs. We use this term in the original sense of [77]: these are CFTs with some large parameter Ñ ∼ c and a coupling λ̃, with higher spin symmetry breaking of the schematic form ∂ · J_s ∼ (λ̃/Ñ)[JJ]. This maps to a quantum breaking of bulk higher spin gauge symmetry. The canonical family of such theories is the set of d = 3 bosonic and fermionic O(N) models at large N and their Chern-Simons deformations [29,30], where Ñ ∝ N and λ̃ is fixed by the 't Hooft coupling λ ≡ N/k. How does chaos develop in these CFTs? We assert that, for finite temperature states on the cylinder S^1 × R^{d-1}, such theories have λ_L = 0 + O(1/Ñ). (6.5) This motivates an extension of the usual definition of λ_L to higher orders in 1/N. In contrast, CFTs with "classical" higher spin symmetry breaking, like planar N = 4 SYM, have λ_L ≈ f(λ) + O(1/N). One way to understand (6.5) for the O(N) models is from the bulk 4D Vasiliev description. The scalar and higher spin gauge fields pick up mass shifts only through loops. But a shock wave calculation is classical. (6.5) may be generalized to other spatial manifolds, such as H^{d-1}, where λ_L ≠ 0 in the free theory at λ̃ = 0: the statement (6.6) is that λ_L equals its free-theory value up to O(1/Ñ) corrections. This allows us to perform an explicit check: we consider the d = 3 critical O(N) model in Rindler space, reading off λ_L for the vacuum four-point function of the scalar operator J_0 using the techniques of this paper. This correlator has been computed in both the free (cf. (5.21)) and critical models [119]; in the latter case it is given by (6.7). Under the Lorentzian continuation (2.6), the half-integer powers of v pick up a minus sign; in the Regge limit, both (5.21) and (6.7) give ⟨J_0 J_0 J_0 J_0⟩ ∼ z^2, i.e. λ_L = -2, confirming (6.6) for this particular case.
A related statement pertains to any two holographic large c CFTs related by a double-trace flow, as obtained by swapping standard (∆_+) for alternate (∆_-) boundary conditions of a bulk field in AdS. If we denote these CFTs' respective Lyapunov exponents as λ_L^{∆_±}, then a natural claim is that λ_L^{∆_+} - λ_L^{∆_-} = O(1/c), where c ∼ 1/G_N. It would be interesting to verify this explicitly, and (if true) to determine the sign of the correction.
Discussion
We conclude with some additional directions for future work, and final reflections on the potential power of chaos in classifying conformal field theories.
Strings from chaos, and the CFT landscape. We have explored the way in which λ L depends on the OPE data, and seen that demanding λ L ≤ 2π/β constrains the spectrum of higher spin currents. What about other higher spin operators? String theory and AdS/CFT suggest that, at least for a wide class of CFTs, primary operators may be arranged in Regge trajectories. Given the sensitivity of λ L to the spin spectrum, it seems possible that demanding Reggeization of OTO correlators would lead to a CFT derivation of this picture. 30 Quite generally, it would be extremely useful to discover the path of least action between the set of spectral data and λ L . This would create a manifest link between the Euclidean bootstrap built on crossing symmetry, and the Lorentzian bootstrap built on causality and chaos. 31 Recently, it has been argued that a kind of "average" measure of chaotic behavior of general 2d CFTs can be directly related to the second Rényi entropy, S 2 , for two disjoint intervals separated in space and time [12]. Given that S 2 for two intervals is known to be proportional to the torus partition function, this implies that the spectrum alone, and not the OPE coefficients, can determine λ L and perhaps other broad features of chaos. 32 It would be fruitful to better understand the relation between S 2 and chaos in purely CFT terms. On the other hand, while an average notion of chaos would be useful, so would understanding the distribution of leading Lyapunov exponents over the space of possible OTO correlators in a given CFT. In generic CFTs, unlike in holographic CFTs, the value of λ L does depend on the choice of V and W .
If λ L is hiding in S 2 , where else is it hiding? How does λ L relate more generally to entanglement measures? In 2d CFT, do correlators on surfaces of higher topology contain complementary information about chaos?
We also would like to understand the evolution of chaotic data as we move through the space of CFTs via RG flows or marginal deformations. Along a conformal manifold, λ_L will be a smooth function of the moduli. In the D1-D5 CFT with T^4 or K3 target, for instance, there is a marginal deformation which triggers passage from the orbifold point to the supergravity point in the moduli space. Along these lines of fixed points, λ_L is a function of the marginal coupling λ, interpolating between Sym_N and supergravity regimes: 0 ≤ λ_L(λ) ≤ 2π/β. Obtaining λ_L(λ) in closed form is a distant goal, but a perturbative calculation near λ = 0 seems within reach. (See [122] for a recent perturbative spectral calculation.) It is inspirational to consider what role OTO correlators could play in revealing the emergence of a classical spacetime description from CFT.

^30 We thank Tom Hartman, Dan Roberts and Douglas Stanford for conversations on this topic.
^31 Some recent papers in this direction in the context of rational 2d CFT are [120,121].
^32 In contrast, entanglement entropy in the vacuum is not directly related to λ_L: while the Ryu-Takayanagi result for intervals in vacuum follows from sparseness and a mild assumption about large c growth of OPE coefficients [61], λ_L = 2π/β does not.
Higher spin AdS 3 /CFT 2 . While SL(N )-type higher spin gravities may capture some crude aspect of how stringy geometry works, their acausality and dual CFT non-unitarity render them unfit for studying dynamical processes, the black hole information paradox, singularity resolution, and so on. Still, it is perhaps worth quantifying in some detail what goes wrong when trying to directly construct an effective gravitational action coupling SL(N ) gauge fields to matter. For comments based on experience with the Vasiliev formalism, see [123]. Seeking explicit violations of causality via two-sided Wilson lines in higher spin shock wave backgrounds would also be worthwhile.
We have also shown that 3D Vasiliev theory has imaginary scattering for all λ > 2. There are reasons to suspect that the range 1 < λ < 2 is inconsistent too, based on representation theory [124] and the absence of known, unitary holographic CFTs in this range of λ. 33 Such an inconsistency must come from the scalar coupling to the higher spins. For this reason and others, it would be useful to compute the four-point function of the 3D Vasiliev scalar. Expanding it in conformal blocks would presumably reveal any non-unitarity.
A natural question raised by our results is how large the space of higher spin 2d CFTs actually is. For example, at large c, are there CFTs with W ∞ [λ] symmetry besides the W N minimal models in the 't Hooft limit?
Another natural question is whether demanding λ L ≤ 2π/β uniquely determines the higher spin algebra of a single infinite tower of currents, with one current at each spin s ≥ 2, to be W ∞ [λ]. This is a baby version, phrased in 2d CFT, of the question of whether string theory is unique.
Slightly broken higher spin chaos. We would like to check whether (6.5) is correct. If so, then computing the leading nonzero term in λ L in slightly broken higher spin CFTs would seem to require knowing connected correlators at O(1/Ñ 2 ). This is a tall order: even in the critical O(N ) model, this is not known. A concrete calculation would be to adapt the ladder diagram techniques of [13], where 1/Ñ is the small parameter. In principle, λ L should be extractable directly from the spectrum of anomalous dimensions and OPE coefficients. In this way, determining λ L in these theories would be connected to the slightly broken higher spin bootstrap of [125].
A prediction for shock wave scattering in AdS. The result (3.16) makes a prediction for the bulk scattering problem of W and V quanta in the background of the hyperbolic AdS black hole at β = 2π, or the planar BTZ black hole at arbitrary β. An integral representation for ⟨VWVW⟩ was derived in [9]. It was only explicitly evaluated for heavy operators. More precisely, there are two approximations used in the evaluation of the integral in [7,9]: one, that ∆_w ≫ ∆_v, and two, that ∆_v ≫ 1. The former permits an interpretation of V moving in a fixed shock wave background generated by W of sharply peaked momentum; the latter allows a geodesic approximation to ⟨VV⟩ evaluated in the shock wave background. For ∆_v, ∆_w ∼ O(1), neither of those assumptions holds. It would be worthwhile to try to evaluate the overlap integral for light fields and match the functional form of the CFT prediction, and to see whether it also extends to planar black holes in d > 2.

^33 We thank Matthias Gaberdiel for discussions on this point.
For d = 2, 4, the Regge blocks are given by (A.1) with appropriate d-dependent coefficients. These results may be easily checked using the closed-form expressions for the blocks, in conjunction with the hypergeometric monodromy (4.23), where g_h(z) is the SL(2, R) global block. Similar simplification occurs in all even d. The fact that G_∆,s has a hypergeometric representation in even d gives a natural explanation for the appearance of the C_0(∆ + s) factor; interestingly, this factor appears in all d.
There is one exception to the above formulas: in d = 2 when s = 0, (A.1) is actually not correct when ∆ = 2h < 1. Looking at the full blocks (A.6) and the monodromy of the hypergeometric function, one sees that the term in (A.1) is actually subleading. Instead, when ∆ < 1, the Regge limit is governed by the other term.

A peculiarity in 3d CFT. Note that for 2 < d < 4, G_∆,0(η) is negative for some portion of the region 0 ≤ η < 1 when (d - 2)/2 < ∆ < 1, as allowed by unitarity. In particular, this includes d = 3.^34
B Chaos in N = 4 super-Yang-Mills
In this section, we derive an OTO four-point function in Rindler space in planar N = 4 SYM at large λ, by analytic continuation of the vacuum four-point function of the 20'. Regge limits of N = 4 correlators have been studied in some detail before (e.g. [82, 83, 128-130]).
Here we wish only to present a result in the language of chaos. Our calculation explicitly exhibits the position-dependence ascertained in (3.16) on general grounds.
We take V and W to be the 1/2-BPS scalar operator in the 20' of SU(4), with ∆_20 = 2. Its vacuum four-point function was computed using supergravity in [131]. We introduce only the most basic aspects of formalism needed to present the result. We follow the conventions of [132], see also [133] for a streamlined review. The 20' transforms in the [0,2,0] representation of SU(4) ≅ SO(6), and is typically written as a symmetric traceless rank-two tensor of SO(6). Introducing a null vector t_i, where i = 1 . . . 6 and t^2 = 0, we define the index-free operator. The four-point function is then written in terms of a function G(z, z̄; α, ᾱ), where z, z̄ are the usual coordinates in (2.12), α, ᾱ are defined in terms of the SU(4) invariants, and the t_n subscript refers to the n'th operator. The function G(z, z̄; α, ᾱ) is constrained by superconformal symmetry to split into two pieces, where k is a constant: the second piece is fixed solely by the exchange of SUSY-protected operators, hence is independent of the coupling, while the first depends on a free function H(z, z̄), which receives contributions from both protected and unprotected operator exchanges. The conformal block decomposition of H(z, z̄) includes exchanges of SU(4) singlets only, which is why H(z, z̄) does not depend on α, ᾱ.^35 The correlator ⟨O_20 O_20 O_20 O_20⟩ at large N and large λ was computed from supergravity in [131]. The 20' is the lowest KK mode of a linear combination of g_µν and C_µνρσ with legs along S^5 [134,135]. Focusing on H(z, z̄), the result is [136]

H(z, z̄) = -(4/N^2) (zz̄)^2 D̄_2422(z, z̄),   (B.5)

where D̄_2422(z, z̄) is the reduced D̄-function. It may be given a closed-form expression by using D̄-function identities (e.g. [86]).

^34 An exception is the line of parity-breaking Chern-Simons-matter fixed points connecting the free O(N) and critical Gross-Neveu models at large N. 3d bosonization [127] implies that at O(1/N), the scalar bilinear must have ∆ < 1 for at least some range of λ, so as to smoothly match onto the Gross-Neveu result. Then a result of [127] implies a nonzero OPE coefficient.
Expanding at large x, each term can be explained by an accounting of the SU(4) singlet, spin-2 operators appearing in the O_20 × O_20 OPE at O(1/N^2). The list of such operators is relatively short, and their twists are known [137,138]. This matches the general structure of f(η) in section 3.
C More on chaos in W N CFTs
We repeat the W 3 calculation of section 4 for W 4 . We also do a computation at arbitrary N . In both cases, we find chaos bound-violating behavior consistent with (4.41).
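For orientation, the chaos-bound-violating pattern referred to here can be summarized in formulas. This is a heuristic restatement consistent with (4.41) and with the growths quoted below; the scrambling time is the usual time at which the 1/c-suppressed term becomes of order one.

```latex
\[
\frac{1}{c\,z^{\,s-1}} \;\sim\; \frac{1}{c}\,e^{\frac{2\pi}{\beta}(s-1)\,t}
\quad\Longrightarrow\quad
\lambda_L^{(s)} = \frac{2\pi}{\beta}\,(s-1),
\qquad
t_*^{(s)} \simeq \frac{\beta}{2\pi(s-1)}\,\log c ,
\]
so a spin-3 charge gives $\lambda_L^{(3)} = 4\pi/\beta$ and a spin-4 charge gives
$\lambda_L^{(4)} = 6\pi/\beta$, both in excess of the bound $\lambda_L \le 2\pi/\beta$.
```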
C.1 N = 4
The semiclassical W_4 vacuum block was derived in [52] for general charges, following the derivation in [64] for the uncharged case. F_vac,4(z) depends on the spin-3 and spin-4 charges of the operators, and on α, which was defined in (4.15). For simplicity, we take V to be uncharged, so that q_v^(3) = q_v^(4) = 0. Both results below have been obtained by resummation of perturbation theory through O(q^16). The features that plagued the W_3 result are also present here. Every power of q_w^(3) comes with a 1/(cz^2), and the correlator is non-analytic in parts of the half-strip. In (C.6), every power of q_w^(4) comes with a 1/(cz^3), which implies a spin-4 Lyapunov exponent and an associated scrambling time shorter than that allowed by the chaos bound.

We can also extend these arguments to arbitrary N. Consider an uncharged probe (q_v^(s>2) = 0) and allow W to have arbitrary higher spin charges q_w^(s). Expanding F_vac,N(z) perturbatively in the q_w^(s) using the results of [52], one finds^36 for general N

F_vac,N(z) ≈ F_vac(z) [ 1 + ((N^2 - 4)/5) (6 h_v / (α^6 (z^α - 1)^4)) ( 6α^2 (z^{2α} + 1) z^α log^2 z + α (z^{4α} - 14 z^{3α} + 14 z^α - 1) log z - (z^α - 1)^2 (5 z^{2α} - 22 z^α + 5) ) (q_w^(3))^2 + O((q_w^(4))^2) ]_{z→1-z}   (C.9)

Comparing to (4.29), the only N-dependence is in a coefficient. We conclude that when W carries spin-3 charge in a W_N CFT, the OTO correlator evolves in time with λ_L = 4π/β.

Here we compute F^(1)_vac,∞(z|λ) for V and W in the representations indicated. This is a supplement to section 5.1.
Presumably this can be proven using generalized hypergeometric identities. For reference, the results at λ = 0, 1/2, 1 take a simple closed form. To take the Regge limit, we need to know the monodromy of 3F2 around z = 1. Moving around the branch point yields a linear combination of the three linearly independent solutions of the hypergeometric equation near z = 0; for parameters {a_1, a_2, a_3; b_1, b_2}, these solutions take the standard form.
To determine whether/how this becomes negative, we need to know what the denominator looks like.
We now focus on C^{s+1}_{3s}. Let us write its square as a product of known λ-dependent factors and an undetermined function f_s(λ). We want to determine f_s(λ) using the following facts about the W^cl_∞[λ] algebra: i) In the normalization of [67], all (C^{s_3}_{s_1 s_2})^2 are rational functions of λ^2.
ii) For λ ∈ R, the only degeneration points of W cl ∞ [λ] are at λ ∈ Z.
iii) The denominator of (C s+1 3s ) 2 includes a λ 2 − 4 factor. This reflects the fact that at λ = ±2, all of the generators J 3 , J s , J s+1 are in the ideal. This is clear from the analysis of [67].
The above properties imply that the zero at λ = ±s, and the pole at λ = ±2, are the only real zeros and poles, respectively, of (C^{s+1}_{3s})^2. Putting these facts together, we can write

(C^{s+1}_{3s})^2 = f_s(λ^2) (λ^2 - s^2)/(λ^2 - 4),   (E.4)

where f_s(λ^2) is a rational function which has no zeroes or poles for λ ∈ R. This further implies its sign-definiteness for λ ∈ R.
To complete the proof, we need to show that f_s(λ^2) > 0 for all λ ∈ R. Since f_s(λ^2) is sign-definite for λ ∈ R, it suffices to evaluate its sign for a single real value of λ. At λ = 1, we have the isomorphism W^cl_∞[1] ≅ W^PRS_∞, and the latter has real structure constants [67]. Since (λ^2 - s^2)/(λ^2 - 4) is positive at λ = 1, this implies that f_s(λ^2) > 0 (E.5) for all s and λ ∈ R. Actually, f_3(λ^2) and f_4(λ^2) are constant, which strongly suggests that f_s(λ^2) is constant for all s. In any case, having established positivity of f_s(λ^2), it directly follows from (E.4) that (C^{s+1}_{3s})^2 < 0 when 2 < λ < s. (E.6) A final comment: W^cl_∞[λ] inherits a triality symmetry from the quantum W_∞[λ] algebra [67], under which algebras with three different values of λ are isomorphic. One might wonder whether this plays a hidden role in invalidating our conclusions: namely, whether for λ > 2, either of the triality images of λ is less than 2. But they aren't. If we denote T(λ) as the triality orbit of the quantum W_∞[λ] algebra, then it follows from the explicit triality relations of [67] that this does not occur; this is related to property ii).
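The sign pattern driving (E.6) is elementary and can be spot-checked symbolically. This is a quick consistency check of the factor appearing in (E.4), not part of the proof.

```python
import sympy as sp

lam = sp.symbols('lambda_', positive=True)
for s in [3, 4, 7, 10]:
    ratio = (lam**2 - s**2) / (lam**2 - 4)      # factor multiplying f_s(lambda^2) in (E.4)
    mid = sp.Rational(s + 2, 2)                 # a sample point with 2 < lambda < s
    print(f"s={s}: ratio at lambda={mid} is {ratio.subs(lam, mid)} (negative), "
          f"at lambda=1 is {ratio.subs(lam, 1)} (positive)")

# Since f_s(lambda^2) > 0, (C^{s+1}_{3s})^2 inherits the negative sign on 2 < lambda < s,
# which is the content of (E.6).
```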
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
In Search of the Perfect Triple BB Bond: Mechanical Tuning of the Host Molecular Trap for the Triple Bond B≡B Fragment
The coordination of the B2 fragment by two σ-donor ligands L: could lead to a diboryne compound with a formal triple bond L:→B≡B←:L. σ-Type coordination L:→B leads to an excess of electrons around the B2 central fragment, whereas π-back-donation from the B≡B moiety to ligand L has a compensation effect. Coordination of the σ-donor and π-acceptor ligand is accompanied by the lowering of the BB bond order. Here, we propose a new approach to obtain the perfect triple BB bond through the incorporation of the BB unit into a rigid molecular capsule. The idea is the replacement of π-back-donation, as the principal stabilization factor in the linear NBBN structure, with the mechanical stabilization of the BB fragment in the inert molecular capsule, thus preserving the perfect B≡B triple bond. Quantum-chemical calculations show that the rigid molecular capsule provided a linear NBBN structure and an unusually short BB bond of 1.36 Å. Quantum-chemical calculations of the proposed diboryne adducts show a perfect triple bond B≡B without π-back-donation from the B2 unit to the host molecule. Two mechanisms were tested for the molecular design of a diboryne adduct with a perfect B≡B triple bond: the elimination of π-back-donation and the construction of a suitable molecular trap for the encapsulation of the B2 unit. The second factor that could lead to the strengthening or stretching of a selected chemical bond is molecular strain produced by the rigid molecular host capsule, as was shown for B≡B and for C≡C triple bonds. Different derivatives of icosane host molecules exhibited variation in BB bond length and the corresponding frequency of the BB stretch. On the other hand, this group of molecules shows a perfect triple BB bond character and they all possess a similar level of HOMO.
Introduction
For the last decade, the boron-boron (B≡B) triple bond has been at the center of a dispute [1,2]. Braunschweig proposed [2] the chemical evidence of the BB triple bond's character: diboryne was found to react with chalcogens, affording the [2.2.1]-bicyclic systems via a six-electron process involving the insertion of five chalcogen atoms into the BB triple bond, which was completely cleaved during the reaction. This widely discussed issue is an example of a successful case study acquiring common interest [3]. The difficulty of this discussion is the absence of an obvious reference point-the parent molecule with a perfect B≡B triple bond. For example, Köppe and Schnöckel suggested that the BB bond in the diboryne adduct is intermediate between a single and a double bond [4].
Compounds with B-B multiple bonds are still rare [2]. Formally, the coordination of the B2 fragment by two σ-donors L: should produce the structure with a triple bond L:→B≡B←:L (I) between B-atoms. The trivial Lewis structure hides the fact that the ground state of the parent B2 species is a triplet state 3Σg− with a pair of degenerate π-MOs populated by two electrons with parallel spin: (σ+)^2(σ−)^2(πx)^1(πy)^1 [1,5]. The electronic configuration with full π-MOs, (σ+)^2(σ−)^0(πx)^2(πy)^2, is a high-lying excited singlet state 1Σg+ [6,7]. The singlet 1Σg+ state of B2 has a triple bond electronic configuration involving one σ-bond, two π-bonds, and two empty sp-hybrid orbitals at both B-atoms (see the MO interpretation in Ref. [8]). Consequently, the structure with the B≡B triple bond requires very strong L:→B coordination to compensate for the preference for the triplet electronic configuration of the parent B2 moiety. The dicarbonyl adduct OCBBCO was the first experimentally detected diboryne. It was prepared in an argon matrix at 8 K, characterized by its IR spectrum, and backed by quantum-chemical calculations [9,10]. The Lewis structure with a triple BB bond, OC:→B≡B←:CO, for a singlet ground state was proposed by Zhou et al. [9,10]. The second representative of the diboryne family, the diboronyl diboryne anion OBBBBO^− (bond order 2.5), was detected and investigated in the gas phase using photoelectron spectroscopy [11]. Bond order analyses of the dianion OBBBBO^2− concluded the presence of a true triple B≡B bond in this negatively charged complex [11]. Later, a compound with a CBBC linear fragment was synthesized by Braunschweig et al. [12]. This compound, which is stable at room temperature, combines two N-heterocyclic carbene (NHC) ligands coordinated to the B2 fragment. The conclusion about the B≡B triple bond's character was based on the X-ray data, according to which the central CBBC fragment is linear and the BB distance of 1.449 Å is considerably shorter, by ~0.1 Å, than the corresponding value for the double B=B bond. DFT calculations [1,12-14] supported the findings regarding the structural characteristics, such as the linearity of the CBBC fragment and R(BB) < 1.46 Å. Calculations show the occurrence of two π-HOMOs localized on the BB fragment, which, on the MO level, indicates the triple B≡B bond's character [1,13]. Different symmetric adducts L→B≡B←L (I, L = CO, CS, N2) were computationally studied by Mavridis et al. [15] and later by Frenking et al. [8,13,14]. The bonding scheme for the complexes with an unsaturated ligand was interpreted in terms of donor-acceptor interactions between the σ-donor, π-acceptor ligands L and the B2 moiety acting as σ-acceptor and π-donor [13-15]. Frenking et al. [14] specified that the π-back-donation L→B≡B←L for L = CO, N2 is very strong, but this is still a triple bond. Braunschweig et al. [12] also supported the interpretation of diboryne (I) as a structure with a B≡B triple bond. Later, the NHC-diboryne adduct was investigated by Raman spectroscopy [16] and the Raman-active BB stretching mode was observed at 1653 cm−1, which is in agreement with the B3LYP prediction of 1681 cm−1. We attempted to find the answer to the problem concerning the perfect B≡B bond through the elimination of the π-back-donation effect, which is the dominant factor in the weakening of the π-components of the triple bond.
On the other hand, the design of such a molecule must compensate for the loss of π-back-donation stabilization and the need to remain linear.
Results and Discussion
Experimental observation of a stable compound with a formal B≡B triple bond initiated the discussion about the physical indication of this bond. These included the following: an assessment of the structural characteristics, such as the linearity of the central LBBL fragment and the BB bond length [1-4,13,14]; experimental and computational assessments of the strength of the BB triple bond through the stretching frequencies of the central BB unit [16]; a qualitative depiction of the charge transfer channels (σ, π, and polarizations) [17]; and advanced solid-state NMR and computational methodology used to directly and experimentally probe the orbitals involved in multiple boron-boron bonds via the analysis of 11B-11B spin-spin (J) coupling constants [18]. Köppe and Schnöckel contend that the force constant of the BB bond is lower than expected for a B≡B triple bond and the bond order is only slightly larger than 1.5; consequently, NHC diboryne "does not contain a BB triple bond" [4]. Other authors rejected this conclusion [1,3,18,19].
The results of the previous studies of diboryne adducts L→B≡B←L can be summarized as follows (Table 1). The numerous calculated data and two experimentally studied diborynes show: (1) the linearity of the LBBL fragment is caused by the maximal overlap of the empty σ-orbital of the B2 fragment with the σ-lone pair of the ligand and the effective back-donation to the π-system of the ligands. Trans-bending distortion leads to the loss of π-bonding [20,21]. VB calculations also show that the σ-frame favors trans distortion, while the π-system opposes it [22]; (2) diboryne's BB bond is 1.44-1.47 Å, which agrees with the standard value for a triple bond (1.46 Å) [23], and it is shorter than the B=B double bond by >0.1 Å [17]; (3) the experimentally observed and computationally predicted Raman-active BB stretch in diboryne is near 1700 cm−1 [16]. Strong π-back-donation is an obvious reason for the stabilization of a linear LBBL structure; on the other hand, this mechanism is responsible for the lowering of π-bonding between the boron atoms. Frenking summarized it as follows: "Thus, the bond order for the B-B can be expected between 2 and 3 while the triple bond character is retained in the diboryne whose bonding situation is properly sketched with the formula NHC→ B≡B ←NHC" [1]. Noble gas diborynes NG-BB-NG (NG = Ar and Kr) [15] are examples of σ-adducts without π-back-donation from B2 to the ligands L. The linear structure of the Ar and Kr diborynes has a short (<1.4 Å) BB bond (Table 1), but it undergoes spontaneous distortion into a zigzag configuration, which is a direct consequence of the loss of the π-back effect.

The model compound bis-1-azaadamantane-diboryne adduct C9H15N→BB←NC9H15 (II) is a non-rigid molecule with a shallow potential well corresponding to the two slightly distorted zigzag configurations (II). The configuration with a linear NBBN central fragment is a transition structure (II-TS, barrier height of 1 kcal/mol) that has a short BB bond length, R(BB) = 1.422 Å (Figure 1).
Molecular Design of Compounds with the Perfect Triple B≡B Bond
According to a recent suggestion [24], the perfect B≡B triple bond can be formed by σ-coordination of the N-atom with the B2 fragment, provided that the π-back-donation is prevented. We propose a new approach for designing compounds with the perfect triple B≡B bond: the incorporation of the B2 unit in a rigid saturated host structure with two nitrogen σ-donor centers. There is no π-acceptor in such an inert saturated host molecule and, consequently, the π-back-donation effect is absent. Coordination of the B2 unit by the N-atoms acts as the stabilization factor, whereas the rigid saturated frame ensures the linearity of the NBBN central fragment. Cryptand-type host molecules (HM) look suitable for this purpose (Scheme 1).
Scheme 1. Three bicyclic saturated host molecules III-V.
Host icosane molecules IV and V were modified by two terminal adamantane units (structures VI and VII) for the compression of the central icosane part (Scheme 2).
Scheme 2. Derivatives of IV and V icosanes framed by two adamantane units VI and VII.
DFT calculations (M06-2X, Table 2) show that diaza-compounds III-V (Scheme 1) and their adamantane derivatives VI and VII (Scheme 2) produce stable B2-host complexes III-B2÷VII-B2. Polycyclic diboryne adducts III-B2 ÷ VII-B2 have a linear NBBN fragment (calculations show very slight bending at the boron atoms NBB > 178.5°) and a short BB bond. The NBO (natural bond orbital) [33] analysis implemented in the GAUSSIAN package [28] shows a dominant (>99%) Lewis structure with a B≡B triple bond and a single NB bond. A comparison of diboryne complexes III-B2-VII-B2 with the reference diaza-adduct N2BBN2 shows the expected effects, namely, π-back-donation strengthens the N-B bond and simultaneously weakens the BB triple bond ( Table 2). Diaza complex N≡N:→B≡B←:N≡N has an obvious π-back-donation effect, but this feature is absent in the molecules III-B2 ÷ VII-B2. Calculations show an identical picture: one σ-MO and two π-MOs between two boron atoms for all adducts III-B2 ÷ VII-B2. However, the BB bond lengths ( Table 2) are shortened in the series III > IV > VI > V > VII despite the similar orbital structure. It must be noted that the experimentally observed BB bond lengths for (NHC)BB(NHC) [12] are longer and weaker (ν(BB) < 1700 cm −1 ) than the calculated BB distance and the corresponding BB frequencies for all studied adducts III-B2 ÷ VII-B2 ( Table 2). The variation in the N-B bond distance is stronger than the change in the B-B bond length in all cases. This is an obvious consequence of the different bond strength, namely, the BB bond is a very strong triple bond whereas NB is an ordinary single bond.
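As an illustration of the level of theory quoted here, the short script below writes a Gaussian-style input for an M06-2X/cc-pVDZ optimization, frequency and NBO job. The route line, file name, coordinates, charge and multiplicity are illustrative placeholders; the authors' actual input files are not reproduced.

```python
# Illustrative generator of a Gaussian-style input at the level of theory used in this work
# (M06-2X/cc-pVDZ with NBO population analysis). Geometry below is a placeholder, not one of
# the actual structures III-B2 .. VII-B2.

route = "#P M062X/cc-pVDZ Opt Freq Pop=NBO"
title = "diboryne-host adduct (placeholder geometry)"
charge, multiplicity = 0, 1                       # singlet adduct

placeholder_atoms = [
    ("B", 0.000, 0.000,  0.680),                  # hypothetical BB fragment coordinates
    ("B", 0.000, 0.000, -0.680),
    # ... host-cage atoms would follow here ...
]

with open("adduct_m062x.gjf", "w") as f:
    f.write(route + "\n\n" + title + "\n\n")
    f.write(f"{charge} {multiplicity}\n")
    for element, x, y, z in placeholder_atoms:
        f.write(f"{element:2s} {x:12.6f} {y:12.6f} {z:12.6f}\n")
    f.write("\n")                                 # blank line terminates the geometry block
```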
Strain Energy of the Host Molecules
Host molecules III, IV, and V have different inner cavity sizes for complexation with B2, but the complexation of B2 is effective for all diaza hosts III-VII. The structural/mechanical characteristics of the host molecules are as follows: the icosane derivatives IV and V have a small frame with a six-atom edge, whereas the hexacosane cryptand-222 (III) has a bigger capsule with an eight-atom edge. Comparison between the initial diaza-host compounds III-VII and the corresponding host-diboryne adducts shows a simple linkage between two structural parameters: the shorter the N...N distance of the host molecule, the shorter the BB bond of the diboryne adduct. In other words, the strain of the molecular capsule determines the level of compression or stretch of the BB fragment. For example, cryptand-222 (III) has the longest NN distance, and adduct III-B2 exhibits the longest BB bond relative to the other studied complexes. On the other hand, VII, the hexa-oxoicosane framed by two adamantane units, has the shortest NN distance, i.e., only 3.638 Å, and this is a perfect host molecule for the compression of the incorporated BB unit. VII-B2 has an extremely short (1.347 Å) and strong (ν(BB) = 2091 cm−1) BB bond. π-Back-donation is absent in both adducts III-B2 and VII-B2, and they differ only in terms of the mechanical behavior of the molecular frame. Host molecules are strongly distorted by incorporation of the BB fragment, which produces strain.
We estimated the strain energy of the host molecule by comparing its optimized structure with the structure of the host trap extracted from the optimized diboryne adduct (∆∆E(host strain)). The large host molecule cryptand-222 (III) shows significant strain in its diboryne adduct III-B 2 (∆∆E(host strain) = 40.3 kcal/mol). The cavities of icosanes IV-VI are more appropriate for the incorporation of the B 2 fragment, and the strain energy of the host capsule is reduced to ~32-33 kcal/mol. However, the host molecule VII, with the smallest cavity (R(N···N) = 3.638 Å for the free host molecule), reaches the highest strain level of 42.8 kcal/mol in its strongly compressed adduct VII-B 2 (Table 2).
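As a worked restatement of this bookkeeping (our paraphrase; the labels in parentheses are ours, not the authors'), the host strain energy is the single-point energy of the host frozen at the geometry it adopts in the adduct minus the energy of the fully relaxed free host, both evaluated at the same level of theory (here M06-2X/cc-pVDZ):

\[
\Delta\Delta E(\text{host strain}) \;=\; E_{\text{host}}\big(\text{geometry in Host--B}_2\big)\;-\;E_{\text{host}}\big(\text{optimized free host}\big)\;>\;0 .
\]

Because the electronic structure method is the same for both terms, the difference isolates the purely geometric distortion of the capsule.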
The addition of B 2 to the host molecule III leads to a shortening of the N···N distance from 6.048 Å to 4.543 Å. This deviation from the optimal cavity size produces strain, which could lead to an increased stretch of the BB and NB bonds. This is the reason why the BB bond in the diboryne of hexacosane cryptand-222 (III-B 2 ) is longer and weaker than in the other diboryne complexes IV-B 2 ÷ VII-B 2 , despite its perfect triple-bond electronic characteristics. The frequency of the BB stretch, 1710 cm−1 (III-B 2 ), is also the lowest in the group (Table 2). On the other hand, the distance between the two N atoms in the parent host molecule VII is shorter than in its diboryne adduct VII-B 2 . This means that the driving force of the host molecule VII is a compression of the BB fragment.
The BB bond length is shorter than 1.4 Å for all icosane diborynes, but it is 1.347 Å for VII-B 2 , which is 0.1 Å shorter than the BB bond in the experimentally detected carbenediboryne adduct [12,16].
Previous calculations revealed a symmetric Raman-active BB mode, which could serve as an indicator of a multiple BB bond [16]. The frequency of this mode for the previously studied linear neutral L-BB-L molecules varies from 1720 to 1800 cm−1 [24]. In the series of polycyclic borynes studied here, strengthening of the central BB fragment is accompanied by a strong shift of the corresponding BB stretch mode to higher frequency, reaching 2092 cm−1 for VII-B 2 (Table 2, frequencies ν(BB)). This is a pure BB stretch mode in all studied cases III-B 2 ÷ VII-B 2 . The Raman-active stretching mode of the N≡N triple bond in molecular nitrogen N 2 is observed at ~2300 cm−1 [34]. The stretching frequencies of the C≡C bond in alkynes normally lie in the range from ~2100 to 2300 cm−1 [35]. A Raman-active B-B vibration at 2092 cm−1 (VII-B 2 , Table 2) indicates that the B 2 fragment of diborynes III-B 2 ÷ VII-B 2 contains a perfect B≡B triple bond, on the same level as the classical N≡N and C≡C triple bonds. In addition, the experimentally observed [16] BB stretch at 1653 cm−1 indicates a weaker BB bond compared with the corresponding feature in the compounds presented in our study (Table 2). This is consistent with a previous interpretation in which the bond order of (NHC) 2 B 2 is intermediate between a double B=B and a triple B≡B bond [1,24].
DFT calculations show that the two-body dissociation of a diboryne adduct into 3 B 2 (the ground state of B 2 ) [5,6] and the corresponding diaza-polycycle is a strongly endothermic process, with ∆∆E(dissociation) ≥ 50.0 kcal/mol (Table 2):

1 Host-B 2 → 1 Host + 3 B 2 (Host = III, IV, V, VI, VII).

The stability of the diboryne adducts III-B 2 ÷ VII-B 2 is provided by effective NB σ-coordination, while π-back stabilization is absent. Nevertheless, the diboryne derivatives of the diaza-host molecules III-B 2 ÷ VII-B 2 are thermodynamically stable compounds according to the M06-2X/cc-pVDZ calculations (Table 2). The dissociation energies of the diboryne adducts III-B 2 ÷ VII-B 2 are of approximately the same order as for N 2 B 2 N 2 , which has strong π-back stabilization. Our attempts to detect the rupture of one NB bond along the antisymmetric NBBN stretch were unsuccessful despite the significant strain energy. These data could serve as an indication of kinetic stability.
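For clarity, the quantity tabulated as ∆∆E(dissociation) corresponds, as we read it (the explicit formula below is our paraphrase, not quoted from the authors), to the energy of the fragmentation reaction above:

\[
\Delta\Delta E(\text{dissociation}) \;=\; E\big({}^{1}\text{Host}\big)\;+\;E\big({}^{3}\text{B}_2\big)\;-\;E\big({}^{1}\text{Host--B}_2\big)\;\ge\;50~\text{kcal/mol}.
\]

A positive value means that the intact adduct lies below the separated fragments, i.e., the complex is thermodynamically bound.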
Time-dependent TD-M06-2X calculations in the Franck-Condon region provide a similar picture for all studied cases (III-B 2 ÷ V-B 2 ): the lowest singlet excited state S 1 lies 3.5 eV above the ground state S 0 , whereas the experimentally observed (NHC) 2 B 2 adduct has a gap of 2.4 eV [12]. The lowest excited state is a triplet state, ∆E(S 0 −T 1 ) ≥ 1.5 eV, with a zigzag NBBN structure. Conjugation of the π-donor B 2 group with a π-acceptor ligand lowers the degenerate π-HOMO level, which is strongly localized on the BB fragment. Calculations for compounds III-B 2 ÷ VII-B 2 (Table 3) show approximately equal HOMO levels, which lie 0.7 eV higher than in the case of (NHC)BB(NHC). The HOMO level in the experimentally observed (NHC)BB(NHC) compound must be lower relative to the case without π-back-donation. It must be noted that all these diboryne adducts III-B 2 ÷ VII-B 2 have different BB bond lengths despite their similar π-HOMO levels. The IP (ionization potential) estimates, i.e., the calculated differences between the energies of the neutral and ionized forms of the corresponding compounds in the ground-state configuration, fully agree with the relative HOMO levels. This is a direct indication of very similar BB bonding throughout the group III-B 2 ÷ VII-B 2 . Consequently, all the studied diboryne adducts of the bicyclic host molecules III-VII have a de facto perfect B≡B triple bond, and the differences in BB bond lengths between molecules in the group III-B 2 ÷ VII-B 2 do not have an electronic donor-acceptor origin. We propose that the induced strain has a mechanical origin, because the host molecules III-VII differ in size, configuration, and the distance between the N atoms in the cavity (see the survey of strained organic molecules [36]).
The mechanical hypothesis can be analyzed through a comparison of the diboryne adducts III-B 2 ÷ VII-B 2 with the analogous propellane molecules III-C 4 ÷ VII-C 4 , which contain a C-C≡C-C fragment instead of an NBBN unit (Figure 2). No donor-acceptor interaction or π-back-donation occurs in the pure carbon structures, which provides an opportunity to estimate the purely mechanical effect. The di-substituted alkynes VIII and IX (di-substituted acetylenes R-C≡C-R, with R = 2-adamantyl (VIII) and R = methyl (IX)) serve as reference C≡C triple-bond molecules. The acetylene derivative III-C 4 shows lengthening (0.015 Å) and weakening (a frequency decrease of ~100 cm−1) of the C≡C bond relative to VIII and IX (Table 4). Conversely, a contraction (0.02 Å) and strengthening (a frequency increase of ~150 cm−1) of the C≡C triple bond is unusual and is the result of a pure mechano-chemical compression effect. These results show the strong mechanical influence of the molecular frame on the C≡C triple bond.
Table 4. Bond distances (C-C and C≡C, in Å) for the central linear C-C≡C-C unit of propellanes III-C 4 ÷ VII-C 4 and two di-substituted acetylenes R-C≡C-R (R = 2-adamantyl (VIII) and R = methyl (IX)); frequency of the Raman-active C≡C stretch (in cm−1).
The addition of an adamantane unit to the molecular frame IV or V leads to compression of the central B≡B or C≡C fragment in all cases, i.e., IV-B 2 vs. VI-B 2 (Table 2), IV-C 4 vs. VI-C 4 (Table 4), V-B 2 vs. VII-B 2 (Table 2), and V-C 4 vs. VII-C 4 (Table 4). The effect is especially strong upon addition of the adamantane part to the hexa-oxo frame V: ∆R(BB) = 0.037 Å and ∆ν(BB) = 164 cm−1. This is an example of mechano-chemical compression leading to the amplification of a chemical bond.
Conclusions
Producing a perfect B≡B triple bond was shown to depend on two factors: (a) the elimination of π-back-donation, so that the perfect π-bonding is preserved; and (b) the construction of a compressing molecular capsule, which allows the linear NBBN configuration to be maintained. The bicyclic host compounds IV-VII satisfy these requirements: (a) the molecular capsules have cavities of suitable size and configuration; and (b) there are no π-acceptors in the host molecules, which excludes π-back-donation. The BB bonds of the diboryne adducts in the selected host molecules are significantly shorter (R(BB) < 1.4 Å) and stronger (ν(BB) > 1850 cm−1) than those in the previously experimentally detected diboryne derivatives.
The BB bond lengths and the frequencies of the Raman-active B-B mode indicate that the B 2 fragment of diborynes III-B 2 -VII-B 2 has a perfect B≡B triple bond character, as is the case in well-known classical N≡N and C≡C triple bonds.
The studied group of host molecules exhibits the mechanical strain effect on the triple BB bond lengths and corresponding BB frequencies without changing the BB bond order or the level of HOMO. An analogous molecular-mechanical effect was also detected for the corresponding compounds with a C≡C triple bond.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2009-12-17T00:00:00.000
|
16557818
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CC0",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1289/ehp.0901032",
"pdf_hash": "6dbcbcc8a16fa815879e35f12052e02e2684afb2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46038",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "6dbcbcc8a16fa815879e35f12052e02e2684afb2",
"year": 2009
}
|
pes2o/s2orc
|
Tuberculosis and Indoor Biomass and Kerosene Use in Nepal: A Case–Control Study
Background In Nepal, tuberculosis (TB) is a major problem. Worldwide, six previous epidemiologic studies have investigated whether indoor cooking with biomass fuel such as wood or agricultural wastes is associated with TB with inconsistent results. Objectives Using detailed information on potential confounders, we investigated the associations between TB and the use of biomass and kerosene fuels. Methods A hospital-based case–control study was conducted in Pokhara, Nepal. Cases (n = 125) were women, 20–65 years old, with a confirmed diagnosis of TB. Age-matched controls (n = 250) were female patients without TB. Detailed exposure histories were collected with a standardized questionnaire. Results Compared with using a clean-burning fuel stove (liquefied petroleum gas, biogas), the adjusted odds ratio (OR) for using a biomass-fuel stove was 1.21 [95% confidence interval (CI), 0.48–3.05], whereas use of a kerosene-fuel stove had an OR of 3.36 (95% CI, 1.01–11.22). The OR for use of biomass fuel for heating was 3.45 (95% CI, 1.44–8.27) and for use of kerosene lamps for lighting was 9.43 (95% CI, 1.45–61.32). Conclusions This study provides evidence that the use of indoor biomass fuel, particularly as a source of heating, is associated with TB in women. It also provides the first evidence that using kerosene stoves and wick lamps is associated with TB. These associations require confirmation in other studies. If using kerosene lamps is a risk factor for TB, it would provide strong justification for promoting clean lighting sources, such as solar lamps.
Tuberculosis (TB) is a major infectious disease that causes illness and death worldwide (Rieder 1999). In 2006, there were about 9.2 million new TB cases and 1.7 million TB-related deaths [World Health Organization (WHO) 2008]. Most new cases and deaths occurred in Asia and Africa. In Nepal, a South Asian country, TB is a major public health problem (Paugam and Paugam 1996), with an overall annual incidence of all forms of TB estimated at 176 per 100,000 persons (Harper et al. 1996).
A range of social, environmental, and behavioral factors influence exposure and susceptibility to Mycobacterium tuberculosis infection. Identifying TB risk factors and minimizing exposure to them could reduce the TB burden in Nepal and other developing countries. Active tobacco smoking, for example, has been shown to be a risk factor for TB, presumably by damaging immune and other protective mechanisms, allowing TB infection to prosper (Bates et al. 2007;Boelaert et al. 2003;Lin et al. 2007). The composition of tobacco smoke has many similarities to that of indoor cooking smoke from biomass fuel (Kulshreshtha et al. 2008;Shalini et al. 1994;Smith 1987), exposure to which is common in the developing world, including Nepal. Therefore, an association of TB with indoor cooking smoke is plausible. Six previous epidemiologic studies have investigated whether an association exists between TB and exposure to cooking-fuel smoke (Crampin et al. 2004;Gupta et al. 1997; Kolappan and Subramani 2009;Mishra et al. 1999;Padilla et al. 2001;Shetty et al. 2006). Although four of these studies found some evidence of an association, all the studies had limitations. The first study to find an association between exposure to cooking-fuel smoke and TB presented limited data on potential confounding factors, and the risk model was adjusted only for age, which left open the possibility of confounding by socioeconomic factors or smoking (Gupta et al. 1997). Mishra et al. (1999) also reported evidence of an association; however, they used data from the 1992-1993 Indian National Family Survey, which was based on selfreported TB status. This leaves the possibility of outcome misclassification. A third study found an association between cooking smoke exposure and TB but included no validation of key components of the questionnaire (Padilla et al. 2001). In a study conducted in Malawi, Crampin et al. (2004) found no association between cooking smoke exposure and TB, but the study participants varied little in the type of fuel they used, and the risk model was adjusted only for age, sex, area of residence, and HIV status, leaving open the possibility of confounding by other socioeconomic factors or smoking. The fifth study, conducted in South India by Shetty et al. (2006), also found no association of cooking-fuel smoke with TB, but they did find an association between TB and not having a separate kitchen. The sixth study was conducted by Kolappan and Subramani (2009) in Chennai, India; they found a marginal association between biomass fuel and pulmonary TB in their study population [adjusted OR = 1.7; 95% confidence interval (CI), 1.0-2.9]. The study participants in this study were primarily men (87%) but because women do most of the cooking, they are more likely to be exposed to smoke from cooking fuel.
We conducted a TB case-control study in the Pokhara municipality of Nepal where cooking with biomass fuels in unvented indoor stoves is a common practice. Our main objectives were to confirm results of earlier studies using clinically confirmed TB cases and to investigate possible confounding of the relationship using a validated questionnaire and exposure assessment in the kitchens of a subset of participants' houses.
Methods
Approval for the study was obtained from the institutional review boards at the University of California-Berkeley and at the Nepal Health Research Council.
The study was conducted at the Regional Tuberculosis Center (RTC) and the Manipal Teaching Hospital (MTH), Manipal College of Medical Sciences, in Pokhara. The RTC and MTH are the two major health centers [directly observed treatment short-course (DOTS) clinics] that specialize in diagnosing TB and caring for people who live in Kaski (Pokhara) and seven adjoining hill districts: Syangja, Parbat, Tanahu, Lamjung, Myagdi, Baglung, and Gorkha, which are in the midwestern development region of Nepal. All subjects were recruited and interviewed between July 2005 and April 2007. The climate of the region is temperate but can be cool at times. For example, in Pokhara city (latitude 28.2° N), which is 827 m above sea level (Central Bureau of Statistics 2009), the mean temperature and mean daily minimum temperatures in January 2006 (the coldest month of the year) were 14.3°C and 7.2°C, respectively (Department of Hydrology andMeteorology 2006/2007). Other, more elevated parts of the region can be colder.
Recruitment procedure for cases and controls. Cases were all female patients, 20-65 years old, who visited TB clinics in RTC (90.4%) and MTH (9.6%) and who had been newly diagnosed with active pulmonary TB by chest X-ray and positive active sputum smears (two sputum specimens positive for acid-fast bacilli by microscopy), which are routinely conducted at the hospital using methods recommended by the WHO (1997). Women who were pregnant, who were on chemotherapy for cancer, who had HIV/ AIDS or diabetes, or who had a history of TB were excluded from the study.
Controls were recruited from outpatient and inpatient departments (dental, 1.6%; ear, nose, and throat, 1.6%; ophthalmology, 25.6%; general medicine, 56%; obstetrics and gynecology, 7.2%; orthopedics, 2.4%; skin, 1.6%; surgery, 3.2%; and psychiatry, 0.8%) at the MTH, in the same months when cases were identified. For each case, the control subjects were the first eligible female patients without pulmonary TB, matched to cases on age (5-year frequency bands), who presented at MTH between 0900 and 1000 hours after case enrollment. Controls were excluded from the study for the same reasons as for the cases. Control subjects were interviewed only after medical screening confirmed that they did not have TB. Confirmation procedures included a chest X-ray and an on-the-spot sputum exami nation. The ratio of cases to controls was 1:2.
After obtaining an informed oral consent to participate, all cases and controls were interviewed face-to-face by trained interviewers shortly after diagnosis while they were still at the hospital. The three interviewers were unavoidably aware of the case or control status of the interviewees but were not aware of the main exposure of interest or hypothesis of the study. All interviewers interviewed both cases and controls.
The questionnaire collected data on education level, area of residence (urban, periurban, and rural), history of use of cooking fuels and stoves that included present and previous (including in parents' houses, before marriage) cooking fuels and stoves, present kitchen type and location, kitchen ventilation, house type, participant's smoking history and smoking status of family members, alcohol consumption, vitamin supplement consumption, use of mosquito coils and incense, household crowding, vehicle ownership, and annual income level.
Statistical analysis. Liquefied petroleum gas (LPG) and biogas were designated "gaseousfuel stoves" (GFS), which was used as the reference category for most analyses compared with kerosene-fuel stoves (KFS) and biomass-fuel stoves (BFS). Very few participants (two cases and four controls) reported burning biomass in stoves with flues or chimneys venting to the outside, and no one reported using an electric cooker. For this reason, no separate category was created for vented BFS, and these subjects were included in the BFS category.
We examined the extent of agreement of responses on the exposure information (current stove/fuel type and ventilation) obtained during face-to-face interviews at the hospital with data obtained from actual inspection of these features in the houses of the first 28 study participants (13 cases and 15 controls). The effect of misclassification was calculated in terms of sensitivity and specificity.
We combined information on kitchen location and windows in the kitchen to create a composite dichotomous variable for ventilation. "Fully and partially ventilated kitchens" included open-air kitchens, separate kitchens outside the house, and partitioned kitchens with windows inside the house. This was used as the reference category for ventilation. Unventilated kitchens included partitioned and nonpartitioned kitchens without windows inside the house. We were unable to clearly interpret questionnaire data on closing doors in a way that could be used to characterize ventilation.
To calculate the number of pack-years of smoking, we multiplied the average number of tobacco products (cigarettes or bidis) smoked per day by the duration of smoking in years and divided by 20, assuming that a pack contains twenty cigarettes or bidis. One participant who reported that she smoked a hukka (water pipe) was excluded from this analysis.
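As an illustration (a minimal sketch; the function and variable names are ours, not from the study), this calculation can be expressed directly in code:

def pack_years(products_per_day: float, years_smoked: float, pack_size: int = 20) -> float:
    """Pack-years of smoking: (average cigarettes or bidis per day x years smoked) / pack size.

    Assumes, as in the text, that one pack corresponds to 20 cigarettes/bidis.
    """
    return products_per_day * years_smoked / pack_size

# Example: 10 bidis per day for 16 years corresponds to 8 pack-years,
# the median reported for cases and controls in this study.
print(pack_years(10, 16))  # 8.0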
We calculated crude odds ratios (ORs) between exposure and outcome. We decided a priori to include all statistically significant (p ≤ 0.05) variables in the model, as well as any other recognized risk factors for TB. Then we applied a stepwise backward elimination model, with a variable selection criterion of p = 0.2, to all the variables to identify any others that should be included in the final model. Using the selected covariates, we constructed a multivariate unconditional logistic regression model for risk of TB. We calculated adjusted female population-attributable fractions and associated CIs using the aflogit command in Stata (version 10; StataCorp LLC, College Station, TX, USA) statistical software (Eide 2008). This procedure assumes that the proportion of controls exposed is a good estimate of the proportion exposed in the target population.
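For example, a crude (unadjusted) OR and its 95% CI can be obtained from a 2×2 exposure table with the standard Woolf (log) method; the sketch below is a generic illustration of that textbook calculation, not the authors' Stata code, and the cell counts shown are placeholders rather than the study's data.

import math

def crude_odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls, z=1.96):
    """Crude OR with a Woolf-type 95% confidence interval from a 2x2 table."""
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                          1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Placeholder counts for illustration only:
print(crude_odds_ratio(exposed_cases=60, unexposed_cases=65,
                       exposed_controls=80, unexposed_controls=170))

The adjusted ORs reported in the study come from multivariate unconditional logistic regression (and the population-attributable fractions from Stata's aflogit), which this simple 2×2 calculation does not reproduce.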
Results
Four potential interviewees (all cases) did not meet the inclusion criteria: two were diabetic and two were HIV positive. During recruitment, one potential control was found to have pulmonary TB and was transferred to the case group. Except for one control, all potential interviewees agreed to participate in this study. In total, we recruited and interviewed 125 cases and 250 controls. Cases were more likely to be referred by a health care professional (30.4%) than were controls (7.2%). This might reasonably be expected because TB causes serious illness, but many of the controls would have had much less severe conditions. Table 1 lists descriptive data for the cases and controls, with unadjusted ORs and CIs. With the exception of the income variable, few data were missing. Confirming the success of the matching process, distributions of cases and controls were similar in terms of age. Most cases and controls (72.0% of cases, 94.4% of controls) were from the Kaski district. Cases were more likely than controls to be Buddhist, to live in urban and periurban areas, to reside in poorer quality houses (kuccha), to be illiterate, to have nonpartitioned and unventilated kitchens indoors, and to use kerosene wick lamps as their main source of light. Cases were also more likely than controls to regularly consume alcohol, to be tobacco smokers, to have more smokers in the family, and to have not always lived in their present house. We think that, to some extent, the latter variable probably captures the likelihood of previously having used other cooking fuels. Except for three cases, none of the participants who had smoked reported that they had ever quit smoking for 6 months or more. Therefore, we classified smokers as ever-smokers and never-smokers. The median smoking experience for both cases and controls was 8 pack-years (SD = 13.37 pack-years). More cases than controls had had household members with TB. Moreover, cases were more likely to be using BFS or KFS than were controls (p = 0.004). The distribution of cooking fuel used by the study participants was biomass from wood or crop residues (44.3%), LPG (42.7%), kerosene (11.2%), and biogas (1.9%).
We created a heating fuel variable that treated participants who reported either using electricity (1 case, 3 controls) or using no heating fuel (38 cases, 137 controls) as the reference category, and the remaining subjects, who mainly used wood (84 cases, 107 controls), as the biomass fuel category. The biomass group included a few women who used coal (one control) and kerosene (one case, one control) for heating.
We verified stove-fuel types and ventilation characteristics in the houses of 28 participants. All 18 participants who had reported their main cookstove as being a biomass stove were found to be correct, as were the five reporting use of a LPG stove. One of the four participants who had reported using a kerosene stove, however, was found to be using an LPG stove. On that basis, the accuracy (true reports ÷ total reports) of stove reporting was 96%. In the inspection of ventilation characteristics, one participant who had reported not having a window in her kitchen was found to have a temporary outside kitchen with a windowsized opening. Two participants who reported having a window in the kitchen actually did not have a window. Based on these data, the accuracy for reporting ventilation was 89%. As shown in Table 1, the unadjusted exposure ORs for cooking in BFS and KFS were 1.98 (95% CI, 1.24-3.17) and 2.54 (95% CI, 1.26-5.12), respectively. Use of kerosene lamps had an unadjusted OR of 10.35 (95% CI, 3.42-31.3), and use of biomass fuel for heating had an OR of 2.81 (95% CI, 1.78-4.42). Compared with cooking in a fully ventilated or partially ventilated kitchen, cooking in an unventilated kitchen was associated with a doubling of the risk of TB (OR = 2.02; 95% CI, 1.31-3.13).
The univariate analysis showed statistically significant (p ≤ 0.05) associations of TB with use of mainly biomass, coal, and kerosene as a source of heating fuel, urban/rural locality of residence, residence outside the Kaski district, religion, literacy, construction type of present house, not always having lived in the present house, ventilation, use of a kerosene lamp, tobacco smoking, one or more smokers in the family, alcohol consumption, vitamin consumption, and having had a family member with TB. Although not selected by the stepwise algorithm, in the multivariate model we also included annual family income in Nepali rupees as an additional indicator of socioeconomic status, and age, because it was a matching variable. Table 2 shows the results of the main logistic regression model. Compared with use of GFS, use of a biomass-fueled stove for cooking showed a slight positive relationship, but the CI was so wide that this provides little evidence of an association with TB. Kerosene cooking-fuel use, however, was associated with TB. Use of biomass as a heating fuel (OR = 3.45; 95% CI, 1.44-8.27) and use of kerosene lamps as the main source of lighting in the house (OR = 9.43; 95% CI, 1.45-61.3) were also particularly strongly associated with TB in the model.
We investigated possible effect modification of the biomass fuel variables by other exposures. However, investigation was limited because of small numbers of participants in many of the exposure categories, leading to very unstable estimates. Covariates with sufficient numbers in separate categories permitting some useful examination of effect modification were ventilation, literacy, and house construction. We found evidence of effect modification of the effects of heating fuel by ventilation status: participants who lived in houses with unventilated kitchens were at much higher risk (adjusted OR = 26.0; 95% CI, 4.24-159) than were those who lived in houses with ventilated kitchens (adjusted OR = 7.07; 95% CI, 1.48-33.9). Corresponding estimates for biomass cooking fuel were much more equivocal, with the adjusted ORs for ventilated and unventilated kitchens being 0.80 (95% CI, 0.19-3.37) and 0.47 (95% CI, 0.08-2.94), respectively. For illiterate and literate participants, adjusted ORs for heating fuel use were 5.12 (95% CI, 0.96-27.4) and 2.93 (95% CI, 0.87-9.91), respectively. We found no evidence of effect modification of literacy status on biomass cooking-fuel effects. Finally, participants who lived in kuccha construction houses (bamboo and mud, with thatched roofs) appeared to be at higher risk from both biomass cooking and heating fuels than were participants who lived in pucca or semipucca construction houses (brick and cement or brick and mud). For heating fuel, the adjusted ORs were 11.9 (1.38-102) and 2.73 (0.88-8.41) for kuccha and pucca/semi-pucca houses, respectively. The corresponding values for biomass cooking-fuel use were 4.07 (95% CI, 0.43-38.8) and 0.73 (0.22-2.40), respectively. With the possible exception of the modification by ventilation of the effects of biomass cooking fuel, these effects might generally be considered to be in the predictable direction-higher ORs associated with less ventilation and more-deprived socio economic circumstances.
Exposure response. We investigated whether associations with TB varied according to duration of cooking with BFS or KFS (Table 3). We categorized the total durations of cooking on BFS and KFS by cases and controls into bands. The adjusted exposure ORs were 1.17 (95% CI, 0.32-4.32), 0.64 (95% CI, 0.18-2.20), and 0.47 (95% CI, 0.11-2.02) for use of a BFS for less than 5 years, 5-10 years, and >10 years, respectively. For KFS, the unadjusted ORs were 4.96 (95% CI, 1.44-17.1) and 4.60 (95% CI, 1.34-15.7) for less than and more than 5 years of use, respectively, relative to no KFS use. Because we did not collect duration data for either heating fuel use or household lighting, we could not carry out comparable analyses for these variables.
Discussion
The results of this study suggest that indoor exposure to smoke from biomass fuel combustion is a risk factor for TB. The association, however, appears to be mainly with use of biomass for heating, rather than cooking. The study also strongly suggests that exposure to smoke from kerosene fuel combustion, either in stoves or in lamps, is a risk factor for TB.
Religion, income, residence outside Kaski district, vitamin consumption, a family history of TB, and not always having lived in the present house also showed statistically significant associations with TB (Table 1). Pack-years of smoking (>8 pack-years) showed an association with TB (p = 0.06), which did not change appreciably after adjustment. Smoking is now an established risk factor for TB (Bates et al. 2007; Chiang et al. 2007; Leung et al. 2004; Yu et al. 1988). The relative risk estimate for religion was very elevated; other studies have also shown differences in TB rates between racial and religious groups, including Tibetan Buddhists (Bhatia et al. 2002; Hill et al. 2006; Mishra et al. 1999; Nelson et al. 2005; Truong et al. 1997). Before concluding that statistical associations are causal, it is important to consider alternative explanations, particularly whether study results might be a result of selection bias, information bias, or confounding in the study design, data collection, or analysis. As with all case-control studies, selection bias in the recruitment of controls is a potential concern. In this study, a systematic procedure for recruitment of all controls from inpatient and outpatient departments of MTH was used, and only one potential control refused to participate. Because most cases were recruited from the RTC, and all controls from MTH, the catchment areas for MTH and RTC might have been different. RTC patients came from a broader area, because it is a referral center for the western development region of Nepal. A higher proportion of cases (28%) than controls (6%) were from five districts other than Kaski. The Kaski district includes Pokhara city, and in general, Kaski residents are more likely to live in urban areas and to be wealthier. This could simply mean that living outside of Kaski is associated with higher exposure to TB risk factors but, alternatively, could indicate some selection bias. We adjusted for area of residence (Kaski or other districts) in the final model, but this would not necessarily have eliminated such a bias.
Another possible source of selection bias arises because we did not exclude some other, non-TB respiratory disease cases from the control group. Unfortunately, control diagnoses were not collected at the time of the study and proved impossible to obtain in retrospect, because of the limited period for which the hospital retains patient records. Because absence of TB was confirmed in controls by X-rays, we can, however, be confident that no chronic obstructive pulmonary disease or pneumonia cases were among our controls. It is possible that inclusion of respiratory disease cases among the controls could have produced a bias toward the null, if risk factors for those cases were similar to risk factors for TB.
Information bias may take the form of outcome misclassification or exposure misclassification. Because all cases were newly diagnosed with active pulmonary TB on the basis of evidence from clinical tests, and controls were also confirmed by chest X-ray and on-the-spot sputum smear testing as not having active pulmonary TB, we consider that disease misclassification is unlikely to have occurred. We obtained all the exposure data by questionnaire. Case-control studies are often considered susceptible to recall bias, in that cases may be more likely than controls to remember past exposures. Because questions asked in this study were about common exposures, however, which both cases and controls experience on a day-to-day basis, we expect recall to have been accurate and any differential recall to have been minimal. We verified the high level of accuracy of reporting of two key exposure variables (stove type and ventilation) by visiting the homes of 28 study participants. Considering this, and that there is no prevailing belief that indoor smoke exposure from biomass-burning stoves or kerosene-burning stoves or lamps is related to TB occurrence, we believe exposure misclassification is likely to be minimal. One possible limitation, however, is that we only asked about the main cooking fuel used. This might have led to some misclassification of exposure status.
The third main area of potential bias is confounding. We collected data on a much more comprehensive range of exposures than did previous studies and investigated their potential to confound the associations with fuel use. Although confounding was present, adjustment with these variables did not eliminate the key associations. There may, of course, be some residual confounding due to misspecification of the variables, and there is no way to rule out the possibility of unknown confounding factors causing the associations found. One possibility is malnutrition, for which we obtained no data and which is a known risk factor for TB. However, family income, for which we did obtain data and which is an excellent indicator of a family's ability to feed itself, was taken into account.
A notable finding in our study was the association with biomass used as a heating fuel. This was unexpected because the study design focused on cooking-fuel use. Hence, the study population was limited to women, who generally do the cooking in Nepal. Although we collected data on history of stove and cookingfuel use, we did not collect a comparable level of data for heating fuels and so are unable to examine heating-fuel use for evidence of an exposure-response relationship.
In hindsight, the findings with biomass as a heating and a cooking fuel make sense. Women may light a cooking fire, set the pot atop it, and leave the room, returning only periodically while cooking takes place. On the other hand, use of heating fuel involves minimization of ventilation and deliberate exposure, as the family sits around the fire. In tropical India and Africa, where several of the other TB and biomass studies have been carried out, use of heating fuel is less common than in the mid-hills of Nepal, where nighttime and winter temperatures are lower.
Our study also found the OR for TB to be high among both kerosene stove and lamp users, particularly the latter. Kerosene cooking fuel and kerosene lamp users were for the most part mutually exclusive groups. Only one of the 22 kerosene lamp users in the study used a kerosene stove. Kerosene stove users were more likely to use electricity for lighting. With one exception, as far as we are aware, no previous studies have examined a relationship between kerosene and TB (Padilla et al. 2001). This one study, carried out in Mexico, obtained crude ORs for use of kerosene-burning stoves of 1.9 (95% CI, 0.8-4.5) for active TB and 4.4 (95% CI, 1.7-11.5) for past TB; no adjusted estimates were presented. We have been unable to find any studies where the relationship between kerosene lighting and TB has been investigated or even incidentally reported.
The question arises as to why kerosene as a cooking fuel could be a TB risk factor but not biomass cooking fuel. This could have something to do with the nature of the emissions. Biomass burning produces very obvious smoke, which may irritate the eyes and respiratory tract, encouraging avoidance behavior. Kerosene, on the other hand, has the appearance of burning more cleanly, even if it does produce substantial amounts of fine particulate matter and vapor-phase chemicals, and may not encourage the same avoidance behavior as biomass smoke. Cooks may be more likely to remain in the room while cooking with kerosene fuel. There are also likely to be differences in the toxic effects of the pollutant mixtures from the two fuels.
Kerosene is one of the main sources of cooking fuel in urban areas and lighting fuel in rural areas of developing countries, including Nepal. Therefore, if kerosene burning can be confirmed as a TB risk factor in other studies, the public health implications would be substantial. In rural areas not connected with electric power, kerosene wick lamps are burned at least 4-5 hr every day. Commonly, these lamps are homemade devices that are highly energy inefficient, with low luminosity. Simple wick kerosene lamps emit substantial amounts of smoke and particles (Schare and Smith 1995). A study conducted in rural Malawi has shown a higher loading of particulates in alveolar macrophages in men from exposure to kerosene in lamps compared with candles, hurricane lamps, and electric lamps (Fullerton et al. 2009). Other emissions from kerosene combustion include carbon monoxide, carbon dioxide, sulfur dioxide, nitrogen dioxide, formaldehyde, and various volatile organic compounds (VOCs) (Traynor et al. 1983). An indoor air pollution study conducted in Bangladesh slums has shown significantly higher concentrations of benzene, toluene, xylene, hexane, and total VOCs emitted from kerosene stoves than from wood-burning stoves (Khalequzzaman et al. 2007).
The use of kerosene fuel is associated with harmful effects that have been documented in a few studies. These effects include impairment of ventilatory function and a rise in blood carboxyhemoglobin in women exposed to kerosene fuel smoke (Behera et al. 1991), and a higher incidence of acute lower respiratory infection in children in homes using KFS and BFS (Sharma et al. 1998).
A causal relationship between exposure to biomass fuel smoke and TB is biologically plausible. The smoke could affect either risk of infection or risk of disease in infected people, or both, as has been shown to be the case with tobacco smoking (Bates et al. 2007). Without knowledge of the time of infection, however, the present study cannot distinguish between the two possibilities. Inhalation of respirable particles and chemicals found in smoke from these sources generates an inflammatory response and impairs the normal clearance of secretions on the tracheobronchial mucosal surface, and may allow TB bacteria to escape the first level of host defenses, which prevent bacilli from reaching the alveoli (Houtmeyers et al. 1999). Smoke also impairs the function of pulmonary alveolar macrophages, an important early defense mechanism against bacteria (Health Effects Institute 2002). Alveolar macrophages isolated from the lungs of smokers have reduced phagocytic ability compared with macrophages from nonsmokers and secrete a lower level of proinflammatory cytokines (Sopori 2002). Exposure to wood smoke in rabbits has been shown to negatively affect antibacterial properties of alveolar macrophages, such as their ability to phagocytize bacteria (Fick et al. 1984).
Conclusion
Our study provides evidence that the use of biomass fuel for household heating is a risk factor for TB, but little evidence that the use of biomass as a cooking fuel is a risk factor in this population. The association is biologically plausible and consistent with the results of some other epidemiologic studies. Nonetheless, there is the possibility of a selection bias arising from differences in the sources of cases and controls. The study also strongly suggests that kerosene fuel burning, particularly for lighting, is a risk factor for TB. That kerosene lamp burning was more strongly associated with TB than kerosene stove use may be because lamps are likely to be kept burning for longer periods than are stoves, which are used only during the period of cooking, and the lamps may be kept closer to people during the evening, increasing the effective intake fraction. In addition, most of the kerosene lamps were wick lamps (21 of 22), whereas most (33 of 42) of the stoves were pressurized (pumped), which produce fewer emissions per unit fuel. Because these kerosene findings are apparently unique, more studies in different settings are needed to confirm them. Should the association with kerosene lamp use be confirmed, replacement of the kerosene lamps with solar lamps or other clean lighting systems would be a solution.
Considering the strong associations of both religion and district of residence in this study, in any future case-control study examining this issue in Nepal, consideration should be given to matching on these factors.
Irrespective of the evidence for associations between indoor biomass use and TB, it is clear that such use produces substantial indoor air pollution with health-damaging chemicals and particulate matter. One, at least partially effective, remedial measure is to replace unflued stoves with chimney stoves. Such stoves, however, require continuing maintenance to maintain good indoor air quality, and because they usually just exhaust emissions to the near outdoors but do not reduce them, even well-operating chimney stoves can only partly reduce total exposures (Smith et al. 2009). Ideally, electric stoves or low-emission biomass stoves, such as semigasifier stoves, or those with cleaner burning fuels (biogas or LPG) would be used. It is more difficult to generalize about kerosene stoves and lamps, because emissions vary greatly by type of device and fuel quality, which is not uniform (Smith 1987). Pressurized kerosene stoves and lamps using good-quality fuel may have low particulate emissions if properly maintained, but inexpensive wick lamps can be dirty, particularly with low-quality fuel. Their replacement with cleaner burning devices may also be justified.
|
v3-fos-license
|
2019-04-19T13:04:28.265Z
|
2009-01-01T00:00:00.000
|
120854544
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/1367-2630/11/1/013054",
"pdf_hash": "da7a127840fbc3029ba140ee38922a3518981f95",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46039",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "d00e60fea33236bcb2d55609a3d0bd510d06817c",
"year": 2009
}
|
pes2o/s2orc
|
Coulomb-field-induced conversion of a high-energy photon into a pair assisted by a counterpropagating laser beam
The laser-induced modification of a fundamental process of quantum electrodynamics, the conversion of a high-energy gamma photon in the Coulomb field of a nucleus into an electron–positron pair, is studied theoretically. Although the employed formalism allows for the general case where the gamma photon and laser photons cross at an arbitrary angle, we here focus on a theoretically interesting and numerically challenging setup, where the laser beam and gamma photon counterpropagate and impinge on a nucleus at rest. For a peak laser field smaller than the critical Schwinger field and gamma photon energy larger than the field-free threshold, the total cross section is verified to be almost unchanged with respect to the field-free case, whereas the differential cross section is drastically modified by the laser field. The modification of the differential cross section is explained by classical arguments. We also find the laser-dependent maximal energy of the produced pair and point out several interesting features of the angular spectrum.
Introduction
The creation of an electron-positron pair by an external electromagnetic field is a striking manifestation of the equivalence of matter and energy. That not only energetic photon fields, but also strong, macroscopic electric fields can produce pairs was first predicted by Sauter [1] and later considered by Schwinger [2]. The basic prediction is that pairs are spontaneously created, but the rate is exponentially damped unless the electric field strength exceeds the so-called critical field E c = m 2 /|e|, where m is the electron mass, e = −|e| the electron charge, and we use natural units such that c = ħ = 1. The transition from the nonperturbative, tunnelling regime for pair production to high-frequency perturbative pair production was studied in [3]- [5]. At present, the strongest electromagnetic fields available in the laboratory are laser fields. However, a plane laser wave cannot alone produce any pairs from the vacuum due to the impossibility of satisfying energy-momentum conservation. Just like in a static magnetic field [6,7], a probing particle is needed in order to obtain a nonvanishing pair production rate. If the laser wave is not plane but a focused pulse [8], or a standing laser wave [9]- [11], pair production is possible without a second agent.
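For orientation (our numerical restatement, not part of the original text), restoring SI factors the critical (Schwinger) field is

\[
E_c \;=\; \frac{m^2 c^3}{|e|\hbar} \;\approx\; 1.3\times 10^{18}~\mathrm{V/m},
\]

which corresponds to a laser intensity of order 10^29 W/cm^2, far beyond present laser technology.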
Laser-induced pair production with an additional source of momentum was first investigated theoretically in the context of pair production by simultaneous absorption of one nonlaser-mode photon and a number of laser-mode photons [12,13]; quite recently, this process was also observed experimentally [14,15]. Another possibility discussed in the literature is laser-induced pair creation in the vicinity of a nucleus. Unfortunately, for a nucleus at rest, the pair production rates are very low [16]- [20]. Recently, this process has been re-examined, with the idea of introducing a moving nucleus [21]- [27]. By letting the nucleus collide head-on with the laser beam at high Lorentz factor γ , in the rest frame of the nucleus the frequency of the laser beam will be blue-shifted or enhanced with a factor of approximately 2γ . In this way, the peak electric field seen by the nucleus in its rest frame approaches the critical field, and the rates are calculated to reach observable values. Other promising schemes are [28]- [30], where muon-antimuon pairs are created from a laser-driven positronium atom, and [31], where the photon-assisted Schwinger effect is considered. Also the creation of virtual, unobservable electron-positron pairs, produces observable effects, such as photon-photon scattering [32]- [35], photon splitting [36] and photon merging [37].
In this paper, we investigate the possibility to create pairs from vacuum in the presence of three external fields: a laser field, a Coulomb field and a single photon, whose frequency exceeds the pair production threshold. All calculations are performed in the rest frame of the nucleus. In contrast to [38], where the same process was considered for a gamma photon and a laser beam propagating in the same direction, we here consider a different geometry: counterpropagating gamma photon and laser wave. We also consider a different regime for the laser parameters. The fact that the gamma photon and the laser photons propagate in different directions renders the numerical treatment of the problem more complex compared to the setup in [38], but also more interesting theoretically. Employing the full formula, including the fully laser-dressed Dirac-Volkov propagator, allows us, in principle, to treat the general situation where the laser beam and the gamma photon cross at an arbitrary angle. For comparison, one example where the photon and laser beams cross at right angles is therefore included.
The relevant Feynman diagrams are shown in figure 1. The matrix element for this process was first calculated by Roshchupkin [39], and also by Borisov et al in [40,41]; however, without performing any concrete numerical evaluations. The matrix element has a crossing symmetry with the one for laser-assisted bremsstrahlung, which was studied previously in many papers, including [42], and by us recently in [43,44].
In our case, pair production is possible in the absence of the laser field through the Bethe-Heitler process [45], because we assume the angular frequency ω γ of the single photon to be larger than the threshold 2m (we denote the frequency of the single photon by a superscript rather than a subscript in view of a rather large number of Lorentz subscripts that we will need to introduce in the analysis later). The presence of the laser will then modify the process, so that we can speak about laser-assisted pair production. By contrast, if ω γ < 2m, the laser field would not really assist; it would be necessary even to produce any pairs at all, and we would call the process laser-induced rather than just laser-assisted.
We note the general observation [46] that to produce an appreciable number of pairs, the electric field in the rest frame of the nucleus has to exceed the critical field. We thus expect that for a subcritical field, the total rate of laser-assisted pair production will be essentially unaffected by the laser field. In particular, the total cross section is expected to be very small for a subcritical field and ω γ < 2m, where the Bethe-Heitler rate vanishes identically. However, the differential cross section, that is, the dependence of the cross section on the directions and energies of the produced particles, can change drastically. In particular, we find that the laser field tends to reverse the direction of the emitted pairs, so that they are produced preferentially in the propagation direction of the laser field, the more so with rising laser intensity. The effect persists also for the case when the gamma photon and laser beam cross at right angles. In comparing the various directions of the laser beam relative to the gamma photon beam, we point out the most favourable geometrical setup for focusing the angular distribution of the created pair. We furthermore show that the angular distribution calculated with the full quantum formula can be explained from the classical motion of the electron and positron in the laser field, if the field-free cross section is utilized as initial distribution. An interesting directional dependence of the maximal energy obtained by the produced pair is also discussed.
The paper is organized as follows. In section 2, we introduce the theory necessary to describe the laser-assisted process, including Volkov states and the Dirac-Volkov propagator, leading to the expression for the S-matrix elements. Next, we present numerical results together with a detailed discussion in section 3.
Theory
In this section, we review the theory used to describe laser-matter interaction. The interaction of the electron and positron with the laser field will be treated nonperturbatively, whereas the interaction with the high-frequency photon field and the Coulomb field is taken into account by the first-order perturbation theory.
Volkov wave functions and propagator
We start from the Dirac equation coupled to an external plane electromagnetic wave A µ (φ),

(iγ µ ∂ µ − eγ µ A µ (φ) − m)ψ(x) = 0, (1)

where φ = k µ x µ is the phase of the wave, and k µ = (ω, k) is the wave vector. Scalar products will be written with a dot as a · b = a µ b µ = a 0 b 0 − a · b, and a hat denotes contraction with the Dirac gamma matrices: Â = γ µ A µ . The solution of equation (1) is the well-known Volkov solution [47], given in equations (2) and (3). Here, ψ − (x) denotes the electron wave function, and ψ + (x) is the negative-energy wave function, corresponding to the positron. Note that e always denotes the charge of the electron.
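Since the displayed equations (2) and (3) did not survive extraction here, we note for reference the standard textbook form of the Volkov state for an electron of momentum p in a plane wave A µ (φ) (Landau-Lifshitz conventions; the normalization and the sign bookkeeping for the positron solution used in the original paper may differ):

\[
\psi_p(x) \;=\; \left[\,1+\frac{e\,\hat{k}\hat{A}(\varphi)}{2\,k\cdot p}\,\right]\frac{u_p}{\sqrt{2p_0 V}}\,
\exp\!\left\{-\,\mathrm{i}\,p\cdot x-\mathrm{i}\!\int_0^{\varphi}\!\left[\frac{e\,p\cdot A(\varphi')}{k\cdot p}-\frac{e^{2}A^{2}(\varphi')}{2\,k\cdot p}\right]\mathrm{d}\varphi'\right\}.
\]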
The spinor u ∓ p satisfies (p̂ ∓ m)u ∓ p = 0. In the following, we specialize to a monochromatic laser wave of linear polarization [equation (4)], where ε µ = (0, ε) is the polarization vector satisfying ε 2 = −1 and k · ε = 0, and a is the amplitude of the vector potential. The integral in equation (3) can then be performed analytically [equation (5)]; in the last line of (5) we have defined the effective momentum q µ = p µ + e 2 a 2 k µ /(4k · p), with corresponding effective mass m 2 * = q 2 = m 2 + e 2 a 2 /2 and effective energy Q = q 0 , and the other parameters are α = ea(p · ε)/(k · p) and β = −e 2 a 2 /(8k · p). Later, when we write down the matrix element, we will use a Fourier decomposition of the wave function (2) in terms of generalized Bessel functions, where A 0 (s, α, β) is defined as an infinite sum over products of ordinary Bessel functions, together with its generalizations A j for positive integer j.
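The commonly used definition of the generalized Bessel function, going back to Reiss [12], is A 0 (s, α, β) = Σ_k J_{s−2k}(α) J_k(β). Assuming that convention (the exact sign and phase convention of the paper's equations may differ), a truncated numerical evaluation looks as follows; the function name and cutoff are our own choices:

from scipy.special import jv  # ordinary Bessel functions of the first kind

def generalized_bessel_A0(s: int, alpha: float, beta: float, kmax: int = 200) -> float:
    """Truncated sum A_0(s, alpha, beta) = sum_k J_{s-2k}(alpha) * J_k(beta).

    The terms fall off rapidly once |s - 2k| exceeds |alpha| or |k| exceeds |beta|,
    so a finite cutoff kmax is sufficient in practice.
    """
    return sum(jv(s - 2 * k, alpha) * jv(k, beta) for k in range(-kmax, kmax + 1))

# Consistency check: for beta = 0 the sum collapses to an ordinary Bessel function,
# A_0(s, alpha, 0) = J_s(alpha).
print(generalized_bessel_A0(3, 1.5, 0.0), jv(3, 1.5))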
To write down a second-order matrix element, we also need the Dirac-Volkov propagator G(x, x′), which can be expressed in a number of different ways [52]. We use the representation of [53,54], equation (9), in which ε is small and positive. In the last equality of (9), we have used the specific form (4) of the vector potential, expanded the propagator into a product of two Fourier series, and finally changed variables p µ → p µ + e 2 a 2 k µ /(4k · p). This transformation makes the appearance of the effective mass m * in the propagator denominator explicit.
Matrix element and cross section
In our treatment, the final states of the electron and the positron are described by Volkov states, and the Dirac-Volkov propagator is employed for the intermediate, virtual states, i.e. the interaction of all fermions with the laser field is taken into account to all orders. The effect of the Coulomb field of the nucleus and of the gamma photon is calculated using perturbation theory. To this end, we introduce the vector potential A C µ (x) of the nucleus with atomic charge number Z = 1 (the scaling with Z can later be restored easily) and the vector potential A γ µ (x) of the perturbative photon [equation (10)]. Here, ω γ denotes the frequency and k γ µ the µth component of the momentum four-vector of the gamma photon. Note the minus sign in the exponential in A γ µ (x), since photon absorption is the desired process. Expressions (2), (9) and (10) now permit us to write down the matrix element S for the production of one electron with effective momentum q e and one positron with effective momentum q p , by absorption of one photon k γ , corresponding to both Feynman diagrams in figure 1; the result is given in equation (11). We recall that the index e (p) is used to label the electron (positron) momentum vector. Expression (11) was first obtained in [39]. The first line in equation (11) implicitly defines the nth-order matrix element S n , and the argument of the delta function in equation (11) expresses energy conservation in terms of the effective energies Q p and Q e . The number −n (+n) can be interpreted as the number of photons absorbed from (emitted into) the laser mode during the process. In particular, the threshold ω γ − nω ≥ 2m * for pair creation is higher than in the field-free case, due to the larger effective mass m * > m. We further remark that in contrast to the case with copropagating gamma photon and laser field [38], where the condition k γ · k = 0 provides for a considerable simplification of the matrix element (11), the present case with k γ · k ≠ 0 requires the full expression (11). In particular, all terms in the sum over s have to be included, which renders the numerical evaluation of the differential cross section rather demanding.
Another numerical test of correctness is the behaviour of the cross section at the apparent singularity when k ·p e,p → 0 in the F functions in the expression on the right-hand side of equation (11) (we recall that p 2 =p p and p 3 =p e ). The matrix element can be shown to be finite in this limit, but the calculation constitutes a test of numerical stability as the arguments of the generalized Bessel functions tend to infinity.
Results and discussion
In this section, we present results of a concrete numerical evaluation of the differential cross section (14). The frequency of the laser is taken to be ω = 1 keV, and the amplitude a is chosen such that the classical nonlinearity parameter ξ = −ea/m is of order unity. Experimentally, this choice of parameters can be realized in either of the two following scenarios. For a high-power laser, operating at a photon energy of 1 eV and intensity of 9 × 10¹⁷ W cm⁻², head-on collision with a relativistic nucleus with a Lorentz boost factor γ ≈ 500 will give ξ = 1 and ω = 1 keV in the rest frame of the nucleus. In an alternative scenario, a focused x-ray free-electron laser [56] applied to a nucleus at rest may also give access to the parameters above. Here ξ = 1 and ω = 1 keV in the laboratory frame requires an intensity of 9 × 10²³ W cm⁻² at the focus of the laser. In this regime, the peak electric field of the laser is still much smaller than the critical field, E_peak/E_c = ξω/m ≪ 1. In view of the admittedly high laser frequency ω, we note that we expect the results presented here to be insensitive to ω (at fixed ξ), as long as we have ξω/m ≪ 1. We will mostly consider the case where the laser counterpropagates with the gamma photon, and describe the direction of the produced electron and positron by an angle θ_e,p, as depicted in figure 2(a). Also examples where the gamma photon and laser photons copropagate, depicted in figure 2(b), and where k and k_γ are perpendicular to each other, as shown in figure 2(c), will be discussed. The gamma photon has three-momentum k_γ, the laser field has wave vector k and polarization vector ε, the positron has effective three-momentum q_p, and the electron has effective three-momentum q_e. The vectors q_p and q_e lie in the plane spanned by k and ε.
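As a rough consistency check of these numbers, the short sketch below uses the widely quoted linear-polarization relation ξ ≈ 0.855 × 10⁻⁹ λ[µm] √(I[W cm⁻²]) together with E_peak/E_c = ξω/m; this relation and the function names are our assumptions, not taken from the paper.

```python
import math

# Check the quoted intensities for xi = 1 and the subcritical-field condition.
# Assumption: the standard engineering relation for linear polarization,
#   xi ~ 0.855e-9 * lambda[um] * sqrt(I[W/cm^2]),
# together with lambda[um] = 1.2398 / E_photon[eV] and m = 0.511 MeV.

def intensity_for_xi(xi, photon_energy_eV):
    lam_um = 1.2398 / photon_energy_eV
    return (xi / (0.855e-9 * lam_um)) ** 2          # W/cm^2

for E_ph in (1.0, 1.0e3):                           # optical (1 eV) and x-ray (1 keV) photons
    print(f"omega = {E_ph:6.0f} eV  ->  I(xi=1) ~ {intensity_for_xi(1.0, E_ph):.1e} W/cm^2")
# -> ~9e17 W/cm^2 at 1 eV and ~9e23 W/cm^2 at 1 keV, as quoted above.

m_eV = 0.511e6
xi, omega_eV = 1.0, 1.0e3
print("E_peak/E_c = xi*omega/m ~", xi * omega_eV / m_eV)   # ~2e-3, i.e. well below 1
```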
Energy cutoff
In principle, since the sum over n in equation (11) extends from −∞ to +∞, the created pair can acquire arbitrarily high effective energies Q_p and Q_e. This should be compared with the field-free case, given by the Bethe-Heitler formula [45], where the cross section vanishes identically for positron (or electron) energies E > ω_γ − m. In practice, however, an apparent cutoff will occur in the energy spectrum, and thereby limit the available energy for the produced pair. In the following, we will assume the directions q_e/|q_e|, q_p/|q_p| of the positron and electron given, and consider the differential cross section (14) as a function of the effective energy Q_p of the positron. The effective energy Q_e of the electron is fixed by energy conservation for each n. It follows from expression (11) that to find the energy cutoff, we should consider the behaviour of the function (15) as a function of n. As follows from the discussion in section 3.2, we can assume that C is a noninteger. As shown in appendix A, function (15) has the same cutoff properties as the generalized Bessel function provided C is larger than the cutoff index of the first of the A_0 functions in the numerator in equation (15).
As β_e − β_p = −[(k · q_e)⁻¹ + (k · q_p)⁻¹] e²a²/8 < 0, and high values of Q_p are obtained by absorbing photons, that is, for negative n, it follows that Q_p^cutoff is the largest positron energy for which the inequality (17) is still satisfied. The integer n_pos.cutoff is defined in equation (A.1). Since the quantities k · q_e and k · q_p involve direction cosines, it becomes clear that the energy cutoff is direction-dependent.
In particular, this implies that the maximal energy Q_p^cutoff will depend not only on the direction of the positron, but also on the direction of the electron. In order to determine the direction-dependent energy cutoff, one therefore proceeds as follows. In the first step, one fixes the directions of the electron and positron, which define n_pos.cutoff as a function of n and Q_p. In the second step, one varies Q_p and in this way finds the largest positron effective energy Q_p satisfying equation (17).

Figure 3 caption: (a) The cutoff according to equation (17). For comparison, we also show the effective energy that would result if the positron were created with the largest available energy in the absence of the laser, E_p = E_max = ω_γ − m, and then placed in the laser field with fixed direction of q_p (all curves are labelled accordingly). The difference of the latter two curves to the laser-dressed solution is because of the correlation between the electron and positron induced by the laser. This kind of correlation was also observed in [22]. In (b), we show a concrete example of the cross section, for θ = 2.8 rad in the counterpropagating setup, chosen to maximize the cutoff for ξ = 2. The 'laser-assisted' curves show complex oscillatory behaviour, with a peak just before the cutoff. The cutoffs predicted by equation (17) are indicated by arrows. Note that the curves for ξ = 1 and 0 were multiplied by a factor 50; the ordinate axis is kept on a linear scale.
As a concrete example, we let the positron and electron be ejected at equal angles θ_p = θ_e ≡ θ in the counterpropagating setup (figure 2(a)), and show in figure 3 the cutoff as a function of θ for different values of the intensity parameter ξ. The frequency of the single photon is ω_γ = √6 m, which corresponds exactly to the threshold value 2m_* for ξ = 1. In the same figure, we also show a concrete evaluation of the differential cross section for the corresponding parameters, compared with the laser-free case. The magnitude of the differential cross section is here significantly larger than the case without the laser, and also displays complicated oscillatory behaviour.
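As a quick arithmetic check of this statement (our sketch; it assumes the linear-polarization effective-mass relation m_* = m√(1 + ξ²/2) noted earlier, a convention we infer rather than one quoted explicitly here):

```latex
2m_*\big|_{\xi=1} = 2m\sqrt{1+\tfrac{1}{2}} = \sqrt{6}\,m \approx 2.449\,m \approx 1.25\ \mathrm{MeV},
```

so ω_γ = √6 m indeed sits exactly at threshold for ξ = 1, while for ξ = 2 one has 2m_* = 2√3 m ≈ 3.46 m > √6 m, and the channel only opens once additional laser photons are absorbed (negative n), in line with the cutoff discussion above.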
Resonances and competing processes
In principle, the matrix element (11) diverges whenever the denominator of the Dirac-Volkov propagator vanishes, i.e. when the resonance condition (18) is fulfilled for some integer s. Physically speaking, this means that the considered second-order process splits up into two consecutive first-order processes, laser-induced pair creation by a gamma photon followed by Coulomb scattering of the electron or the positron. This phenomenon has been studied before in the context of laser-assisted electron-electron scattering [57]-[59] and laser-assisted bremsstrahlung [42]-[44], [60]. The usual way to regularize the matrix element, so that it remains finite also at condition (18), is to add a small imaginary part to the energy of the electron (positron) [61], related to the total probability for the intermediate state to decay by Compton scattering. Finite values will also result if the finite extent of the laser field or the frequency width of the laser or photon beam is taken into account. In the current paper, however, we consider a regime of parameters where the resonances are strongly suppressed.
Mathematically, this means that the value of s needed to satisfy the resonance condition (18) is larger than the corresponding cutoff index for the generalized Bessel function, and that the contribution from this index in the sum over s is negligible, once properly regularized. Physically speaking, we are dealing with laser parameters such that purely laser-induced processes, which cannot occur in the absence of the laser, have vanishingly small probability to occur. The basic requirement for laser-induced processes like pair creation by a photon [13] (at photon frequency ω_γ ≈ 2m) or pair creation by a nucleus [16] to have substantial probability is that the peak electric field E_peak = aω should be comparable with the critical field, E_peak/E_c ≈ 1, and, as mentioned before, we consider only laser parameters a, ω such that E_peak ≪ E_c. This also means that at the field strengths considered, there are no competing processes, so that our process will indeed be the dominant one.
Angular distribution
For the field-free case, the pairs prefer to emerge at an angle θ ∼ m/ω γ with the vector k γ [45]. When the laser field is turned on, we expect to find more pairs in the direction of the laser wave vector k. In figure 4, we display the differential cross section integrated over dQ p and dQ e , for ξ = 1, 2. The peak is seen to shift from the direction of the gamma photon to the direction of the laser wave.
In figure 5, we consider for comparative purposes a different setup: here we let the gamma photon beam and the laser beam cross at right angles, so that k · k γ = 0. The angles for the positron and electron are defined in the same way as before, so that θ = θ e = θ p , with cos θ = −q p · k/(ω|q p |) (see figure 2(c)). As expected, the laser-assisted angular distribution is distorted compared with the rather broad field-free distribution. Comparing the three relative directions of laser beam and gamma photon beam (figures 4 and 5), we see that the setup most favourable for focusing of the created pair is when the laser photons and the gamma photon propagate in the same direction, shown in figure 4(b). In this case, the laser field considerably narrows the angular distribution, so that the pair is ejected into a much smaller solid angle, compared with the field-free cross section. An intuitive explanation for this conclusion is offered below.
Interestingly, the angular distribution can be explained from the classical motion of the electron and positron in the laser field, with the Bethe-Heitler cross section as the initial momentum distribution. To this end, assume that the particle (electron or positron) with mass m and charge e is created instantaneously by the Bethe-Heitler process with initial momentum p^µ_0 at laser phase φ_0. This should be a good approximation since the creation process is expected to take place on a scale comparable to the Compton wavelength λ_C = 1/m ≪ 1/ω, much smaller than the laser wavelength. According to the classical, relativistic equations of motion for a charged particle in a plane electromagnetic wave with vector potential given in (4), the momentum p^µ at a later phase φ reads [62]

Figure 4 caption: Differential cross section for ξ = 1 (solid line: quantum result (14), circles: classical approximation (22)) and for ξ = 2 (solid green line: quantum, circles: classical). For transformation to other frequently employed units for the cross section one uses MeV⁻² = 389 b = 389 × 10⁻²⁴ cm². As in figure 3, ω_γ = √6 m. The pair is emitted at equal angles θ_p = θ_e = θ (see figure 2), in the plane spanned by k and ε. In the geometry of counterpropagating gamma photon and laser beam, the direction θ = 0 corresponds to the gamma photon propagation direction, whereas θ = π indicates the propagation direction of the laser field. The curves for ξ = 0 and 1 were multiplied by a factor of 10. We note that the area under these curves is notably different, which implies that the presence of the laser enhances the number of pairs produced at θ_p = θ_e. The differential cross section integrated over all angles is however, as we will see later (see figure 6), almost unchanged as compared with the laser-free case. For comparison, we show in panel (b) the case where the laser beam and gamma photon beam are copropagating (figure 2(b)), so that θ = 0 corresponds to the direction of both gamma photon and laser beam. The parameters are otherwise unchanged. The ξ = 0 curve is the same as in (a) and therefore not shown. In this case, the peaks are much sharper, due to the combined effect of the gamma photon and the laser beam. Also in the copropagating case, as verified in [38], the total cross section is the same as the field-free cross section.

Figure 5 caption: Differential cross section integrated over Q_p, for the case where the propagation direction of the gamma photon is perpendicular to the propagation direction of the laser field, k · k_γ = 0 (figure 2(c)). Here θ = π corresponds to emission of the pair in the direction of the laser photons. The parameters are otherwise identical to those employed in figure 4. The curve for ξ = 0 was multiplied with a factor 10³, and the curve for ξ = 1 was multiplied with a factor 10.
Averaging over φ yields the effective momentum q^µ: here an important remark is that k · q = k · p = k · p_0, and that (20) is independent of ω. The final effective momentum thus depends on the laser phase when the particle was created. Conversely, given a final effective momentum q and a phase φ_0, the initial momentum p_0(q, φ_0) follows. Now, assuming the initial electron and positron momenta p_e0, p_p0 to be distributed according to the Bethe-Heitler differential cross section dσ_BH/(d³p_e d³p_p) ≡ f_BH(p_e, p_p) [45], the classically laser-modified cross section dσ_class./(d³q_e d³q_p) is obtained by averaging over the initial phase, where (∂p_e0/∂q_e)(∂p_p0/∂q_p) is the Jacobian. Integrating over Q_p and Q_e, we arrive at the cross section differential in the solid angles of the electron and positron. The cross section (22) is plotted with circles in figure 4, as a comparison to the full quantum formula (14). The agreement is very good, confirming the picture that the pairs are instantaneously created by the gamma photon, and subsequently accelerated by the laser field as classical particles. From the above arguments, the intuitive picture of why the angular distribution is distorted by the laser field compared to the field-free case is clear: in addition to the initial momentum from the absorbed gamma photon, the positron (or electron) receives an additional momentum kick from the laser field. Since the momentum transfer in the laser propagation direction grows with the laser field strength as ξ², compared to ξ in the polarization direction (see equation (20)), it follows that the tendency for the pair to be ejected in the propagation direction grows with ξ, or the field strength of the laser. The width of the distribution is largest for ξ = 1 in figure 4(a), because in this regime the momentum transfers in the laser propagation direction from the laser field, p∥_laser, and from the gamma photon, p∥_0, are comparable and opposite, so that the net momentum transfer p∥_laser + p∥_0 is rather small. A quantitative estimate yields p∥_laser = ξ²m²ω/(2k · p_0) ≈ 0.3ξ²m, and p∥_0 ≈ −0.4m for the energy E_0 = ω_γ/2 = √6 m/2 and the angle θ = 1. This results, for ξ = 1 in figure 4(a), in a broad distribution where neither the laser photon direction nor the gamma photon direction is preferred as ejection direction. In contrast, in figure 4(b), where the copropagating setup is shown, the transferred momenta from the gamma photon and from the laser photons along k point in the same direction, p∥_0 ≈ 0.4m and p∥_laser = ξ²m²ω/(2k · p_0) ≈ 0.6ξ²m for the same p_0 as above, so that the sum is larger than without the laser field, |p∥_laser + p∥_0| > |p∥_0|. Since the phase-averaged momentum absorbed from the laser beam along ε vanishes, the total momentum component along ε is given by the momentum p⊥_0 from the gamma photon. The final angle θ = arctan[p⊥_0/(p∥_0 + p∥_laser)] in figure 4(b) is therefore essentially smaller compared with the laser-free case, and consequently a narrower angular distribution follows.
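These order-of-magnitude estimates are easy to reproduce numerically. The short sketch below is ours (not the paper's code) and assumes the counterpropagating geometry of figure 2(a), an initial positron energy E_0 = ω_γ/2 = √6 m/2, and emission at θ = 1 rad from the gamma-photon direction, in units where m = 1:

```python
import math

# Reproduce the quoted estimates p_par_laser ~ 0.3*xi^2*m and p_par_0 ~ -0.4*m.
# Assumptions (ours): counterpropagating geometry of figure 2(a), E0 = omega_gamma/2,
# emission angle theta = 1 rad measured from k_gamma; units with m = 1.

omega = 1.0e3 / 0.511e6            # 1 keV laser photon in units of the electron mass
E0 = math.sqrt(6.0) / 2.0          # initial positron energy, half of omega_gamma = sqrt(6) m
p0 = math.sqrt(E0**2 - 1.0)        # initial momentum magnitude
theta = 1.0                        # angle w.r.t. the gamma-photon direction (rad)

p0_par = -p0 * math.cos(theta)     # component along k (k is antiparallel to k_gamma here)
k_dot_p0 = omega * (E0 - p0_par)   # four-product k.p0 = omega*(E0 - khat . p0_vec)

for xi in (1.0, 2.0):
    p_laser = xi**2 * omega / (2.0 * k_dot_p0)    # xi^2 m^2 omega / (2 k.p0), with m = 1
    print(f"xi = {xi}:  p_par_laser ~ {p_laser:.2f} m,  p_par_0 ~ {p0_par:.2f} m,"
          f"  net ~ {p_laser + p0_par:+.2f} m")
# -> p_par_laser ~ 0.31*xi^2 m and p_par_0 ~ -0.38 m, matching the ~0.3*xi^2 m and
#    ~ -0.4 m figures quoted above; for xi = 1 the two contributions nearly cancel.
```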
We conclude by the remark that in general the copropagating setup is the most favourable for laser-assisted channelling of the pairs. For practical purposes of measuring the created pair, or creation of, for example, a positron beam, the copropagating setup is thus to be preferred.
Total cross section
The total cross section is obtained by integrating the differential cross section (14) over the energies Q_p, Q_e, and solid angles Ω_e, Ω_p of the produced positron and electron. Here, it is convenient to replace the sum over the number of exchanged photons n by an integral, and to evaluate this integral with the delta function so that n equals the integer closest to (ω_γ − Q_p − Q_e)/ω. This is a good approximation since ω ≪ Q_e,p, ω_γ. The remaining sixfold integral has to be performed numerically (we employ a Monte Carlo method). We note that this method has been used before to obtain total rates for the production of pairs from a colliding laser beam and a nucleus [25,26]. In general, Monte Carlo integration is the method of choice for integrals of high dimensionality where the accuracy demand is modest. The result of one such calculation, for the counterpropagating setup, is shown in figure 6, where we present the total cross section as a function of the frequency ω_γ of the perturbative photon. As expected, in the region where pair production is possible without the laser, the rates are almost indistinguishable.
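As an illustration of the integration strategy just described (and only of the strategy: the integrand below is a toy placeholder, not the actual differential cross section (14), and the unit integration box is an assumption), a plain Monte Carlo estimate of a six-dimensional integral with its 1/√N statistical error can be coded as follows:

```python
import numpy as np

# Minimal illustration of Monte Carlo integration in six dimensions: sample points
# uniformly in the integration box and average the integrand. The integrand here
# is a toy stand-in for dsigma/(dQ_p dQ_e dOmega_e dOmega_p).

rng = np.random.default_rng(0)

def toy_integrand(x):
    # placeholder for the differential cross section evaluated at the sampled points
    return np.exp(-np.sum(x**2, axis=1))

n_samples, dim = 200_000, 6
lo, hi = 0.0, 1.0                                 # integration box (assumed, for illustration)
x = rng.uniform(lo, hi, size=(n_samples, dim))
vals = toy_integrand(x)
volume = (hi - lo) ** dim
estimate = volume * vals.mean()
error = volume * vals.std(ddof=1) / np.sqrt(n_samples)   # statistical error scales as 1/sqrt(N)
print(f"integral ~ {estimate:.5f} +/- {error:.5f}")
```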
Conclusions
In this paper, we have presented calculations of the laser-assisted Bethe-Heitler process, i.e. pair production by a high-frequency photon in the presence of a nuclear Coulomb field and an intense laser field. The regime of parameters considered was that of a subcritical laser field, that is, the peak electric field of the laser was much smaller than the critical field E_c = m²/|e|, but with the nonlinear parameter ξ of order unity and the gamma photon frequency ω_γ > 2m. In this regime, pair production is possible without the field, and as the laser field strength is below the critical field, it is expected that the total rates are almost unaffected by the laser. This was confirmed by evaluating the six-fold integral for the total cross section numerically (see figure 6). However, the differential cross section was found to be drastically altered by the presence of the laser wave, as shown in figures 4 and 5. For practical purposes, the copropagating setup is concluded to be superior, although drastic enhancement of the pair production is also predicted for the counterpropagating setup and the setup 'at right angles', provided the detection of the pairs is restricted to a narrow angular region (see figures 4 and 5). Finally, we note that all cross sections shown here are evaluated for a nuclear charge number Z = 1 and scale as Z², since we have taken into account the Coulomb field in first-order perturbation theory.
Clear laser-assisted signatures are thus expected in the differential cross sections, and these might provide an opportunity for interesting experiments in the near future.
Appendix A

Important for the understanding of physical processes expressed through generalized Bessel functions is the cutoff behaviour. A rule is needed for how many terms should be included in sums like equation (11) to reach convergence. For the ordinary Bessel function J_n(α), the cutoff rule is well known: for n > α (positive n, α) the magnitude of J_n(α) will drop sharply as J_n(α) ∼ α^n/n^(n+1/2), and the cutoff is therefore n ≈ α. For the generalized Bessel function A_0(n, α, β), the correct rule reads for positive α and β: Beyond the cutoff, |A_0(n, α, β)| will show inverse factorial decrease ∼ n^(−|n|), similar to J_n(α). These cutoff rules can be derived from the asymptotic expansion by the saddle point method [13,48,63] or from the maximal and minimal values of the classically allowed energy for an electron moving in a plane electromagnetic wave [62].
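A numerical illustration of this drop-off is straightforward if one adopts the series representation commonly used in the strong-field literature, A_0(n, α, β) = Σ_k J_{n−2k}(α) J_k(β); whether this matches the paper's exact convention is our assumption, but the qualitative cutoff behaviour is insensitive to it:

```python
import numpy as np
from scipy.special import jv

# Sketch of the cutoff behaviour of the generalized Bessel function, using the
# standard series representation (assumed convention):
#   A_0(n, alpha, beta) = sum_k J_{n-2k}(alpha) * J_k(beta)

def A0(n, alpha, beta, kmax=200):
    k = np.arange(-kmax, kmax + 1)
    return np.sum(jv(n - 2 * k, alpha) * jv(k, beta))

alpha, beta = 20.0, 5.0
# With this representation the sum is appreciable only for |n| up to roughly
# alpha + 2*beta; beyond that the magnitude drops off factorially.
for n in (0, 10, 20, 30, 35, 40):
    print(f"n = {n:3d}:  |A_0| ~ {abs(A0(n, alpha, beta)):.3e}")
```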
The diagnosis dilemma: Dyslexia and visual-spatial ability
Visual-spatial ability is important for mathematics learning but also for future STEM participation. Some studies report children with dyslexia have superior visual-spatial skills and other studies report a deficit. We sought to further explore the relationship between children formally identified as having dyslexia and visual-spatial ability. Despite our best efforts, and despite recruiting from a large potential sample population, we were unable to secure a sufficient number of participants for statistical power. Thus, our findings consider the ethical dilemma of diagnosis; namely, (1) how do children come to be tested for disabilities? And, (2) what are the potential implications, mathematical or otherwise, for children who have disabilities but are not formally identified? This report has important implications for children with disabilities and for educators.
Visual-spatial ability is comprised of the following subcomponents: spatial visualization, mental rotation, and spatial perception (Linn & Peterson, 1985).It involves the ability to perform movements of various two-or three-dimensional figures and to mentally combine, transform, and move these figures to produce a new design (Casey et al., 2008;Clements, 2004).The research demonstrating the importance of visual-spatial ability in children is compelling.Numerous studies support the notion that visual-spatial ability promotes and is linked to mathematics learning and enhances the possibility of an individual participating in science, technology, engineering, and mathematics (STEM) careers (Lubinski, 2010;Newcombe, 2010;Tolar, Lederberg, & Fletcher, 2009;Wai, Lubinski, & Benbow, 2009).Noteworthy in this growing body of research is the finding that visual-spatial ability is malleable; that is, it can be taught and children can show improvement over time (Uttal et al., 2013).
Our initial aim in this research was to explore the relationship between visual-spatial ability and children formally identified with "dyslexia."Approximately 4 to 10% of the population is estimated to have dyslexia (Aleci, Piana, Piccoli, & Bertolini, 2010;Osisanya, Lazarus, & Adewunmi, 2013;Snowling & Melby-Lervåg, 2016).Dyslexia is defined as "a pattern of learning difficulties characterized by problems with accurate or fluent word recognition, poor decoding, and poor spelling abilities" (American Psychiatric Association, 2013, p. 67).Aleci and colleagues (2010) have proposed that individuals with dyslexia may also have a general impairment of spatial perception whereby a crowding effect occurs in the reading of texts.However, some studies report that individuals with dyslexia have superior visual-spatial ability (Wang & Yang, 2011) while others suggest that no significant differences exist (Duranovic, Dedeic, & Gavrić, 2015).The research is also rather scant with younger elementary students (age 8 to 10), which was the intended foci age of this research.
Our interest in exploring the relationship between visual-spatial ability and dyslexia was motivated by the conflicting research, the proposed importance of visual-spatial ability to STEM participation, and the highly malleable nature of visual-spatial ability. As we explain shortly, despite our best efforts and our research-based estimates of a potential sample population, we were unable to secure a sufficient number of participants to create a statistically reliable sample. Consequently, our findings had less to do with dyslexia and visual-spatial reasoning and more to do with the dilemma of diagnosis; namely, (1) how do children come to be tested for disabilities? And, (2) what are the potential implications, mathematical or otherwise, for children who have disabilities but are not formally identified?
We state up front that we do not take the opportunity to challenge constructions of disability.This was not our intention and nor the focus of the unintended shift in foci.Given the importance of mathematics education to a child's future, ensuring that all children have access to mathematics education, or access to additional supports if a disability is identified, should be a common global concern for teachers, educators, and policy makers.Consequently, reflecting on the outcomes of our recruitment efforts we believe is an important commentary that may serve to advance discussions of equity and school-based processes.
Dyslexia and Visual-spatial Ability
Wang and Yang (2011) looked at visual-spatial abilities in Chinese and Taiwanese students aged 10-12 with dyslexia against a control from both countries. Participants were asked to rotate a computer 3D model of a field of columns hiding a ball and were then asked to pick the correct location of the ball from the plan. Their results showed no significant difference between the groups with dyslexia and the control on accuracy. They did find a significant difference in answering speed, with the participants with dyslexia answering more quickly than the controls. This suggests that individuals with dyslexia have improved visual-spatial abilities based on faster response times without an increase in error rates. Brunswick, Martin, and Marzano (2010) found no task in which university-aged students with dyslexia outperformed a control group when using a virtual reality test and a paper-and-pencil test. A sex effect was noted, however. Males with dyslexia outperformed females with dyslexia and unimpaired individuals on a variety of measures. This finding further suggests that superior visual-spatial ability in those with dyslexia may be sex-specific.
Testing the hypothesis that children with dyslexia have enhanced visual-spatial abilities, Duranovic, Dedeic, and Gavrić (2015) used multiple visual-spatial tasks, including the Vandenberg Test of Mental Rotation (1978), and found no significant differences between groups, which suggests that children with dyslexia have similar visual spatial abilities to unimpaired children.In contrast, Winner et al. (2011) found that high school students with dyslexia compared to a non-dyslexic group did not have enhanced visual-spatial skills but rather deficits on many visuospatial tasks.This contradicts results from Duranovic, et al. who found equivalent scores on similar tasks.
Russeler, Scholz, Jordan, and Quaiser-Pohl (2005) aimed to determine the significance of mental rotation ability in children with developmental dyslexia. These researchers compared the mental rotation abilities of children with dyslexia to those of children without dyslexia. They compared the results from three tests in which letters, three-dimensional figures, and coloured pictures tested the children's mental rotation abilities. Results suggested that children with dyslexia, when compared to the control group, showed a deficit in mental rotation and spatial abilities.
Jones, Branigan, and Kelly ( 2008) tested dyslexic and non-dyslexic university-level readers' visual attention through a visual-search task and letter position encoding through a symbols task and found significant differences in dyslexic and non-dyslexic readers, in favor of those without dyslexia.These findings support the connection between developmental dyslexia and decreased visual attention ability.Similarly, Facoetti, Corradi, Ruffino, Gori, and Zorzi (2010) tested the phonological, rapid automatized, and visual spatial attention skills in children with familial risk of developmental dyslexia to a group of children without familial risk of developmental dyslexia.Results from a comparison of the two groups suggest that children at risk show a deficit in visual-spatial attention.
Given the importance of spatial ability to mathematics and to future STEM participation, we sought to explore the relationship between children formally identified as having dyslexia and visual-spatial ability and we sought to contribute to the understudied population of school-aged children in grades three to eight.This was the preliminary phase of a sequence of studies that would then ultimately consider the malleability of spatial ability in children with dyslexia.
Intended Study Participants
Students were recruited from 10 elementary schools from a mid-sized urban center.Only students "formally" identified with dyslexia were invited to participate in our research.The list of potential participants was first established by each of the school's special education teacher who oversees education plans provided to students with exceptionalities, and who would have knowledge of those students formally identified.
In our own jurisdiction, there is a distinction between students whose exceptionalities have been identified either formally or informally, and this may also be common in other school boards. A formal identification, as we explain shortly, would have involved psychometric assessments and would be more reliable in our view than informal and perhaps inconsistent identification of students by teachers. For this research, the psychometric assessments may have overtly stated "dyslexia" as a diagnosis or would have indicated "difficulties characterized by problems with accurate or fluent word recognition, poor decoding, and poor spelling abilities" (American Psychiatric Association, 2013, p. 67).
In our jurisdiction, parents or the school principal can initiate the formal identification of a student and this occurs through a review and recommendation by the Identification, Placement and Review Committee (IPRC).This committee is legislated to identify exceptional students and to determine an action plan for meeting the needs of the student.The IPRC includes numerous education professionals and formal identification usually involves significant psychometric assessment, usually at the expense of the school board.All psychoeducational assessments of children aged 18 and younger require informed consent from parents.Long delays for school board funded testing of children are often reported by parents and teachers (Blackstock, 2016); consequently, some parents pay for private psychometric assessment to expedite the formal identification (Dunn, 2006).
A student who has been reviewed by the IPRC is considered to be formally identified.An individual education plan (IEP), which outlines the special education program and learning goals for the student, must be completed within 30 school days of a student's formal identification by the IPRC (OME, 2002).The formal identification ensures services and supports for the student because there is an explicit and legal obligation on behalf of the school board to be accountable to the recommendations of the IPRC.This is not to say that those students who have been informally identified are not receiving appropriate services.These students may also have IEPs.However, there is no formal accountability to the IPRC.We surmise that there are advantages for a formal identification or otherwise such a process would not exist.Formal identification creates an obligation by the school to accommodate or modify services and supports based on the needs of the student, and these obligations are not subject to constraints that may arise in terms of budget cutbacks for teaching support, resources, and so forth.
We take the time to explain this process of formal identification in our jurisdiction because our results are directly impacted because of this process.Using conservative population estimates of the prevalence of dyslexia (approximately 4%), based on the population of students (n = 4138) at the 10 elementary schools participating in the study, we anticipated approximately 165 potential participants.Instead, only 25 students were formally identified across the 10 schools, of which 13 parents agreed to allow their child to participate in the study (boys n = 8, girls n = 5).Participants ranged from the third to eighth grade.Therefore, less than 1% of the students at these 10 schools were formally identified as having dyslexia and thus officially receiving the supports and services necessary to develop their reading and/or their writing.
Measures and Procedures
A variety of measures were collected for the students that agreed to participate. These included official school-level achievement data, psychometric assessment, and a demographic questionnaire completed by the parents. The children were then tested individually on different days and in different locations, as they were tested at their respective schools. Students were tested on spatial transformations (Levine, Huttenlocher, Taylor, & Langrock, 1999), the Piagetian Water-Level Task (Quaiser-Pohl, Lehmann, & Eid, 2004), the Rod-and-Frame Test (Quaiser-Pohl et al., 2004), and the Vandenberg Mental Rotations Task (MRT) (Quaiser-Pohl et al., 2004; Shepard & Metzler, 1971; Vandenberg & Kuse, 1978). The tasks were selected because they had either been used previously or had properties similar to those used in other studies to explore visual-spatial ability, and thus the results would enable us to contribute consistently to prior research findings. These tests were also selected because they could be easily administered by classroom teachers and thus could be used in the future to assist with identifying students if our results showed a robust pattern.
Given the very limited amount of participants, and the wide range in ages and grades, we do not report the full results of this testing in this paper given the lack of statistical power in the small sample (n = 11).As outlined, our focus shifted to consider why so few students were formally identified and the implications of this unexpected and corollary finding for children with exceptionalities.
Educational Implications
Early on during the recruitment period of the research, it became apparent that there were not going to be enough students to compose an adequate sample. However, this led to what may be an even more important question regarding identification of students with learning disabilities and potential equity issues in special education. To be clear, at each of the schools there were students who were informally identified as having reading and/or writing challenges. These students were receiving some level of supports and services if informally identified, and we make no judgement on the quality of what is provided for these students. Nevertheless, given our criteria for inclusion in this research, these students were not invited to participate because their diagnosis was not independently confirmed through psychometric assessments and formalized through the IPRC.
It may be that there were, by chance, few students with dyslexia compared to what might be expected.Or, it may be that some parents have declined to have their child formally identified for various reasons, such as fear of stigma.Parents have the right to refuse sharing the psychometric assessments with the school, including any diagnoses (Ontario Psychological Association, 2013).This concern may have contributed partially to the low number of possible participants for our study but, in our view, not sufficiently enough to account fully for the very low number of formally identified students.
In each of the participating schools, we were told consistently and clearly by the special education teachers that quotas existed on the number of students that were funded annually for psychometric assessment.As a result, a plausible explanation is that students who may need to be tested and identified formally are not because of limited funding.Our results raise important ethical questions about who gets tested, who gets identified formally, and to what extent are instances of comorbidity of other learning challenges missed because formal testing and IPRC review is not occurring?
The discrepancy between how many students actually struggle with reading and the number who are formally identified is problematic.Firstly, identification is important because many students have difficulties that extend beyond reading.Dyslexia tends to "co-occur with other disorders, including specific language impairment, speech sound disorder, and attentiondeficit/hyperactivity disorder" (Snowling & Melby-Lervåg, 2016).Students who are not identified are not only going to continue to struggle with reading, but with potentially other comorbid disorders that are perhaps less obvious, and may impede cognitive and social development in other ways.Consequently, a lack of formal identification may also prevent learning about other challenges that might otherwise go undetected.
For example, a comorbid diagnosis of dyslexia and dyscalculia (i.e., problems processing numerical information, learning arithmetic facts, and performing accurate or fluent calculations) occurs in approximately 40% to 65% of identified cases (Barbaresi, Katusic, Colligan, Weaver, & Jacobsen, 2005; Osisanya et al., 2013; Wilson et al., 2015), despite the fact that they are proposed to have different cognitive profiles (Landerl, Fussenegger, Moll, & Willburger, 2009). We would assert that children are more likely to be tested for dyslexia than dyscalculia, although no research was found to indicate the prevalence of one over the other.

Secondly, identification can lead to early intervention, which ensures that the student receives support before they get too far behind their peers. Knivsberg, Reichelt and Nødland (1999) suggest that symptoms become apparent during the pre-school years, meaning that intervention can begin before students have a chance to fall too far behind. Reading skills are crucial in most school disciplines, raising the concern that students with poor reading skills will fall behind in multiple subjects. For example, Beringer et al. (2008) found that students with dyslexia also had problems with both handwriting and written composition, again going back to the comorbidity of disorders. Therefore, early identification ensures that support is available for not only reading, but also all compounding academic difficulties.
Finally, in the absence of formal identification, accountability and the full range of services and supports may not be accessible to a student, or may even be scaled back in instances where resources are limited. According to Dunn (2006), teachers' observations are not given equal weight to psychoeducational assessments in terms of support recommendations. In fact: "In order for a student to be classified, the standardized assessment scores completed by the school psychologist or speech and language pathologist had to render a profile commensurate with an exceptionality category (e.g., learning disability). If this was not the case, the student would be considered as a slow learner and denied the services he/she needed" (Dunn, 2006, p. 129). When recruiting participants for this study, many special education teachers suggested that the sample size could be increased by including those students with IEPs for reading difficulties, despite not being formally diagnosed. While our inclusion criteria may be viewed as a limitation of this research, relying on teacher judgment alone to identify learning challenges would have opened up greater concerns over the validity of our participant sample and was therefore not considered at any time.
The observation by the special education teachers that more students could be included based on identifications done by teachers, demonstrates that there are students who are informally identified and receiving some level of support.However, the validity of the identification, the extent of the support, and whether the support adequately addresses all the learning challenges of the student would be uncertain without the psychometric assessments and the IPRC review.Moreover, the extent to which the support and services might continue consistently through a child's education and whether these supports and services are scaled back in times of fiscal constraint are unknown.To be clear, we make no rehabilitation judgement; that is, we are not suggesting that formal diagnosis results in beneficial outcomes for the student or more beneficial outcomes than that of an informally identified student.Rather, formal identification results in consistent and sustained learning support services and may also yield comorbid diagnoses.
Perhaps one of the most important reasons that so few formal identifications are occurring is due to the high costs of psychometric assessments, approximately $1,500 to $2,500 in Ontario (Blackstock, 2016).As a result, many of the schools report that there are restrictions placed on how many students they can recommend for these assessments.For low socioeconomic status (SES) schools in particular, which tend to have higher levels of students with special education needs (People for Education, 2013), sending every student for testing is just not practical.Regardless, parents who can afford the assessments can expedite the process.
Parents of our student participants were also asked in which grade they noticed their child had a reading difficulty and also the grade in which their child was formally diagnosed. The number of years between the onset of reading difficulties and formal diagnosis was as follows: 15% 0 years, 8% 1 year, 23% 2 years, 38% 3 years, 8% 4 years, and 8% 5 years. Therefore, the majority of children had to wait at least three years for a formal diagnosis. Evidence from our own small sample of students supports an SES advantage for formal identification. We found that 61.5% of the mothers of participants had some form of post-secondary education, which is said to be a predictor of high SES (Mistry, Biesanz, Chien, Howes, & Benner, 2008). This means that diagnosis and support may go to those who can afford it rather than to those who are most in need. Alternatively, parents from higher socioeconomic backgrounds tend to advocate more for their children (Lareau, 1987), and this may offer a partial explanation for the higher SES amongst those children formally identified in the small sample.
Our intent in this research was to examine the visual-spatial abilities of elementary students with dyslexia. Given the mixed research in this area, the importance of visual-spatial reasoning, and its highly malleable nature, this goal is still laudable, and more research is still needed. Our unexpected finding of low formal identification suggests that comparative research exploring the learning and longitudinal socio-economic implications for learners who are formally versus informally identified as having exceptionalities is also, and perhaps urgently, needed. Research of this nature would also investigate the extent to which teacher judgements are sufficient for developing plans of action for special education, in the absence of specialized professional support and recommendations (i.e., an educational psychologist). Whether a student is truly marginalized over the long term by receiving only an "informal" identification is unknown. From an equity perspective, research of this nature should be a priority for all stakeholders, including researchers, parent groups, and also schools.
Multidisciplinary treatment of esophageal cancer: The role of active surveillance after neoadjuvant chemoradiation
Abstract The optimal treatment of esophageal cancer is still controversial. Neoadjuvant chemoradiotherapy followed by radical esophagectomy is a standard treatment. Morbidity after esophagectomy however is still considerable and has an impact on patients' quality of life. Given a pathologic complete response rate of approximately 30% in patients after neoadjuvant chemoradiation followed by surgery, active surveillance has been introduced as a new alternative approach. Active surveillance involves regular clinical response evaluations in patients after neoadjuvant therapy to detect residual or recurrent disease. As long as there is no suspicion of disease activity, surgery is withheld. Esophagectomy is reserved for patients presenting with an incomplete response or resectable recurrent disease. Active surveillance after neoadjuvant treatment has been previously applied in other types of malignancy with encouraging results. This paper discusses its role in esophageal cancer.
| ESOPHAGEAL CANCER
Esophageal cancer (EC) is an aggressive disease. The two most common types of EC are adenocarcinoma (AC) and squamous cell carcinoma (SCC). AC and SCC differ with regard to etiology, geographic distribution, response to chemotherapy/radiotherapy, prognosis and possibly the need for surgical resection. Esophagectomy is the cornerstone in the treatment of EC. During the last two decades, studies on lymph node dissection during esophagectomy have shown improved survival in patients who underwent an extensive nodal dissection. 1 A total of 23 lymph nodes was proposed as the optimal threshold in order to achieve a maximal survival benefit after esophagectomy. The extent of lymph node dissection, expressed as the total number of nodes dissected, was found to be an independent predictor of survival. 2 Whether the observed relationship between the number of nodes dissected and survival reflects a true benefit of more extensive surgery or is due to stage migration is not yet clear. However, a transthoracic esophagectomy with a two-field nodal dissection is considered by many as the standard surgical approach nowadays.
Esophagectomy is associated with major complications. [3][4] The diminished quality of life of patients after neoadjuvant therapy plus esophagectomy is another drawback. A patient's quality of life is substantially impaired after surgery including role and social functioning. 5 Reducing morbidity after esophagectomy is a challenge.
The application of minimally invasive surgical techniques, better selection of surgical candidates, preoptimization of patient condition, and enhanced recovery protocols have shown to be associated with a reduction in complications and quicker return to normal functioning. [6][7][8][9][10]
| MULTIMODALITY TREATMENT
Perioperative therapies have been incorporated in the treatment of locally advanced EC in the last decade. The rationale is to downstage the disease, facilitate a curative (R0) resection, treat micrometastases and improve overall survival. Studies on SCC mainly originate from Asia whereas AC is mostly seen in the Western world. Besides the published Japanese JCOG9907 and Dutch CROSS studies, [11][12] an impressive number of (ongoing) randomized controlled trials (RCTs) aim to clarify the benefit and harm of perioperative regimens in the treatment of the disease. There is currently no consensus on the optimal neoadjuvant treatment regimen yet as the CROSS trial (neoadjuvant chemoradiotherapy-nCRT), English OEO2 trial (neoadjuvant chemotherapy), MAGIC trial (pre-and postoperative chemotherapy), French FFCD trial (nCRT), and German FLOT4 trial (pre-and postoperative chemotherapy) all were beneficial but had different regimens. [12][13][14][15][16][17] The role of neoadjuvant radiation, as an adjunct to chemotherapy, is still questioned by some, especially for esophageal AC. Proponents feel that radiotherapy in the neoadjuvant setting treats both locoregional disease as well as subclinical micrometastases. This is illustrated by the high rate (92%) of patients that underwent radical surgery (resection margins negative) after nCRT. 12 Moreover, almost one third of the patients after nCRT in the CROSS trial had a pathologically complete response (pCR), i.e. no viable tumor cells in the resection specimen. This opens the way to think about the concept of an organ sparing treatment for EC.
| THE CONCEPT OF ORGAN-PRESERVATION
Thorough understanding of the impact of neoadjuvant therapies on rectal cancer patients led to the hypothesis that radiation may be responsible for increasing tumor necrosis over time justifying a less extensive resection. 18 Prolongation of the time interval between nCRT and surgical resection supported the rationale for preservation of the anal sphincter. Although the initial goal of nCRT was to facilitate a radical surgical resection and decrease rates of locoregional recurrence after surgery, the observation of a clinically complete response (i.e. no proof of residual tumor by clinical staging modalities including endoscopy and imaging techniques; cCR) after neoadjuvant therapy in a proportion of cancer patients who were unfit for surgery led to an active surveillance or "wait and see" policy. Herein, systemic and local treatment can lead to regression of the primary tumor, while control of undetectable micrometastases at time of diagnosis might also be another benefit. Resection of the primary tumor and locoregional lymph nodes is reserved for patients with residual/recurrent disease only. Standard surgery is now omitted from the multimodal treatment in several types of malignancy, as CRT alone was found to be curative in some patients with bladder, prostate, head and neck, and rectal cancer. [19][20][21][22][23][24][25]
| ORGAN-PRESERVATION IN ESOPHAGEAL CANCER
The CROSS study showed improved overall and disease-free survival after five weekly cycles of carboplatin and paclitaxel with concurrent 41.4 Gy radiation plus surgery for patients diagnosed with locally advanced EC. [12][13]26 Distant recurrence rates were also lower for patients that underwent combined treatment compared to patients that underwent esophagectomy alone. Furthermore, the CROSS study showed that nearly one third of the patients had a pCR: 49% in SCC and 23% in AC. 12,26 This finding fueled the debate on applying active surveillance after nCRT. Theoretically, patients with a cCR (based on endoscopy with biopsies, endosonography (EUS), positron emission (PET) and computer tomography (CT) scanning) may have been cured (i.e. have a true pCR) and could potentially be spared an esophagectomy. A second possible benefit of an active surveillance strategy in patients with a cCR is that tumors with an aggressive biological behavior and yet undetected disseminated disease that cannot be cured with surgery will be identified over time before recurrent local disease becomes detectable. The main argument supporting an organ-sparing approach in this group of patients is that, despite surgery, early systemic recurrence will occur (within 1 year) and surgery for local disease control is not needed; therefore, patients are put at risk for morbidity and mortality of an operation without changing prognosis. [27][28] In other words, avoiding unnecessary major surgery at a time when distant metastases are present but cannot be detected may result in similar oncologic outcomes with a high likelihood of improved quality of life and preservation of immune system activity.
The feasibility of an active surveillance approach for EC has been investigated in a step-by-step process. Shapiro et al concluded that a prolonged time to surgery up to 45 days after nCRT had no effect on disease-free and overall survival. 29 Interestingly, postponed surgery up to 12 weeks not only did not affect the oncologic outcome but increased the probability of a pCR. The importance of delaying surgery up to 12 weeks after nCRT is that this allows for a more accurate assessment of a cCR by endoscopy, EUS, and PET-CT scanning. By 12 weeks post-surgery, most inflammatory changes due to CRT have largely resolved. Another retrospective study found no differences in postoperative complications or survival between patients operated on less or more than 8 weeks after neoadjuvant treatment. 30 Some other studies found that a time interval of at least 10 weeks for AC and 13 weeks for SCC after completion of neoadjuvant treatment was associated with a higher probability of pathologic pCR. [31][32] As it was felt that a response assessment would be optimal 12 weeks after nCRT, the next question was what modalities are best for clinical response assessment? And if there is residual disease in the esophagus, where is this located and can this be targeted?
To answer these questions, the resection specimens of 102 consecutive patients after nCRT and esophagectomy were evaluated. In non-complete responders (i.e. residual cancer in the resection specimen after nCRT), 89% of the patients had residual tumor cells in the mucosa and/or the submucosa. 33 Hence, concentric and toward-the-lumen regression seem to compose a mixed pattern of residual disease despite lack of involvement of the surrounding stroma and regional lymph nodes. This finding may allow a safe and reliable follow-up based on both endoscopic (with biopsies) and imaging modalities. The accuracy of diagnostic tests for the assessment of a cCR has been evaluated in the preSANO (Surgery As Needed for Oesophageal Cancer) trial, which was designed as a prospective, single-arm, multicenter trial. 34 The clinical response evaluation (CRE) was proposed as a two-step process (CRE I and II) in six centers in the Netherlands. In this study, 31% of tumor regression grade (TRG) 3 or TRG4 tumors (>10% residual carcinoma in the resection specimen) were missed by endoscopy with regular biopsies and fine-needle aspiration (FNA), 10% were missed by bite-on-bite biopsies plus FNA, 28% were missed by EUS plus FNA, and 15% were missed by PET-CT. 35 Sensitivity of endoscopy alone hardly exceeds 60% in the existing studies. [36][37] Cheedella et al compared cCR to pCR in one of the largest cohort studies published. 38 Two hundred and eighty-four patients with EC were evaluated after nCRT.
Among the 77% of patients with a cCR after nCRT, only 31% achieved a pCR after surgery. Overall, the sensitivity of cCR for pCR was 97.1%, but the specificity was too low (29.8%). These findings confirm that preoperative staging remains one of the biggest challenges in the management of EC despite the evolving technologic advances. Focusing on the role of endoscopic biopsies, the preSANO study proved that bite-on-bite biopsies increased the chance of detecting residual cancer cells in deeper layers of the esophagus, such as the submucosa, compared with regular biopsies. 34 Moreover, at least two independent expert pathologists revised each endoscopic and surgical specimen, while the accuracy of endoscopy and of pathologic examination appears to improve over time with strict protocols and technologic novelties.
A side study of the preSANO trial revealed the inaccuracy of PET-CT for the identification of TRG3-4 and its inability to distinguish relapse of the disease from inflammation at 12 weeks post-nCRT. 39 However, distant metastases were detected in almost 10% of patients, and surgery was withheld in this group. These patients would otherwise have been operated on if no PET-CT had been performed 12 weeks after nCRT. Hence, PET-CT is useful for the detection of interval metastases and may have a role in an active surveillance strategy with serial scanning. According to a recent meta-analysis, endoscopic biopsies, EUS, PET-CT, and PET-CT with SUVmax or %DSUVmax identified residual disease with a sensitivity of 33%, 96%, 74%, 69%, and 73% and specificity of 95%, 8%, 52%, 72%, and 63%, respectively. 40 Although EUS has the highest sensitivity among these tests and endoscopic biopsies have the highest specificity, the use of all tests increases the possibility of early detection of residual or regrowth disease during the follow-up period.
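To make the practical meaning of these sensitivity and specificity figures more concrete, the short calculation below (our illustration, not part of the cited meta-analysis) converts them into predictive values under an assumed 70% prevalence of residual disease after nCRT, roughly consistent with the ~30% pCR rate discussed earlier:

```python
# Illustration (assumed prevalence, not from the cited studies): how reported
# sensitivity and specificity translate into predictive values after nCRT.

def predictive_values(sens, spec, prevalence):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

tests = {                      # sensitivity, specificity from the meta-analysis quoted above
    "endoscopic biopsies": (0.33, 0.95),
    "EUS":                 (0.96, 0.08),
    "PET-CT":              (0.74, 0.52),
}
for name, (sens, spec) in tests.items():
    ppv, npv = predictive_values(sens, spec, prevalence=0.70)
    print(f"{name:20s}  PPV = {ppv:.2f}   NPV = {npv:.2f}")
# A negative biopsy alone leaves a substantial chance of missed residual disease
# (low NPV), which is why combining several modalities is advocated above.
```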
| DEFINITIVE CHEMORADIATION PLUS SALVAGE SURGERY VERSUS NEOADJUVANT CHEMORADIATION AND SURGERY AS NEEDED
The idea of CRT without surgery, also called definitive CRT (dCRT), for EC is not novel. In observational studies, patients with unresectable tumors or patients not eligible for a surgical resection due to limited physical status underwent dCRT with the aim of achieving cure without surgery. A French RCT showed that patients diagnosed with locally advanced cancer of the thoracic esophagus, mainly SCC, who respond to CRT do not benefit from additional surgery compared to continuation of CRT (definitive treatment). 41 This was also shown in a Chinese RCT. 42 The 5-year overall survival was comparable between the group that underwent surgery and the patients treated with dCRT. A third RCT compared the efficacy of induction chemotherapy plus CRT (40 Gy) plus surgery to induction chemotherapy plus dCRT (at least 60 Gy) without surgery. This study concluded that despite improved local control, surgical resection did not affect survival in patients with locally advanced SCC. 43 Nowadays, in patients with SCC, dCRT is considered a curative treatment, especially in patients that are not good surgical candidates.
A phase-II study evaluated the results of dCRT for resectable locally advanced EC in 41 patients. Some 28 (68%) patients had grade 3 or higher toxicity, while four therapy-related deaths were recorded reflecting the toxicity of the regimen. Twenty-one patients underwent surgery for residual or recurrent disease where dCRT had not cured the disease. 44 Additional esophagectomy after dCRT in patients with residual/recurrent cancer is defined as salvage surgery.
Surgery after dCRT should be considered as a "rescue" treatment rather than delayed surgery as proposed in the active surveillance protocols after neoadjuvant therapy.
A retrospective multicenter European study compared patients who underwent salvage esophagectomy after dCRT with patients who underwent planned esophagectomy after completion of nCRT.
Interestingly, anastomotic leak and surgical site infection rates were higher after salvage surgery, while 3-year overall and disease-free survival were similar for the two groups. 45 Several arguments favor nCRT with surgery as needed over dCRT with salvage surgery. First, nCRT (up to 40-45 Gy of radiation) is associated with lower complication rates and less toxicity than dCRT (50-60 Gy). Second, surgery is likely associated with fewer complications given the lower dose of radiation applied, which results in less mediastinal fibrosis. Limiting the dose and field of radiation may also limit cardiac and pulmonary toxicity, reducing postoperative surgical and medical complications. Finally, there is no strong evidence from randomized clinical studies that nCRT regimens are less effective than the higher radiation dose used for dCRT in terms of pathological response and survival. Therefore, an organ-sparing approach using an nCRT regimen with surgery as needed in patients with residual or recurrent disease seems reasonable. Whereas dCRT aims to cure the disease without applying surgery, in a "surgery as needed" approach resection is still anticipated but may not be needed in patients with a persistent cCR. Table 1 shows the differences between the two treatments.
| RETROSPECTIVE STUDIES ON THE EFFICACY OF ACTIVE SURVEILLANCE IN ESOPHAGEAL CANCER
A Dutch multicenter study of 31 patients under active surveillance with surgery as needed and 67 patients in the immediate surgery group after nCRT (CROSS regimen) showed that 3-year overall survival was 77% and 55%, respectively (HR 0.41; 95% CI 0.14-1.20, P = .104). 46 Moreover, the 3-year progression-free survival was 60% and 54%, respectively (HR 1.08; 95% CI 0.44-2.67, P = .871).
Importantly, distant dissemination rate, R0 resections, and postoperative complications were comparable between the two groups.
However, this was a retrospective study in which the median follow-up of the active surveillance group was less than 3 years. Another drawback was the heterogeneity in the surveillance strategies.
The MD Anderson Cancer Center presented their experience with surgery in patients with a cCR who underwent surveillance.
The 5-year overall survival was 58%. Twelve of 13 patients who had a locoregional regrowth could be operated on (delayed surgery) with excellent perioperative outcomes. Comparison of these patients with patients undergoing standard treatment (neoadjuvant therapy plus surgery irrespective of response to treatment) showed no statistically significant difference in median overall survival. [47][48] Similar studies comparing survival after active surveillance plus delayed surgery in patients with a cCR with standard treatment (neoadjuvant CRT plus standard surgery) come from Ireland and Italy, and support an active surveillance strategy. [49][50] In contrast, a French retrospective study found a higher recurrence rate when surgery was omitted after CRT (50.8% vs 32.7%, P = .021). 51 In this study, the vast majority of the patients had SCC (84.1%). Patients who underwent additional esophagectomy also had a higher 5-year overall survival compared to the non-operative group. Although these results appear to favor the operative approach over surveillance, selection bias in the patients who were included in the study and underwent surgery is a major limitation. For instance, patients who refused to undergo surgery after dCRT and were therefore included in the surveillance group may have had a poor physical status. Indeed, patients who underwent surveillance were older, more often had a poorer nutritional status, and had a higher ASA score. Moreover, the neoadjuvant regimens and dosages were heterogeneous.
In summary, these studies support the feasibility and safety of an active surveillance approach in selected patients with a cCR after nCRT, in line with a recent systematic review from the Netherlands. 52 The choice of a nonoperative strategy is also supported by patient preferences: a recent study showed that patients accept a lower chance of overall survival in order to avoid an esophagectomy. 53
| RANDOMIZED CLINICAL STUDIES
The Dutch SANO trial is a phase III multicenter RCT comparing the clinical and oncologic outcomes of neoadjuvant therapy with surgery as needed/active surveillance versus neoadjuvant therapy plus standard esophagectomy in patients with resectable AC or SCC. 54 The trial seeks to prove non-inferiority of active surveillance compared to standard surgery. The primary outcome of the study is overall survival. Secondary outcomes are the proportion of patients who do not undergo surgery, quality of life, irresectability (T4b) rate, radical resection rate, postoperative complications, progression-free survival, distant dissemination rate, and cost-effectiveness. In the intervention arm (active surveillance), patients with a cCR 12 weeks after nCRT undergo intense follow-up (CREs), and (delayed) surgery is only done when there is a strong suspicion of cancer recurrence without distant metastases. In further detail, during CRE-I, 6 weeks after completion of induction CRT (CROSS), all patients undergo esophagogastroduodenoscopy with biopsies, radial EUS with additional EUS-FNA in case of suspected lymph node disease, and PET-CT for exclusion of distant metastases. 46

TABLE 1 Differences between definitive chemoradiation with salvage surgery and neoadjuvant chemoradiation with delayed surgery (surgery as needed).

The French Esostrate-Prodige 32 study is also comparing standard surgery with active surveillance after nCRT for resectable EC. 56 Randomization, in contrast to the SANO trial, is done at an individual rather than an institutional level. Moreover, the Esostrate trial uses a more intense neoadjuvant treatment. Therefore, the pCR rate may be higher than in the SANO trial, albeit with a possibly higher risk of toxicity and adverse effects. The primary outcome is overall survival. Recruitment is slow, however.
The design of the SANO trial seems to facilitate a smooth recruitment of patients among the 12 participating high-volume centers. Randomization on an individual rather than an institutional level has some limitations as pointed out by Blazeby et al.
They concluded that optimizing recruitment of patients to an operative vs a nonoperative approach is challenging. Only 11% of the patients with SCC were ultimately eligible for randomization in a feasibility study of dCRT vs surgery. 57 This was attributed to inconsistencies between centers in how surgeons and oncologists informed patients during the consent process. Audio-recording of consultations, data interpretation, outcome analysis, and training of recruiters may be key to further improving randomization for demanding oncologic research questions.
| POTENTIAL BENEFIT OF ACTIVE SURVEILLANCE
A recently published international study of 2704 patients diagnosed with EC who underwent esophagectomy between 2015 and 2016 reported a 59% overall incidence of complications. 58 Moreover, 30- and 90-day mortality was 2.4% and 4.5%, respectively. Interestingly, the vast majority of patients with a complication experienced multiple adverse events. The comprehensive complication index (CCI) was developed in an effort to summarize the total burden of postoperative complications in a single comprehensive parameter. In a later analysis of the CROSS trial, the CCI was comparable for patients who underwent nCRT plus surgery vs surgery alone. 59 However, patients experience long-lasting symptoms after esophagectomy that impact quality of life. [60][61] Alimentary disorders and reflux are the most frequently reported symptoms. 61 Overall, nutritional and psychological status deteriorate markedly after surgery, mainly due to changes in daily habits, while fatigue and appetite loss may persist for a long period postoperatively. 60 This justifies initiating studies looking at the benefits and harms of "surgery as needed": first, avoiding an esophagectomy spares the patient the associated risk of reduced quality of life; second, the morbidity and mortality related to the surgical intervention are avoided, as previously reported for other malignancies. 20,[62][63] Another argument for delaying surgery after nCRT and opting for an organ-sparing approach is that patients have more time to recover after therapy, with improvement in physical, social, and self-care functioning. Surgical trauma and its consequences also impair the immune system. [63][64][65][66] Hence, avoidance of surgery may provide time for immune function to recover and attack any remaining viable tumor cells. Finally, as already discussed, a prolonged time to surgery was associated with better histopathological assessment of tumor response to neoadjuvant treatment and prognostication. 29
| CONCERNS
The accuracy of diagnostic tests used during active surveillance is a possible concern. Residual cancer after nCRT may be missed, leading to an unnecessary delay of surgery in patients with false-negative CREs. Theoretically, this could result in patients presenting with an irresectable or incurable (cT4b) regrowth or a lower chance of a complete tumor resection (R0). It also remains unknown whether delayed surgery increases postoperative morbidity, mortality, or the distant dissemination rate. However, the lower dose of radiation, close and repeated monitoring of patients for disease recurrence, and patient selection are more favorable in an active surveillance approach than in salvage surgery after dCRT. One may also argue that there is a chance of a higher distant dissemination rate in patients who undergo active surveillance, as undetected residual cancer cells may give rise to blood-borne metastases. These concerns have been addressed in the protocol of the SANO study and appropriate stopping rules have been defined. 54 The expertise of the physicians involved in the response evaluations and interpretation of data is important, and implementing an active surveillance program needs to be guided, guarded, and supported by a health system. Dedicated multidisciplinary team meetings, repeated quality assessments, and training of the staff involved are important. Although the active surveillance approach in EC may result in non-inferior overall survival and lower treatment-related morbidity, the cost-effectiveness of this treatment approach is as yet unknown. In summary, close monitoring is needed for patients in an active surveillance program. This involves the repeated use of accurate diagnostic modalities and skilled, trained specialists in order to prevent irresectable or incurable regrowth of cancer that may even give rise to distant metastases.
| FUTURE PERSPECTIVE
The added value of diffusion-weighted (DW) imaging in combination with T2-weighted (T2W) MRI has recently been presented. With this combined technique, sensitivity changed from 90%-100% to 90%-97% and specificity increased from 8%-25% to 42%-50%. 67 Despite the low specificity and the risk of overstaging complete responders, it is nevertheless a useful tool that can be incorporated into current active surveillance protocols to improve early detection of residual or recurrent disease. Another novel tool that may be implemented in surveillance protocols in the future is circulating tumor DNA (ctDNA) technology. This, along with new biomarkers identified in peripheral blood, may contribute to earlier and more accurate detection of disease dissemination by identifying targeted genetic markers. [68][69]
CONFLICT OF INTEREST
The authors declare no conflict of interest for this article.
Defensive caesarean section: A reality and a recommended health care improvement for Romanian obstetrics
Abstract Rationale Defensive caesarean section (CS) has become one of the most common medical procedures worldwide. Additionally, performing CS in accordance with the patient's choice is considered appropriate professional practice. Aims and Objective This paper reports a prospective, observational, multicenter study to quantify the use of this type of practice, which is performed by obstetricians to avoid medico-legal complaints and decrease the frequency of malpractice litigation. Methods We interviewed 73 obstetricians from three distinct obstetrics and gynaecology units to assess their opinion regarding defensive caesarean delivery and caesarean delivery performed upon maternal request. We conducted an opinion-based survey using a questionnaire of nine close-ended questions. Results Of the 73 respondents, 51 (69.9%) stated that they perform defensive CS; 63 (86.3%) declared that their choice of birth delivery is influenced by the risk of being accused of malpractice; 60 (82.2%) indicated that it is normal for the patient to be able to decide on the type of delivery; and 63 (86.3%) declared that they consult their patients regarding their delivery preferences. We found statistically significant differences between the respondents who declared that they perform defensive CS (69.9%) and those who said that they are influenced by the risk of malpractice when they choose the method of delivery for their patients (86.3%) (P < .001; McNemar test). Conclusions The results of our study indicate that defensive caesarean section is a widespread practice among obstetric practitioners in Romania.
| INTRODUCTION
A natural and predictable outcome of free-market medicine is the practice of defensive medicine. In obstetrics, this is most often seen as "the defensive caesarean section (CS)". 1 In this study, the term "defensive CS" (or CS with a defensive indication) is defined as a caesarean delivery recommended by the doctor, in the absence of any clear medical indication that such a delivery method is needed, in order to avoid possible litigation or an accusation of malpractice. 1 This definition was proposed because no other clear definition currently exists. Some authors have defined defensive CS as a caesarean section performed by the doctor to avoid a lawsuit rather than for the benefit of the patient, a practice considered legitimate by some and immoral by others. 2 There is a difference between defensive CS and CS on maternal request. Caesarean delivery on maternal request is defined as a primary caesarean delivery performed at the request of the mother in the absence of any medical or obstetric indication. 2,3 There are insufficient data to evaluate the benefits and risks of CS on maternal request compared with vaginal delivery, and more research is needed. Therefore, any decision to perform a CS on maternal request should be carefully analysed, individualized, and consistent with ethical principles. 4 Recently, support for the physician's decision to implement an informed pregnant patient's request for caesarean delivery in the absence of an accepted medical indication has been increasing. [5][6][7] Therefore, it is now considered ethically acceptable to perform CS in a well-informed patient who has provided consent, and this is considered good professional practice.
Defensive CS can be considered an example of defensive medicine, which is defined as the deviation of medical behavior from protocols or guidelines in order to reduce the number of complaints or criticisms from patients. 3 Summerton described this approach to general medical practice as the ordering of tests, treatments, and procedures to protect the doctor from criticism rather than to correctly diagnose and treat the patient. 3 Defensive medical practice can be either negative or positive: the distress produced by fear of possible litigation may result in the doctor declining to treat particular patients or to perform certain risky procedures, whereas ordering the additional tests and initial treatments inherent to defensive medicine may reduce the risk of injury that leads to malpractice complaints. 4 Obstetrics and surgery are perceived as the specialties most vulnerable to malpractice claims. 4 In obstetrics, the risk of malpractice litigation increases because there are at least two patients: the pregnant woman (mostly young and healthy) and her newborn or newborns (socially perceived and accepted as distinct patients). Additionally, pregnancy, parturition, and subsequent hospitalization are considered by the general population to be routine medical events associated with a positive outcome. Accordingly, pregnancy and delivery are generally not perceived as potentially dangerous, although there are risks with vaginal delivery as well as with CS. Therefore, the fear of a malpractice lawsuit may increase the incidence of CS. 5 Some studies estimated that 27.5% of deliveries were performed as CS, of which 6.6% were performed because of legal considerations rather than strict medical indications. 6 Other studies have documented a gradual increase in the incidence of elective CS in Western Europe along with the increase in defensive medical practices by obstetricians, and concluded that defensive medicine in obstetrics is deeply rooted in the everyday practice of obstetrics and gynaecology physicians. 7 Additionally, the morbidity associated with vaginal delivery is considered socially unacceptable, because the general perception is that CS delivery is safer. 8 Patients' lack of information on CS often gives them the perception that this type of delivery is associated only with benefits. In 2014, Romania had the third highest CS rate (38%) in Europe, approximately 12 percentage points lower than that of Turkey (50.36%) and close to that of Italy (38.81%). 9 In the current study we report a multicenter prospective study that measured the number of defensive CS performed in three tertiary medical units in Romania. The secondary objective was to test whether introducing caesarean delivery upon maternal request would reduce, at least in part, the incidence of such litigation, and to investigate the defensive practices that result from the potential threat of litigation. The study was carried out with the agreement of the committees of each of the tertiary centers, and informed consent was obtained from the participants after they had completely understood the intended use of the data they provided.
| Data collection
The inclusion criteria were as follows: practicing obstetrician with a licence to practice in Romania, affiliation with one of the three departments where the study was conducted and freely and voluntarily agreeing (unremunerated and unrewarded materially or otherwise) to participate in the study and to provide truthful information (anonymously) with results that would be included in scientific works. The only exclusion criterion was the freely expressed refusal of the interviewed obstetrician to participate in the study. Visiting obstetricians were excluded from the survey.
Voluntary completion of the opinion-based questionnaire represented written informed consent to participate in the study and agreement to the publication of its results. The inclusion and exclusion criteria did not discriminate based on ethnicity, nationality, professional status, age, sex, socio-cultural background, social status, religion, political beliefs, race, or sexual orientation. Furthermore, no clinicians refused to participate in the study. Participation in the study was entirely voluntary. To guarantee anonymity, which contributed, in our view, to the honesty of the answers, we did not collect or process any data that could lead to the identification of the respondents. The questionnaire consisted of nine questions not previously used in the obstetrics literature. The independent variables were sex, age, and professional data (specialists and residents). The dependent variables were the performance of defensive CS, the extent of defensiveness among obstetricians, and the physician-patient relationship as influenced by concern about legal claims. We used "yes" or "no" answers in the close-ended questionnaire to enhance its clarity, because there were no multiple response options and each doctor had experienced only one of the options. The respondent group is broadly representative of obstetricians in Romania: their ages ranged from 24 to over 66 years, their obstetric experience ranged from 0 to 36 years, and 43 were women and 30 were men.
The first three questions of the questionnaire were related to the respondent's sex, age, and years of practicing obstetrics (age group and seniority). Question 4 asked the respondent whether it is normal (to be answered as "yes" or "no") for the patient to be able to choose the type of delivery. Question 5 asked the obstetrician if he/she agreed with the legalization of CS delivery on maternal request (to be answered as "yes" or "no"). Question 6 asked the respondent whether he/she asks patients about their preference regarding delivery (to be answered as "yes" or "no"). Question 7 was "Have you ever performed a CS for defensive purposes only (to avoid a possible malpractice lawsuit against you)?" (to be answered as "yes" or "no"). Question 8 asked the respondent to estimate the number of defensive CS deliveries performed as a percentage of the total number of CS deliveries performed (0%, 1-9%, 10-20%, 21-50%, and over 50%). Question 9 asked the obstetrician "When choosing the method of birth for the patient, do you consider that you are influenced in this choice by the risk of being accused of malpractice?" (to be answered as "yes" or "no"). The difference between questions 7 and 9 was that question 9 aimed to investigate whether the risk of a malpractice lawsuit influences the obstetrician's choice in general.
| Statistical analysis
The answers of the 73 interviewed obstetricians were analysed using SPSS version 23 and EpiInfo 3.5.4. The data were summarized as frequencies and percentages. For binary categorical data, the Pearson chi-square test was used, or the Fisher exact test when more than 20% of the expected frequencies were less than 5; for non-binary categorical data, the likelihood ratio test was used. The McNemar test was used to compare paired binary data.
The significance level was set to 5%.
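As a sketch of how the paired comparisons reported below (for example, questions 7 vs 9) can be run, the snippet uses the exact McNemar test from statsmodels. The 2x2 cell counts are reconstructed from the percentages reported later in the Results and are shown for illustration only.

```python
# Sketch only: cell counts reconstructed from reported percentages, not raw data.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: answer to question 7 (performs defensive CS: yes / no)
# Columns: answer to question 9 (influenced by malpractice risk: yes / no)
table = [[51, 0],   # Q7 = yes
         [12, 10]]  # Q7 = no

# Exact binomial McNemar test on the discordant pairs (the off-diagonal cells).
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")  # p ~ 0.0005, i.e. P < .001
```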
| RESULTS
Of the 51 of the 73 obstetricians (69.9%) who declared that they perform defensive CS, 23 (45.1%) said that this type of delivery represented 10% to 20% of the total number of CS deliveries they performed, 10 (19.6%) declared that more than 50% of the CS deliveries they performed were defensive, and nine respondents (17.6%) each indicated that 1% to 9% or 21% to 50% of the total number of caesareans they performed were defensive CS deliveries. Only 22 of the 73 respondents (30.1%) said they do not perform defensive CS.
Between the centers included in the study, we detected two statistically significant differences in the respondents' answers; one concerned the percentage of doctors at Arad and Bucur who stated that they ask their patients about their preferences regarding delivery (32/…). By comparing the responses given by the respondents, we did not detect any statistically significant difference for questions 4 to 9 of the questionnaire in terms of the gender of the respondent; however, women were more likely to agree with the legalization of caesarean delivery on maternal request (women: 36/43 = 83.7% vs men: 21/30 = 70.0%; Pearson chi-square, P = .163). We also found no statistically significant difference when comparing the answers to questions 4 to 9 with reference to the age of the respondents.
Variations in the answers to the six questions of the questionnaire (questions 4-9) with respect to experience in practicing obstetrics are shown in Table 1; we recorded a statistically significant difference in the affirmative answers to question 7 (performing defensive CS) between those with 0 to 5 years and those with 11 to 20 years of experience (12/28 = 42.9% vs 13/14 = 92.9% performing defensive CS; likelihood ratio test, P = .0008).
When comparing the answers to questions 4 and 5, we did not find statistically significant differences between the answers of those who consider it normal for the patient to choose the type of delivery (82.2%) and those who agree with the legalization of CS on maternal request (78.1%) (P = .51, McNemar test); however, the percentage agreeing with the legalization of caesarean section on maternal request differed significantly between the group who answered that it is normal for the patient to choose the method of delivery (90.0%) and the group who did not consider it normal (23.1%) (P < .001, Fisher exact test).
By comparing the answers given to questions 4 and 6, we found no statistically significant differences between the answers of those who considered it normal for the patient to choose the method of delivery (82.2%) and the answers of those who asked their patients how they want to deliver (86.3%) (P = .51, McNemar test); however, the percentage who ask their patients how they want to deliver differed significantly between those who considered it normal for the patient to be able to decide how to deliver (95.0%) and those who did not consider it normal for the patient to be able to choose the mode of delivery (46.2%) (P < .001, Fisher exact test).
By comparing the answers to questions 5 and 6 of the questionnaire, we did not find statistically significant differences between the answers of those who agreed with the legalization of caesarean delivery on maternal request (78.1%) and those of the obstetricians who ask patients about their preferred method of delivery (86.3%) (P = .18, McNemar test). However, the percentage who declared that they ask their patients how they prefer to deliver differed significantly between those who agreed with legalized caesarean delivery on maternal request (93.0%) and those who disagreed with legalization (62.5%) (P < .001, Fisher exact test).
By comparing the answers to questions 7 and 9, we found statistically significant differences between the percentage of those who declared that they perform defensive CS (69.9%) and those who said they are influenced by the risk of malpractice when they choose the method of delivery for their patients (86.3%) (P < .001; McNemar Test).
We also found a statistically significant difference in the percentage who declared that they are influenced by the risk of being accused of malpractice when choosing the method of delivery between those who declared that they perform defensive caesarean deliveries (100%) and those who answered that they do not perform defensive CS (54.5%) (P < .001, Fisher exact test).
| DISCUSSION
Our study showed that defensive CS is performed in Romania and that it is widespread in the centers included in our study: 69.9% of the respondents admitted, under the protection of anonymity, that they practice this type of intervention. From a legal standpoint, the field of obstetrics and gynaecology is seen as one in which both young, healthy women and their fetuses are at risk. Defensive CS is usually chosen because it can diminish the possible substantial morbidity associated with vaginal delivery for the fetus and the mother. Another contributing factor is the general risk-avoidance attitude of society. A strength of our study is the 100% response rate; no obstetrician refused to participate. The response rate was helped by the fact that all respondents found the subject interesting and timely given the present status of obstetrics in Romania.
Obstetricians who perform defensive CS do so to varying extents, which can exceed 50% of all CS deliveries performed. Indeed, 10 of 51 (19.6%) respondents who stated that they perform defensive CS considered that more than 50% of the caesarean deliveries they perform are defensive.
Female obstetricians were more likely to agree with the legalization of caesarean delivery on maternal request. Similar to our results, in one survey 31% of female obstetricians preferred performing CS. 10 This contrasts with a Dutch study which reported that only 1.4% of female obstetricians opted to perform caesarean deliveries. 11 Of the 73 obstetricians in our study, 63 (86.3%) stated that the risk of being accused of malpractice influences the method of delivery they perform, which indicates intense pressure on the obstetrical professional body. In the literature, the evidence on the relationship between malpractice claims payments and the use of CS is conflicting. Some studies indicated that many obstetricians view CS as a way of minimizing their exposure to litigation, 12 whereas other analyses have found no such relation. 13 Studies performed in the United States have indicated that 96% of neurosurgeons practice defensive medicine, and in Italy 94% of gastroenterologists and 85% of surgeons and anesthesiologists do so. 14,15 Romanian studies have revealed that the fear of possible litigation is one of the iatrogenic factors that influence the frequency with which CS is performed, with obstetricians resorting to a delivery method that involves a well-standardized and understood technique and a higher degree of control. 16 Studies have also shown that the use of certain manoeuvres and obstetrical instruments has decreased in Romania, either because of the distress caused by possible legal issues or because of the emergence of newer generations of obstetricians who are not trained in the use of these manoeuvres or tools. 16 Other contributing factors are patients' preference for CS (e.g. the perception of less discomfort) and the lack of legislation that protects obstetricians when performing CS delivery on maternal request. In situations such as these, physicians attempt to produce medical indications that justify caesarean section under the current legal framework, which is similar to the approach followed when performing a defensive CS. The attitudes of society, patients, the media, and the courts reflect a global intolerance of risk. One study emphasized the safe image of caesarean delivery in comparison with vaginal delivery and its possible morbidity. 18 In our study, acceptance or refusal of a patient's request for elective CS did not differ between the tertiary units. We must remember that rigidly objective obstetrical thinking can sometimes be dangerous, and a more individualized approach is more suitable for managing cases involving defensive caesarean delivery and CS on maternal request. 19
| Limitations of study
This study has some limitations. First, the settings were only tertiary hospitals, and the questionnaire was not sent to other city hospitals in the country. However, because the demographic profile of respondents reflected that of Romanian obstetricians, the risk of bias was minimal. Second, the sample size was small. Because we tried to keep the questionnaire short, we could not increase the validity of the study by including other questions on the topic.
Unconscious defensive CS (in the context of defensive medicine) has not been reported by doctors, but it is also widely practiced. 20 Further research regarding the cost of defensive caesarean delivery is also necessary.
ETHICAL APPROVAL
No ethical approval was required for this study.
Challenges of mismatching timescales in longitudinal studies of collective behaviour
How individuals’ prior experience and population evolutionary history shape emergent patterns in animal collectives remains a major gap in the study of collective behaviour. One reason for this is that the processes that can shape individual contributions to collective actions can happen over very different timescales from each other and from the collective actions themselves, resulting in mismatched timescales. For example, a preference to move towards a specific patch might arise from phenotype, memory or physiological state. Although providing critical context to collective actions, bridging different timescales remains conceptually and methodologically challenging. Here, we briefly outline some of these challenges, and discuss existing approaches that have already generated insights into the factors shaping individual contributions in animal collectives. We then explore a case study of mismatching timescales—defining relevant group membership—by combining fine-scaled GPS tracking data and daily field census data from a wild population of vulturine guineafowl (Acryllium vulturinum). We show that applying different temporal definitions can produce different assignments of individuals into groups. These assignments can then have consequences when determining individuals' social history, and thus the conclusions we might draw on the impacts of the social environment on collective actions. This article is part of a discussion meeting issue ‘Collective behaviour through time’.
Introduction
At the heart of any group decision are the conflicts of interest among individuals within the collective. When the preferences of individuals do not align, the collective must either resolve conflicts of interest through consensus (e.g. to move in a specific direction or to change from one behavioural state to another) or choose not to act as a collective (e.g. group fission [1]). Theoretical and empirical studies have focused on how consensus can be reached through democratic decisions [2,3], how costs of reaching consensus can cause collectives to fission [4,5] and how variation among individuals within groups shapes collective behaviour [6][7][8][9]. By contrast, less is known about how differences in preferences among individuals arise to produce the conflicts of interest that stimulate the need for consensus in the first place [10]. One challenge to understanding the origins of conflicting interests within collectives is that preferences are the outcome of processes that take place over a range of timescales. While integrating over time is a challenge for many fields (e.g. mate choice [11], dominance [12], social learning [13]), here we argue that it also merits attention in the field of collective behaviour. This is because collective actions emerge from interactions among many individuals, which can mask the distinct experience-driven preferences of each contributor.

To guide research into collective behaviour, we review processes acting at multiple timescales to produce the conflicts of interest that prompt consensus decision-making (figure 1). These processes fall into two categories: individual-level processes and group-level processes. First, drivers of individual preferences shape the degree to which individuals have divergent interests, and thus the magnitude of consensus costs borne by members of the collective when engaging in collective behaviour [4,10]. Second, processes influencing groups determine which individuals (and consequently which preferences) are present in the collective, and shape the structure of the societies in which collective decisions are made. We then illustrate the methodological challenge associated with considering individual- and group-level processes operating at multiple timescales. Specifically, we present a case study showing how applying different time frames when defining groups can produce different assignments of individuals into groups (figure 1). We highlight how these differences might arise from methodological trade-offs or choices, that different definitions of group membership can generate different estimations of key properties of the social environment that individuals experience (e.g. group size, social stability, group dispersion), and that the properties of groups might capture social processes that occur at different timescales (e.g. arising from who is currently present versus historical group membership). Our case study, therefore, shows the conceptual challenge that arises when attempting to link even the most fundamental property of group-living to the collective actions that are expressed by a group.
Processes influencing individual preferences
Conflicts of interest arise when individuals have different preferences about how the group should behave. The drivers of individual differences in preferences can arise over a range of timescales. Here, we briefly survey the processes shaping individual preferences at evolutionary, developmental and experiential timescales.

(a) Evolutionary timescale

Ecological conditions can lead individuals that are adapted to different environments to have divergent preferences. For instance, exposure to different predation regimes can shape individual preferences during collective movement [14,15]. If groups include individuals adapted to different local foraging regimes, these groups may experience higher consensus costs [10] when making foraging decisions than groups whose members are locally adapted to share the same preferences. Similarly, heterospecific groups need to reconcile species differences in foraging preferences to remain a cohesive unit [16][17][18]. Differences in preferences between the sexes can also produce conflicts of interest during collective behaviour. This could arise from sexual dimorphism in size or gait [19]. For instance, sex-specific preferences for activity budgets in red deer (Cervus elaphus) drive intersexual segregation [20,21]. Additionally, sexual conflict is a special case of sex-based conflicts of interest, where optimal reproductive strategies for males and females entail different collective behaviours. For instance, male banded mongooses (Mungos mungo) pay the cost of intergroup encounters, while females benefit from these encounters by mating with extragroup males [22]. Consequently, females are more often the initiators of intergroup encounters, producing high consensus costs for males [23].
(b) Developmental timescale
Early life conditions are well known to impact the behavioural patterns of individuals [24][25][26][27][28]. Several studies on zebra finches (Taeniopygia guttata) have demonstrated that early life adversity can impact social preferences among individuals, with individuals experiencing stress being less selective in their social interactions [29,30] and in which individuals they socially learn from (and therefore what information they obtain) [31]. Cohort effects, where cohorts of individuals differ from each other as a result of some shared early life conditions [11,32], can lead to sets of individuals that share the same preferences but differ from other sets of individuals. For instance, individuals experiencing the same early life environments may share preferences for activity budgets, movement patterns or foraging strategies [33]. Conflicts of interest could, therefore, also arise when these individuals disperse and mix with other cohorts to form groups [34]. Even among individuals raised in the same cohort and under the same conditions, stochasticity during their development can shape individuals' behavioural patterns later in their lives. For example, clonal fish (Amazon molly, Poecilia formosa) express consistent individual differences in their movement behaviour, even when reared under nearly identical social and physical conditions [35,36]. In sum, the extent to which individuals within a group have conflicting interests depends upon their individual and shared developmental histories.
(c) Experiential timescale
Recent experiences also shape the preferences of individuals involved in collective decision-making. For example, accessing resources influences individual nutritional states and informational states about where future resources can be found, both of which can have strong effects on group structure [37], movement [38] and aggregation [39]. Intragroup variation in access to resources can introduce conflicts of interest about what a group should do next. When foraging on a patch of resources, vulturine guineafowl (Acryllium vulturinum) that are excluded from the patch are motivated to continue searching for new food patches, causing the group to initiate movement and forcing those that are still feeding to leave the patch to follow the group [40]. Experiences with the social and ecological environment are also a key component shaping behaviour. For instance, individual experiences in foraging ants influence behavioural specialization and division of labour within the group [41]. Variation in information among individuals can arise from differences in age (and thus different amounts of experience) or from the 'passenger effect', where followers cannot recall routes as effectively as leaders [42]. The role of differences in knowledge about the environment in shaping collective actions is exemplified by older elephants (Loxodonta africana) [43] and killer whales (Orcinus orca) [44] which lead their groups to rarely used resources during periods of food scarcity.
Processes influencing group composition and structure
Group composition and structure influence collective behaviour by shaping conflicts of interest within the collective. The composition of a group determines the set of individuals that can potentially engage in collective behaviour together, whereas group structure shapes the extent to which interests are aligned. Here, we review processes that can introduce stochasticity in group structure and composition at evolutionary, demographic and dynamic timescales, thereby shaping conflicts of interest within collectives.
(a) Evolutionary timescale
The evolution of social systems (the mating system, social organization and social structure [45]) shapes the potential for conflicts of interest within collectives. Patterns of dispersal and mating influence the kinship structure of groups [46], which in turn impacts the extent to which individuals within the group have aligned interests [47]. For example, when conflicts of interest among kin occur, consensus costs are offset by the inclusive-fitness benefits of pursuing the interests of related group-mates [48]. Finally, the selective pressures leading to the evolution of aggregations of individuals influence the relevant domains in which conflicts of interest might occur. For example, forming and maintaining groups can benefit individuals by increasing their ability to detect predators [49], increasing their ability to find food [50], improving their navigation ability [51][52][53] or reducing the energetic costs of movements [54]. However, differences in preferences can arise across all of these domains, including whether risk is present [55], what food patches to choose [56], which direction to move in [2] or the speed of locomotion [19]. These domains can also intersect. In wildebeest, synchronous birthing has co-evolved with large-scale migration [57], creating a need for consensus both in birth timing and when and where to migrate.
(b) Demographic timescale
Demographic processes shape conflicts of interest within groups by influencing the composition [6], social network structure [58] and stability of social groups [12]. The addition and subtraction of individuals through births, deaths, immigration and emigration shape group size and structure, which can impact the expression of collective behaviours [59,60]. For instance, the addition of uninformed individuals to a group increases the likelihood of the group deciding in favour of the preferences of the majority [3], while demographic turnover (the introduction of naive individuals) can increase the rate at which cultural change takes place in animal groups or populations [61]. When the results of prior collective behaviour influence future collective action, for example via the memory of some group members [62], then historical group composition can impact future behaviours. Demographic turnover can, therefore, influence the maintenance and efficacy of collective strategies [61,63].
(c) Group dynamics timescale
Collective behaviour is also influenced by social dynamics within groups, like temporary splitting and joining of individuals or groups. For instance, groups characterized by high degrees of fission-fusion dynamics experience frequent changes in group composition [64,65], and social instability introduced by changes in group membership can impede collective behaviour. For instance, in captive zebra finches, changes in group composition reduced group foraging efficiency [66]. Group dynamics can be extrinsically driven, such as in prides of lions (Panthera leo), where the patterns of fission-fusion and the stability of subgroup membership are affected by ecological conditions and the corresponding availability of prey [67]. Group dynamics can also be internally driven, such as when dominant vulturine guineafowl exclude subordinates from food patches, causing the latter to depart [40]. Group dynamics may also emerge at multiple levels of social organization. For example, in multi-level societies, where cohesive groups join with other groups to form higher-order groupings [68], consensus decisions may be influenced by both core group composition and the conflicts of interest that emerge from the higher-order groups. Finally, consensus and cooperation may be influenced by prior patterns of association at these different levels as well (e.g. familiarity, social bonds), such that long-term grouping patterns shape consensus during collective action. For instance, in the fission-fusion societies of spotted hyaenas, collective mobbing of lions is promoted by the presence of preferred subgrouping partners [69].
The challenge of mismatching scales in longitudinal studies of collective behaviour, and some solutions
The processes highlighted above point to a fundamental challenge in explaining collective behaviour: consensus and collective action are reached through fine-scale moment-by-moment interactions that resolve conflicts of interest, but these conflicts of interest originate in longitudinal processes that require us to reach into the past. This generates both methodological and conceptual challenges. On the one hand, groups have to be observed over longitudinal timescales that are relevant to establishing conflicts of interest, because the source of the conflicts of interest is rooted in the individual's and/or collective's history. On the other hand, very-high-resolution (cross-section of the longitudinal processes at the moment of the focal collective action, e.g. second-by-second) data are required on the relative movements of many individuals to establish how preferences are integrated into collective actions [2,70]. Additionally, collectives have to make different types of decisions (e.g. when to move versus where to move), and these likely represent different axes of decision-making [71], which could be more sensitive to some timescales than others. For example, what a group does next might be influenced by the distribution of present nutritional states within a group, whereas where a group goes next may be influenced by the longer-term historical membership of the group and what the present individuals learnt from them. Embedding collective behaviour research in long-term individual-based study systems offers an approach to tackling these challenges, by allowing the collection of new fine-scale data on consensus formation that is informed by rich longitudinal information on individual and group histories over multiple timescales.
Several shorter-term approaches have also helped shed light on the links between longitudinal processes and collective behaviour. Three, in particular, have substantial potential for continued insight: mixed-species collectives, artificial selection experiments and the use of clonal species. Mixed-species collectives provide interesting opportunities to understand the role of direct benefits in the evolution of social and collective behaviours, as by definition individuals cannot gain indirect fitness from cooperating with a heterospecific [72]. Such studies have demonstrated how individual social rules appear to be tuned differently to conspecifics versus heterospecifics [56,73], which provides a potentially powerful experimental paradigm in systems where groups can be experimentally generated [74]. Recent work with guppies (Poecilia reticulata) has also demonstrated the potential for using within-species variation to understand the evolutionary dynamics of collective behaviours, specifically the timeframe over which alignment can evolve [75]. Finally, clonal species provide an intriguing opportunity to understand how individual differences emerging from developmental conditions could affect the expression of, individual contributions to, and the performance and consensus costs of collective actions [76]. For example, tests of collective behaviour in experimental groups where clonal individuals experienced homogeneous versus heterogeneous developmental conditions offer a promising approach for linking processes taking place at different timescales.
Recent technological improvements are now facilitating three important GPS-based approaches that provide another promising avenue for unpacking the multiple timescales influencing collective behaviour: continuous GPS tracking, lifetime GPS tracking and whole-group tracking. With solar-power technology, it is becoming increasingly feasible to study not only the choices that individuals make, but also the consequences of these choices on future decisions. In group-living species that maintain high cohesion, this can even be achieved by tracking just a few group members [70]. Further, it is becoming increasingly feasible to capture, in detail, both the physical [77] and the social [70] environments that individuals experience over their lifetimes. Such approaches become particularly powerful when combined with whole-group tracking, allowing the relative contributions of each individual to be studied over time [2,78]. In such studies, demographic changes, such as deaths of individuals or immigration by others, provide powerful natural experiments that can shed light on how prior experience affects individual contributions to collective actions. Finally, GPS data can be used to quantify spatial aspects of the structure of groups, such as how cohesive they are or how efficiently they move.
In the first part of this paper, we reviewed different processes that can shape the preferences of individuals and the structure of societies, and how these processes can operate at different timescales from the collective behaviours that they shape. However, while our methods to study animal collectives have improved, we must still overcome a number of methodological and conceptual challenges. One particular challenge that spans both methodological and conceptual dimensions is identifying the correct time frame at which a given hypothesized driver operates to shape preferences and the behaviour of collectives. This includes one of the most foundational concepts for collective behaviour: what is a group? From a conceptual perspective, there is the challenge of distinguishing whether a collective behaviour is being shaped only by the individuals present or whether there are legacy effects from past group members. Examples of this include culturally transmitted behaviours, which can lead to between-group differences in behaviour arising not from current group members but from their predecessors in which the behaviour first arose [79]. From a methodological perspective, there exists a trade-off between applying finer-scale definitions (e.g. moment-by-moment group membership) and uncertainty in the estimates of which individuals are present in the group (or vice-versa, uncertainty about which group an individual belonged to over a longer time period). In the following case study, we illustrate the concept of mismatching timescales by applying different temporal (and methodological) definitions of social units. We then demonstrate how these different definitions translate to different estimates of group properties, thereby introducing uncertainty in downstream analyses.
A case study of mismatching timescales: defining the membership of social units in vulturine guineafowl (Acryllium vulturinum)

Defining groups (herein social units) is often the first step to investigating collective decision-making. However, the social environments individuals experience change constantly (see §1), which can make operationalizing the definition of social units less straightforward. Depending on the time window we choose, the social units we detect (membership and composition of supergroups, groups and subgroups, temporal fission from a group, etc.) can differ. This will then have consequences for our estimations of how the membership and composition of social units shape collective behaviour (e.g. which individuals' preferences might form part of a given decision). Thus, defining social units requires addressing two questions: (i) What level of social unit is meaningful when interpreting a given collective action? and (ii) How can we choose the corresponding timescales to define the focal social units? For this case study, we assess how membership dynamics and structure vary over three different timescales in a multilevel society of vulturine guineafowl, where social units do not have any central resources to regularly come back to (e.g. colonial nesting locations) and often mix with other groups in space and time. We define social units as sets of individuals that are inferred to maintain cohesion across time, which is consistent with existing definitions: sets of individuals that maintain close spatial proximity (mean intra-unit distance is substantially less than the distance over which individuals range daily, and smaller than mean inter-unit distance) over days, and among which most social interactions occur (following [80,81]).
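As a rough numerical sketch of the cohesion criterion above, the code below compares the mean pairwise distance among members of a candidate unit with the distance to other individuals, using projected coordinates in metres. The function names, the centroid-based proxy for inter-unit distance, and the 2 km daily-range threshold are all illustrative assumptions, not values from the study.

```python
# Illustrative sketch; thresholds and array layout are assumptions, not study values.
import numpy as np

def mean_pairwise_distance(xy: np.ndarray) -> float:
    """Mean pairwise Euclidean distance among simultaneous positions (n x 2 array, metres)."""
    diffs = xy[:, None, :] - xy[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return dists[np.triu_indices(len(xy), k=1)].mean()

def looks_cohesive(unit_xy: np.ndarray, other_xy: np.ndarray,
                   daily_range_m: float = 2000.0) -> bool:
    intra = mean_pairwise_distance(unit_xy)
    # Centroid-to-centroid distance as a crude proxy for mean inter-unit distance.
    inter = float(np.linalg.norm(unit_xy.mean(axis=0) - other_xy.mean(axis=0)))
    # Unit members should be far closer to each other than the daily ranging
    # distance, and closer to each other than to the nearest other unit.
    return intra < daily_range_m and intra < inter
```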
(a) Study system
The vulturine guineafowl project has been collecting long-term GPS tracking and daily census observations of aggregations (field-observed groups) in a wild vulturine guineafowl population at the Mpala Research Centre in Kenya. Vulturine guineafowl are predominantly terrestrial and highly gregarious, living in a multi-level society [82] in which individuals are regularly observed with up to 100 (or sometimes more) conspecifics. Individuals purportedly belong to the same social unit over multiple years or seasons, with these members moving highly cohesively [82]. However, a social unit can also temporarily split into subunits for a few hours up to several weeks, eventually merging back together to re-form the original single social unit. Social units can also form supergroups that can last for a few days up to several months (especially during dry seasons) that then disband back into the original social units. These dynamics, combined with sporadic observational data, can make it challenging to determine exactly what social environments an individual might have experienced for a given study period.
(b) Data collection
Approximately 90% of individuals in the study population are uniquely identifiable in the field based on a unique combination of colour bands fitted to their legs (n = 782 individuals during the study period). We collect daily census data (morning and evening), during which we record observations of individuals moving and associating together. However, we do not observe all individuals every day (or sometimes week), meaning that estimating social units at finer timescales (e.g. within a month) can result in having fewer observations of individuals and, correspondingly, greater uncertainty in the assignment of individuals into social units.
When encountering an aggregation, we record the number of marked and unmarked individuals and the identity of all individuals present. We also record whether the observed sets of individuals (clusters of individuals found to be closer to one another than to others) behave as a cohesive unit ('single' set of individuals), or as multiple units that arrived from or are moving in different directions ('multiple' sets of individuals). For the purpose of this study, we used only 'single' observations for the data analysis. Further, we removed incomplete daily censuses with any missing information to reduce uncertainty in the estimated networks [83]. We removed individuals that were observed two or fewer times in a focal period, because detected social networks are unreliable when the number of observations is very small [83], although the methods below are relatively robust to undersampling [84]. For this study, we use data collected across eight continuous months (1 September 2020 to 30 April 2021), with varying intensity of data collection due to periods when birds temporarily left the study area [85].
To generate measures of social unit properties (see §5e), we used GPS data from 66 males fitted with solar-powered GPS tags (e-obs 15 g solar). GPS tags were programmed to collect data at a mixture of resolutions. When the battery charge is high, tags collect 1 Hz bursts of data, sometimes lasting several hours. When the battery charge is lower, tags collect a burst of 10 consecutive points every 5 min. See [70] for more details on the design of the GPS study. For the purpose of this case study, we subsampled any 1 Hz data to one point every 5 min. We used GPS data from 1 October 2020 to 31 March 2021.
(c) Methods: inferring social units over multiple timescales
Since the main aim of this case study is to show whether and to what extent social unit structure and membership emerge over multiple timescales, we specifically report the outcomes from eight different approaches to inferring social units from census data (table 1 and figure 2). These vary in terms of the time frame (1-month, 2-month, 8-month), whether we employ a bootstrapping procedure to better account for low sampling at finer timescales, and whether we use a dynamic network community algorithm to track the carryover of membership to social units across time. We expect to observe differences in the allocation of individuals to social units depending on the time frame applied. From observations, we know that a vulturine guineafowl group sometimes temporarily splits into subunits consisting of non-repeatable members, and that multiple groups can be observed together in the overlapping parts of their home ranges, with these dynamics occurring on a daily basis. Some of these dynamics also vary over longer time frames, for example groups merging to form a supergroup under harsh ecological conditions, such as during droughts. We can, therefore, expect the shorter time frames to capture temporal or daily dynamics of day-to-day association patterns, while the longer time frames can capture more general patterns of group membership. For the 1-month basis analyses (1-month, 1-month*) and the 2-month basis analysis (2-month*), we aggregated census data from 1 month before through to 1 month after the focal period as a moving window, resulting in six 1-month social networks and three 2-month social networks (figure 2). For the static network (8-month), we combined all census data for the entire study duration (8 months) to create one social network.
To create each network, we first created a group-by-individual matrix in which each cell contained a 0 or 1 representing whether each individual (columns) was observed in each aggregation (row) that was encountered during that focal period. We then calculated the network for the focal period using the get_network function in the asnipe package [86] in R [87]. Ties in these networks were calculated as the simple ratio index of association, which estimates the proportion of time that two individuals were together by dividing the number of census observations of them together by the number of possible chances that they could have been observed together [88]. When the edge weights (range 0 = always apart to 1 = always together) were less than 0.5, we replaced them with 0, because community detection algorithms (see below) are substantially more sensitive to the presence of an edge than they are to the edge weights.
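To make this step concrete, a minimal R sketch of the network construction described above is given below. The group-by-individual matrix gbi is a small simulated stand-in (two sets of five birds that tend to appear in different aggregations), so the bird names, matrix size and simulation are illustrative assumptions; only the get_network call with the simple ratio index and the 0.5 thresholding follow the text.

library(asnipe)   # get_network()
library(igraph)   # graph construction and community detection

# Hypothetical group-by-individual matrix: rows are 'single' census
# observations, columns are colour-banded individuals (1 = present).
# Birds 1-5 mostly co-occur in the first ten aggregations, birds 6-10
# in the last ten, to give the sketch some community structure.
set.seed(1)
half1 <- cbind(matrix(rbinom(50, 1, 0.9), 10, 5), matrix(rbinom(50, 1, 0.1), 10, 5))
half2 <- cbind(matrix(rbinom(50, 1, 0.1), 10, 5), matrix(rbinom(50, 1, 0.9), 10, 5))
gbi <- rbind(half1, half2)
colnames(gbi) <- paste0("bird_", 1:10)

# Simple ratio index: joint observations divided by the number of
# occasions on which the pair could have been observed together.
net <- get_network(gbi, data_format = "GBI", association_index = "SRI")

# Community detection is more sensitive to edge presence than to edge
# weight, so weak edges (< 0.5) are set to 0, as described in the text.
net[net < 0.5] <- 0

# Undirected, weighted graph for community detection (next step).
g <- graph_from_adjacency_matrix(net, mode = "undirected",
                                 weighted = TRUE, diag = FALSE)
# Direct (non-bootstrapped) detection would then be membership(cluster_walktrap(g)).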
Next, we inferred social communities from the social network as a means of extracting social units. Community detection algorithms detect sets of individuals that are more connected among each other than they are to other individuals. We did this in two ways, either directly on the observed network for each focal period, or using a bootstrapped meta-network approach for each focal period to better account for uncertainty at finer timescales. For community detection (on both types of network), we used the walktrap community algorithm in the R package igraph [89], which was previously found to perform best with census observations of multilevel societies [90]. The meta-network approach consists of a bootstrapping procedure following the algorithm described by Shizuka & Farine [84]. Briefly, the approach involves resampling the observed aggregations with replacement (i.e. bootstrapping), constructing the network and detecting the membership of individuals to communities in the network. We then calculated the probability of observing two individuals in the same community across all the bootstrapped replicates (n = 100), which we defined as our meta-network. Finally, we ran the walktrap community algorithm on this meta-network to get the community membership for the focal period.
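Continuing the sketch above, the bootstrapped meta-network logic could look roughly as follows. This is a simplified reading of the procedure, not the authors' code: the replicate count follows the text (n = 100), while the NA guard and object names are assumptions added to keep the illustration robust.

n_boot <- 100
ids <- colnames(gbi)
co_membership <- matrix(0, length(ids), length(ids), dimnames = list(ids, ids))

for (b in seq_len(n_boot)) {
  # Resample the observed aggregations (rows of the GBI matrix) with replacement.
  gbi_b <- gbi[sample(nrow(gbi), replace = TRUE), , drop = FALSE]
  net_b <- get_network(gbi_b, data_format = "GBI", association_index = "SRI")
  net_b[is.na(net_b)] <- 0      # defensive: pairs unobservable in this replicate get no edge
  net_b[net_b < 0.5] <- 0
  g_b <- graph_from_adjacency_matrix(net_b, mode = "undirected",
                                     weighted = TRUE, diag = FALSE)
  memb <- as.integer(membership(cluster_walktrap(g_b)))
  # Add 1 for every pair assigned to the same community in this replicate.
  co_membership <- co_membership + (outer(memb, memb, "==") * 1)
}

# Meta-network: probability that two individuals share a community
# across the bootstrap replicates.
meta <- co_membership / n_boot
diag(meta) <- 0
g_meta <- graph_from_adjacency_matrix(meta, mode = "undirected",
                                      weighted = TRUE, diag = FALSE)
social_units <- membership(cluster_walktrap(g_meta))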
Because of the tendency for community detection algorithms to return complete connected components in sparse networks, we added an extra checking step for the meta-networks. We first checked the size of the detected communities and compared these with the maximal number of individuals with colour bands observed in a 'single' census observation during the focal period. When the size of a detected community was larger than the largest number of individuals encountered in a field observation during the same period, we re-ran the procedure above after subsetting the census observation data to include only the individuals assigned to that detected community. This procedure allows us to partition observed aggregations of multiple groups into their constituent social units.
(d) Methods: detecting carryover membership of social units across focal periods
We also used a dynamic network community algorithm to link the community membership across time for the 1-month and 2-month networks (table 1). We used the MajorTrack library [91] in Python, which produces global community identifiers and therefore links the community identifiers across consecutive time periods. In some cases, the community remained stable but some individuals were missing in a given time period, so we also used interpolation to re-add temporarily missing individuals into their community.
(e) Methods: measuring cohesiveness and dynamics of detected social units
To quantify how group cohesiveness emerges over the different timescales at which we defined social units, we used the GPS data to estimate the average GPS pairwise distances of individuals over each focal period. These data give us insights into our ability to capture social units that are highly cohesive in their movements versus social units that represent longer patterns of (re-)associations. We did this by calculating the mean and maximum daily GPS pairwise distances among individuals within the same detected community. We used GPS data only from males, which are philopatric [92], as the few dispersing females in our dataset could substantially impact the estimates of pairwise distances.
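A simplified sketch of how such a cohesiveness measure could be computed is shown below. It assumes a hypothetical data frame fixes of subsampled GPS positions with columns id, day, x, y (metres, already projected) and community, and it collapses each male to one mean position per day rather than pairing simultaneous fixes, so it approximates rather than reproduces the published measure.

# Daily mean and maximum pairwise distances within each detected community.
daily_cohesion <- function(fixes) {
  pieces <- split(fixes, list(fixes$community, fixes$day), drop = TRUE)
  do.call(rbind, lapply(pieces, function(d) {
    # One (mean) position per individual per day keeps the sketch simple;
    # the published analysis may instead pair fixes that are close in time.
    pos <- aggregate(cbind(x, y) ~ id, data = d, FUN = mean)
    if (nrow(pos) < 2) return(NULL)            # need at least one pair
    pd <- dist(pos[, c("x", "y")])             # Euclidean distances in metres
    data.frame(community = d$community[1], day = d$day[1],
               mean_dist = mean(pd), max_dist = max(pd))
  }))
}

# Example use (with the hypothetical 'fixes' data frame):
# daily <- daily_cohesion(fixes)
# aggregate(cbind(mean_dist, max_dist) ~ community, data = daily, FUN = mean)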
To quantify the temporal stability of group membership inferred when using different timescales, we calculated the Jaccard similarity between the detected social network communities in consecutive focal periods. This is calculated as the ratio of (i) the number of individuals detected in the same community across sequential focal periods (e.g. in month N and month N + 1) to (ii) the number of unique individuals detected in the community during either focal period (e.g. in month N or month N + 1). Note that we could only do this for approaches that used the carryover methods, as these provided the necessary information about the links between communities over time.
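The Jaccard calculation itself is small enough to spell out; the bird identities below are hypothetical.

# Jaccard similarity of one carried-over social unit between consecutive
# focal periods: shared members divided by unique members across both periods.
jaccard <- function(members_t, members_t1) {
  length(intersect(members_t, members_t1)) /
    length(union(members_t, members_t1))
}

jaccard(c("bird_1", "bird_2", "bird_3"),
        c("bird_2", "bird_3", "bird_4"))   # 2 shared / 4 unique = 0.5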
(f) Methods: quantifying effects from different methodological procedures on detected social units
Finally, we quantified how the methodological choices made in the different approaches explain differences in the cohesiveness of detected communities. Specifically, we used generalized linear models fitting the size of detected communities (Poisson) or the average GPS pairwise distance (lognormal) as the response variable, with time window (1-month, 2-month, 8-month), bootstrapping procedure (yes/no) and detection of carryover membership (yes/no) as predictors, using the lme4 package [93] in R [87]. To test the effect on group stability, we used generalized linear mixed models (binomial) fitting the similarity of group memberships as the response variable, with time window (1-month, 2-month, 8-month), bootstrapping procedure (yes/no) and detection of carryover membership (yes/no) as fixed effects, and sampling period ID as a random effect. To test each fixed effect, we used a type 3 ANOVA in the car package [94] in R [87].
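The models described here could be specified roughly as follows. The data frames (units, stability) and their column names are assumptions for illustration, the 'lognormal' model is approximated by a Gaussian GLM on the log of the response, and the proportion-valued Jaccard similarity is given binomial weights (a hypothetical n_unique column), which the text does not specify.

library(lme4)   # glmer() for the mixed model
library(car)    # Anova() for type-3 tests

# 'units' is assumed to have one row per detected community (size, mean_dist,
# time_window, bootstrap, carryover); 'stability' one row per community
# transition (jaccard, n_unique, period_id plus the same predictors).

# Size of detected communities (Poisson GLM).
m_size <- glm(size ~ time_window + bootstrap + carryover,
              family = poisson, data = units)

# Average GPS pairwise distance, modelled on the log scale.
m_dist <- glm(log(mean_dist) ~ time_window + bootstrap + carryover,
              family = gaussian, data = units)

# Stability of membership (binomial GLMM, sampling period as random effect).
m_stab <- glmer(jaccard ~ time_window + bootstrap + carryover + (1 | period_id),
                family = binomial, weights = n_unique, data = stability)

# Type-3 tests for each fixed effect (Wald chi-square); with factor
# predictors, sum-to-zero contrasts are typically set beforehand.
Anova(m_size, type = 3)
Anova(m_dist, type = 3)
Anova(m_stab, type = 3)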
(g) Results
Inferred social unit size varied depending on the community detection approach (figure 4). Social unit sizes from the shorter-term approaches (1-month, 2-month) tended to be smaller (within a focal period) than those from the 8-month network when using bootstrapping. Social units were substantially larger (and less variable) when using the bootstrapped meta-networks than when using only the observed network. Further, the bootstrap procedure produced substantially fewer unrealistic social units consisting of only a few individuals (figure 4). Overall, social units detected by the 1-month detection approach and the 2-month detection approach had similar Jaccard similarities (figure 4). However, communities whose membership was carried over without the bootstrapping procedure experienced more turnover in their membership (lower Jaccard similarities) than those detected with the bootstrapping procedure did (figure 4; β ± s.e. = 0.287 ± 0.069, χ² = 17.47, p < 0.001; electronic supplementary material, table S2).
Figure 4. The distribution of detected social unit sizes and Jaccard similarity between social units detected consecutively. (a) The distribution of social unit sizes inferred from each method. The x-axis shows each community detection method (dynamic community detection with and without bootstrapping, static community detection), and the y-axis shows the size of the detected community. (b) Jaccard similarity for communities in consecutive focal periods when using the dynamic network community method. *Approach using detection of carryover of community membership across periods of 1 or 2 months. §Approach using bootstrapped meta-networks. Note that we could not calculate Jaccard similarity for approaches without the dynamic network community method as there was no way to link networks in consecutive focal periods. (Online version in colour.)

Estimated group cohesiveness varied depending on the community detection approach used (figure 5). Overall, the 2-month community detection approaches produced higher and more variable average GPS pairwise distances among individuals assigned to the same community than the other approaches did. Groups detected without bootstrapping had substantially lower average GPS pairwise distances than those detected with bootstrapping (β ± s.e. = 0.479 ± 0.154, χ² = 9.71, p = 0.002; electronic supplementary material, table S3). The 8-month detections (with the bootstrapping procedure) had substantially smaller and less variable average GPS pairwise distances compared with the other approaches (electronic supplementary material, table S3). Detection of carryover social units did not affect the cohesiveness of detected groups in most approaches (although, without bootstrapping, the 1-month detection approaches produced higher and more variable pairwise distances when carryover membership was not detected).
(h) Summary of the case study
We used this case study to investigate the inference of social units in a multi-level society using different temporal and methodological approaches. The differences in timescales used, and whether or not methods were used to carry over community membership, led to substantially different inferred social unit sizes, stability and spatial cohesion. Community detection approaches without bootstrapping produced more communities with more variable community sizes, as well as more communities consisting of only a few individuals, relative to methods using bootstrapping. Thus, analyses at shorter timescales may suffer from higher uncertainty in the allocation of individuals into social units. Because very small social unit sizes (i.e. single individuals) are unrealistic in vulturine guineafowl, we conclude that the bootstrapped meta-networks produce better representations of the social unit structure in the population when focusing on identifying social units at shorter timescales. However, bootstrapping produced social communities with higher average GPS pairwise distances among males within the same inferred social unit, relative to the non-bootstrapped approaches. In biological terms, this means that the detection approaches without carryover of community membership and without bootstrapping may be more sensitive to detecting fine-scale temporal changes in social unit membership (i.e. which individual is actually present at a given point in time). However, these changes may not be distinguishable from inference errors. Across the different timescales, shorter-term focal periods (1 month) typically produced the smallest pairwise GPS distances, and only slightly higher variability in the Jaccard similarity of social unit membership across time. This corresponds with more accurately capturing the social environment that individuals experienced during the focal period.
So which method is best? This depends, in large part, on the question, thereby highlighting the conceptual challenges that arise when defining social units. One objective measure would be a definition that minimizes GPS pairwise distance while maximizing the stability of the estimated groups. Here, we found that focal periods of 1 month produced relatively low GPS pairwise distances, high Jaccard indices and (when using bootstrapping) larger social unit sizes when guineafowl groups formed supergroups (according to field observations). This suggests that there may be an optimal timescale for inferring stable social units that are not susceptible to day-to-day changes in associations but that still accurately capture demographic changes. The 1-month definition (1-month*, 1-month*§) was also the only one able to capture larger-scale changes, such as those driven by ecological conditions. Previous studies and field observations have revealed that vulturine guineafowl form a multi-level society [82], and evidence is emerging that social units merge during dry seasons to form supergroups that consist of about 70-100 individuals (and, more recently, we have observed a supergroup with an estimated 600 individuals). However, while many questions will focus on finer-scale collective actions, others may instead need to capture the most stable social units (e.g. studies of cultural transmission). We found that GPS pairwise distances for the 8-month detection were smaller than for the shorter-term detections with carryover memberships. This could point to more aggregated approaches as being better at capturing the most stable set of individuals that compose a social unit (i.e. stable across different environmental conditions).

Figure 5. Averaged GPS pairwise distances within detected communities. The x-axis shows the community detection approach (table 1) and the y-axis shows the GPS pairwise distance among tagged males within the same detected community (m). *Approach using carryover of community membership. §Approach using bootstrapping. (Online version in colour.)
Ultimately, our case study reveals some trade-offs between detecting stable and strong relationships between individuals and capturing the dynamics of aggregations over time. Longer time periods capture the former, but are unlikely to accurately describe what social environment an individual will have experienced in a given time and place. By contrast, using very short timescales makes it more challenging to track which individuals ultimately form a long-term social unit. Finally, bootstrapping methods in particular were typically more inclusive, meaning that they were likely to allocate individuals to the same social unit even if they may have spent some time apart, whereas networks constructed directly from the observations were more likely to produce isolated individuals (and thus to separate individuals into unrealistic social memberships). Thus, depending on their research questions, scientists need to carefully decide which timescale and community construction approaches might produce meaningful social units, all the while keeping in consideration how robust the method is at inferring social units. Through this case study we have not only provided some starting guidelines, but also demonstrated an analytical procedure that can help with choosing the best approach for a given question (i.e. optimizing the relative accuracy in terms of cohesiveness, social stability and/or social unit size).
Conclusion
Some behaviours can be affected by the immediate surrounding social environment (e.g. which individuals are present at the moment), while others can be influenced by the longer-term or broader social environments that individuals have been experiencing across different timescales. Given such dynamics, when studying collective actions, what are the relevant definitions of groups? When individuals contribute to the decision-making process, which levels of social units have an influence? As we touched on in the case study, identifying meaningful social units, especially in societies where groups experience membership changes at both finer (e.g. subgrouping, demographic changes, dispersal) and larger scales (e.g. forming supergroups, splitting into multiple groups for a period of time), may not be straightforward, because the membership inferred can depend on the timescale (and corresponding uncertainty) and the methods we choose to overcome limitations in the data. Studying social animals living in other types of societies also requires facing the same conundrum when drawing inferences about the social structure that individuals experience at different timescales. For example, what levels of social units (e.g. the surrounding associates at the moment versus the stable social group) shape current collective actions when subsets of a group perform the collective action? Even in animal species forming closed societies without any fission-fusion dynamics between social units (such as territorial species), individuals' preferences for collective actions arise over various timescales (evolutionary, developmental, experiential), while groups continuously experience changes and the development of collective actions over demographic timescales.
Once we detect the relevant social units (e.g. the group memberships to be focused on), the next step is finally to investigate how long-term effects shape moment-by-moment collective actions: do individuals decide based on their immediate surrounding social environment or on the broader social environments they have been experiencing? Furthermore, how do these individual- and group-level dynamics impact collective actions at different spatial and social scales? For example, do demographic turnovers or dispersal events diffuse developmental effects over the landscape? As we discussed in §4 'The challenge of mismatching scales in longitudinal studies of collective behaviour, and some solutions', conducting long-term comparative studies using clonal individuals or mixed species, in combination with recent advances in technology, could give further insights into these topics. We also believe that investigating these questions, with careful consideration of the discrepancy of timescales, can help researchers to develop a deeper understanding of how different long-term effects can shape current collective actions.
Ethics. Data were collected with permission from, and in collaboration with, the Kenya National Science and Technology Council, the National Environment Management Authority, the Kenya Wildlife Service, the Wildlife Research and Training Institute, the National Museums of Kenya and the Mpala Research Center. Ethical approval was granted by the Max Planck Society's Ethikrat Committee.
Figure 1. Overview of processes at different timescales shaping moment-by-moment collective behaviour. Two processes (group structure and composition, and individual preferences) lead to conflicts of interests among individuals in a collective, which requires the collective to reach consensus, and then finally the collective can perform collective behaviour. We review each process in subsequent sections in this article and illustrate the methodological challenges in the case study. (Online version in colour.)
Figure 3. Detected network community dynamics over time using carryover methods. (a) 1-month*§: 1-month detection with carryover of community membership and bootstrapping. (b) 1-month*: 1-month detection with carryover of community membership without bootstrapping. (c) 2-month*§: 2-month detection with carryover of community membership and bootstrapping. (d) 2-month*: 2-month detection with carryover of community membership without bootstrapping. Each colour represents an inferred social unit, and colours were randomly assigned within each approach. The shaded colour shows the track of social units between focal periods, and the movement of individuals between social units. The colour changes when a social unit merged with another social unit or split into smaller subunits. The height of each colour bar corresponds to the number of individuals in the social unit. (Online version in colour.)
Table 1. Summary of approaches for inferring social units. For 8-month approaches, the detection of carryover membership was not applicable (n.a.) because these have only one sampling period. *Approach used detection of carryover of community membership across 1 or 2 months. **Data for each month include data from the months before and after, using a sliding window method (figure 2). §Approach used bootstrapping.

Figure 2. Summary of focal periods for each detection period. Census data from September 2020 to April 2021 were used to determine group membership. For 1-month and 2-month detections, the census data from 1 month before and after the focal month(s) were used to detect group memberships. For static detection, all census data used for 1-month and 2-month detections were used. We used GPS data only from October 2020 to March 2021, because our aim was to measure the cohesiveness of detected groups within focal months. Blue, green and orange represent 1-month, 2-month and static detection, respectively, throughout this paper. (Online version in colour.)
|
v3-fos-license
|
2021-10-15T00:09:50.428Z
|
2021-06-02T00:00:00.000
|
238847489
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://journal.binus.ac.id/index.php/Lingua/article/download/7040/4260",
"pdf_hash": "982cd18768853ee2a4f147cac37b43c5ea1627a7",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46044",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "d6954cb23e3581c29fe043580077986dd1d69378",
"year": 2021
}
|
pes2o/s2orc
|
ANA DHEI DHATO (AN ANALYSIS OF TRADITIONAL WEDDING TRADITIONS AT RAJAWAWO VILLAGE OF ENDE REGENCY)
This research aimed to describe each process of 'Ana Dhei Dhato' and to reveal its cultural meanings from the perspective of people in Rajawawo village of Ende regency. The method of this research was descriptive qualitative. In collecting the data, the researcher used interviews, note-taking, and recording. The data were analyzed through transcribing and translating. The results of the analysis show that 'Ana Dhei Dhato' is considered the most valuable type of wedding because of its complexity and peculiar characteristics, which can be seen from the way this kind of wedding ritual is carried out. Several cultural meanings are found in these wedding rituals and the language spoken in them, such as religious meaning, social meaning, historical meaning, juridical meaning, and didactic meaning. Therefore, it is expected that people in Rajawawo village will consider 'Ana Dhei Dhato' one of their most valuable cultural heritages and will also maintain its existence.
INTRODUCTION
Language and culture are closely bound, because language is considered one of the most vital elements of any culture. Language is a part of culture, and language itself is the mirror of culture. Al-Mansoob, Alrefaee, and Patil (2019) have stated that a language is considered a means of communication that has its own specific cultural and linguistic features; it is an identity for its speakers. Meanwhile, according to Kami et al. (2020), language is always a vital means of communicating. Language is also one of the human characteristics that distinguish humans from other creatures. In addition, in societies that live side by side, language has a social function, both as a means of communication and as a way to identify social groups.
The question is how culture is related to language. Language and culture have been closely tied since the beginning of human civilization. Culture is learned through relationships with other people. Therefore, culture is not natural, inborn, and will-less; it is a social product. Some factors are considerable and momentous in this transmission, such as information and knowledge in a society, social changes, social relations, and mass media. Thus, culture is transmitted from generation to generation. Meanwhile, Aso and Sujito (2016) have defined culture as a system of human behavioral patterns generated from social life, which relates all human communities to their ecological environment. The variety of culture in a certain region is quite unique and highly varied.
Different groups do not only have a different language, but they also have different world views, which are reflected in their languages. This proves how complex their relation is. It is in line with what is stated by Jiang in Mehmet (2017), who says language and culture make a living organism; language is flesh, and culture is blood. Without culture, the language would be dead; without language, culture would have no shape. Additionally, Nengsih and Syafwandi (2020) have argued that every human is governed by the customs and rules that apply in society. The customs that govern people's attitudes and life patterns have been passed down from the previous generation (ancestors) to the next generation.
Since the relationship between language and culture is very complex in nature, Hymes in Kupper and Jessica (2000) has proposed an approach that treats such a relationship from three related perspectives. The first is language as an element of culture, in which the use of language can be seen in rituals, folklore, folk songs, prayers, or ritual speech. The second is language as an index of culture, in which the use of language can be seen in the way speakers express their insights and experiences in perceiving the world. Third, language is a symbol of culture, in which language has the function of characterizing the existence of an ethnic group or tribe as an ethnolinguistic group as well as a speech community or language society. The power of culture can be seen in human beings' traditions. Hidayati (2018) has stated that tradition, often referred to as custom, is an activity carried out by a group of people from generation to generation with the aim of obtaining harmony, either between humans or between humans and nature, through the values and norms contained in the tradition. If this harmony can be achieved and maintained, then welfare can be easily obtained. Welfare is concerned not only with material matters but also with spiritual ones. This can be seen, for instance, in wedding rituals that are maintained and survive through language. In line with that statement, Kartolo (2017) has defined marriage as one of the important stages of the human life cycle. Through marriage, a person gets a new status, from single to married; thus, the couple will be accepted and needed as full members of society.
In many societies of the world, marriage is seen as the fundamental unit of the society without which there could be no family (Silalahi, 2019). Moreover, Passandaran (2019) has added that marriage ceremony in a cultural context is one of the traditions in the form of rituals with various functions. Marriage is something sacred, great, and monumental for every life partner. Therefore, marriage is not just following religion and continuing instinct to form a family; however, it has a profound and broad meaning for human life.
Wedding traditions and customs vary greatly among cultures, ethnic groups, religions, countries, and social classes. In carrying out this kind of valuable inheritance, people should follow each step and fulfill all the regulations required. Neglecting one of these can have destructive impacts on their future life, since the tradition stands as one of the guidelines for living. These guidelines are based on convention and are maintained through generations. Indonesia is widely known as one of the biggest archipelagic states, with various cultures and traditions. Rahman et al. (2020) have stated that Indonesia is the largest archipelago, stretching from Sabang to Merauke. More than 13,000 ethnic groups inhabit the territory of Indonesia.
One of the regions in Indonesia which still maintains its past culture and tradition is the Nangapanda sub-district, especially Rajawawo village, which is the main focus of this research. Rajawawo village is located about 38 kilometers from Ende city, in the western area of the Nangapanda sub-district of Ende regency. They have three types of wedding rituals, namely Ana Dhei Dhato, Ana Aze, and Ana Paru Dheko. However, this research focuses only on Ana Dhei Dhato, in an attempt to reveal both its process, which also covers the analysis of its ritual speech in the form of sacred words, and its cultural meanings. Singh (2018) has stated that the sacredness of words has always been a vehicle to transfer values and moralities from one culture to another, from one faith to another, and so on. Languages behave differently, based on their internal composition, when it comes to the analysis of words in the sacred context. Meanwhile, Abdullah in Rudiyanto, Rais, and Purnanto (2020) has argued that cultural meanings can be defined as the meaning of language in accordance with the cultural context of its speakers along with their cognitive system, which can be seen in their mindset, way of life, and world view.
From the perspective of scientific research, many studies focus on investigating indigenous rituals to reveal their cultural meanings. The first is from Langkameng and Latupeirissa (2020), who have conducted research on the cultural values of Oko Mama, a marriage proposal ritual speech in the Bokong community, Indonesia. The results show that the cultural values implied in Oko Mama are: (a) social value, which consists of the value of cooperation and of appreciation for the girl's parents, and (b) religious value. Hodairiyah, Rais, and Purnanto (2020) have conducted ethnographic research that aims to find out the cultural meaning of verbal and non-verbal expressions represented in the Nyaébuh tradition of people in Aeng Tong-tong, Saronggi, Sumenep. It is found that this tradition takes the form of almsgiving whereby the charity is devoted to the deceased in the hope that it could alleviate and erase the sins of the deceased, in addition to which it could increase unity, harmony, and family harmony between people and others. Meanwhile, Silalahi (2019) has tried to unravel the semiotics of a marriage tradition in Batak Toba society based on the conception of signs proposed by Charles Sanders Peirce in order to reveal the meaning of icons, indexes, and symbols in the marriage tradition.
The fourth previous study is from Rudiyanto, Rais, and Purnanto (2020). Their research focuses on describing the cultural meaning of the Sranan tradition found in Wonokromo village, Alian sub-district, Kebumen. The result of this research shows that the cultural meanings of this tradition include an offering to the ruler of the rice field (Dewi Sri) to avoid all kinds of pests that damage crops, an intermediary to ask God for salvation, and a plea to be given smooth provision and abundant crops. Additionally, Mubarokah, Djatmika, and Sumarlam (2019) have conducted research in order to describe the violations of cooperative principles and the factors that create the humor of Cucuk Lampah in the wedding ceremony in Magetan regency. The result shows that in creating humor, Cucuk Lampah mostly violates the quantity maxim. Cucuk Lampah is free to lie, to use taboo words, and to speak indirectly with the pambiwara and singers. Cucuk Lampah also uses language play by mentioning the unexpected in utterances to build up taboo words. Violations of the quality maxim, relevance maxim, and manner maxim are also found, in smaller numbers compared with quantity maxim violations. The types of non-observance of the maxims are mostly violating a maxim, then flouting a maxim, infringing a maxim, suspending a maxim, and lastly opting out of a maxim.
Meanwhile, Akbar et al. (2020) have tried to reveal the Sasak lexicon in traditional marriages from a linguistic anthropology perspective. The results show that Sasak traditional marriages have three common systems: betrothal (tapedait), proposal (melakoa), and elopement (memulang). Among these, the memulang system is carried out most commonly. Moreover, it is found that there are two kinds of meaning, namely linguistic meaning and cultural meaning. The next study is from Hidayati (2018), who tries to reveal the local wisdom of Kembar Mayang in the wedding tradition of the Javanese ethnic group. She finds four points of local wisdom in Kembar Mayang: maintaining family honor, termed keris-kerisan, in the form of dagger-shaped webbing; mutual protection, termed payung-payungan, in the form of umbrella-shaped webbing; fidelity, termed manuk-manukan, in the form of bird-shaped webbing; and tenacity and sacrifice, termed walang-walangan, in the form of praying mantis-shaped webbing. Kembar Mayang, as a cultural heritage, is to be preserved as a guideline in social life.
Meanwhile, Rahman et al. (2020) have carried out research in an attempt to analyze the symbolic meanings of the Palang Pintu tradition in the Betawi wedding ceremony. They have found that the Palang Pintu tradition has symbolic values such as leadership and religiosity that can be used as an opportunity for children's literacy appreciation learning. Nengsih and Syafwandi (2020) have conducted research in order to find out the symbolic meaning of the Hantaran Jamba Badagang tradition in wedding party ceremonies in Kambang, Lengayang sub-district, Pesisir Selatan. They have found that this tradition contains symbols of interaction, communication, and social value. It also contains the meaning of unity between communities and educational values. Furthermore, the Jamba Badagang tradition continues to be preserved and developed as the cultural heritage of the Kambang sub-district community. Panjaitan and Manugeren (2019) have analyzed the symbolic meanings of Kembar Mayang as conducted in Medan Sinembah village, Tanjung Morawa, Deli Serdang, which is predominantly inhabited by ethnic Javanese. It is found that there are five forms of symbolic meanings in Kembar Mayang: Manuk-Manukan as a symbol of loyalty, Uler-Uleran of struggle, Walang-Walangan of persistence, Pecut-Pecutan of optimism, and Keris-Kerisan of wisdom. The five forms of rites are compulsory in the wedding ceremony, with the main objective of achieving a happy, harmonious, and peaceful life for the bride and the bridegroom, and this is in line with the general concept of marriage. The next study is from Kartolo (2017), who carries out an ethnographic study of the use of language in Malay Deli, Indonesia. It covers the grammatical and psychological aspects and the social structure of wedding ceremonies in the Malay Deli tradition.
METHODS
This research attempts to describe the wedding ritual based on its names and reveal their cultural meanings in the cultural context where they have taken place. The appropriate research design to answer the problems stated is a descriptive qualitative research design.
In collecting the data, the researcher uses some techniques proposed by Creswell (2009), such as observation, interview, recording, documentation study, and note-taking. Septiana, Santosa, and Sumarlam (2019) have stated that the researcher gradually discerns recurring patterns from these notes and observations. These allow the researcher to generate hypotheses about what the various linguistic behaviors that have been observed mean. Additionally, according to Sudaryanto (2015), some techniques can be used to obtain the data. Two of them are the observation method and the interview. The observation method is done by listening to and recording the information uttered by the informants, while the interview method is done by direct interview or face-to-face communication. The data obtained through field research are analyzed qualitatively. The data are analyzed through a systematic procedure of transcribing, translating, and data analysis using contextual-based interpretation before finding the meaning. In transcribing, the recorded data are transcribed by the researcher in order to obtain written data enabling the researcher to analyze them in more detail. After transcribing, the data are translated from the Ende language to English. To get a good translation, the text is translated using both lexical and contextual translation. In lexical translation, the text is translated literally. Conversely, contextual translation should be based on Ende's culture, especially in the Rajawawo context. It is based on the perspective that each society has its own way of thinking and behaving. In conducting both lexical and contextual translation, the researcher has consulted the informants to get a good and accurate meaning. The researcher's own interpretation is also added, especially to reveal the contextual meaning.
All of the data, then, are analyzed in accordance with the problems, aims, and scope of this research, namely the analysis of the cultural meaning of the traditional wedding ritual of people in Rajawawo village based on its process, which could reveal their values, function, and meaning. The analysis of that traditional wedding ritual process aims to get a picture of the cultural meaning of the traditional wedding ritual of people in Rajawawo village. This analysis is held after the meaning of each name and value dealing with the rituals is obtained by the researcher.
RESULTS AND DISCUSSIONS
Cultural meaning is understood as a kind of meaning or value that serves to represent something in a particular cultural context. It deals with an object that represents or stands for an abstraction. The explanation of cultural meanings will be presented after translating each process into its lexical meaning. The data on the traditional wedding ritual process of people in Rajawawo village of Ende regency are presented, followed by a brief explanation of the lexical meaning and contextual meaning of the ritual, which will lead to the cultural meanings of the ritual being investigated.
The researcher has decided to delimit this research only to describe Ana Dhei Dhato according to its process and then reveal the cultural meanings within this ritual. Ana Dhei Dhato/Nai Lapu Ja wedding ritual can be held if the couples have already gotten permission from their parents and entire clan. In other words, they should do this without any pressure. The following description is the explanation of the terms Ana Dhei Dhato/Nai Lapu ja. The marriage can only be held if the couple (woman and man) love each other and get permission from their parents and clan. At the same time, nai Lapu ja, which means the man with his own desire, goes to his woman's house in order to get permission from her parents and clan. Before that, he has already told his family about his girlfriend's background (name, clan, hometown, and status). This is very important to his family because, by this information, they could prepare everything before entering the woman's house.
Ana Dhei Dhato's wedding ritual is carried out in a series of separate rituals that must be obeyed. The first ritual is called tei nia mbe'o ngara. In this phase, the woman's parents and the whole clan have already obtained information about the man's background. A woman, from the perspective of the Rajawawo people, is symbolized as muku and tewu (bananas and sugar cane), which means that the woman does not live in isolation but always lives under the protection of her whole clan, and before the bananas ripen, no one is allowed to touch them. In this stage, the man's clan (Weta Ane) should bring su'a eko (ivory and animals - horse and buffalo). As a symbol of respect, the woman's clan also gives are guni, wawi, zuka zawo, zambu kadho (food, animals - pig, sarong, and also clothes).
The second ritual is weta mai eja se'a. All the members of the man's clan, including his eja (brother-in-law) and his sister, come to the woman's house to meet the woman's clan. In this stage, an exchange of valuable things takes place. The man's clan gives toko and eko, and as a symbol of gratitude, the woman's clan gives are guni, zuka zawo sa kadho, and tee zani (rice, sarong, and bedding).
The third is mbeo sao nggeso tenda. In this process, all the information needed about the woman, such as her house, her entire family (clan), and also the clan's role, especially that of ka'e embu, in relation to the woman herself, has already been obtained by the man's clan (weta ane). In this stage, the dowry that has to be fulfilled by weta ane, in the form of toko (ivory) and wea seziwu (gold), is discussed.
The fourth stage is kiri pipi mbinge inga. Lexically, it means using the cheek and ear to listen to something. Contextually, puu kamu (the girl's uncle) attends this stage with the purpose of listening to the ritual process of his sister's daughter. In this part, puu kamu takes a role only as a guest. However, after making some agreements, the man's family (weta ane) is required to give something, such as money and gold (wea), because puu kamu will play a significant role in the upcoming processes of the ritual.
The fifth stage is pete negi rike nggiki. The function of this process is to strengthen the bond or love between the woman and the man, and also between their whole families/clans. In this stage, the man's clan must fulfill some demands from the woman's clan, such as a ring (or rings) and a necklace. People in Rajawawo village believe that these valuable things (ring and necklace) can make the couple's love last forever and happily ever after.
The sixth process is called kuni kudu. Contextually, the woman is ordered to go to the man's house to help the man's family, especially his mother, with household chores, for example, cooking, washing, cleaning, and others. This is a very crucial point because by doing this, she will be able to run or do her job or duty as a wife. She should throw away her doubt and hesitation in finishing all the work given. It should be noticed that the woman's duty here is merely helping the man's family. Therefore, she is not allowed to live together with her future husband in his house.
The seventh is weka te'e soro zani. Contextually, it means stretching the plaited mat and pillow. This stage is a formal ceremony of this wedding ritual. The woman's uncle (puu kamu) has already prepared plaited mat and pillows in the woman's bedroom as a symbol of readiness to be a married couple. Symbolically, te'e (plaited mat) and zani (pillow) that are provided by the woman's uncle (puu kamu) show the fact that the woman is born from her mother's womb; therefore, the woman belongs to puu kamu (her uncle). The puu kamu (her mother's brother) plays a very crucial role during this ritual. This ritual is the beginning of their life as a married couple. The dowries that must be fulfilled by the man's clan are toko, wea, and money.
The eighth stage is bhanda mere. Bhanda mere is the final stage of this wedding ritual, in which many dowries must be brought. In this stage, the man, along with his family, must give a dowry to all members of the woman's nuclear family in accordance with the conventions that have already been determined by the woman's parents. This stage is also called Tu Bhanda Mere (giving a dowry in huge numbers).
The last process is called tu dhu nawu jeka. Contextually, it means to live together forever. In short, the woman's family is ready to accompany their daughter to her husband's house in order to live together with her husband for her entire life. This process is also called mbuku nai sao. Nai means enter, and sao means house. Contextually, it is a symbol that a girl has already merged into a man's clan. The dowries that must be fulfilled by the man's clan are toko, bride price, and eko (buffalo and cow).
In accordance with the names, processes, and ritual speech dealing with the traditional wedding ritual, the researcher draws out numerous cultural meanings that are interrelated with one another in order to cover the set of ideas and worldviews of people in Rajawawo village of Ende regency about their traditional wedding ritual. Based on the conceptual frame in the ideas of the informants and the results of the research, the researcher has found a number of cultural meanings of the traditional wedding ritual of people in Rajawawo village of Ende regency. The meanings include religious meaning, social meaning, historical meaning, juridical meaning, and didactic meaning. This set of cultural meanings is symbolized not only by verbal expressions (non-material symbols) in the form of language but also by non-verbal aspects (material symbols) in the form of visible and touchable things.
The first cultural meaning is religious meaning. It constitutes an expression of belief in a higher being, since it is closely related to the Rajawawo people's perception of the nature of their ancestors' souls (Embu Kajo Iro Aro) as mediators between them and their God (Nggae Raze Dewa Reta). Ancestors' souls, as human mediators, should be magically invited to come to all ritual ceremonies. People in Rajawawo village believe that the ancestors' souls can become a bridge and also a mediator who delivers their prayers and wishes to their highest entity.
Their perception of the existence of the ancestors' souls is clear on the verbal level in the preface of all ritual speeches. In Rajawawo village, people call it Dhera Dhao. Lexically, dhera means open and dhao means call. Contextually, dhera dhao means the ritual to feed the ancestors' souls. It takes place one night before the wedding ritual happens, and all the members of the woman's side should take part in or attend this ritual. This ritual always starts with a prayer to God or the highest entity, called soa soza. Soa means speech, soza means called. Table 1 is cited from the spoken discourse of the sesajen (the gift for souls) ritual, pairing the Ende ritual speech with its English translation:

...: The way that will lead them together, and do not give them any disasters on their way
Ndia kami pati ka miu: Here we bring you a present
Ka are pesa uta manu: The pure rice and a big chicken
Maisi ka pahara, minu peimu: Let us eat and drink together

People in Rajawawo village believe that a chicken, especially its blood, symbolizes purity and clarity. The food in this ritual is served without any flavoring. Praying to God or the highest entity through their ancestors is important, since they believe in their ancestors' involvement in keeping them away from any bad spirits and disasters and giving them eternal protection.
The second is social meaning. This traditional wedding ritual also contains social meaning. The social meaning covers a set of moral values functioning as ethical guidelines for all villagers of Rajawawo village. These ethical guidelines, as social norms, lead people to improve and maintain their social relationships in society. The social value includes solidarity, democracy, and social reconciliation. Solidarity constitutes a significant social dimension established by the traditional wedding ritual of people in Rajawawo village. It becomes the basic principle of social unity between a couple's families and all villagers and other clans in Rajawawo society. The important things for improving solidarity are regularity, balance, cooperativeness, and the availability of the same rights and status. The social value is seen in all parts of the traditional wedding ritual of people in Rajawawo village of Ende regency, from the beginning to the end. It creates an enormous social relationship, because they believe that this wedding ritual is not only about the couple (woman and man) themselves but also involves the entire village in bonding and uniting the couple's clans (weta ane and pu'u kamu).
The solidarity value can be seen in the principles of collaboration of people in Rajawawo village: Kita ata mai pati nee ate masa mae e nee seso neno. Lexically, kita means we/family, ata mai means come, pati means give, nee means with, ate means heart, masa means sincere, mae means not, e means remember/think, seso neno means repayment. Contextually, it means that the couple's family realizes that the villagers come to offer help open-heartedly and without expecting any repayment. Lexically, poka means cut down, tiko means crowd, weza means defend, gizi means around. Contextually, it refers to an activity that has to be done together in order to get the job done easily and quickly. Togetherness becomes a key to the success of this ritual. Lexically, tuke means support, nduku means knee, duke means hold on to, siku means elbow. Tuke nduku duke siku means support with the knee and hold on to the elbow. Contextually, it means that support from many sides of the clan members, through moral, material, and financial support, has to be given seriously.
Social reconciliation refers to the process of renewing and maintaining friendly relationships between groups of clans. People in Rajawawo village believe that a friendly relationship or kinship is not limited to particular people or to relationships among people in society, but also extends to the relationship between the living and their ancestors, who are considered to live invisibly in their society. The social reconciliation among people in the society can be viewed in all processes and principles of the traditional wedding ritual, while the relationship between people and their ancestors can be viewed in the Dhera Dhao ritual. In this ritual, the woman's whole clan, especially her parents, brothers, and sisters, have to take part, and the villagers should come as guests and witnesses. All related clans should join this ceremony, as their presence constitutes a form of social duty; meanwhile, their absence will reduce social reconciliation. By participating in all ceremonies, taking on such roles, and having dinner together, they can all increase and maintain their friendly relationships. As a sign of gratitude to their ancestors, one day before the ceremony takes place, the couple should come to Tubu Musu (their ancestors' graves) to 'feed' their ancestors' souls, because they believe that by doing this they will be blessed by their ancestors and Gods (Embu kajo and Nggae Raze Dewa Reta), who will also lead or guide them to prosperity, harmony, and happiness.
The term democracy refers to the shared views of people in deciding for unity. In the traditional wedding ritual context, it can be seen in the ritual speeches (mbabho). Mbabho is a ritual talk for the purpose of negotiating the dowry that has to be fulfilled by the man's clan side. Only certain or particular people, called Mosazaki, are trusted to lead this ritual (mbabho). He must be a man above sixty years old who has the ability to control the situation and arrange the sayings, because of his tough job as a bridge to combine and unite two opinions from the two different perspectives of both clan sides (puu kamu and weta ane). An example of mbabho can be seen in Table 2. The ritual speech in Table 2 represents both the togetherness and the solidarity of people in the community. After reaching a deal about the dowry that has to be fulfilled, both clans make an agreement to follow all the regulations in front of the Mosazaki and the villagers.
The third is historical meaning. The traditional wedding ritual of people in Rajawawo village of Ende regency is closely related to cultural values and events existing in and relating to the past. The traditional wedding ritual in Rajawawo village has gone through some changes. The wedding ritual in Ende regency, especially in Rajawawo village, has from the beginning used a dowry in the form of ivory. Rajawawo village is in the Tana Zozo region, which covers an eastern region called Nanggeree (Nangaba) and a western region called Paroree (Nangamboa to Maukaro).
In the past, people in Rajawawo village used doka (white gold), but as time passed, the head of the village changed it to wea (pure gold), because they thought that the value of white gold was not so high. People in Rajawawo village, in their rituals or ceremonies, always pair things up, even the dowry and the foods. Sue (ivory) is always paired up with wea (pure gold). Sue here is used to symbolize manhood, while wea is used to symbolize woman. When both clans gather to talk about something, the hosts always serve mengi-keu (betel vine and bitternut, or sirih-pinang) and muku-vizu (bananas and cucur cake). These are symbols of woman and man. People in Rajawawo village believe that all that has been united cannot be separated, because missing one of these important elements will negatively impact the entire clan or family.
The last is juridical meaning. It can be viewed from two perspectives, a macro-perspective and a micro-perspective. From the macro-perspective, the juridical meaning of the traditional wedding ritual of people in Rajawawo village is closely related to the conformity law, which is implemented in the harmonious relationship among allied people in the society. It can be seen in the process of Tu Bhanda Mere, called Mbuku Uta Ae. It is an expression of gratitude to the villagers who have given their support, time, and finances to help the couple's families during the ceremony. The man's clan serves kamba bhanda (cows) to the villagers who come to the ceremony, but it should be noted that there are some rules which the villagers should follow. Their duty here is only to help the couple's clans and to act as guests. They are not allowed to give opinions or suggestions during the dowry negotiation process, because it involves only the couple's families and the ceremony leader (mozazaki).
From a micro-perspective, the juridical meaning of the traditional wedding ritual of people in Rajawawo village is closely related to the relationship among the woman, her family, and the man. The inseparable relationship between the woman and her clan can be seen in the principle of family relationships in Rajawawo village. People in Rajawawo village believe that they should always live in a group. The members of this group (Ine baba, pu'u kamu, ari ka'e) always protect one another, especially their daughters and sisters. Moreover, the juridical meaning can also be viewed in the relationship between the woman and the man, in pete negi rike nggiki. It functions as a tool to strengthen the love of the couple (woman and man) and of their entire families/clans. In this stage, the man's clan has to fulfill some demands from the woman's clan, such as rings and a necklace, because from their perspective these valuable things can make their love last forever for the rest of their lives.
CONCLUSIONS
Based on the findings of this research, and as answers to the research problems, the researcher has revealed the names and oral literature dealing with the process of the ritual and its cultural meanings. There are three types of traditional wedding rituals in Rajawawo village of Ende regency, namely Ana Dhei Dhato, Ana Aze, and Ana Paru Dheko. Ana Dhei Dhato is considered the most valuable wedding ritual among them. The process of the Ana Dhei Dhato wedding ritual includes Tei nia mbe'o ngara, Weta mai eja se'a, Mbeo sao nggeso tenda, Kiri pipi mbinge inga, Pete negi rike nggiki, Kni kudu, Weka te'e soro zani/ kembi kaja, and bhanda mere. Bhanda mere covers some details such as mbuku Ine Baba, mbuku pu'u kamu, mbuku ndoa baba, ndoa ine/ari ka'e, mbuku ta ae, mbuku juju kangge weri ine, and mbuku nua oza/ tubu musu ora nata. The last process is tu dhu, nawu jeka.
Regarding the process and oral discourse of the traditional wedding ritual of people in Rajawawo village, the findings of this research pinpoint four cultural meanings related to how people in Rajawawo village view their life and the world they live in. The four meanings are religious meaning, social meaning, historical meaning, and juridical meaning. Therefore, it is expected that this research will give valuable information and insight about Rajawawo's traditions, especially their wedding traditions, to people in general, and that it will also serve as a valuable written document of cultural heritage for the next generations. Along with the conclusion provided, the researcher offers two suggestions. First, stakeholders are advised to take part in maintaining Ende's cultures by issuing policies that support the maintenance and cultivation of these highly valuable cultural values. Second, further research on the cultural meanings, either of the traditional wedding ritual of people in Rajawawo village or of other Ende traditional wedding rituals, is highly recommended to probe Ende's culture further.
|
v3-fos-license
|
2021-09-14T13:37:29.550Z
|
2021-09-01T00:00:00.000
|
237496913
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://energyinformatics.springeropen.com/track/pdf/10.1186/s42162-021-00177-1",
"pdf_hash": "7cb7b74caa9c28657cd7f92d73f95cf7a5c16645",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46048",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "767d081a6b2026a4d24489ad8c5f6f51bc865f00",
"year": 2021
}
|
pes2o/s2orc
|
A practical approach to cluster validation in the energy sector
With increasing digitization, new opportunities emerge concerning the availability and use of data in the energy sector. A comprehensive literature review shows an abundance of available unsupervised clustering algorithms as well as internal, relative and external cluster validation indices (cvi) to evaluate the results. Yet, the comparison of different clustering results on the same dataset, executed with different algorithms and with a specific practical goal in mind, still proves scientifically challenging. A large variety of cvi are described and consolidated in commonly used composite indices (e.g. Davies-Bouldin-Index, silhouette-Index, Dunn-Index). Previous works show the challenges surrounding these composite indices, since they serve a generalized cluster quality evaluation that in many cases does not suit individual clustering goals. The presented paper introduces the current state of science and existing cluster validation indices and proposes a practical method to combine them into an individual composite index, using Multi Criteria Decision Analysis (mcda). The methodology is applied to two energy economic use cases for clustering load profiles of bidirectional electric vehicles and municipalities.
Introduction
With increasing amounts of data in the energy sector, the relevance of data analysis is constantly increasing. This is mainly caused by the rising numbers of smart meters and decentralized energy resources (DER) as well as sensors and actuators in infrastructures and new assets (i.e., through sector coupling). This trend is causing a growing complexity in handling incoming data, purposefully utilizing it and managing the complexity of the system. This paper focuses on the utilization of data with a given goal in mind. In contrast to exploratory data analysis, the examination of unknown datasets is conducted with certain pre-conceived presumptions to identify new information and patterns and to derive hypotheses concerning the individual research goals (Martinez et al. 2010; Tukey 1977). Especially now, in the early stages of the digitization of the energy industry, with newly available data and tools, the importance of data analysis must not be overlooked. Unsupervised learning extends or simplifies this process and therefore gains increasing practical importance within the industry. Especially with newly acquired data it bears many advantages, such as
• the compression of information (reducing information complexity),
• the simplification of complex and high-dimensional data,
• pattern recognition,
• the detection of outliers,
• knowledge expansion and an increased understanding of the data (Tanwar et al. 2015; Brickey et al. 2010).
Yet, while unsupervised learning is becoming progressively more convenient with many available libraries, the process of data analysis with real-world data remains a big challenge. The process of deriving the desired information out of specific datasets is highly individual and scientifically challenging. The extraction of valid clustering results serving specific goals, e.g., of a client or for a given real-world task, is especially individual (Hennig 2020). The main research goals of this paper include the review and development of existing relative and internal cluster validation methodologies to compare different model results. Furthermore, an emphasis is put on the practical application of the methodology outlined in Hennig (2020) to build a bridge between experts in certain fields (here: energy economics) and machine-learning and data science experts. The resulting methodology is applied to energy-economic datasets in two different projects.
Literature review
The goal of this paper is to identify clusters for a given dataset without any prior knowledge about its structure but with certain goals in mind. The fact that countless clustering algorithms are available and easily accessible raises the challenge of identifying the individually best clustering result for a certain task and dataset. In the literature, three ways to evaluate the results of unsupervised clustering analysis and find the "best" clustering are distinguished:
1 Relative validation is used to tune the hyperparameters of an algorithm (e.g., the number of clusters) to identify the best model. These relative validation methods may vary according to the machine learning algorithm used. One commonly used relative validation method is the elbow curve, used in conjunction with k-means (Syakur et al. 2018); a minimal sketch is given below.
2 Internal validation describes the clusters identified within a dataset by different algorithms and compares them.
3 External validation compares the clustering results to the ground truth and describes the error via selected indices.
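To make the elbow curve concrete, the following minimal Python sketch (on synthetic data, with all names and parameter values chosen purely for illustration) computes the within-cluster sum of squares for a range of cluster counts; the "elbow" of the resulting curve suggests a suitable k.

```python
# Minimal sketch of relative validation via the elbow curve (synthetic data,
# illustrative parameters only).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, n_features=5, random_state=0)

inertias = {}
for k in range(2, 11):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = model.inertia_  # within-cluster sum of squares for k clusters

# The "elbow" is the k beyond which the inertia decreases only marginally.
for k, wcss in inertias.items():
    print(k, round(wcss, 1))
```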
The goal of this paper is to develop a practical methodology to identify the best clustering result out of a finite number of runs by applying different algorithms and varying hyperparameters on the same dataset. While options one and two are necessary to determine the optimal hyperparameters for a chosen algorithm (1) and to determine the "best" algorithm (2), option three is beyond the scope of this paper due to the lack of a ground truth. As stated in Hennig (2015); Hennig (2020); Hennig and Liao (2010); Metwalli (2020) and many more, there is neither a universally optimal clustering method nor a generally applicable definition of a cluster. This is supported by the multitude of different algorithms described in literature, each having specific goals, strengths, and weaknesses in terms of clustering results, scaling and ease of use on different datasets. Selecting the individually best suited algorithms and comparing their results hence poses a challenge which is often overcome pragmatically, considering the size of the dataset, available computing power, ease of use of the algorithms or just personal preference. The first step to a scientifically viable clustering is to find a general or individual definition of a cluster, which is done in the following by a literature review.
Definition of clustering and clusters
Clustering can be described in a very general sense as a "method of creating groups of objects, or clusters in such a way that objects in one cluster are very similar and objects in different clusters are quite distinct" (Gan et al. 2007). More detailed definitions of clustering always use "metrics" to describe their goals, as shown in the definitions in Gan et al. (2007) by Bock (1989) and Carmichael et al. (1968). The authors describe objects in a cluster as closely related in terms of their properties, with high mutual similarities (= low distances), and other objects of the same cluster in close proximity. All clusters in a dataset should be clearly distinguishable, connected and dense areas in n-dimensional space, surrounded by areas of low density. These definitions show that, with a greater level of detail, the definitions of clusters vary strongly and might even be contradictory. This also shows that assumptions about the clusters have to be made in order to find a clustering result. Lorr (1983) proposed splitting clusters into two groups:
• compact clusters have high similarity and can be represented by a single point or a center.
• "chained cluster is a set of datapoints in which every member is more like other members in the cluster than other datapoints not in the cluster" (Gan et al. 2007).
The challenge is either to find out which types of clusters are present in a given dataset or to find clusters that best match certain criteria (as seen in chapter "Application on energy economic use cases"). Yet with increasing usability and research in the field of data science and clustering algorithms, the number of easy-to-use algorithms is rising steeply. This makes it more difficult to choose the right algorithm, tune its hyperparameters, and choose the best result. The following chapters outline a methodology to overcome these challenges and apply it to different real-world datasets.
Methodologies to identify the best clustering algorithm
Papers comparing different clustering algorithms (=relative validation) to identify a "best" solution usually do so to propose and validate new algorithms utilizing known datasets and a known ground truth (e.g., Hennig (2015); McInnes et al. (2017); Chen (2015); Kuwil et al. (2019); Das et al. (2008); Cai et al. (2020)). Only very few of them utilize generalized metrics to compare the results and are completely unbiased (Hennig 2015). More general and axiomatic approaches characterizing clustering algorithms can be found in Ackerman and Ben-David (2009), responding to Kleinberg (2002). Ackerman and Ben-David (2009) propose a methodology to define cluster quality functions and individual goals for these functions, and then to optimize towards them. A comprehensive connection between clustering goals, the structure of the datasets, clustering methods, and validation criteria can also be found in the works of Hennig et al. (see Hennig (2020); Hennig (2015); Hennig and Liao (2010)). Hennig (2015) proposes a methodology to identify the optimal clustering algorithm for individual datasets. The paper focuses on pre-processing as well as the clustering itself. Regarding the choice of representation and measure of dissimilarity, it advocates that correlated features should also be included in a dataset if they are essential for clustering, and it shows different ways to incorporate clustering in non-Euclidean space with different data types. The author proposes different (and optional) ways to transform features with nonlinear functions to influence the effect of distance measures and the resulting gaps between datapoints within a feature. This helps to avoid unwanted effects of outliers in the dataset. In Hennig (2015), different methods of standardization, weighting and sphering of variables are further discussed. The author highlights the impact of outliers on these methods and the effect of these methods on clustering results due to a (possibly even wanted) change of feature variance, and refers to papers supporting these claims.
All in all, the literature provides a wide range of internal and relative validation indices suitable for clustering. Yet only a few sources focus on a more axiomatic approach to selecting the best clustering results purely based on a large range of validation indices. Hennig et al. (2020) provide a comprehensive methodology to standardize these indices in order to compare them (see chapter "Relative and internal cluster validation indices"). Kou et al. (2014) propose a methodology for multiple criteria decision-making to select the best ensemble of validation criteria, interpretability, computational complexity and visualization for a specific challenge in financial risk analysis. Tomasini et al. (2016) propose a methodology using a regression model to determine "the most suitable cluster validation internal index".
Relative and internal cluster validation indices
To evaluate and compare different clustering results, a set of validation indices is required to benchmark the results of different algorithms (relative validation) or varying hyperparameters (internal validation). Thus, papers utilizing cluster validation indices (cvi) for relative or internal validation are introduced in the following. Puzicha et al. (2000) propose different separability measures based on clustering axioms. Cormos et al. (2020) focus on internal validation criteria (sum of square error, scatter criteria, trace criteria, determinant criteria, invariant criteria) for large and semi-structured data as well as the performance of selected algorithms. Other authors apply k-means and bisecting k-means with a variety of internal and external validation indices, all of which are composite indices combining multiple validation indices into one generalized index; these include the commonly used Calinski-Harabasz-Index, Davies-Bouldin-Index, silhouette-Coefficient and Dunn-Index as well as a novel validity index (NIVA) (Rendón et al. 2008). This is also a common procedure in many energy related works. E.g. Yang et al. (2017) rely on the use of multiple composite indices (such as Calinski-Harabasz-Index, Davies-Bouldin-Index, silhouette-Coefficient, Dunn-Index) to detect building energy usage patterns using k-shape clustering, proving their results with a known ground truth (external validation). Zhou et al. (2017) introduce a (fuzzy) cluster based model to identify patterns in monthly electricity consumption of households. They remark that no single cvi is always the best or performs best on any given dataset, datatypes or distance-measure. Hence, they apply the COS index (composite index), which they already used in previous works. It is comprised of a compactness, a separation and an overlapping indicator. Gheorghe et al. (2015) create representative zones to assess the renewable energy potential in Romania by using k-means. They validate their results internally with various indices related to the silhouette-index. Akhanli and Hennig (2020) introduce two new composite indices to describe cluster homogeneity and cluster separation. Other internal validation indices can be found in Liu et al. (2010) and Vendramin et al. (2010). Kou et al. (2014) utilize F-measure, normalized mutual information, purity and entropy. Chou et al. (2002) introduce a point symmetry measure as a cluster validity measure. Wang et al. (2019) create a new composite index (Peak Weight Index) out of two composite indices (silhouette index and Calinski-Harabasz index). Many papers with practical relevance, including in the field of energy and energy economics, utilize clustering techniques, usually by applying only one clustering algorithm (e.g., Bittel et al. (2017); Siala and Mahfouz (2019)). If multiple algorithms are compared, generalized composite indices (e.g., Davies-Bouldin-Index, silhouette-index etc.) or a selected few indices such as the sum of squared errors are used (Toussaint and Moodley; Schütz et al. 2018).
This overview shows the lack of scientific discussion of the comparison of different algorithms, especially in subject-specific scientific papers. Many scientific papers use one or multiple (composite) cvi, usually without providing much insight into the selection process or alternatives. A critical review or deeper analysis of the used index/indices is usually missing. This poses a risk, since validating cluster results with different cvi on the same dataset often produces very different results.
In Hennig (2020), Hennig et al. introduce different cluster validity indices (cvi) including their mathematical formulation and a suitable normalization. These cvi are normalized in such a way that 1 represents the best (possible) value and 0 the worst. An overview of these indices is given in Table 1.
Name Abbreviation Usage
Average within-cluster distance I avg_wc Measure of similarity of objects/points in a cluster. The higher the index, the smaller the average within-cluster distance.
p-separation-index I p−sep Measure of separation between clusters. Instead of minimum/maximum distance (prone to outliers) this can be calculated by the mean of a portion (p) between two clusters. The higher the index, the better the between-cluster separation.
Representation by centroids I centroid Measure of how well a cluster is represented by its centroid. The higher the index, the better the representation.
Representation of dissimilarity structure by clustering I pearson Measure of the dissimilarity structure denoted by the Pearson correlation between pairwise dissimilarities (e.g., Euclidean distances) and "clustering induced dissimilarity" (matching cluster). For increasing dissimilarity, objects/points should not be assigned to the same cluster. Hence for higher indices, pairwise dissimilarity correlates more strongly to clustering dissimilarity.
Within-cluster gaps I widestgap Measure of the connectivity of a cluster. The higher the index, the smaller the within-cluster gaps.
Entropy I entropy Measure for assessing the uniform size of clusters.
Parsimony I parsimony Measure to express the preference for a lower number of clusters.
Density modes and valleys I densdec Measure to quantify the density drop from cluster-mode to the edges of a cluster and the density-valleys between clusters.
Coefficient of variation of within-cluster densities I cvdens Measure to quantify the within-cluster density levels. For higher indices, density is more uniform within the cluster.
Hennig (2015) shows the inherent clustering characteristics and tendencies of selected groups of algorithms (see also chapter 4.3 therein). It further proposes using different validation indices such as measurements of within-cluster homogeneity, cluster separation, homogeneity of different clusters, and measurements of fit, e.g., to a centroid. The author points out the importance of the stability of clustering (i.e., the influence of changes in the dataset on the clustering results). Generally, two types of indices can be distinguished: simple validation indices (in analogy to cryptography one might call them primitive cluster validation indices) as shown above, and composite indices. Composite indices (like the silhouette-coefficient) are not a single cvi but combine multiple of them into one measure of cluster quality. Such a measure might not suit every purpose well and rather aims at a more generalized evaluation (Hennig 2020). This paper will utilize the primitive indices rather than composite indices and create a task-specific composite index according to the clustering goal.
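To make the distinction concrete, the following Python sketch computes the raw ingredients of two such primitive indices, the average within-cluster distance and the widest within-cluster gap, on Euclidean data; the normalization and calibration to [0, 1] described in Hennig (2020) are intentionally omitted, and the function names are illustrative only.

```python
# Sketch of two primitive cvi ingredients (raw values, without the normalization
# from Hennig 2020). Assumes Euclidean feature data X and integer cluster labels.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def average_within_cluster_distance(X, labels):
    """Mean pairwise distance per cluster, averaged over clusters (basis of I_avg_wc)."""
    values = []
    for c in np.unique(labels):
        members = X[labels == c]
        if len(members) > 1:
            values.append(pdist(members).mean())
    return float(np.mean(values))

def widest_within_cluster_gap(X, labels):
    """Largest within-cluster gap (basis of I_widestgap), computed here as the
    maximum edge of each cluster's minimum spanning tree."""
    gaps = []
    for c in np.unique(labels):
        members = X[labels == c]
        if len(members) > 1:
            mst = minimum_spanning_tree(squareform(pdist(members)))
            gaps.append(mst.toarray().max())
    return float(max(gaps))
```

Per Table 1, the corresponding normalized indices are defined so that higher values indicate smaller distances and gaps, i.e., the raw values above still have to be inverted and rescaled.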
The literature review shows multiple challenges in the field of clustering. The number of available and easy-to-implement clustering algorithms increases steadily, while newer algorithms mitigate certain weak points of existing methods. This increases the difficulty of choosing the best algorithms for a given task. Evaluation metrics are manifold in different papers; a comprehensive overview and normalization to compare them is given in Halkidi et al. (2016). The reviewed research also shows that existing composite indices (e.g., silhouette-Coefficient or Dunn-Index) that are a combination of primitive cvi might prove to be too generalized and not suitable for every specific task. Therefore, individual clustering goals and corresponding indices should be developed for every task. Hennig et al. introduce a methodology to normalize and calibrate cvi (Hennig 2020) and propose two general-purpose composite indices (Akhanli and Hennig 2020). They remark that, in particular, the weighting of indices poses a challenge to the creation of task-specific composite indices. While Hennig et al. lay the foundation to identify an individual "best" solution, they provide neither a methodology to identify the relevant indices nor a method for weighting them for a given task, although they provide the mathematical foundation to do so. The determination of individual cluster goals according to a specific task, as well as selecting suitable algorithms, tuning them and comparing them in order to select the "best" clustering result, is outlined in the following paper. The focus is to include industry- and clustering-specific expertise in the clustering process to create an individual composite index for comparing clustering results. A methodology and a workflow to weight identified clustering goals is proposed in chapter "Weighting of clustering goals", extending the methodology of Hennig et al. by a multi-criteria decision analysis (mcda) and hence building the missing bridge from the mathematical foundation to a practical implementation. The method is applied to two energy-economic use cases in chapter "Application on energy economic use cases".
Methodology
The following paper builds on relative and internal cluster validation indices as well as their weighting and combination into a single composite index. The focus of this paper is to provide a practical workflow to conduct unsupervised cluster analysis for real-world tasks and apply it in the energy sector. It extends the methodology in Halkidi et al. (2016) by including a methodology for weighting the cluster goals using mcda. This requires a link between the mathematical formulation of cluster goals as provided in Hennig (2020) and the qualitative, task-specific goals of the involved practitioners.
The core methodology to identify clusters in an (already) pre-processed dataset builds on the following steps:
1 Identification of cluster goals: depending on the clustering task, individual goals have to be chosen in order to select the best result. In this step, goals are described in purely qualitative terms.
2 Weighting of clustering goals: by a multi-criteria decision analysis. The defined goals can be weighted by a single or by multiple decision makers (e.g., involved stakeholders).
3 Derivation of validation indices: the defined (qualitative) cluster goals must be transformed into mathematical statements utilizing existing validation criteria. Decision rules for these statements have to be formulated (min, max) and the validation criteria normalized to [0, 1] to become comparable indices.
4 Preselection of suitable algorithms: by formulating cluster goals, validation indices and decision rules, some algorithms are no longer an option due to conflicting characteristics. The size of the dataset and the available computing power are also taken into account.
5 Model setup, internal validation and hyperparameter tuning: the pre-selected algorithms are set up and applied on the dataset. By internally validating the results with the selected cvi, hyperparameters can be tuned in order to iteratively improve the results.
6 Calibration of the clustering results: the resulting validation indices might differ in terms of variance. Hence calibration makes the indices comparable by identifying the normalization range via calibration algorithms.
7 Relative evaluation, model and result selection: the calibrated validation indices can be used to select the overall best model and determine the best clustering result.
The following chapters describe these steps in further detail.
Clustering goals and decision rules
The first logical step to conduct a cluster analysis is to derive task-specific clustering goals. These goals are individual and differ every time, as shown in chapter "Application on energy economic use cases". The clustering goals presented in Hennig (2020) are listed and explained in terms of common clustering goals in the following, where the similarity of two datapoints (in this study) is represented by their Euclidean distance. The lower the distance, the more similar two datapoints are, which corresponds to the general definition of clustering in chapter "Definition of clustering and clusters". Considering the nature of clustering, the clustering goals in Van Mechelen and Hampton (1993) can be split into three categories. While some goals describe the cluster definition "bottom-up" for the relation of datapoints and clusters to one another, they do not restrict the clustering result itself. Others a priori restrict the clustering results by their definition. The third category does not affect the clustering result directly but the process of clustering itself, by considering properties of algorithms, such as ease of use. In the following, potential clustering goals for the first two categories are introduced, explained if necessary, and linked, where possible, to the validation indices from chapter "Relative and internal cluster validation indices". An overview of various clustering goals and corresponding indices described in Hennig (2020) is given in Table 2. However, an index for the representation of a cluster via a datapoint of the original dataset instead of an artificial datapoint (e.g., a centroid) is missing. We therefore introduce the index I cp2cent as described in Table 3.
Table 2 Overview of clustering goals and corresponding indices
Goal Index
Within-cluster dissimilarities should be small: this implies that the points within a cluster are all relatively similar to one another. I avg_wc
Between-cluster dissimilarities should be large: clusters are clearly distinguishable and very different in their characteristics. I p−sep
Points of a cluster should be well represented by a centroid: a representative of the cluster (that is not an original datapoint) reflects the characteristics of the datapoints within a cluster in the best possible way. I centroid
Members of a cluster should be well represented by a specific datapoint within the dataset (=representative): a single point (that is an original datapoint) reflects the characteristics of the datapoints within a cluster in the best possible way. –
Clusters should correspond to connected areas in data space with high density: datapoints within a cluster always have very similar neighbors yet might not be very similar to every datapoint in the cluster (exception: spherical clusters). I widestgap
All clusters should have roughly the same size. I entropy
The density of clusters should be roughly the same. I cvdens
The number of clusters should be low (many indices increase with an increasing number (Hennig 2015)). I parsimony
The number of clusters should be within a certain range of values. I targetrange *
It should be possible to characterize the clusters using a small number of variables: this is especially useful if the result is used for complexity reduction, i.e., to create personas. I pps *
* Introduced in the "Clustering of municipalities" section

Table 3 New index for good representation of data points
Goal Index Index Definition
Representation by data points I cp2cent Measure of how well a cluster is represented by a single point out of its cluster (i.e., closest point x cp to the centroid of the cluster c i with x cp ∈ C i ). The higher the index, the better the representation.
This index is viable if the features used for clustering are only a lower-dimensional representation of the actual datapoints (e.g., in spatial or time series clustering) and a centroid cannot be converted back into the original (higher-dimensional) representation.
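As an illustration, a minimal sketch of the raw quantity behind I cp2cent could look as follows; the exact normalization (so that higher values mean better representation, analogous to I centroid) is not specified here and is therefore omitted, and all names are illustrative.

```python
# Hypothetical sketch: each cluster is represented by the original datapoint
# closest to its centroid, and the average distance of the cluster members to
# this representative is computed. Normalization to [0, 1] (higher = better)
# would follow the same scheme as for I_centroid and is omitted.
import numpy as np

def cp2cent_raw(X, labels):
    distances = []
    for c in np.unique(labels):
        members = X[labels == c]
        centroid = members.mean(axis=0)
        rep = members[np.argmin(np.linalg.norm(members - centroid, axis=1))]
        distances.append(np.linalg.norm(members - rep, axis=1).mean())
    return float(np.mean(distances))
```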
Further, very specific restrictions and limitations as well as their mathematical formulation can be found in Hennig (2020). To perform clustering, the above goals must be specified according to the clustering task. Examples are shown in chapter "Application on energy economic use cases".
Weighting of clustering goals
Clustering is rarely an end in itself. Especially in practical use cases there is always a specific goal in mind, for example a customer segmentation analysis or a complexity reduction (see chapter "Application on energy economic use cases"). This paper focuses on energy economic use cases, yet the methodology is applicable to any clustering task. In order to decide on a best solution among multiple algorithms and results and to simplify and objectify the clustering process, the normalized cvi can be aggregated into one composite index, as proposed in Hennig (2020). While Hennig et al. give a comprehensive methodology to apply validation indices on data and calibrate them, they do not specify how to find suitable individual weights for a distinct, individual goal. A methodology to weight individual clustering goals and therefore the validation indices is proposed in the following and summarized in Fig. 2. It consists of the following steps:
1 Identify general cluster goals, often set by the specific task, the intended use of the results and/or the client.
2 Decide on absolute goals: if a set threshold (e.g., minimum number of clusters) is not met, the result is discarded and not considered any further.
3 If not already done in step 1, find and mathematically formulate validation indices describing every remaining goal and find an understandable wording for them (depending on the decision makers). A list can be found in chapter 3.1.
4 Select and apply an mcda method to these remaining goals to weight them. The selection of the best mcda method depends on the setting and the involvement, knowledge and preference of the involved stakeholders.
5 Calculate the resulting weights of the applied mcda method(s).
6 Calculate an individual composite index by applying the weights to the underlying validation indices on which the understandable formulations are based.
While the second step is a "yes-or-no" decision based on strict requirements, the fourth one represents a challenge, as stated in Hennig (2015). To rank certain interpretable goals (linked to mathematically formulated validation indices), we propose the application of "Multi-Attribute Decision Making Methods" (Xu 2015). The goal of these methods is to identify individual weighting factors for previously defined selection criteria (here: clustering goals). Weighting methods can be split into subjective methods (weights are based on the decision maker's judgment and require knowledge and experience in the field) and objective methods, which determine weights by mathematical algorithms or models (Zardari et al. 2015). In order to find a clustering result best suited to individual tasks or goals, subjective methods can be applied. Zardari et al. (2015) suggest, among others, the methods described in Table 4 to conduct an mcda.
In general, every method has its advantages and disadvantages (as summarized in Zardari et al. (2015)) and can be applied to quantify individual weights. Due to its properties enabling its use for silent negotiation, its easy application in a team, and its focus on unique collective results, we decided on the revised SIMOS method. This method has already been applied in many practical and theoretical energy-related projects. It builds on the collective and realm-specific knowledge of a team to identify a certain ranking among a set of decision variables (here: clustering goals) (Oberschmidt 2010). There are several variations and iterations of the methodology. The original procedure was introduced by Jean Simos in Simos (1990). It was revised in Figueira and Roy (2002) and Pictet and Bollinger (2005), with the latter focusing on practical efficiency and the application with a single or multiple decision makers. Many stakeholders might be involved in real-world clustering tasks (e.g., multiple representatives of a client or members of a team, as in chapter "Application on energy economic use cases"). The method thus aims at a collective elicitation of weights and hence a consensus among the participants. To apply the SIMOS method, the clustering goals must be understandable to all decision makers.
Name Explanation
Direct Rating Every decision variable is assigned with an importance independent of the others (as in Likert scale questionnaires).
Ranking Method Decision variables are ranked relative to one other. These ranks can be used to calculate weights using rank sum, rank reciprocal or rank exponent method.
Point Allocation Decision makers allocate weights directly to decision variables. The result is normalized.
Pairwise Comparison Method Decision variables are compared pairwise and the resulting pairwise weights are documented in a matrix. The resulting matrix is used to calculate the overall weights and a consistency ratio.
Swing Weighting Method All decision variables are set to the worst score. Decision makers can change the score of individual variables by moving them to the best score. The rank of doing so determines the importance (Leijten et al. 2017).
Graphical Weighting Method This graphical method utilizes a horizontal line to place decision variables relative to one other. Their distance determines their assigned weights.
(Revised) SIMOS Weighting Method Decision variables are ranked relative to one other. Variables may share the same rank. The relative ranks can be increased by inserting empty ranks in between. In the last step, decision makers need to decide how many times more important the first variable is compared to the last. This rank is used to assign weights.
Fixed Point Scoring Decision makers need to distribute a finite number of points to weigh decision variables.
Therefore, instead of a mathematical formulation, the impact of a certain decision variable must be formulated in a clear (target group-specific) and interpretable way. Some suggestions can be found in chapter "Application on energy economic use cases". The SIMOS method then provides the necessary set of rules to rank these goals relative to one another. Based on the rank of the goals r and a selected weighting factor f, the exact weighting can finally be calculated for any goal φ i by linear interpolation, following the formula given in Wilkens (2012). This methodology makes it possible to find relatively unbiased weightings φ i (with Σ i φ i = 1) for all defined goals. It also focuses purely on the task and is completely unbiased if applied prior to the clustering process. The generated ranking is applied to the underlying indices I j to create a single composite index I agg for a specific task as a weighted sum of the normalized indices, I agg = Σ j φ j · I j , according to Akhanli and Hennig (2020). It must be stated that some evaluation criteria may correlate heavily. The inclusion of highly correlated evaluation criteria might by itself increase their weight (Akhanli and Hennig 2020). The set of decision rules generated in this way can be used to pre-select algorithms, optimize their respective hyperparameters and compare the results.
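Since the interpolation formula itself is only referenced (Wilkens 2012), the following Python sketch implements one plausible reading that is consistent with the ranks and (pre-normalization) weights later reported in Table 5 (f = 13.2, ranks 13/9/7/5/1 yielding 13.20/9.13/7.10/5.07/1.00): the raw weight grows linearly from 1 at the lowest rank to f at the highest rank, the weights are then normalized to sum to 1, and the composite index is computed as the weighted sum of the normalized indices. The exact formula in Wilkens (2012) may differ.

```python
# SIMOS-style linear interpolation of ranks into weights and weighted aggregation
# into I_agg. The interpolation scheme is an assumption consistent with the values
# reported in Table 5, not necessarily the exact formulation of Wilkens (2012).
def simos_weights(ranks, f):
    """ranks: dict goal -> rank (1 = least important); f: how many times more
    important the highest-ranked goal is than the lowest-ranked one."""
    r_max = max(ranks.values())
    if r_max == 1:
        return {g: 1.0 / len(ranks) for g in ranks}
    raw = {g: 1.0 + (f - 1.0) * (r - 1.0) / (r_max - 1.0) for g, r in ranks.items()}
    total = sum(raw.values())
    return {g: w / total for g, w in raw.items()}  # normalized so the weights sum to 1

def aggregate(normalized_indices, weights):
    """I_agg as the weighted sum of normalized cvi values (each in [0, 1])."""
    return sum(weights[g] * normalized_indices[g] for g in weights)

# Example with ranks from the municipality use case (only goals with reported ranks):
ranks = {"I_cp2cent": 13, "I_p-sep": 9, "I_pearson": 9, "I_avg_wc": 7, "I_pps": 5, "I_entropy": 1}
weights = simos_weights(ranks, f=13.2)
```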
Algorithm pre-selection
In the first step after the determination of the clustering goals and decision rules, suitable algorithms have to be pre-selected. This step depends highly on many individual parameters:
1 length and feature space of the dataset
2 n-dimensional structure of the existing clusters in the dataset
3 characteristics of the algorithms
4 available computational power and time
5 ease of use
6 requirements for the clustering process (see chapter "Clustering goals and decision rules")
For reasons of scope, this topic will not be discussed further. Yet, some clustering algorithms inherently favor certain indices. After the indices have been weighted, the suitable algorithms should be selected accordingly. For example, k-means optimizes towards the best representation by a centroid (I centroid ) by minimizing the within-cluster sum of squares. Further "axioms and theoretical characteristics of clustering methods" can be found in Hennig (2015), chapter 4.3.
Clustering
After the dataset has been prepared, the goals for the clustering have been set, and a range of suitable algorithms has been selected, clustering can be carried out.
Model setup, internal validation & hyperparameter tuning
The models need to be set up and run to carry out the clustering. The results must be evaluated with the indices selected in chapter "Weighting of clustering goals" and normalized (see Hennig (2020)), and the hyperparameters tuned in order to improve the models' results according to the defined goals.
Calibration
The different validation indices may have very small variance and are therefore sometimes hard to compare to those with high variance. Hennig introduces a calibration technique utilizing naïve, random clusterings for a mean/standard-deviation-based standardization (Hennig 2020). This is achieved by a "stupid k-centroids" and a "stupid nearest neighbors" approach. Both make different assumptions about their results and thus help to increase the range of values of an index.
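A hedged sketch of this calibration idea is given below; the exact construction of the "stupid" reference clusterings in Hennig (2020) is not reproduced here, so the random k-centroids assignment and all parameter values are assumptions for illustration.

```python
# Sketch of calibration against naive reference clusterings: random datapoints act
# as centroids (no iteration), the chosen index is evaluated on these reference
# partitions, and a candidate index value is standardized by their mean and
# standard deviation. Details of the original procedure may differ.
import numpy as np

def stupid_kcentroids_labels(X, k, rng):
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def calibrate(index_value, X, k, index_fn, n_runs=50, seed=0):
    rng = np.random.default_rng(seed)
    reference = [index_fn(X, stupid_kcentroids_labels(X, k, rng)) for _ in range(n_runs)]
    return (index_value - np.mean(reference)) / np.std(reference)
```

Here index_fn could be any of the primitive index functions sketched earlier (e.g., the average within-cluster distance).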
Scaling
In order to further simplify the decision process after calibrating the results, we propose an additional simple scaling step. For any cvi, we set the best value to 1 and the worst value to 0. Since the value range of the calibrated indices as proposed in Hennig (2020) is not limited to [0, 1], a composite index based on a weighted aggregation of selected indices could be dominated by single indices, which would distort the original weighting. Hence, to compare selected clusterings, we scale their corresponding calibrated indices between 0 and 1. Assuming (for a specific index) that the mean of the "stupid" clusterings is always lowest, we scale the interval from 0 to the highest index to [0, 1]. Otherwise, the worst index of the selected clusterings is set as the lower limit. However, we do not scale I parsimony or I targetrange since they only depend on the number of clusters and are not calibrated; thus they are between 0 and 1 by definition.
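The scaling step can be expressed in a few lines; the sketch below assumes that the calibrated indices are standardized such that the mean of the "stupid" clusterings corresponds to 0, and it leaves I parsimony and I targetrange untouched as described above.

```python
# Scaling of one calibrated index across all compared clusterings to [0, 1].
import numpy as np

def scale_index(calibrated_values):
    v = np.asarray(calibrated_values, dtype=float)
    lower = min(0.0, v.min())  # 0 corresponds to the mean of the "stupid" clusterings
    upper = v.max()
    if upper == lower:
        return np.ones_like(v)
    return (v - lower) / (upper - lower)
```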
Relative validation, model and result selection
After an individual, task-specific composite index (I agg ) is created and the clustering is carried out with different algorithms, the results are compared utilizing this composite index. The clustering result with the highest value is selected as the best overall result.
Application on energy economic use cases
In the following chapter, the introduced methodology is applied to two use cases in the field of energy economics from different research projects with varying goals. The datasets and tasks include the unsupervised clustering of municipalities and of driving & load profiles of electric vehicles. The following chapters give a brief overview of the tasks, data and results. The focus is put on the methodology introduced in chapter "Methodology". Neither the datasets nor the performed pre-processing are discussed in detail; these details can be found in the respective publications.
Clustering of municipalities
Within the InDEED research project (03E16026A), an optimization and simulation framework for blockchain use cases in the fields of labeling of renewable energies, p2p trading and energy communities will be built. Due to computational limitations and the complexity of the optimization and simulation, the municipal level is to be considered. The goal of the clustering is to identify representative German municipalities that actually exist and represent the other municipalities of the same cluster in the best way. In a later step, the simulated economic potential of the use cases in the representative municipalities will be used to calculate the potential in those municipalities that could not be simulated. In order to do so, a regression model will be applied to inter- and extrapolate the simulated potentials to non-simulated municipalities. The dataset consists of 11,994 municipalities, described with 27 selected features ranging from the number of inhabitants and installed renewable capacities to peak load and geographical size.
Application of the method
The application of the SIMOS method worked smoothly with seven members of the InDEED project team. The participants included experts with technical and economic backgrounds in energy economics, new business models and digitization, who functioned as product owners and were responsible for the evaluation of the simulation results. Additionally, one participant was responsible for the development of the simulation framework utilized on the clustering data. As described in chapter "Clustering goals and decision rules", clustering goals and decision rules were brainstormed in the team as qualified statements. During the brainstorming, the focus was set on understanding the statements and their possible implications. The results were weighted according to chapter "Weighting of clustering goals". The qualified statements were then described mathematically, building on chapter "Relative and internal cluster validation indices". The results can be seen in Table 5.
Clustering goals and decision rules
In addition to the ranks, the weighting factor f was determined as 13.2, resulting in the weights presented in Table 5. Some requirements formulated by the participants are not yet defined in the "Relative and internal cluster validation indices" section. Hence, two qualitative statements with missing indices had to be formulated, see Table 6. This shows that an algorithmic or mathematical definition of new cvi is not only necessary, but also a potential issue: not every qualitative statement can be formulated as such.
Table 5 Clustering goals, explanations, decision rules, ranks and weights for the municipality use case
Goal Explanation Decision rule Rank Weight
Each cluster should be well represented by an existing municipality (original datapoint). This is necessary in order to a) simulate a real municipality and b) let it be as similar to other points in the cluster as possible. Input features are a lower dimensional representation of municipalities. max(I cp2cent) 13 13.20
The number of clusters should be as low as possible. Since the resulting clusters are the basis for a subsequent optimization with high computation time, a lower number is favored. max(I parsimony)
Clusters should be clearly distinguishable. Since one goal is to create "personas" with the clusters in order to improve explainability, clusters need to be distinguishable. max(I p−sep) 9 9.13
Communities within a cluster should be structurally similar. As similarity is defined by Euclidean distance, pairwise distances should correlate with cluster affiliation. max(I pearson) 9 9.13
The number of clusters should be between 5 and 30. The experts in the simulation software estimate an upper limit of 30 possible simulations. In order to make the clustering viable, a minimum of 5 clusters was determined by the participants. max(I targetrange)
Municipalities within a cluster should be similar to one another. This makes sure that not only the representative but also all datapoints in a cluster are comparable. max(I avg_wc) 7 7.10
Clusters should be describable by a low number of features. Next to having unique and distinguishable characteristics, in order to create understandable "personas", the number of characterizing features should be as low as possible. max(I pps) 5 5.07
Clusters should be relatively even in size. A clustering with 90% of the datapoints in one cluster is not desirable. Hence the participants agreed on this parameter. max(I entropy) 1 1.00
Table 6 New indices for municipality clustering
Name Abbreviation Usage
The number of clusters should be between 5 and 30.
max(I targetrange )
Similarly to parsimony, the target range index assesses the number of resulting clusters k. If k is within this target range, the index is 1, if it is lower than the lower limit k min , it increases linearly from 0 at k = 0 to 1 at k min . For values larger than the upper limit k max , the value decreases analogously, reaching zero at k = k min + k max .
Clusters should be describable by a low number of features.
max(I pps )
This parameter builds on the predictive power score (PPS) (Sharma 2020). The PPS uses machine learning to find (pairwise) linear and non-linear relations between two feature vectors. The proposed index calculates the PPS between every feature vector and the clustering results. A threshold to imply a "good" correlation between features and results is set. The mean number of features describing the resulting cluster result well is used to derive a cvi according to the Parsimony (IP) with K max as the dimensionality of the features.
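The definition of I targetrange translates directly into code; the following sketch is a transcription of the description above with the limits of this use case as defaults (the function name is illustrative, and I pps is not sketched since it depends on an external predictive power score implementation).

```python
# I_targetrange as described in Table 6: 1 inside [k_min, k_max], rising linearly
# from 0 at k = 0 to 1 at k_min, and falling linearly to 0 at k = k_min + k_max.
def i_targetrange(k, k_min=5, k_max=30):
    if k_min <= k <= k_max:
        return 1.0
    if k < k_min:
        return max(0.0, k / k_min)
    return max(0.0, (k_min + k_max - k) / k_min)
```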
Figure 3 shows the comparison of five clusterings with different algorithms and hyperparameters. With the chosen and weighted indices, the two clusterings with k-means (A & B) best suit the needs of the use case. While both results have high values in terms of I cp2cent , the other algorithms perform relatively poorly in comparison. This is to be expected because k-means optimizes towards a minimum distance of cluster points to their respective cluster centroid. If the centroid has a neighboring point of the same cluster very close by, the results of I cp2cent are hence almost identical to I centroid . The highly ranked I p−sep performs best in clusterings A & B and very poorly in E. I parsimony , a measure to express the preference for a lower number of clusters, is rather low overall due to the numbers of clusters ranging from 13 to 19. The newly introduced I pps performs well in E, yet is still high in A & B. I targetrange is 1 for all clusterings since only results within that range were used for the comparison. Based on these clustering results, A is determined as the best overall result (out of the compared clusterings) for the needs of the project team, with an I agg (weighted average) of 0.514. This shows that not all clustering goals are met perfectly. Hence, further clusterings will be conducted in the future to improve the results towards I agg = 1. A specific publication introducing and validating the results is currently in progress.
Clustering of driving & load profiles of electric vehicles
The BDL project focuses on the development of and research on bidirectional electric vehicles. One goal is to conduct a systemic evaluation of the impact of bidirectional electric vehicles in Germany. The optimization framework for this task is specified in Böing et al. (2018). In order to reduce complexity, the given driving & load profiles should be clustered into about 20-25 clusters. A preliminary analysis by the project team shows an anticipated optimum of model runtime and variance of load profiles in this range (i.e., the measured runtime of the model decreases by a factor of 3.2 if 25 instead of 1,000 load profiles are used). The dataset contains 9,997 load profiles represented by 337 features.
Application of the method
The method was applied by a team including six experts, four from the BDL project (01MV18004F) and two external clustering experts. The procedure was equivalent to chapter "Clustering of municipalities". The results can be seen in Table 7.
Clustering goals and decision rules
In addition to the ranks, the weighting factor f was determined as 5.25, resulting in the presented weights. The goal of this clustering was relatively comparable to that in chapter "Clustering of municipalities", albeit with a different simulation framework. The results as depicted in Fig. 4 show a big difference in terms of the cluster goals. While A and B show good results with I cp2cent (for an explanation, see chapter "Clustering of municipalities") and I pps , their I entropy is relatively low compared to C. C has the overall lowest I pps (0). A high I parsimony could not be reached in any of the clusterings, as it decreases with a higher number of clusters. All in all, this shows a tradeoff between all cluster results and the importance of the weighting process. For this use case, the k-means clustering A with 21 clusters reaches the highest I agg of 0.81. Again, further clusterings will be carried out in order to improve the results.
Discussion
The proposed methodology is aimed at improving individual clustering results. Building on the previous works about cvi, it adds a practical workflow as well as an mcda methodology to decide on individual weights, and it suggests new indices. This helps professionals in the field of data science and experts from different areas to identify the individually "best" clustering goals and to benchmark different algorithms. The examples in chapter "Application on energy economic use cases" show promising results in the field of energy economics. The chosen cvi as well as their weights and the resulting I agg differ, even though the overall goals are relatively similar. This supports the need for the introduced methodology. However, the two examples also show that the goals set by the project teams could not be fully met by the clusterings. Even though this method helps to identify the individually "best" result, it does not optimize towards it. The flaws of the methodology are outlined in the following:
• Result generation: the methodology is capable of comparing different clustering results with a single, individual composite index (I agg ). Generating the results is still a challenging task and is of an exploratory nature.
• Scalability: every exploratory approach comes with scaling issues. The bigger the dataset compared to the available computational power, the longer the clustering itself and the calculation of the validation indices take.
• Optimization towards indices: with defined indices, it should be possible to mathematically optimize towards a real "best" result. In the cases presented, the clustering was conducted manually. This process should be addressed in future works.
• Bias towards higher numbers of clusters: many indices improve with an increasing number of clusters. While the tendency of a clustering towards a lower number of clusters is expressed via the parameter "parsimony", it might still be weighted low or excluded by certain users.
• Correlation of indices: the resulting indices might correlate and hence be overrepresented even after the weighting. This should be addressed in future works.
• Missing indices: the two examples showed that some indices had to be defined (I cp2cent , I pps , I targetrange ) after the mcda method. Depending on the complexity of the missing indices, their mathematical formulation might be time consuming and prone to error if defined incorrectly.
• Further validation: the methodology was conducted with two energy economic examples in different project teams. This showed that the application of mcda methods is possible and helps in tailoring an individual composite index. It also shows that comparing results can be simplified with an individual composite index I agg . To prove the viability of the resulting composite indices, extended research in different fields of application has to be conducted. Further cases (e.g., deriving personas for marketing of utilities) will be applied in the future to show the universal usability. Further clusterings in the presented cases will be executed to improve the results.
• Detailed result analysis: due to scope and length restrictions, a detailed introduction, visualization and validation of the clustering results could not be provided in this paper. This will be addressed in further publications.
Summary and outlook
With ongoing digitization in many sectors, the importance of practical data analysis, exploration and usage is increasing significantly. A part of this process is the clustering of data for different practical reasons. These include the reduction and simplification of information complexity, pattern recognition, knowledge expansion, an increased understanding of the data and the detection of outliers. A growing field of use is energy system analysis, where clustering is applied to reduce input complexity (see the examples in chapters "Clustering of municipalities" and "Clustering of driving & load profiles of electric vehicles"). The literature review shows a wide variety of available clustering algorithms. However, it also identified a gap in their neutral comparison tailored to the individual requirements of practitioners. Most realm-specific papers provide little to no explanation of their choice of cvi or clustering algorithm(s). The existing literature presents either generalized composite indices or a rather mathematical formulation of individual cvi, as in the works of Hennig et al. (2020). While the former are relatively generalized and might not suit individual needs, the latter propose a viable methodology but lack a "bridge" to practical application. This paper focused on summarizing the necessary theoretical background as well as the status quo of the scientific discussion. A methodology was developed and proposed to help practitioners tailor an individual composite index in order to find the best clustering result, according to their individual goals, from a set of clustering results. This offers an alternative that defines and achieves individual cluster objectives better than the (often arbitrarily) selected composite indices used in many cluster-related scientific studies. It creates a practical workflow for energy-related projects, adds an mcda method to weight indices and adds further cvi to the method introduced by Hennig (2020). Two examples with different energy-economic goals show that the method works with practitioners. The practical application in mcda workshops showed that some cvi were missing. In such cases, these indices need to be defined and mathematically formulated. The already existing composite indices introduced in chapter "Literature review" may contain useful individual cvi once decomposed into their components. I cp2cent was introduced in this paper due to practical needs, and its viability was shown in cases with high distances between centroids and datapoints from the original dataset. However, this also shows that the indices can correlate, which in turn can mean overrepresentation in the individual composite index. I pps was introduced in order to evaluate whether results are describable by a low number of features using non-linear correlations (Sharma 2020). I targetrange was introduced to prefer not only lower numbers of clusters (as in I parsimony ) but also numbers of clusters within a defined target range. The methodology proved viable for comparing different clusterings of multiple algorithms with respect to individual goals. Whether the clustering goals can actually be reached with the provided datasets and the specified I agg cannot be ensured by the methodology. Whether an optimization towards I agg is possible should be part of further research. The clusterings introduced in chapter "Application on energy economic use cases" will be used in further research, and the respective papers concerning the results will be published. Further clusterings will be conducted to improve the results.
Its application in other projects with different clients will further prove its practicality in the future. All in all, the methodology can help data scientists and engineers to find an optimal clustering result together with clients or domain experts who have little or no prior knowledge of clustering.
|
v3-fos-license
|
2020-03-03T16:42:01.845Z
|
2020-03-02T00:00:00.000
|
211729526
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41438-020-0265-9.pdf",
"pdf_hash": "8cad0912cdf61fc7ed65dd4ad35e2931496453f2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46050",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "8cad0912cdf61fc7ed65dd4ad35e2931496453f2",
"year": 2020
}
|
pes2o/s2orc
|
Intra- and interspecific diversity analyses in the genus Eremurus in Iran using genotyping-by-sequencing reveal geographic population structure
Eremurus species, better known as ‘Foxtail Lily’ or ‘Desert Candle’, are important worldwide in landscaping and the cut-flower industry. One of the centers of highest diversity of the genus Eremurus is Iran, which has seven species. However, little is known about the genetic diversity within the genus Eremurus. With the advent of genotyping-by-sequencing (GBS), it is possible to develop and employ single nucleotide polymorphism (SNP) markers in a cost-efficient manner in any species, regardless of its ploidy level, genome size or availability of a reference genome. Population structure and phylogeographic analyses of the genus Eremurus in Iran using a minimum of 3002 SNP markers identified either at the genus level or at the species level from GBS data showed longitudinal geographic structuring at the country scale for the genus and for the species E. spectabilis and E. luteus, and at the regional scale for E. olgae. Our analyses furthermore showed a close genetic relatedness between E. olgae and E. stenophyllus to the extent that they should be considered subspecies within an E. olgae/stenophyllus species complex. Their close genetic relatedness may explain why crosses between these two (sub)species have been found in the wild and are exploited extensively as ornamentals. Last, current species identification, while robust, relies on flower morphology. A subset of seven SNPs with species-specific (private) alleles were selected that differentiate the seven Eremurus species. The markers will be especially useful for cultivar protection and in hybrid production, where true hybrids could be identified at the seedling stage.
Introduction
Eremurus, the largest genus in the Asphodelaceae, is comprised of some 45 species of herbaceous perennial plants that are native to central Asia and Caucasia 1 . Eremurus species are important commercially as ornamental plants for landscaping and cut-flower markets 2 . Due to their large and colorful floral spikes, Eremurus species are known in the international horticulture trade as "Foxtail Lily" or "Desert Candle". In addition to their ornamental value, Eremurus species have been used in traditional medicine and are potential sources for anti-inflammatory, antibacterial, and antiprotozoal drugs [3][4][5] . Other Eremurus products, such as bio-oil 6 and adhesives 7 , have industrial applications.
Interspecific breeding of Eremurus species has been conducted for floral color and longevity, resulting in popular hybrids such as Eremurus × isabellinus (E. stenophyllus × E. olgae). A better understanding of the genetic variation within and among Eremurus species would facilitate breeding for ornamental traits and other properties. Naderi Safar and colleagues 8 used genetic variation obtained by amplicon sequencing of the plastid trnL-F and nuclear rDNA ITS regions to conduct a molecular phylogenetic study of three Asphodelaceae genera, including Eremurus. This study showed that Eremurus species grouped into the paraphyletic subgenus Henningia and the monophyletic subgenus Eremurus. However, information on the genetic diversity within Eremurus species is lacking. Recent developments in next generation sequencing technologies have enabled the detection of single nucleotide polymorphism (SNP) markers at the whole genome level in non-model species, including those that lack a sequenced genome, using reduced representation sequencing [9][10][11] . These approaches have not yet been applied to identify SNP markers across species within an angiosperm genus comprised of species with very large genomes (>8 Gb) and no reference genome. Both diploid (E. chinensis) and tetraploid (E. anisopterus) Eremurus species have been identified by karyotype analysis with 2n chromosome counts of 14 and 28, respectively 12,13 . Flow cytometry of the diploid E. stenophyllus (2n = 2x = 14) determined that it has a large 2C genome size of 16.2 gigabases (1C = 8.1 Gb) and a GC content of 41.3% 14 .
Iran is the third largest diversity center of the genus Eremurus, after the Soviet Union and Afghanistan 15 . There are seven Eremurus species and three subspecies found in Iran, with the greatest species diversity located in the northeastern part of the country. Eremurus stenophyllus (Boiss. & Buhse) Baker subsp. stenophyllus is endemic to Iran and E. kopet-daghensis Karrer is subendemic 15 . Eremurus stenophyllus subsp. stenophyllus and E. spectabilis M. Bieb subsp. subalbiflorus are recognized as endangered and in need of conservation 16 . The other Iranian species/subspecies are E. spectabilis subsp. spectabilis, E. persicus (Jaub. & Spach) Boiss., E. olgae Regel, E. luteus Baker, and E. inderiensis (M. Bieb.) Regel. Hybrids between E. olgae and E. stenophyllus subsp. stenophyllus have been observed in the wild and are identified as E. x albocitrinus Baker. Eremurus species are generally insect-pollinated, although self-fertilization is possible and wind dispersal of pollen has been observed in desert habitats where pollinator activity is unreliable 17,18 .
In this study, we investigated the interspecific and intraspecific diversity in Eremurus spp. germplasm from Iran using SNP markers identified through genotyping-by-sequencing (GBS) 10 to determine phylogenetic relationships and investigate correlations between the genetic diversity, morphological diversity and geographic origin. In addition to the biological significance of our research, this is the first report of the use of GBS on species of the Asphodelaceae, none of which have been sequenced to date, the first use of GBS on an angiosperm species with a genome size (1C) larger than 8 Gb and no reference genome, and one of the few applications of GBS to plants of ornamental interest. Furthermore, we demonstrate the use of GBS to study intraspecific variation as well as interspecific variation in Eremurus spp. using different SNP-calling protocols on the same dataset.
Genetic analyses across species within the genus Eremurus
SNP markers identified by GBS across Eremurus species
To analyze diversity in Eremurus at the genus level, a reference was assembled from GBS reads ('GBS reference') across 96 accessions belonging to seven Eremurus species collected across Iran (Supplementary Table S1; Supplementary Fig. S1). Because the reference building was carried out across species, we required each reference tag to be present in only two accessions in order to be included in the reference. The threshold we typically use for within species reference building is presence in at least 50% of the samples. The assembled GBS reference consisted of 201,099 tags. We obtained a total of 12,535 SNP markers across the 96 samples after alignment of the reads from each accession to the GBS reference and SNP calling, removal of adjacent SNPs (multiple side-by-side SNPs are sometimes caused by misalignment of reads) and filtering for biallelic SNPs, SNPs with a quality depth (QD) ≥ 10, and SNPs with <50% of missing data. Six accessions were removed from the analysis because they had <600,000 reads. An additional two samples with >1 M reads had >75% missing data and were also removed. The average number of reads for the remaining 88 accessions was 1.67 million (M), with minimum and maximum read numbers of 0.72 M and 11.43 M, respectively. We then decreased the missing data threshold for SNPs from 50 to 30%, and removed SNPs with a minor allele frequency ≤5%. The final number of SNPs used for the diversity analyses across the seven Eremurus species was 3002. A SNP resampling analysis showed that a subset of 1000 randomly selected SNPs had the same power to distinguish all multilocus genotypes as the full set of 3002 SNPs, indicating that our SNP set was adequate to determine the diversity between Eremurus species ( Supplementary Fig. S2). The SNP markers, and the sequence of the corresponding GBS reads, are given in Supplementary Table S2. The genotypic scores for the 3002 SNP markers in each of the 88 accessions are given in Supplementary Table S3.
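The filtering cascade described above (biallelic SNPs, QD ≥ 10, missing-data and minor-allele-frequency cut-offs) can be expressed as a simple per-SNP filter. The sketch below is illustrative only: the record layout and field names are assumptions, not the actual GATK/UGbS-Flex output used in the study.

```python
# Sketch of the SNP filtering cascade described in the text (biallelic,
# QD >= 10, missing-data and minor-allele-frequency thresholds).
# Field names and the input structure are illustrative, not the real pipeline.

def passes_filters(snp, max_missing=0.30, min_maf=0.05, min_qd=10):
    """snp is a dict with 'alleles', 'qd', and per-accession 'calls'
    (diploid genotype strings such as '0/1', '.' for missing)."""
    if len(snp["alleles"]) != 2:          # keep biallelic SNPs only
        return False
    if snp["qd"] < min_qd:                # quality-by-depth threshold
        return False
    calls = snp["calls"]
    missing = sum(1 for c in calls if c == ".") / len(calls)
    if missing > max_missing:             # too much missing data
        return False
    # minor allele frequency over non-missing calls
    alleles = [a for c in calls if c != "." for a in c.split("/")]
    freq_alt = alleles.count("1") / len(alleles)
    maf = min(freq_alt, 1 - freq_alt)
    return maf > min_maf

snps = [{"alleles": ["A", "G"], "qd": 25.0,
         "calls": ["0/0", "0/1", "1/1", ".", "0/0", "0/1"]}]
print([passes_filters(s) for s in snps])   # [True]
```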
Phylogeographic analyses within the genus Eremurus
Fig. 1 Genetic, morphological and geographic stratification of accessions belonging to seven species within genus Eremurus. a Genetic population groups as determined by STRUCTURE for K = 5. The subpopulations identified were olgae/stenophyllus, inderiensis, luteus, persicus, and spectabilis. Colored vertical bars indicate the percentage membership to different subpopulations. Genotypes indicated with * had not been morphologically classified at the species level. 'E. k.' indicates E. kopet-daghensis. b NJ tree based on 3002 'species SNPs' showing the relationships between seven Eremurus species. Bootstrap values for branches are indicated when higher than 70%. c Map showing the geographical distribution of the genetic groups of the Eremurus populations sampled across Iran. d Principal coordinates analysis (PCoA) using 16 morphological traits. Accessions are color-coded by species.
Neighbor Joining (NJ) and Unweighted Pair Group Method with Arithmetic mean (UPGMA) analyses, carried out with the set of 3002 'species SNPs', separated the seven species into very strongly supported clades (bootstrap values ≥98%) (Fig. 1b and Supplementary Fig. S5). The seven species' clades were organized into three superclades. Interrelationships between the superclades were unresolved in the NJ tree (Fig. 1b), but the UPGMA tree topology suggested that superclades 1 and 2 were most closely related (Supplementary Fig. S5). Superclade 1 comprised E. olgae, E. stenophyllus, E. luteus, and E. kopet-daghensis. E. olgae was sister to E. stenophyllus, and E. luteus was sister to E. kopet-daghensis. Superclade 2 comprised two sister clades corresponding to E. inderiensis and E. spectabilis, and E. persicus formed superclade 3. The pairwise Nei's genetic distances between the seven Eremurus species revealed E. olgae and E. stenophyllus as the most closely related species (Nei = 0.059; Fst = 0.217; Supplementary Table S4). With the exception of a few branches, relationships between accessions within species had low bootstrap values (Fig. 1b).
The genus Eremurus in Iran was geographically structured according to an East-West transect. Overall, a Mantel test revealed a significant correlation between genetic and geographic distances (Rxy = 0.439; P = 0.001). Within subgenus Eremurus (Supplementary Table S1), E. spectabilis was the dominant species in the western part of Iran, while E. inderiensis was only present in the eastern part of Iran (Fig. 1c). All species sampled within subgenus Henningia originated from the eastern part of Iran except E. persicus, which was only present in the west and center of Iran.
Private alleles that can be used for species identification
Overall, high genetic differentiation (Fst = 0.832) and a low level of gene flow (N m = 0.579) were observed between the seven Eremurus species leading to the identification of a total of 864 private alleles (alleles that are unique to a single species and present in that species at a frequency of 100%) and 717 diagnostic alleles (alleles that are unique to a single species but present in that species at a frequency <100%) ( Table S4). No private alleles were identified for E. olgae or E. stenophyllus, but both species did carry diagnostic alleles. The highest frequency of any diagnostic allele in E. stenophyllus was 88.5% while in E. olgae, the highest frequency was 61.8%. A total of 82 and 410 alleles were private and diagnostic, respectively, for the E. olgae/E. stenophyllus complex. The number of private and diagnostic alleles per subpopulation is given in Table 1. SNPs with private alleles are indicated in Supplementary Table S2.
Morphological analyses across species within the genus Eremurus
Tepal color was the most polymorphic trait evaluated across the Eremurus species with seven characters recorded and a Shannon diversity index H′ of 1.568. Rhizome diameter was the least powerful trait to differentiate the accessions morphologically with two characters recorded and an H′ of 0.251 (Supplementary Table S5). Four morphological traits, tepal color, tepal nerve, tepal tip, and flower shape, were singly able to distinguish the two subgenera, Eremurus and Henningia, as defined by Wendelbo 15 (Supplementary Table S6). In addition, tepal color, tepal tip and flower shape used in combination were able to differentiate the seven species. Overall, species were highly differentiated morphologically (P < 0.001); the morphological variability among species accounted for 70% of the total variability while the variability within species accounted for only 30% (Supplementary Table S7). The two most morphologically diverse species were E. stenophyllus and E. spectabilis with 10 and 19 morphotypes, and Shannon diversity indexes of 0.316 and 0.226, respectively ( Table 2). E. kopet-daghensis was the least diverse with two morphotypes and a Shannon diversity index of 0.040.
Overall, 55 morphotypes (matrices consisting of all 16 morphological characters) were recorded and all of them were specific to one of the studied species (Supplementary Table S8). Specific morphotypes were also recorded for each clade of E. spectabilis, E. olgae, E. stenophyllus, and E. luteus except E. luteus Clade II, where none of the five morphotypes observed were unique to Clade II (Supplementary Table S9). One trait out of the 16 evaluated (tepal color) was able to differentiate E. stenophyllus Clade I (yellow tepal color) from the rest of the E. stenophyllus accessions.
When accessions were color-coded according to their genetic affiliation, the PCoA based on the 16 morphological traits showed a similar clustering of accessions to that obtained using the 3002 SNPs (Fig. 1d). The first coordinate of the PCoA explained 31.9% of the genetic variability and separated subgenus Eremurus from subgenus Henningia. Three traits, tepal color, tepal nerve, and tepal tip, were the main contributors (56%) to the variation explained by axis 1 (Supplementary Table S10). The second coordinate, explaining 23.3% of the genetic variability, distinguished E. inderiensis from E. spectabilis within subgenus Eremurus. The second coordinate also separated the species within subgenus Henningia into three groups represented by E. olgae, E. stenophyllus and the rest (E. luteus, E. persicus, E. kopet-daghensis) (Fig. 1d). The traits that were most highly correlated with axis 2 were stem length and leaf margin indumentum (35% contribution; Supplementary Table S10). The third coordinate explained 9.9% of the variation and distinguished E. persicus from E. luteus and E. kopetdaghensis. In addition, significant correlations between morphological and genetic distances across species were revealed (Rxy = 0.636, P = 0.010). To resolve intraspecies relationships, GBS data for the three largest subpopulation groups identified by STRUCTURE at K = 5 and by phylogenetic analyses (E. olgae/stenophyllus, E. spectabilis, and E. luteus) were reanalyzed within each subpopulation to identify biallelic SNPs with a QD value ≥10, an allele frequency ≥10% and ≤15% missing data. Adjacent SNPs were also removed. A total of 22,934 reference tags were obtained for subpopulation 'spectabilis', 27,258 for subpopulation 'olgae/ stenophyllus' and 24,735 for subpopulation 'luteus'. The highest percentage of unique reference tags was found in subpopulation 'spectabilis' (65%), and the highest percentage of shared tags (23%) was observed between subpopulations 'olgae/stenophyllus' and 'luteus'. This concurs with E. olgae, E. stenophyllus, and E. luteus belonging to subgenus Henningia, and E. spectabilis belonging to subgenus Eremurus. Some 21% of the tags were shared between all three subpopulations (Supplementary Table S11). Using the generated GBS references in each subpopulation, we obtained 4175 SNPs for subpopulation 'spectabilis', 5281 SNPs for subpopulation 'olgae/stenophyllus' and 6131 SNPs for subpopulation 'luteus'. The majority (90.3%) of the SNP-carrying GBS reference tags were specific to a single subpopulation, 9.0% were common to two subpopulations, and 0.7% were shared between three subpopulations. The lower number of shared tags when considering only the SNP-carrying tags used in the analyses compared with all reference tags can be explained by the fact that common reference tags are not necessarily polymorphic in all subpopulations. The SNP markers, their location and the sequence of the corresponding GBS reads for subpopulations 'spectabilis', 'olgae/stenophyllus', and 'luteus' are given in Supplementary Tables S12, S13, and S14, respectively. The genotypic scores for each SNP marker for accessions within a subpopulation are given in Supplementary Tables S15, S16, and S17. SNPs identified within subpopulations ('subpopulation SNPs') showed higher diversity indexes compared with SNPs identified across all species ('species SNPs') for each of the largest population groups investigated. 
Shannon's information index I was 4.8-fold higher on average using 'subpopulation SNPs' (Table 3) than using 'species SNPs' (Table 1). Based on the 'subpopulation SNPs', E. luteus had the highest diversity among the four species analyzed (E. luteus, E. olgae, E. stenophyllus, and E. spectabilis) with a Shannon's information index I of 0.632 (Table 3). It should be noted, however, that the 'subpopulation SNPs' were different for each species, except for E. stenophyllus and E. olgae where building of the GBS reference and SNP calling was done within the subpopulation stenophyllus/ olgae. When using 'species SNPs', E. luteus presented the lowest genetic diversity (I = 0.088) of the four species (Table 1).
Phylogeographic analyses within STRUCTURE subpopulations
Using both larger SNP numbers and less conserved SNPs allowed most inter-accession relationships to be resolved with bootstrap values ≥70% (Fig. 2). Within subpopulation 'spectabilis' (Fig. 2a), four clades largely grouped accessions by geographic location. Clades I and IV were collected in the western part of Iran, while Clade II and Clade III were found in the center and eastern part of Iran, respectively (Fig. 1c).
Fig. 2 Phylogenetic trees within subpopulations. Trees within a E. spectabilis, b the E. olgae/E. stenophyllus species complex, and c E. luteus were generated using, respectively, 4175, 5281, and 6131 'subpopulation SNPs'. Bootstrap values for branches are indicated when higher than 70%. For accessions belonging to E. stenophyllus, flower color is given in parenthesis after the accession name.
As indicated by the phylogenetic trees obtained with both the 'species SNPs' and the 'subpopulation SNPs', subpopulation 'olgae/stenophyllus' consisted of two sister clades, one comprising E. olgae accessions and the other comprising E. stenophyllus accessions (Figs. 1b and 2b). Both species were collected from the eastern part of Iran (Fig. 1c), and no geographic patterning at the regional level was observed that separated the two species. Discrepant placement in the two analyses was found for E. stenophyllus accession E_S_KN3, which was phylogenetically more closely related to E. olgae than to E. stenophyllus in the phylogeny using 'subpopulation SNPs', but grouped with E. stenophyllus in the across-species phylogeny and the STRUCTURE analysis at K = 6. An analysis of Nei's genetic distance and genetic differentiation at the species level using 'species SNPs' showed that these two diversity indices were at least 3.6-fold (Nei's distance) and 2.4-fold (Fst) lower between E. stenophyllus and E. olgae than between any other two species (Supplementary Table S4). Eremurus stenophyllus was the only species that had flower color variants. In addition to the typical yellow color, some accessions had orange or white flowers. All E. stenophyllus accessions with yellow-colored flowers grouped into a single clade I (Fig. 2b; Supplementary Table S9), but were not geographically isolated from the rest of the E. stenophyllus accessions. Within E. olgae, three clades were identified. While all E. olgae accessions were found in eastern Iran, some geographic patterning was found at the regional scale. Clade I was present at more western longitudes, while Clades II and III were prevalent at more eastern longitudes.
Three clades with unresolved relationships were identified in E. luteus (Fig. 2c). Clade I comprised five accessions collected at the same location (N36.3-E59.4) in eastern Iran. Clade II, also sampled in eastern Iran, consisted of two sister subclades, one comprising four accessions collected at N35.7-E61.1 and the other consisting of two accessions collected at N32.9-E59.2. Clade III comprised two accessions collected in the center of Iran (at N33.4-E53.9).
Genotyping-by-sequencing for phylogenetic analysis across species within a genus
Genotyping-by-sequencing has been used in a number of species without a reference genome to identify SNP markers for genetic mapping or diversity analyses, e.g., refs. 11,21-23 . Here, we demonstrate that the use of the methylation-sensitive restriction enzyme PstI in combination with MspI is effective even in species with a very large genome such as foxtail lily (1C = 8.1 Gb). Furthermore, using GBS references generated either across species or within species, the same GBS reads can be used to provide markers suitable for cross-species and intraspecific applications, respectively. To generate a reduced representation genome reference from GBS reads using the UGbS-Flex pipeline, we first clustered reads within accessions, extracted consensus sequences from each cluster, and then clustered the consensus sequences across accessions 11 . While we typically require a consensus sequence to be present in at least 50% of the accessions in order to be included in the reference, we did not apply this criterion for the generation of the cross-species reference. The main reason for not preselecting reference tags based on their prevalence in the set of samples was that the 'ustacks' program 24 only groups sequences that fully overlap and we were concerned that the 50% threshold was too stringent, particularly because we did not know the level of divergence between the seven Eremurus species. We then used BLAST all-vs.-all to identify tags that had ≥98% homology and discarded all but one of the closely related sequences. This reference consisted of 201,099 sequences. Because we discarded SNPs with >30% of missing data, the 3002 SNPs used in the analysis were derived from highly conserved regions in the genome and were polymorphic at the species level rather than between accessions within a species. These 3002 SNPs clustered accessions by species with bootstrap values of ≥98% in NJ and UPGMA trees.
As expected, however, little bootstrap support was obtained for the majority of relationships between accessions within a species. To increase the resolution at the accession level, we extracted the raw reads for the three largest subpopulations obtained with STRUCTURE, which essentially corresponded to the species E. stenophyllus/E. olgae, E. luteus, and E. spectabilis. Generation of a GBS reference and SNP calling was then carried out within each subpopulation group. For this analysis, only sequences that were present in at least 50% of the accessions within a subpopulation were included in the reference, leading to smaller reference sets. Because of limitations on the number of SNPs that could be used within the phylogenetic program 'DARwin', we removed SNPs with an allele frequency <10% and >15% missing data, retaining 5281 SNPs for subpopulation 'olgae/stenophyllus', 6131 SNPs for subpopulation 'luteus' and 4175 SNPs for subpopulation 'spectabilis'.
Genetic relationships between and within Eremurus species
We used STRUCTURE 20 , which applies a Bayesian iterative algorithm, to determine the most likely number of genetic groups and the membership of each Eremurus accession to these groups. We obtained five clusters (Fig. 1a), with no or very few admixed (≤90% membership to a single subpopulation) accessions within each cluster. E_K_49, E_K_52, and E_K_54, the only three E. kopet-daghensis accessions in our study, had ≥50% (but ≤90%) membership to subpopulation 'luteus' and minority membership (>10% and <50%) to the 'olgae/stenophyllus' subpopulation. E_S_62, an accession identified based on morphological characters as E. spectabilis, had majority membership to E. spectabilis and minority membership to subpopulation 'olgae/stenophyllus'. E. olgae and E. stenophyllus accessions fell within a single subpopulation. NJ and UPGMA analyses resolved the seven species into seven strongly supported clades. In agreement with the STRUCTURE results, the three E. kopet-daghensis accessions were sister to the E. luteus clade, and both clades were sister to the E. olgae and E. stenophyllus clades (Fig. 1b).
In 1876, Baker divided the genus Eremurus into three subgenera, Eremurus verus, Ammolirion, and Henningia 25 . Wendelbo 15 recognized only two subgenera, Eremurus, which comprised sections Eremurus and Ammolirion, and Henningia, which comprised section Henningia. The seven Eremurus species found in Iran are distributed across the three subgenera/sections. E. spectabilis was classified as belonging to subgenus Eremurus section Eremurus, E. inderiensis as belonging to subgenus Eremurus section Ammolirion, and E. luteus, E. olgae, E. persicus, E. stenophyllus, and E. kopet-daghensis as belonging to subgenus Henningia section Henningia. Naderi Safar and colleagues 8 subsequently showed using plastid trnL-F and ribosomal internal transcribed spacer (ITS) sequences that subgenus Henningia was paraphyletic, with E. persicus placed separately from the remainder of species belonging to this subgenus. Our results largely agree with Naderi Safar et al. 8 with E. luteus, E. kopet-daghensis, E. stenophyllus, and E. olgae being located in one superclade (Superclade 1 in Supplementary Fig. S5), while E. persicus formed a separate superclade (Superclade 3 in Supplementary Fig. S5). Furthermore, pairwise Nei's genetic distances showed that E. persicus was the most diverged of all Eremurus species analyzed (Supplementary Table S4), further supporting that E. persicus should be placed in a separate subgenus.
Our data also bring into question some of the current species delineations. Nei's genetic difference and the genetic differentiation between any two species is, on average, 0.441 and 0.668, respectively. In contrast, these values are 0.059 and 0.217 when comparing E. olgae and E. stenophyllus. Furthermore, E. olgae and E. stenophyllus are inter-fertile, not geographically differentiated, and grouped in the same genetic subpopulation in a STRUCTURE analysis. We therefore recommend the use of 'olgae' and 'stenophyllus' at the subspecies level within the species complex E. olgae/stenophyllus.
Phylogeography of Eremurus species in Iran
As expected, 'subpopulation SNPs' showed higher genetic diversity than 'species SNPs' suggesting that although 'species SNPs' are more efficient for species differentiation, 'subpopulation SNPs' are more accurate for diversity evaluation within species. The SNP data obtained from both the across-species and intraspecies analyses of the GBS reads demonstrated that accessions typically group by geographic location. Geographic distances and genetic distances calculated using 'species SNPs' were significantly correlated (Rxy = 0.439, P = 0.001) and the population of Eremurus accessions was geographically structured along a longitudinal axis. When Mantel tests were performed within species, the results revealed significant correlations between geographic and genetic distances (based on 'subpopulation SNPs') only for E. persicus (Rxy = 0.574, P = 0.020) and E. spectabilis (Rxy = 0.135, P = 0.05). Interestingly, E. stenophyllus, which typically has yellow flowers, was found in three color variants in the same geographic region (N36.72-E58.53). Yellow-colored accessions formed a single cluster, but the white and orange accessions did not cluster at the genetic level by flower color. This may not be too surprising considering that the color variants grow in sympatry and that foxtail lily is outcrossing.
Levels of heterozygosity
When analyzing SNP variants across species (using 'species SNPs'), the number of heterozygous loci identified within each accession was, on average, 3%. SNPs were called only for loci that had a sequencing depth of at least eight reads, which was sufficient to reliably identify heterozygous SNPs 11 . Hence, the paucity of heterozygous loci in foxtail lily, an outcrossing species, is not caused by a lack of read depth. Most likely, the SNP loci used for the across-species analyses were highly conserved and, consequently, alleles were fixed within a species. This is supported by the high correlation (r 2 = 0.96, P < 0.001) between the overall diversity within a species and the percentage of heterozygous loci ( Table 1). The only exception to the low occurrence of heterozygous SNPs was accession E_S_62, which had 31.1% heterozygotes and, based on STRUCTURE, NJ and PCoA analyses, was an interspecific hybrid between E. spectabilis and E. olgae/ stenophyllus. However, no hybrids between species belonging to subgenus Eremurus and Henningia have been reported to date. Furthermore, E_S_62 had been identified morphologically as E. spectabilis and had the same morphotype as another E. spectabilis accession, E_S_64. Therefore, we deduce that the 'hybrid' status and high level of heterozygosity of E_S_62 were most likely caused by sample contamination.
Morphological characterization
Of the 16 evaluated traits, tepal color was the most variable trait and was informative for subgenus differentiation according to Wendelbo 15 . Eremurus persicus, which should be placed in its own subgenus based on the genetic data, could be distinguished from other species in Wendelbo's subgenus Henningia by a hairy leaf surface. Although only four species of the genus Eremurus (E. spectabilis, E. inderiensis, E. kopet-daghensis, and E. luteus) displayed private morphological characters, a set of three morphological traits (flower shape, tepal color, and tepal tip) was sufficient to differentiate the seven species, highlighting the importance of morphological characterization. Species clustering based on morphological data in the PCoA was driven largely by the few flower characteristics that are key identifiers for Eremurus species. Most vegetative characters contributed little to the PCoA axes. Consequently, species classification based on morphology could only be done unambiguously at the flowering stage. When only the eight vegetative traits measured in our study were considered, some species' morphotypes overlapped. Similar morphotypes were seen not only within subgenera, but also across subgenera, indicating that the eight vegetative traits are insufficient to differentiate accessions at the subgenus level. In contrast, all species could be identified at any stage during their life cycle using a panel of seven SNPs. Any SNP with species-specific (or private) alleles (indicated in Supplementary Table S2) could be used singly to unambiguously identify that species or, in the case of E. stenophyllus/E. olgae, the species complex. No private alleles were identified that uniquely identified E. stenophyllus or E. olgae. However, three markers diagnostic for E. stenophyllus (M0087, M0367, and M0368) each could distinguish all 17 E. stenophyllus accessions analyzed from the 13 E. olgae accessions. With the exception of three E. stenophyllus accessions that were heterozygous, all E. stenophyllus accessions were homozygous for the alternate allele, while E. olgae accessions were homozygous for the reference allele. Although morphological and genetic distances were highly correlated, the genetic markers presented in this work definitely represent the most accurate and rapid method to resolve species and subspecies classifications of accessions within the genus Eremurus, in particular during the vegetative growth stage. For example, E_GH7 and E_KERMANSHAH_39, two accessions that were collected at the vegetative stage and classified only at the genus level, were identified as E. olgae and E. spectabilis, respectively, using SNP markers. Furthermore, the SNP markers with private (species-specific) alleles provide a rapid method for phenotyping of hybrids.
Conclusions
Our study provides the first use of GBS in an angiosperm species with a haploid genome size larger than 8 Gb. Despite the absence of a reference genome, SNPs were successfully identified across species within the genus Eremurus as well as within Eremurus species using GBS reference tags that were assembled across all species ('species SNPs') or within species/subpopulations ('subpopulation SNPs'), respectively. Our data demonstrated longitudinal geographic stratification at the country level for the genus and for the species E. spectabilis and E. luteus and, at the regional scale, for E. olgae. While classification of species based on morphology was robust, the SNPs provided a tool to identify species during the vegetative stage, which should be particularly useful for breeding purposes, including identification of diverse parents for crossing, hybrid identification, and cultivar protection. Furthermore, the SNPs provided important new information regarding the genetic relatedness of species within the genus Eremurus that suggests that reclassification at the subgenus and species level is in order.
Sample collection
Leaves were collected from wild Eremurus populations in Iran during the spring and early summer of 2015 and 2016, and stored at −20°C until further use. A total of 143 genotypes belonging to seven species were collected from nine provinces. One to six individuals were sampled per location. The majority of species were identified in situ based on flower morphology. For each accession, 16 morphological characteristics were measured (inflorescence length, stem length, leaf length, leaf number, stem diameter, rhizome number, rhizome diameter, peduncle length, tepal color, tepal nerve, tepal tip, flower shape, bract margin, fruit shape, margin of leaves indumentum, and surface of leaves indumentum) which, combined, defined an accession's morphotype. The subset of 88 genotypes that was successfully analyzed by GBS, together with their species designation based on morphological characteristics and genetic data, subgenus, geographic origin and morphotype, is presented in Supplementary Table S1. Source locations are shown in Supplementary Fig. S1.
DNA extraction and genotyping
Genomic DNA was isolated from frozen leaf tissue using a CTAB procedure 26 . The DNA quantity and quality were determined by Nanodrop spectrophotometry (Thermo Scientific) and agarose gel electrophoresis. Ninety-six Eremurus spp. samples that had high DNA quality and were representative of the sampled populations were chosen for GBS analysis. GBS was done as described by Qi et al. 11 using the enzyme combination PstI/MspI. Briefly, 250 ng of DNA from each sample was double-digested with PstI and MspI, and ligated to a barcoded adapter at the PstI site and a common Y-adapter at the MspI site. Unligated adaptors were removed with OMEGA Mag-bind RXNPure plus Beads. Samples were PCR-amplified separately and the individual libraries were quantified using SYBR Green. Amplicon size range for 11 samples from the high and low end of the range were verified on a Bioanalyzer (Agilent). An epMotion 5075 pipetting system was used to pool 5 ng of each of the 96 samples. The pooled library sample was quantified by Qubit and a subsample was run on a fragment analyzer. A KAPA Library Quantification Kit was used to determine library concentration prior to sequencing on a NextSeq (150 cycles) SE 150 Mid Output flow cell.
Generation of a GBS reference and SNP calling
Processing of the GBS reads and generation of a GBS reference using the scripts 'ustacks' 24 and 'ASustacks' 11 were essentially done as described in Qi et al. 11 . For interspecific analyses, the GBS reference was generated across all accessions within the genus Eremurus. For intraspecific analyses, the GBS reference was generated across accessions within a species. The parameters used in 'ustacks' and 'ASustacks' were '-m 2, -M 2 and -N 4'. Tags that were present in at least two accessions and at least 50% of the accessions were included in the inter-and intraspecific GBS references, respectively. If two or more tags had ≥98% sequence identity, only a single tag was included in the reference 11 .
Reads from each accession were aligned to the relevant GBS reference(s) with Bowtie 2 27 , and SNP calling was done using Unified Genotyper from the Genome Analysis Toolkit (GATK) 28 using the work flow and scripts described in Qi et al. 11 . SNP filtering included removal of SNPs with three or more alleles, removal of SNPs with allele frequencies <0.1 and >0.9, and removal of adjacent SNPs. SNPs with a read depth of at least 8X were converted to the mapping scores A, B, H, D (A or H), and C (B or H) 11 . These scores were later converted for use in GenAlEx to the format 11 (A), 22 (B), and 12 (H); C and D scores were changed to missing data points (00). Markers with more than 50% of missing data were removed.
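The score conversion described above is essentially a recoding step followed by a sparsity filter. The sketch below is illustrative only; the single-letter score layout is an assumption about how one marker's calls might be held in memory, not the actual pipeline files.

```python
# Sketch of the score conversion described above: ABH-style calls are recoded
# for GenAlEx as 11 (A), 22 (B), 12 (H); ambiguous C/D calls and missing data
# become 00, and markers with more than 50% missing scores are dropped.
# The data layout is illustrative.

GENALEX = {"A": "11", "B": "22", "H": "12", "C": "00", "D": "00", "-": "00"}

def recode_marker(calls, max_missing=0.50):
    """calls: list of single-letter scores for one marker across accessions.
    Returns the GenAlEx-coded list, or None if the marker is too sparse."""
    coded = [GENALEX.get(c, "00") for c in calls]
    missing = coded.count("00") / len(coded)
    return None if missing > max_missing else coded

print(recode_marker(["A", "H", "B", "C", "A", "-"]))
# ['11', '12', '22', '00', '11', '00']  -> kept (2/6 treated as missing)
print(recode_marker(["A", "-", "-", "D", "C", "-"]))
# None -> marker removed (>50% treated as missing)
```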
Identification of GBS reference tags shared between species
Intraspecific GBS references were generated for each of the three largest subpopulation groups as determined by STRUCTURE (see below). To identify GBS reference tags that were shared between the three subpopulations analyzed, the reference tags belonging to each subpopulation were pooled and compared with one another using BLASTN. If two or more tags had ≥95% sequence identity, only a single tag was kept. All tags with <95% sequence identity across the three subpopulations formed the nonredundant tag set. GBS reference tags from each population were then compared with the non-redundant tag set using BLASTN to identify tags that were unique to that population or shared between populations.
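The non-redundant tag set can be thought of as a greedy deduplication at a 95% identity threshold. The study used BLASTN for the pairwise comparisons; the position-wise identity function below is only a simplified stand-in for equal-length toy sequences, included to illustrate the logic rather than reproduce the analysis.

```python
# Greedy sketch of building a non-redundant tag set: each tag is kept only if
# it is less than 95% identical to every tag already kept. The real analysis
# used BLASTN; the simple position-wise identity below is a stand-in for
# equal-length tags and is illustrative only.

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def nonredundant(tags, threshold=0.95):
    kept = []
    for tag in tags:
        if all(identity(tag, k) < threshold for k in kept):
            kept.append(tag)
    return kept

pooled = ["ACGTACGTAC", "ACGTACGTAT", "TTTTGGGGCC"]   # toy 10-bp "tags"
nr = nonredundant(pooled)
print(nr)      # the second tag (90% identical to the first) stays, because it
               # falls below the 95% threshold
# tags from one population found in (shared with) the non-redundant set
print([t for t in ["ACGTACGTAC", "GGGGGGGGGG"] if t in nr])
```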
Population structure analysis
The population structure of the genotyped Eremurus spp. germplasm was determined based on the SNP set identified across all 88 genotyped accessions belonging to seven Iranian Eremurus species. Genetic subpopulations were identified using the Bayesian clustering procedure implemented in STRUCTURE v.2.3.4 20 with ten runs of the admixture model, a burn-in period of 100,000 replications, a run length of 100,000 Markov Chain Monte Carlo (MCMC) iterations and the number of putative subpopulations (K) ranging from one to ten. The optimum value of K was selected based on the Delta K estimate of Evanno et al. 29 using Structure Harvester 19 . Accessions with a membership probability to a single subpopulation larger than 90% were considered genetically pure. Accessions with membership ≤90% to a single subpopulation were considered admixed. A principal coordinates analysis (PCoA) was performed using the same dataset with GenAlEx 6.502 30 .
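The purity rule used above (membership >90% to a single subpopulation = pure, otherwise admixed) can be written as a one-line classification over the STRUCTURE membership matrix. The membership values below are made up for illustration.

```python
# Sketch of the purity rule described above: an accession is called "pure" if
# its largest STRUCTURE membership coefficient exceeds 0.90, otherwise
# "admixed". Membership values are illustrative.

def classify(memberships, pure_threshold=0.90):
    """memberships: membership coefficients of one accession across K groups.
    Returns the label and the index of the majority subpopulation."""
    best = max(memberships)
    label = "pure" if best > pure_threshold else "admixed"
    return label, memberships.index(best)

print(classify([0.96, 0.02, 0.01, 0.01, 0.00]))  # ('pure', 0)
print(classify([0.55, 0.35, 0.05, 0.03, 0.02]))  # ('admixed', 0)
```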
Genetic diversity and phylogenetic analysis based on SNP markers
The number of effective alleles (Ne), number of private SNPs, percentage of polymorphic loci (P), Shannon's information index (I), observed and expected heterozygosity (Ho, He) and fixation index (Fis) were calculated using GenAlEx 6.502 30 and values were compared across the seven species. The overall genetic distance (Fst) and estimated gene flow (N m ) between species were also calculated in GenAlEx 6.502 30 . In addition, the correlation between genetic and geographic distance was analyzed across all accessions as well as within species using a Mantel test implemented in GenAlEx 6.502. Phylogenetic analyses using the Neighbor Joining (NJ) and Unweighted Pair Group Method with Arithmetic mean (UPGMA) methods, and a bootstrap test with 500 replications were performed with DARwin 6.0.14 software 31 to reveal relationships within the genus Eremurus. In addition, the pairwise Nei's genetic distance and Fst genetic differentiation were calculated between the seven species of the Eremurus genus using SNPs identified across species ('species SNPs') and between clades within the largest population groups using SNPs identified within subpopulations ('subpopulation SNPs') in GenAlEx 6.502.
To examine the power of the SNP markers to detect unique multilocus genotypes (MLGs), we generated genotype accumulation curves using the total 'species SNP' dataset (3002 SNPs) and random subsets of 100, 200, 500, and 1000 SNPs using the function 'genotype_curve' implemented in the R3.2.2 32 package 'poppr'. The genotype_curve function randomly samples different subsets of SNPs without replacement and plots the relationship between the number of SNPs scored and the number of MLGs identified.
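The idea behind the genotype accumulation curve is straightforward: repeatedly draw random SNP subsets of increasing size and count how many distinct multilocus genotypes remain distinguishable. The analysis itself was done with poppr in R; the Python sketch below, with toy genotype data, only illustrates the resampling logic.

```python
# Sketch of a genotype accumulation curve: for increasing numbers of randomly
# drawn SNPs, count how many unique multilocus genotypes (MLGs) remain
# distinguishable. Illustrative re-implementation with toy data; the paper
# used poppr's genotype_curve in R.
import random

def unique_mlgs(genotypes, snp_indices):
    """genotypes: list of per-accession score tuples; count distinct profiles
    over the selected SNP columns."""
    return len({tuple(g[i] for i in snp_indices) for g in genotypes})

def accumulation_curve(genotypes, subset_sizes, replicates=10, seed=0):
    rng = random.Random(seed)
    n_snps = len(genotypes[0])
    curve = {}
    for size in subset_sizes:
        counts = []
        for _ in range(replicates):
            idx = rng.sample(range(n_snps), size)   # sample without replacement
            counts.append(unique_mlgs(genotypes, idx))
        curve[size] = sum(counts) / replicates      # mean MLG count per size
    return curve

toy = []
for a in range(8):                       # eight toy accessions, 50 SNPs each
    r = random.Random(a)
    toy.append(tuple(r.choice("AB") for _ in range(50)))
print(accumulation_curve(toy, [2, 5, 10, 25, 50]))
```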
Genetic diversity based on morphological characteristics
The 16 morphological traits scored for species identification were compared for their Shannon-Weiner diversity index (H′) using the following formula: H′ = −∑ p_i ln(p_i), where p_i is the frequency of the ith character. The morphological diversity of each species was estimated by calculating the H′ diversity index, the number of morphotypes (number of different combinations of morphological characters), the number of private morphotypes (number of morphotypes present in a single species only), the number of private characters (characters fixed in one species at a frequency of 100% and absent in all other species) and diagnostic characters (characters present in one species at a frequency below 100% and absent in all other species). The number of total morphotypes and private morphotypes were also calculated by clade for the three largest subpopulation groups as determined by STRUCTURE. In addition, morphological differentiation within the genus Eremurus was investigated based on the 16 traits by a Principal Coordinates Analysis (PCoA) using GenAlEx 6.502 30 . The contribution of each morphological trait to axes 1 and 2 was calculated using the R3.2.2 32 packages 'FactoMineR' and 'Factoextra'. Variability among species and within species for morphological traits was assessed in GenAlEx 6.502 30 . Finally, a Mantel test was performed between the genetic and morphological matrix distances across the genotyped accessions using GenAlEx 6.502.
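The Shannon-Weiner formula above translates directly into a few lines of code. The trait and character counts in this sketch are invented for illustration; the actual H′ values reported in the paper come from the observed morphological data.

```python
# Direct transcription of the Shannon-Weiner index H' = -sum(p_i * ln(p_i))
# used above, computed from the observed character counts of one trait.
# The trait and counts below are made up for illustration.
import math

def shannon_index(counts):
    total = sum(counts)
    freqs = [c / total for c in counts if c > 0]   # frequencies p_i
    return -sum(p * math.log(p) for p in freqs)

# e.g., tepal colour observed in seven character states across accessions
tepal_color_counts = [30, 18, 12, 10, 8, 6, 4]
print(round(shannon_index(tepal_color_counts), 3))
```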
|
v3-fos-license
|
2023-09-24T16:34:13.028Z
|
2023-09-01T00:00:00.000
|
262194486
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2075-4418/13/18/2958/pdf?version=1694767909",
"pdf_hash": "c9336df906cd131a1fca9ef6c85769b9ed1f0415",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46051",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "97656384bb962ab33a4ce4faeff7e70fcab1231d",
"year": 2023
}
|
pes2o/s2orc
|
Multilevel Threshold Segmentation of Skin Lesions in Color Images Using Coronavirus Optimization Algorithm
Skin Cancer (SC) is among the most hazardous cancers due to its high mortality rate. Therefore, early detection of this disease would be very helpful in the treatment process. Multilevel Thresholding (MLT) is widely used for extracting regions of interest from medical images. Therefore, this paper utilizes the recent Coronavirus Disease Optimization Algorithm (COVIDOA) to address the MLT issue of SC images utilizing the hybridization of Otsu, Kapur, and Tsallis as fitness functions. Various SC images are utilized to validate the performance of the proposed algorithm. The proposed algorithm is compared to the following six meta-heuristic algorithms: Arithmetic Optimization Algorithm (AOA), Sine Cosine Algorithm (SCA), Reptile Search Algorithm (RSA), Flower Pollination Algorithm (FPA), Seagull Optimization Algorithm (SOA), and Artificial Gorilla Troops Optimizer (GTO) to prove its superiority. The performance of all algorithms is evaluated using a variety of measures, such as Mean Square Error (MSE), Peak Signal-To-Noise Ratio (PSNR), Feature Similarity Index Metric (FSIM), and Normalized Correlation Coefficient (NCC). The results of the experiments prove that the proposed algorithm surpasses several competing algorithms in terms of MSE, PSNR, FSIM, and NCC segmentation metrics and successfully solves the segmentation issue.
Introduction
Nowadays, SC is a serious illness that may afflict anyone regardless of race, gender, and age. The skin tissues' aberrant growth is usually caused by exposure to Ultraviolet Radiation (UVR) from the Sun or tanning beds. The significance of SC lies in its potential to spread to other parts of the body if not detected and treated early [1]. According to the World Health Organization (WHO), in 2022, UVR caused over 1.5 million cases of SC. In 2020, there were 66,000 deaths from malignant melanoma and other SCs. In the United States, there are an estimated 1.1 million annual cases of SC. Melanoma, basal cell carcinoma, and squamous cell carcinoma are the three most frequent kinds of SC. Melanoma is the deadliest form of cancer [2].
Malignant melanoma can also be less deadly and more treatable if found early. It might be diagnosed in its early stages, preventing the need for an expensive treatment that would cost millions of dollars. However, detecting and accurately segmenting SC lesions pose significant challenges. One major challenge is the similarity in appearance between benign and malignant lesions, which makes it difficult for healthcare professionals to differentiate between them based on visual examination alone. Another challenge is the variability in lesion size, shape, color, and texture among different individuals. This variability makes it challenging to develop a universal algorithm or model for accurate detection and segmentation across diverse populations.
Furthermore, detecting skin cancer requires expertise and experience from dermatologists or trained healthcare professionals. The shortage of dermatologists in many regions can lead to delays in diagnosis and treatment. Researchers are exploring various Computer-Aided Diagnostic (CAD) systems that utilize Artificial Intelligence (AI) techniques, such as machine learning and deep learning algorithms, to address these challenges. These systems aim to improve the accuracy and efficiency of skin cancer detection by analyzing large datasets of images and identifying patterns indicative of malignancy. Additionally, advancements in imaging technologies like dermoscopy have improved visualization capabilities for clinicians. Dermoscopy allows for magnified examination of skin lesions using specialized equipment that enhances surface details and structures not visible to the naked eye [3]. Image segmentation techniques first define the lesion's borders to identify skin cancer. Image segmentation also refers to extracting interesting objects from images and analyzing their behavior to reveal the presence of a problem or sickness [4]. According to the literature, image segmentation techniques include edge detection [5], clustering [6], and thresholding-based segmentation [7].
Edge detection algorithms can identify the boundaries of skin lesions by detecting abrupt changes in pixel intensity. This technique is useful for identifying irregularities in the shape and texture of skin lesions, which are important features for diagnosing skin cancer. Edge detection can help differentiate between healthy skin and potentially cancerous regions.
Clustering techniques group pixels based on their similarity in color or intensity values. In the context of skin cancer detection, clustering algorithms can identify regions with similar color characteristics as potential lesions.
Thresholding is the most common segmentation approach due to its ease of use, simplicity, fast computation, and robustness against noise. Thresholding methods often have mechanisms to handle noisy data points [8]. The limitations of this technique include sensitivity to threshold selection, since the choice of threshold(s) can significantly impact the segmentation results, and difficulty with complex textures or lighting variations: thresholding may struggle with complex textures or when the lighting conditions vary across the image.
Despite the significance of image segmentation in identifying objects of interest from medical images, some issues, such as noise contamination and artifacts from image capture, cause mistakes in the segmentation of medical images. Various smoothing approaches (for instance, developing an algorithm or tuning a filter) can decrease errors or eliminate noise. Without this step, the exact segmentation of the image may not be easy [9]. Most currently used segmentation methods depend greatly on several pre-processing methods to avert the consequences of unwanted artifacts that could impair accurate skin lesion segmentation [10].
Thresholding-based segmentation is split into two classes depending on how many thresholds are utilized to segment the image: Bilevel and multilevel [11]. A threshold value divides the image into homogenous foreground and background portions in the first class. On the other hand, multilevel splits the image using a histogram of pixel intensities into more than two portions. Since bilevel thresholding separates an image into two sections, it cannot accurately recognize images with numerous objects on colorful backgrounds. MLT is more suitable in these cases [12]. The essential step in the thresholding process is determining the optimal threshold values that effectively define the image segments.
As a result, threshold selection is defined as an optimization issue that may be addressed by parametric or nonparametric techniques [13]. In the parametric technique, the parameters of a probability density function are estimated for every region to determine the optimal threshold values. In contrast, the nonparametric technique aims to maximize a function such as fuzzy entropy [14], Kapur's entropy (maximizing class entropy) [15], or the Otsu function (maximizing between-class variance) [13]. Regrettably, with those techniques, determining the optimal threshold values for MLT is difficult and the computational cost rises enormously, especially as the number of threshold levels increases. Therefore, an efficient new alternative was necessary. Because of the substantial success of meta-heuristic algorithms in numerous domains, such as communications, engineering, social sciences, transportation, and business, researchers have focused on them to solve the challenges of MLT image segmentation [16][17][18][19][20][21].
Compared to a gray-level image, a color image depicts a scene in the real world more accurately. In image processing, different color spaces are used to represent and analyze images. Each color space has advantages and disadvantages, making them suitable for specific applications. One commonly used color space is the Red, Green, Blue (RGB) color space. It represents colors by combining different intensities of red, green, and blue channels. RGB is widely used in digital imaging systems as it closely matches how humans perceive colors. However, RGB has limitations regarding image analysis tasks such as object detection or segmentation since it does not separate color information from brightness. Another popular color space is the Hue, Saturation, Value (HSV) color space. HSV separates the hue (color), saturation (intensity of color), and value (brightness) components of an image. This separation makes manipulating specific aspects of an image easier without affecting others. For example, changing only the hue component can alter the perceived color without changing brightness or intensity. HSV is often used in applications like image editing or tracking objects based on their color. Cyan, Magenta, Yellow, Key/Black (CMYK) is primarily used in printing processes, where colors are represented using subtractive rather than additive mixing as in RGB, and it plays a vital role in the graphic design and printing industries. RGB is the most commonly used color space, and most gray-level segmentation techniques may be applied directly to each component of an RGB image; nonetheless, few studies [22][23][24] address how to apply MLT techniques to a color image. Borjigin et al. [22] concentrate on the RGB color space, which is the most commonly used for segmenting images.
The following summarizes the key contributions of this paper:
• COVIDOA is shown to deal with MLT in image segmentation.
• The hybridization of Otsu, Kapur, and Tsallis as a fitness function was used to present a skin cancer segmentation technique.
• Various segmentation levels are employed to assess the proposed technique's performance.
• The proposed technique is compared to numerous popular meta-heuristic techniques.
• The effectiveness of the segmentation technique is validated by utilizing the MSE, PSNR, FSIM, and NCC metrics.
• The proposed technique may be expanded to accommodate various medical imaging diagnoses and used for additional benchmark images.
The next sections of this study are arranged as follows: Section 2 shows the related work. Section 3 presents the materials and methods. Section 4 describes the COVIDOA with the proposed fitness function for MLT segmentation. Section 5 shows the results and discussion. Section 6 provides conclusions and future work.
The algorithms mentioned above have been tested on grayscale and color images. This study aims to advance the field of color image segmentation by providing an improved fitness function for COVIDOA. We believe this is the first use of the COVIDOA for image segmentation in color skin lesion images.
Numerous applications of meta-heuristics have been found. As a result, the papers that follow offer some important recent works. Rai et al. [37] evaluated nature-inspired optimization techniques and the importance of such algorithms for MLT segmentation of images from 2019 until 2021. Sharma et al. [38] used Kapur, Tsallis, and fuzzy entropy objective functions to provide an efficient opposition-based modified firefly method for MLT image segmentation. In [39], an upgraded GWO known as the Multistage Grey Wolf Optimizer (MGWO) is shown for MLT image segmentation. The proposed technique achieved superior outcomes compared to other examined approaches. In [40], a novel proposal that combines the WOA with the Virus Colony Search (VCS) Optimizer (VSCWOA) is given. The VSCWOA's effectiveness in overcoming image segmentation issues has been proven, and the algorithm has been demonstrated to be very successful. In [41], a neural network-based method for segmenting medical images has been presented. The authors of [42] have proposed an improved method for ant colony optimization. The segmentation outcomes provided by the proposed method are more reliable and superior when compared to other methods. The authors of [24] used an adaptive WOA and a prominent color component for MLT of color images. A combination of lion and cat swarm optimization techniques offered the best threshold value for efficient MLT image segmentation [43]. Bhavani and Champa [44] presented a hybrid MPA and Salp Swarm Algorithm (SSA) to achieve optimal MLT image segmentation. Using an updated Firefly Algorithm (FA) with Kapur's, Tsallis, and fuzzy entropy, an MLT image segmentation technique was given in [45]. In the EO algorithm, an Opposition-Based Learning (OBL) mechanism and the Laplace distribution were used [46] to create a modified EO method for segmenting grayscale images utilizing MLT. In [47], an MLT image segmentation technique depending on the moth swarm algorithm was suggested. The image segmentation findings demonstrate that their proposed technique outperforms the other analyzed algorithms regarding efficiency. Also, in [48], an improved Artificial Bee Colony (ABC) algorithm-based image segmentation using an MLT technique for color images has been suggested. Dynamic Cauchy mutation and OBL enhanced the elephant herding optimization method [49]. The WOA was presented in [50] to solve the image segmentation problem using Kapur's entropy technique. The authors of [51] proposed a new MLT image segmentation technique depending on the Krill Herd Optimization (KHO) algorithm, in which Kapur's entropy is used as a fitness function that needs to be maximized to reach the optimum threshold values. Furthermore, a new meta-heuristic algorithm, galactic swarm optimization, has been adapted to tackle image segmentation [52]. Anitha et al. [53] introduced a modified WOA to maximize Otsu's and Kapur's objective functions to enhance the threshold selection for MLT of color images. This proposed method surpassed various techniques, such as CS and PSO. In [54], RSA-SSA, a new nature-inspired meta-heuristic optimizer for image segmentation employing grayscale MLT, is presented based on RSA merged with the SSA. The authors of [55] developed an improved SSA that combines iterative mapping and a local escaping operator. This method utilizes Two-Dimensional (2D) Kapur's entropy as the objective function and uses a nonlocal means 2D histogram to represent the image information. A Deep Belief Network (DBN), depending on an enhanced meta-heuristic algorithm known as the Modified Electromagnetic Field Optimization Algorithm (MEFOA), was presented in [56] for analyzing SC. In [57], an improved RSA for global optimization and choosing ideal threshold values for MLT image segmentation was used. The authors of [58] showed an innovative approach for skin cancer diagnosis based on meta-heuristics and deep learning. The Multi-Agent Fuzzy Buzzard Algorithm (MAFBUZO) combines local search agents in multiagent systems with the BUZO algorithm's global search ability. During optimization, a suitable balance of exploitation and exploration steps is enabled. In [59], a new meta-heuristic algorithm for 2D and 3D medical gray image segmentation is proposed based on COVIDOA merged with the HHOA to benefit from both algorithms' strengths and overcome their limitations. The COVIDOA is also used in [60] to solve the segmentation problems of satellite images.
Materials and Methods
This section presents the materials and methods required to develop the proposed technique. Multilevel thresholding is explained, and the objective functions utilized in this research are also described.
Multilevel Thresholding
Image thresholding transforms a grayscale or color image into a binary image by applying a threshold value to the image's pixel intensities [61]. Pixels below that threshold are converted to black, and those above it turn white. There are two classes of image thresholding: bilevel and multilevel. Bilevel thresholding uses a single threshold value (th) to assign each pixel P of the image to one of two regions (R_1 and R_2) as stated below:

P ∈ R_1 if 0 ≤ P < th,  P ∈ R_2 if th ≤ P < L

where L represents the maximal intensity level. Multilevel thresholding, on the other hand, divides an image into numerous separate areas by employing a set of threshold values, as seen below:

P ∈ R_1 if 0 ≤ P < th_1,  P ∈ R_2 if th_1 ≤ P < th_2,  …,  P ∈ R_k if th_{k−1} ≤ P < L

where {th_1, th_2, …, th_{k−1}} indicates the various threshold values.
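To make the assignment rule concrete, the following Python/NumPy sketch maps each pixel of a single channel to its region index given a sorted threshold vector; the function name and the use of np.digitize are illustrative choices rather than part of the original method.

import numpy as np

def apply_multilevel_thresholds(channel, thresholds):
    """Assign every pixel of one image channel to a region index.

    channel    : 2-D array with intensities in [0, L-1]
    thresholds : sorted sequence [th_1, ..., th_{k-1}]
    returns    : 2-D array of region labels in {0, ..., k-1}
    """
    # np.digitize gives 0 for P < th_1, 1 for th_1 <= P < th_2, and so on.
    return np.digitize(channel, bins=np.asarray(thresholds))

# Example: three thresholds split an 8-bit channel into four regions.
channel = np.random.randint(0, 256, size=(64, 64))
labels = apply_multilevel_thresholds(channel, [60, 120, 180])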
Maximizing a fitness function may determine the optimal values for the thresholds. The three common thresholding segmentation techniques are Otsu's, Kapur's, and Tsallis's. Every technique suggests a distinct fitness function that must be maximized to find the ideal threshold values. The three techniques are explained briefly in the next subsections. Additionally, red, green, and blue are the three main color components in an RGB image, so these thresholding techniques are applied three times to obtain the best threshold values for each of the three colors.
Otsu's (Between-Class Variance) Method
This method is a variance-based technique suggested in [13] to find the optimal threshold values separating the heterogeneous regions of an image by maximizing the between-class variance. It is referred to as a nonparametric segmentation technique that splits the pixels of the grayscale or color image into various areas based on the pixel intensity values [62].
Let us suppose that L is the number of intensity levels of the grayscale image, or of each channel of the color image, with N pixels, and that the number of pixels with gray level i is x_i. The gray level's probability is given as:

P_i^C = x_i^C / N,  i = 0, 1, …, L − 1

Bilevel thresholding divides the original image into two categories, and the between-class variance of the two categories is determined as:

σ_B^C(t) = ω_0^C (μ_0^C − μ_T^C)^2 + ω_1^C (μ_1^C − μ_T^C)^2

The average levels of the bilevel classes are shown as follows:

μ_0^C = Σ_{i=0}^{t−1} i P_i^C / ω_0^C,  μ_1^C = Σ_{i=t}^{L−1} i P_i^C / ω_1^C,  μ_T^C = Σ_{i=0}^{L−1} i P_i^C

The following is a representation of the classes' cumulative probabilities:

ω_0^C = Σ_{i=0}^{t−1} P_i^C,  ω_1^C = Σ_{i=t}^{L−1} P_i^C

Consequently, the optimal threshold t*^C of Otsu is calculated by maximizing the between-class variance as:

t*^C = argmax_{0 < t < L} σ_B^C(t)

For multilevel thresholding, the image is categorized into f classes with f − 1 threshold values. The Otsu between-class variance is then:

σ_B^C(t_1, …, t_{f−1}) = Σ_{j=1}^{f} ω_j^C (μ_j^C − μ_T^C)^2

The optimal thresholding values t*_1^C, t*_2^C, …, t*_{f−1}^C are determined by maximizing σ_B^C as follows:

[t*_1^C, t*_2^C, …, t*_{f−1}^C] = argmax σ_B^C(t_1, t_2, …, t_{f−1})

The following are the average levels of the f classes:

μ_j^C = Σ_{i=t_{j−1}}^{t_j − 1} i P_i^C / ω_j^C,  with t_0 = 0 and t_f = L

Similarly, when applying Otsu's method to a color image, C = 1, 2, 3, where C stands for the RGB image channels, and C = 1 alone represents the grayscale image.
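As an illustration of the multilevel Otsu objective described above, the sketch below evaluates the between-class variance of one channel's histogram for a candidate threshold vector. It is a minimal Python/NumPy sketch; the function name and the skipping of empty classes are assumptions made for robustness, not details taken from the paper.

import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Multilevel Otsu objective for one channel.

    hist       : 1-D array of length L with pixel counts per gray level
    thresholds : sorted sequence of f-1 integer thresholds
    """
    p = hist / hist.sum()                       # gray-level probabilities P_i
    levels = np.arange(len(hist))
    mu_T = np.sum(levels * p)                   # global mean
    edges = [0, *thresholds, len(hist)]         # class boundaries t_0 .. t_f
    sigma_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        omega = p[lo:hi].sum()                  # class probability omega_j
        if omega <= 0:
            continue                            # assumption: ignore empty classes
        mu = np.sum(levels[lo:hi] * p[lo:hi]) / omega   # class mean mu_j
        sigma_b += omega * (mu - mu_T) ** 2     # omega_j * (mu_j - mu_T)^2
    return sigma_b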
Kapur's Entropy (Maximum Entropy Method)
Another unsupervised automated thresholding approach is Kapur's method, which chooses the optimal thresholds depending on the entropy of the split classes [15]. The entropy is employed by computing the probability distribution of the gray-level histogram [63] to extract information from an image. The objective function for Kapur's maximization in bilevel thresholding is as follows:

f_Kapur^C(t) = H_0^C + H_1^C

where

H_0^C = − Σ_{i=0}^{t−1} (P_i^C / ω_0^C) ln(P_i^C / ω_0^C),  H_1^C = − Σ_{i=t}^{L−1} (P_i^C / ω_1^C) ln(P_i^C / ω_1^C)

where ω_0^C and ω_1^C are the cumulative probabilities of the two classes. The optimum threshold value is as follows:

t*^C = argmax_{0 < t < L} f_Kapur^C(t)

Kapur's multilevel thresholding extension splits the image into f classes by f − 1 thresholding values, and Kapur's entropy for multilevel thresholding image segmentation is stated as:

f_Kapur^C(t_1, …, t_{f−1}) = Σ_{j=1}^{f} H_j^C

The optimal multilevel thresholding in this multidimensional optimization issue is utilized to calculate the f − 1 optimum threshold values t_1, t_2, …, t_{f−1}. Consequently, the objective function of Kapur's entropy is presented as follows:

[t*_1^C, …, t*_{f−1}^C] = argmax f_Kapur^C(t_1, …, t_{f−1})

Tsallis Entropy Method
Tsallis entropy is also called nonextensive entropy. It has the benefit of using the global and objective properties of the images [64]. Depending on the multifractal theory, Tsallis entropy can be represented using a common entropic formula:

S_q = (1 − Σ_{i=1}^{k} p_i^q) / (q − 1)

where k denotes the image's total number of possibilities (gray levels) and q is the Tsallis parameter or entropic index. Tsallis entropy can be characterized by a pseudo-additive entropic rule based on Equation (39):

S_q(A + B) = S_q(A) + S_q(B) + (1 − q) S_q(A) S_q(B)

Assume that {1, 2, …, G} represents the image gray levels and {P_i = P_1, P_2, …, P_G} is the gray intensity points' probability distribution. Two classes, A and B, may be created for the background and the object of interest, respectively, as given in Equation (41), where P_CA = Σ_{i=1}^{t} P_i^C and P_CB = Σ_{i=t+1}^{G} P_i^C. Tsallis entropy can then be calculated for each class as follows:

S_q^A(t) = (1 − Σ_{i=1}^{t} (P_i^C / P_CA)^q) / (q − 1),  S_q^B(t) = (1 − Σ_{i=t+1}^{G} (P_i^C / P_CB)^q) / (q − 1)

The optimum threshold value for bilevel thresholding may be obtained, with minimal computational effort, as the gray level that maximizes the objective function:

t*^C = argmax [ S_q^A(t) + S_q^B(t) + (1 − q) S_q^A(t) S_q^B(t) ]

subject to the normalization restriction on the class probabilities. The formulation mentioned above may easily be expanded for multilevel thresholding utilizing Equation (45).
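The two entropy-based objectives can be sketched in the same Python/NumPy style as above. The value of the entropic index q used by the authors is not given in this excerpt, so the default below is only a placeholder assumption, and the pairwise application of the pseudo-additive rule to more than two classes is likewise an illustrative choice.

import numpy as np

def kapur_entropy(hist, thresholds):
    """Sum of the class entropies (Kapur's multilevel objective) for one channel."""
    p = hist / hist.sum()
    edges = [0, *thresholds, len(hist)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        pc = p[lo:hi]
        omega = pc.sum()
        if omega <= 0:
            continue                            # assumption: skip empty classes
        w = pc[pc > 0] / omega
        total += -np.sum(w * np.log(w))         # H_j = -sum (P_i/omega_j) ln(P_i/omega_j)
    return total

def tsallis_entropy(hist, thresholds, q=0.8):
    """Pseudo-additive Tsallis objective; q is the entropic index (assumed value)."""
    p = hist / hist.sum()
    edges = [0, *thresholds, len(hist)]
    class_entropies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        pc = p[lo:hi]
        omega = pc.sum()
        if omega <= 0:
            class_entropies.append(0.0)
            continue
        w = pc / omega
        class_entropies.append((1.0 - np.sum(w ** q)) / (q - 1.0))
    # Pseudo-additivity S(A+B) = S(A) + S(B) + (1-q) S(A) S(B), applied pairwise.
    s = class_entropies[0]
    for e in class_entropies[1:]:
        s = s + e + (1.0 - q) * s * e
    return s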
Proposed Fitness Function
A hybrid fitness function determines the fitness of the COVIDOA solutions in image segmentation issues. This hybrid function is created by applying weights to the Otsu, Kapur, and Tsallis functions, as shown in Equation (48):

F_hybrid = a · F_Otsu + b · F_Kapur + c · F_Tsallis

where a, b, c ∈ [0, 1] are the weights related to the three fitness functions, and a + b + c = 1. The suggested fitness function concurrently optimizes the Otsu, Kapur, and Tsallis methods and does so more accurately. We tried several different combinations of a, b, and c values and found that the most effective outcomes were obtained with a = 0.6, b = 0.3, and c = 0.1. We carried out experiments on a collection of skin cancer color images to confirm that these values are the best; the results are displayed in Section 5.
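A minimal sketch of Equation (48) as described above, reusing the objective functions sketched earlier; whether the three terms are rescaled before weighting is not stated in the text, so this version simply forms the plain weighted sum.

def hybrid_fitness(hist, thresholds, a=0.6, b=0.3, c=0.1):
    """Equation (48): weighted sum of the Otsu, Kapur, and Tsallis objectives.

    a + b + c must equal 1; a = 0.6, b = 0.3, c = 0.1 are the weights reported
    to give the best results in this study.  For an RGB image the function is
    evaluated separately on the histogram of each channel.
    """
    return (a * otsu_between_class_variance(hist, thresholds)
            + b * kapur_entropy(hist, thresholds)
            + c * tsallis_entropy(hist, thresholds))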
COVID Optimization Algorithm with the Proposed Fitness Function
Recently, the population-based optimization method COVIDOA was proposed to model coronavirus replication as it enters the human body [36,65].
Coronavirus replication comprises four major phases, which are listed below:
1. Virus entry and uncoating
Spike protein, one of the structural proteins of the coronavirus, is responsible for the particles' attachment to human cells when a person becomes infected with COVID-19 [66]. When a virus enters a human cell, its contents are released.
2. Virus replication
The virus attempts to replicate itself in order to hijack other healthy human cells. The frameshifting approach is the virus's method of reproduction [67]. Frameshifting is the process of shifting the reading frame of a virus's protein sequence to another reading frame, which results in the synthesis of numerous new viral proteins that are subsequently combined to produce new virus particles. There are several different sorts of frameshifting techniques; nonetheless, the most common is +1 frameshifting, which is described in the following step [68]:
+1 frameshifting technique
The parent virus particle (parent solution) elements are shifted one step in the right direction, and the first element is lost as a result of the +1 frameshifting. In the proposed algorithm, the first element is assigned a random value within the limits [Lb, Ub] in the following manner:

S_k(1) = r,  with r drawn randomly from [Lb, Ub]
S_k(j) = P(j − 1),  j = 2, …, D

where Lb and Ub are the lower and upper limits for the variables in each solution, P represents the parent solution, S_k is the kth produced viral protein, and D is the problem dimension.
3. Virus mutation
Coronaviruses exploit mutation to avoid detection by the human immune system [69]. The proposed algorithm applies the mutation to a previously formed viral particle (solution) to generate a new one in the following manner:

Z_i = r if rand < MR;  Z_i = X_i otherwise,  i = 1, …, D

The symbol X denotes the solution before mutation, Z is the mutated solution, X_i and Z_i are the ith elements of the old and new solutions, r is a random value from the limit [Lb, Ub], and MR is the mutation rate.
4. New virion release
The newly formed virus particle exits the infected cell to attack more healthy cells. In the proposed algorithm, if the fitness of the new solution is greater than the fitness of the parent solution, the parent solution is replaced with the new one; otherwise, the parent solution is kept in place.
The COVIDOA flow chart with the proposed fitness function for MLT segmentation of skin lesion images is depicted in Figure 1.
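A compact Python sketch of the COVIDOA steps described above (replication by +1 frameshifting, mutation, and virion release) for a maximization problem is given below. The mutation rate, the generation of a single viral protein per parent, and the uniform initialization are assumptions made for illustration; they are not specified in this excerpt.

import numpy as np

def covidoa(fitness, dim, lb, ub, pop_size=50, iters=100, mr=0.1, rng=None):
    """Minimal COVIDOA-style loop for a maximization problem.

    mr (mutation rate) and the single viral protein generated per parent
    are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.uniform(lb, ub, size=(pop_size, dim))        # initial solutions
    fit = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        for k in range(pop_size):
            parent = pop[k]
            # +1 frameshifting: shift right and randomize the first element.
            child = np.empty(dim)
            child[0] = rng.uniform(lb, ub)
            child[1:] = parent[:-1]
            # Mutation: with probability mr, replace an element by a random value.
            mask = rng.random(dim) < mr
            child[mask] = rng.uniform(lb, ub, size=int(mask.sum()))
            # New virion release: keep the child only if it is fitter.
            f_child = fitness(child)
            if f_child > fit[k]:
                pop[k], fit[k] = child, f_child
    best = int(np.argmax(fit))
    return pop[best], fit[best]

For MLT, each solution vector would hold the candidate thresholds, which are rounded, sorted, and passed to the histogram-based fitness function.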
Computational Complexity Analysis
According to the structure of the COVIDOA, it mostly involves initialization, fitness evaluation, and the updating of the COVIDOA solutions. Let N be the number of solutions, D the dimension of the problem, and T the maximum number of iterations. The calculation is as follows: the time complexity of the initialization is O(N); the COVIDOA calculates the fitness of each solution with a complexity of O(T × N × D); and the computational complexity of updating the solution vectors of all solutions is O(N × D). Consequently, the total computational complexity of the COVIDOA is O(N × T × D).
Experimental Results and Discussion
This section begins with a summary of the datasets utilized for testing. Then, we illustrate the parameter settings for the proposed and state-of-the-art algorithms, followed by the evaluation metrics utilized to compare the outcomes. The numerical outcomes of testing the proposed algorithm and its competitors are then shown. Finally, we present a comparative study of the collected outcomes.
Dataset
This paper uses SC images from the International Skin Imaging Collaboration (ISIC). This multinational collaboration has created the biggest public archive of dermoscopic skin images globally [70], and it is used to evaluate the proposed algorithm's performance. More than 12,500 images across three tasks are included in this dataset.
Our experiments involve segmenting 10 color images of SC using two, three, four, and five threshold levels. Those images are selected randomly from the ISIC datasets to validate the performance of the COVIDOA.
Table 1 depicts the original images and the histograms of their red, green, and blue components, which are the three components of a color image. It is important to mention that the selected images are given new names such as img1, img2, img3, img4, and so on.
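For reference, the per-channel histograms shown in Table 1 can be reproduced with a few lines of Python; the filename below is a placeholder, not an actual ISIC identifier.

import numpy as np
from PIL import Image

# Load one test image (the filename is a placeholder) and compute the
# 256-bin histogram of each RGB channel, as visualized in Table 1.
img = np.asarray(Image.open("img1.jpg").convert("RGB"))
histograms = {name: np.bincount(img[..., c].ravel(), minlength=256)
              for c, name in enumerate(("red", "green", "blue"))}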
These algorithms were chosen for comparison for the following reasons:
• They have demonstrated their superior capacity to solve several optimization challenges, particularly image segmentation.
• The majority of them are current and have been published in reliable sources.
• Their MATLAB implementations are freely accessible on the MATLAB website (https://matlab.mathworks.com/ accessed on 18 August 2023).
All experiments were conducted using a laptop equipped with an Intel(R) Core(TM) i7-1065G7 CPU, 8.0 GB of RAM, and the Windows 10 Ultimate 64-bit operating system. All of the algorithms were implemented in the MATLAB R2016b development environment. As previously stated, all algorithms are tested across 30 independent runs with a population size of 50 and a maximum iteration count of 100 for each input SC test image. The simulation settings are the same for all algorithms.
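The experimental protocol can be summarized by the hedged sketch below (30 independent runs, a population of 50, and 100 iterations per run), reusing the covidoa and hybrid_fitness sketches given earlier; the per-channel optimization loop and the rounding of continuous solutions to integer thresholds are assumptions about implementation details not spelled out in the text.

import numpy as np

def run_protocol(image_rgb, n_thresholds, runs=30, pop_size=50, iters=100):
    """Repeat the per-channel optimization `runs` times and collect the best
    thresholds of each run; the quality metrics are averaged afterwards."""
    results = []
    for _ in range(runs):
        per_channel = []
        for c in range(3):                      # red, green, blue channels
            hist = np.bincount(image_rgb[..., c].ravel(), minlength=256)
            fit = lambda x, h=hist: hybrid_fitness(
                h, sorted(int(round(v)) for v in x))
            best, _ = covidoa(fit, dim=n_thresholds, lb=1, ub=254,
                              pop_size=pop_size, iters=iters)
            per_channel.append(sorted(int(round(v)) for v in best))
        results.append(per_channel)
    return results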
Performance Evaluation Criteria
The proposed algorithm's performance is evaluated by four performance metrics: MSE, PSNR, FSIM, and NCC [76]. These metrics are summarized below:
Mean Square Error (MSE)
MSE is frequently employed to calculate the difference between the segmented and original images. It is computed in the following manner:

MSE = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [F(i, j) − f(i, j)]^2

Here, F(i, j) and f(i, j) are the intensity levels of the original and segmented images at the ith row and jth column, respectively. M and N are the numbers of rows and columns of the image, respectively.
Peak Signal-to-Noise Ratio (PSNR)
Another metric known as PSNR is frequently employed to quantify image quality. It refers to the ratio between the square of the maximum gray level, 255^2, and the MSE between the original image and the segmented one, and it is computed as follows:

PSNR = 10 log_10(255^2 / MSE)

where MSE is computed using the equation mentioned above. A higher PSNR indicates higher segmentation quality.
Feature Similarity Index Metric (FSIM)
FSIM is utilized to compute the structural similarity of two images in the following manner:

FSIM = Σ_{X∈Ω} S_L(X) · PC(X) / Σ_{X∈Ω} PC(X)

where S_L(X) indicates the resemblance between the two images, PC is the phase congruency, and Ω is the image's spatial domain. The FSIM's highest possible value, representing total similarity, is 1. A higher FSIM value indicates better thresholding performance [77].
Normalized Correlation Coefficient (NCC)
NCC is a metric for determining how closely two images are related. The absolute value of NCC varies between 0 and 1: a value of 0 shows no relationship between the two images, and 1 denotes the strongest possible relationship. The greater the absolute value of NCC, the stronger the association between the two images. NCC is estimated between the original image F(i, j) and the segmented image f(i, j).
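The three scalar metrics that do not require a phase-congruency computation can be sketched directly in Python/NumPy; FSIM is omitted here because it depends on a phase-congruency map. The exact NCC normalization used by the authors is not shown in this excerpt, so the standard normalized cross-correlation is used as an assumption.

import numpy as np

def mse(original, segmented):
    o, s = original.astype(np.float64), segmented.astype(np.float64)
    return float(np.mean((o - s) ** 2))

def psnr(original, segmented):
    m = mse(original, segmented)
    return float("inf") if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)

def ncc(original, segmented):
    # Standard normalized cross-correlation (assumed form, see text above).
    o = original.astype(np.float64).ravel()
    s = segmented.astype(np.float64).ravel()
    return float(np.dot(o, s) / (np.linalg.norm(o) * np.linalg.norm(s)))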
Experimental Results
This subsection displays the numerical outcomes of testing the COVIDOA to choose the optimal threshold values utilizing the proposed fitness function. These outcomes are evaluated against the state-of-the-art AOA, SCA, RSA, FPA, SOA, and GTO algorithms. The experiments used two, three, four, and five threshold values. We ran the COVID optimization algorithm with the classic Otsu, Kapur, and Tsallis methods, and then the outcomes of these fitness functions were compared with those obtained using the proposed fitness function.
The outcomes are represented in Table 2, and Figure 2 depicts the average values. From these results, we confirmed that the proposed fitness function surpasses all the other fitness functions.
We used the proposed fitness function, as seen in Equation (48). Table 3 displays the COVIDOA segmented images for all SC test images utilized in the experiments. Table 4 displays the graphs illustrating the optimal COVIDOA threshold values for the RGB channels of the last test image for levels 2, 3, 4, and 5.
Tables 5-8 provide the average values of the corresponding MSE, PSNR, FSIM, and NCC evaluation metrics. The values of the thresholding approach that produces the best results are bolded in these tables; they indicate the optimal segmentation quality.
Higher mean values for PSNR, FSIM, and NCC indicate a more accurate and effective algorithm, while the lowest mean value denotes the optimum MSE value.
Table 5 lists the average values for the MSE metric. The best MSE result has the lowest mean value. It is important to note that the COVIDOA surpasses all the other algorithms (as previously indicated), particularly in img7 and img9, where it has lower values at all threshold levels. The SCA has lower MSE values in img1 (level 5) and img6 (level 3), as does the FPA in img1 (level 4), img2, and img10 (level 2).
The PSNR values are shown in Table 6 for every algorithm; a higher mean value implies superior segmentation quality. It should be noted that the COVIDOA surpasses all the other algorithms in most cases.
The FSIM measure's mean values are displayed in Table 7. This statistic examines how well an image's features are retained after processing. The SCA presents superior results in img1 (level 2) and img2 (level 5). Except for a few cases, the test images are not much improved by the AOA, SCA, RSA, FPA, and SOA. In comparison, the COVIDOA surpasses the other algorithms in terms of FSIM on most test images.
The mean NCC values and NCC outcomes of the proposed technique (COVIDOA) outperform those of the other comparable algorithms, as shown in Table 8. The RSA provides a better value at only one level in img3 (level 3), the GTO in img2 (levels 2 and 5), and the AOA in img2 (level 2) and img7 (level 3). The SCA gives higher results in only one image, img1 (levels 2 and 4).
The results of comparing the COVIDOA to the other algorithms are shown in Figure 3 for the overall average values of MSE, PSNR, FSIM, and NCC. According to Figure 3, the COVIDOA has the lowest average MSE for skin lesion images. The bar charts of all four measures indicate that the COVIDOA is superior. The highest PSNR, FSIM, and NCC values produced by the COVIDOA reflect the superior quality of the segmented images.
Conclusions and Future Work
SC is among the most prevalent kinds of cancer; consequently, early detection can significantly lower the related mortality rate. Image segmentation is essential to any CAD system for extracting regions of interest from SC images to enhance the classification phase. One of the most successful and effective techniques for segmenting images is thresholding. This work addresses the challenge of choosing the appropriate threshold values for segmenting images in MLT. The COVIDOA with the proposed fitness function was applied to a collection of color SC images. The COVIDOA's performance is validated using 10 skin lesion images and compared to six other meta-heuristic algorithms, AOA, SCA, RSA, FPA, SOA, and GTO, using a range of two to five different threshold values. The performance of the proposed algorithm has been evaluated using the following metrics: MSE, PSNR, FSIM, and NCC. The outcomes of the experiments proved that the proposed fitness function improves on the COVIDOA with the classic Otsu, Kapur, and Tsallis fitness functions for the segmentation issue. According to the results, the COVIDOA surpasses all the other algorithms regarding the MSE, PSNR, FSIM, and NCC segmentation measures. The proposed method may solve various image processing difficulties and improve applications, including visualization, computer vision, CAD, and image classification. Future research should widen the examined image dataset and raise the threshold values to obtain more accurate results. Furthermore, the proposed method should be evaluated against other meta-heuristic optimization and deep learning methods to enhance the outcomes of segmentation techniques.
Future studies might involve combining the innovative COVIDOA with one of the existing meta-heuristics to address the MLT problem for skin lesion segmentation in color images.The COVIDOA developed here can solve more complex, real-world optimization problems.The proposed COVIDOA's accuracy and resiliency may be further evaluated in various engineering and real-world situations with an unknown search space.
Figure 1. Flow chart of the COVIDOA with the proposed fitness function.
Table 1. The original SC images and their histograms of constituent colors (red, green, and blue).
Table 2. The results of the COVIDOA with all fitness functions.
Table 5. Based on the average MSE values, a comparison of the COVIDOA and the other chosen algorithms.
Table 6. Based on the mean PSNR values, a comparison of the COVIDOA and the other chosen algorithms.
Table 8. Based on the mean NCC values, a comparison of the COVIDOA and the other chosen algorithms.
|
v3-fos-license
|
2020-10-19T18:10:14.596Z
|
2020-09-24T00:00:00.000
|
224890309
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2079-9284/7/4/74/pdf",
"pdf_hash": "a2250ceaf069ae85f4625733271544bce89e6366",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46052",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "b29a39149307bb2442dfb1ab84e952d4202f3936",
"year": 2020
}
|
pes2o/s2orc
|
Monitoring of Natural Pigments in Henna and Jagua Tattoos for Fake Detection
Temporary tattoos are a popular alternative to permanent ones. Some of them use natural pigments, such as lawsone in the famous henna tattoos. Recently, jagua tattoos, whose main ingredients are genipin and geniposide, have emerged as an interesting option. This study was conducted to identify the presence and concentration of the henna and jagua active ingredients (lawsone, and genipin and geniposide, respectively) in commercial tattoo samples. Since natural pigments are often mixed with additives such as p-phenylenediamine (PPD) in the case of henna, PPD has been included in the study. Green and simple extraction methods based on vortex or ultrasound-assisted techniques have been tested. To determine the compounds of interest, liquid chromatography (LC) with diode-array detection (DAD) has been applied, and the absence of PPD was confirmed by LC-QTOF (quadrupole time-of-flight tandem mass spectrometry). This work demonstrated that only one out of the 14 henna samples analyzed contained lawsone. For jaguas, genipin was found in all samples, while geniposide was found in only two. Therefore, quality control analysis of these semi-permanent tattoos is considered necessary to detect these ingredients in commercial mixtures, as well as to uncover possible fraud in products sold as natural henna.
Introduction
Tattoos have been used since ancient times, but in recent decades they have become more popular among people of different ages, backgrounds, and cultures. There are two types of tattoos: permanent tattoos and temporary tattoos. Temporary tattoos are becoming a more common alternative to permanent ones because of the risks associated with the inks used in permanent tattoos. Temporary tattoos, like henna-based tattoos, use natural pigments such as lawsone, and recently, jagua-based tattoos, in which the main ingredients are genipin and geniposide, have emerged as an alternative in the market. They are usually sold on the Internet. From a regulatory point of view, while all types of tattoos can be included in the group of new format cosmetics or borderline products in the EU Regulation [1], temporary tattoos are products with a diverse chemical composition and unclear legislation. They should be correctly labelled according to both the Cosmetics Regulation [2] and the Toys Directive [3], and they should also follow the guidelines of the Manual of Borderline Products. However, temporary tattoos do not meet these requirements. A recent study [4] showed that the labelling on sticker tattoos was either non-existent or listed incorrect ingredients. The natural pigments used in temporary tattoos are the main subject of this article. The most well-known tattoos are henna-based tattoos, and in particular the so-called black henna tattoos, in which natural henna has been adulterated with p-phenylenediamine (PPD) in high concentrations [5,6]. Regulations on natural-based pigments for temporary tattoos have not yet been issued, and most commercial products of plant origin are not labelled. While henna and its main active ingredient lawsone have been evaluated for safety in hair dye products [7], no such evaluation has yet been done for other similar products containing henna extracts, such as temporary tattoos. Jagua-based tattoos have not even been considered from the regulatory point of view, even though there has already been evidence of their allergenic potential. The safety of these products and their potential to cause skin reactions is highly questionable. This situation is due to their recent appearance on the market, the lack of regulation, and the limited scientific research.
Natural henna (Lawsonia inermis, from the Lythraceae family) has as its active ingredient lawsone (2-hydroxy-1,4-naphthoquinone, HNQ), which is responsible for the typical reddish-brown coloring [8]. After drying and crushing the leaves and stems of this tropical plant, a brownish-green powder is obtained. After mixing it with water or essential oils, a paste is produced that is traditionally used as a dye to decorate nails, hands, and feet. However, different henna formulations can be found in the market that are sold as temporary natural henna tattoo dyes.
Lawsone and PPD are generally determined by high-performance liquid chromatography with a diode array detector (HPLC-DAD). Almeida et al. [9] used this technique to quantify HNQ and PPD in 11 commercial henna products (9 contained HNQ and 2 PPD) and in 3 preparations used by henna tattoo artists (finding PPD in all 3 but HNQ in only one of them). A few years ago, a qualitative and quantitative determination of HNQ and PPD in black henna tattoo samples was also proposed by HPLC-DAD [10]. That study focused on products marketed in Turkey, where these tattoos are part of the popular culture. Lawsone was found in 21 of the 25 samples considered, while PPD was detected in all of them. In both papers, sample preparation was based on a single dilution followed by sonication and final filtration before the analysis. Other techniques for analyzing PPD in henna powders or mixtures are discussed in a recent review [11]. However, there are no further papers determining lawsone in henna tattoos sold globally through the Internet, which is one of the main objectives of the present study.
The current fashion for jagua tattoos is becoming more popular, although they have been used in the past by certain populations. In jagua-based tattoos, the natural pigment is obtained from an Amazonian tropical fruit known as Genipa americana L., from the Rubiaceae family. The dye comes from the sap of the unripe fruit, and it turns dark blue or blackish when it is applied to the body. Because of its coloring, it could be a substitute for p-phenylenediamine in black henna tattoos to darken them. Jagua's main ingredients are geniposide and its bioactive compound genipin. Genipin can be obtained by hydrolysis of geniposide, which is also present in other types of plants such as Gardenia jasminoides. Geniposide is a glycosylated iridoid, and genipin is a colorless substance. However, when placed in contact with the skin, genipin reacts immediately with the skin's proteins to produce the pigment's color. Because the epidermis is constantly regenerating, it disappears from the skin in a few days. Certain studies suggest that jagua may become a potential new allergen in temporary tattoos [12,13]. A recent case of allergic contact dermatitis caused by temporary tattoos named Earth Jagua ® and bought on the Internet has been described [14]. The list of ingredients for the brand of temporary tattoo that caused the allergic reaction mentioned above was incomplete. However, after extraction and LC-UV analysis, the presence of genipin and geniposide was confirmed, ruling out the possibility of a PPD allergy. Genipin was identified as the compound causing the allergic reaction, probably because of its high affinity for proteins, which makes it a possible contact allergen candidate. Although some methods have been reported for the quantification of geniposide in pharmacokinetic works [15][16][17], no studies have been conducted on analytical determinations in cosmetic samples. All those methods used HPLC. Nathia-Neves et al. [18] determined both genipin and geniposide directly in the unripe fruits of Genipa in their research. While other surveys have been carried out [19][20][21][22], there are no studies on the determination of these two ingredients in temporary jagua tattoo samples. Hence, up to now, very few studies have been published about both compounds in jagua-based tattoo samples, and most of them deal with allergy cases. Analytical methodologies for the detection and determination of genipin and geniposide in products intended for use as tattoos are therefore required.
Thus, this study was mainly conducted to verify the natural origin of 19 commercial tattoo samples based on henna and jagua, using their active pigments as quality markers. Following the desired trend toward simplifying and standardizing the analytical methodology, simple sample preparation procedures are applied. It is also interesting to develop selective and reliable methods for the quantitation of these compounds. To meet these objectives, liquid chromatography with diode-array detection was used for the simultaneous quantification of all the markers involved (lawsone in henna; genipin and geniposide in jagua). In addition, LC with quadrupole time-of-flight (QTOF) mass spectrometry detection was applied to check for the presence or absence of the recognized allergen PPD. These approaches are useful in two ways: first, for the quality control of these semi-permanent tattoos, and second, to detect potential fraud in beauty products containing these ingredients in their formulations and sold as natural when they are not. As a result, a quantitative analysis of the bioactive compounds in jagua natural tattoos has been proposed in this study, as currently the analytical methodology published is scarce or non-existent. To the best of our knowledge, this is the first methodological approach that can help to determine fraud jointly in both henna and jagua temporary tattoos.
Chemicals and Reagents
All solvents and reagents were of analytical grade. MS-grade methanol and acetonitrile were provided by Sigma-Aldrich Chemie GmbH (Steinheim, Germany), ultrapure MS-grade water by Scharlab (Barcelona, Spain), and formic acid was obtained from Merck (Darmstadt, Germany). The analyzed compounds, their chemical names, structures, Chemical Abstract Services (CAS) numbers, and the purity of the standards are shown in Table 1. Genipin, geniposide, and PPD are white powders, while lawsone is a greenish powder. Genipin was purchased from Biosynth Carbosynth (Berkshire, United Kingdom), geniposide from Sigma Aldrich (Steinheim, Germany), lawsone was supplied by Alfa Aesar (Karlsruhe, Germany), and PPD by Tokyo Chemical Industry (TCI) Europe (Zwijndrecht, Belgium). Individual stock solutions of each compound were prepared in methanol, as well as further dilutions and mixtures. All stock solutions were stored in glass vials protected from light and kept in a freezer at −20 °C.
Tattoo Samples
Tattoo samples (12 henna tattoos and 4 jagua tattoos) were obtained via the Internet from a well-known site available to everyone. These henna samples were provided by two different sellers. Two additional henna samples were collected from a local source in Morocco, labelled as being of Pakistani origin. In total, there were 14 henna samples, together with as many jagua samples as could be obtained at the time of acquisition. Additionally, another sample of plant origin (labelled as an herbaceous plant tattoo), whose composition and species of origin are unknown, was also purchased online. All these samples are described in detail in Table 2. Until their analysis, samples were kept in their original containers at room temperature and protected from light.
Sample Preparation
For each sample, 0.02-0.03 g of raw material and 7.5 g of methanol were exactly weighed into a 10 mL glass vial. All solutions were colorful. The complete solubility of the samples was assured by shaking them in an ultrasonic bath Raypa ® model UCI 150 (Barcelona, Spain) at an ultrasound frequency of 35 kHz for 5 min. The dilution factor was modified according to the concentration of the analytes in the samples. Samples were injected directly. The sample solutions were stored in glass vials at −20 °C. Prior to the chromatographic analysis, solutions were filtered through a 0.22 µm polytetrafluoroethylene (PTFE) syringe filter. Figure 1 below illustrates the process described.
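As a practical note on how the contents reported later (in µg·g−1 of raw material) relate to the concentrations measured in the methanolic extracts, the back-calculation can be sketched as follows. This is a minimal illustration, not the authors' code; in particular, converting the weighed methanol mass into an extract volume through an assumed density of 0.792 g/mL is our assumption and is not stated in the text.

# Minimal sketch (not the authors' code): back-calculating analyte content in a
# tattoo sample (ug per g of raw material) from the concentration measured by
# HPLC-DAD in the methanolic extract. The methanol density used to convert the
# weighed solvent mass into a volume is an assumption (about 0.792 g/mL at 20 C).

METHANOL_DENSITY_G_PER_ML = 0.792  # assumed value

def analyte_content_ug_per_g(measured_ug_per_ml: float,
                             sample_mass_g: float,
                             methanol_mass_g: float,
                             dilution_factor: float = 1.0) -> float:
    """Return the analyte content of the raw sample in ug/g."""
    extract_volume_ml = methanol_mass_g / METHANOL_DENSITY_G_PER_ML
    total_analyte_ug = measured_ug_per_ml * dilution_factor * extract_volume_ml
    return total_analyte_ug / sample_mass_g

# Example with illustrative numbers (not taken from the paper):
# a 0.025 g sample extracted in 7.5 g of methanol, measured at 23.1 ug/mL.
print(round(analyte_content_ug_per_g(23.1, 0.025, 7.5), 1))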
Liquid Chromatography (LC) with Diode-Array Detector (DAD)
High-performance liquid chromatography (HPLC-DAD) was performed on a Jasco LC Net II system equipped with a PU-4180 quaternary pump, an AS-4150 autosampler, and an MD-4010 diode-array detector. The system was controlled with JASCO ChromNAV Version 2.01.00 (JASCO International Co., Ltd., Tokyo, Japan). Separations were carried out using a Kinetex 5 µm C18 100 Å column (150 mm × 4.6 mm, 2.6 µm) supplied by Phenomenex (Torrance, CA, USA). The mobile phase consisted of water (A) and methanol (B), both acidified with 1% formic acid, with the following gradient program: 0 min, 70% A; 10 min, 70% A; 15 min, 0% A; 18 min, 70% A; and 23 min, 70% A. The temperature and flow rate that allowed the best chromatographic performance were 30 °C and 1.0 mL/min, respectively, resulting in a total run time of 23 min, including column clean-up and re-equilibration. Re-equilibration time is necessary in gradient HPLC to ensure that the column environment has returned to the initial stable conditions. Five microliters of each sample were injected in duplicate. The UV-Vis absorption spectra of standards and samples were acquired in the range of 200 to 600 nm to determine the absorption maxima of the three target compounds: lawsone, genipin, and geniposide. The listed compounds were identified in the real samples by comparison of their retention times and UV-Vis spectra with those of pure standards. Quantification was performed by external standard calibration. In the particular case of PPD, several analyses were carried out using the column mentioned above. In addition, a Kinetex HILIC 100 Å column (150 mm × 2.1 mm, 2.6 µm), also supplied by Phenomenex (Torrance, CA, USA), was used.
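For readers who wish to visualize the gradient program quoted above, it can be written as a small piecewise-linear table and interpolated at any time point. The sketch below is only an illustration of the published program (it is not instrument control code), and the helper function name is ours.

# Minimal sketch of the HPLC-DAD gradient program described above, expressed as
# (time_min, percent_A) breakpoints with linear interpolation between them.

GRADIENT = [(0, 70), (10, 70), (15, 0), (18, 70), (23, 70)]  # % A (aqueous phase)

def percent_A(t_min: float) -> float:
    """Linearly interpolate the aqueous fraction (%A) at time t_min."""
    if t_min <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, a0), (t1, a1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return a0 + (a1 - a0) * (t_min - t0) / (t1 - t0)
    return GRADIENT[-1][1]

# %A at a few points: isocratic hold, ramp to 100% B, and re-equilibration.
for t in (5, 12.5, 15, 20):
    print(t, percent_A(t))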
Ultra High Performance Liquid Chromatography Quadrupole Time-of-Flight Mass Spectrometry (UHPLC-QTOF-MS)
Rapid analysis of PPD was carried out on an Elute UHPLC 1300 coupled to a Compact quadrupole time-of-flight (QTOF) mass spectrometer (Bruker Daltonics, Bremen, Germany). Separation was carried out on an Intensity Solo C18 HPLC column (100 mm × 2.1 mm, 2.0 µm; Bruker Daltonics, Bremen, Germany), which was kept at a constant temperature of 40 °C. The mobile phase consisted of 0.1% formic acid in both water (A) and methanol (B), and the flow rate was 0.25 mL/min. The gradient started at 95% A for 0.4 min, then went from 5% B to 35% B in 0.1 min and to 100% B over 7 min, was held for 5 min, and then returned to the initial conditions until reaching the total run time of 15 min. Two µL of sample were injected in triplicate.
The mass spectrometer was operated with electrospray ionization (ESI) in positive mode, detecting mainly pseudo-molecular ions [M + H]+. The MS method used was a broadband collision-induced dissociation (bbCID) approach, which allows the exhaustive recording of all detectable precursor and product ions, independently of precursor intensity. The voltage ramp applied was from 10 to 105 eV, with a spectra rate of 8 Hz and mass filtering from 20 to 1000 m/z, with a total cycle time of 1 s. All acquisitions were obtained using the Compass HyStar software, and quantification was performed using the TASQ Version 2.1 (Build 201.2.4019) software.
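To make the exact-mass confirmation of PPD concrete, the theoretical m/z of its protonated ion can be recomputed from standard monoisotopic atomic masses. The short sketch below reproduces this arithmetic; it is our illustration of the kind of calculation performed in TASQ from the molecular formula, not part of the Bruker software.

# Minimal sketch of the theoretical exact-mass calculation for the protonated
# PPD ion [M + H]+ (C6H9N2+). Monoisotopic masses are standard reference values.

MONOISOTOPIC = {"C": 12.0, "H": 1.00782503207, "N": 14.0030740048}
ELECTRON_MASS = 0.00054857990  # Da, subtracted because the ion carries a +1 charge

def mz_protonated(formula_counts: dict) -> float:
    """m/z of a singly charged ion given the element counts of [M + H]+."""
    neutral_plus_proton = sum(MONOISOTOPIC[el] * n for el, n in formula_counts.items())
    return neutral_plus_proton - ELECTRON_MASS

# p-phenylenediamine (C6H8N2) + H -> C6H9N2+
print(round(mz_protonated({"C": 6, "H": 9, "N": 2}), 4))  # about 109.0760, i.e. m/z 109.08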
Solubility Studies
The solubility of the studied compounds was evaluated in both water and methanol. Slusarewicz et al. [23] have already explored the aqueous stability of genipin. In their work, they concluded that genipin decomposes in aqueous solution, with the pH of the solution drastically influencing the degradation. In addition, the poor solubility of lawsone in water (1 mg·mL−1 [24]) could be a problem. Based on this evidence, and the fact that the compounds are properly soluble in methanol, it was decided to discard the use of water in the preparation of individual standards, mixtures, or samples, with the aim of using a common solvent for all markers.
Chromatographic Analysis
The chromatographic conditions were optimized to achieve an efficient separation of the three target compounds used in natural pigments-based tattoos.
Retention times (RT) were 3.50, 5.35, and 11.04 min for geniposide, genipin, and lawsone, respectively. The elution order of geniposide and genipin was the same as that observed by other authors [19][20][21][22]. The HPLC-DAD method was validated in terms of linearity and precision, limits of detection (LODs), and limits of quantification (LOQs). The results are summarized in Table 3. Calibration curves were obtained employing standard solutions prepared in methanol covering a concentration range from 1 to 100 µg·mL−1 (geniposide, genipin, and lawsone), with six concentration levels and three replicates per level. For the quantitative analysis, an absorption wavelength of 250 nm was selected. Figure 2 shows a section of the chromatogram obtained for an intermediate calibration level, as well as the UV spectra of the three compounds. It should be noted that, although the first two spectra are identical because the differential moiety of geniposide does not absorb in the UV, both compounds are clearly identified by their different chromatographic retention. The method showed good linearity, with coefficients of determination (R2) higher than 0.9990. The instrumental precision was evaluated within a day (n = 3) for all concentration levels, showing mean relative standard deviation (RSD) values of about 2%. The LODs and LOQs were calculated as the compound concentration giving a signal-to-noise ratio of three (S/N = 3) and ten (S/N = 10), respectively.
Table 3. High-performance liquid chromatography with a diode array detector (HPLC-DAD) method performance: linearity, precision, limits of detection (LODs), and limits of quantification (LOQs).
The same chromatographic conditions were applied in the case of PPD. However, because of its high polarity, it was not detected. Alternatively, a HILIC (hydrophilic interaction liquid chromatography) column was chosen owing to its different selectivity and efficiency in the separation of polar compounds. Several tests were carried out with different elution gradients (now mixing water and acetonitrile as the mobile phase). However, no results were achieved; hence, it was decided to perform the PPD analysis by UHPLC-ESI-QTOF-MS. Direct infusion of the PPD standard was performed to search for the mass transitions that were subsequently selected. Figure 3 shows the mass spectrum of PPD. The peak of the [M + H]+ ion (m/z = 109.08, C6H9N2) was accurately determined. The theoretical exact mass of the protonated compound [M + H]+ was calculated in the TASQ software based on the molecular formula, resulting in an accurate mass. The retention time for PPD was 2.23 min.
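To illustrate the quantification workflow described above (external standard calibration and S/N-based LODs and LOQs), a minimal sketch is given below. All numerical values in it are invented for illustration and are not taken from Table 3; the helper function is ours.

# Minimal sketch (not the authors' code) of external standard calibration:
# a least-squares line through standard concentrations vs. peak areas,
# quantification of an unknown, and LOD/LOQ from baseline noise using the
# S/N = 3 and S/N = 10 criteria. All numbers are illustrative.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Six-level calibration (ug/mL vs. peak area), illustrative values only.
conc = [1, 5, 10, 25, 50, 100]
area = [2.1, 10.4, 20.9, 52.0, 104.5, 208.3]
slope, intercept = fit_line(conc, area)

unknown_area = 47.3
print("unknown, ug/mL:", round((unknown_area - intercept) / slope, 2))

# LOD/LOQ from the baseline noise expressed in peak-area units.
noise_area = 0.35  # illustrative baseline noise
lod = 3 * noise_area / slope    # S/N = 3
loq = 10 * noise_area / slope   # S/N = 10
print("LOD ug/mL:", round(lod, 3), "LOQ ug/mL:", round(loq, 3))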
Application to Real Samples
The previously described methodology was applied to the analysis of 19 natural tattoo samples, which present a wide variety of tonalities in the case of the henna products. Analyses of the samples showed the presence of some of the target analytes. Figure 4 shows overlaid chromatograms of some selected samples and standards. Lawsone was found in only one (HNT-11) of the 14 henna samples (Figure 4a). The analysis by HPLC-DAD showed an amount of 8736.47 µg·g−1 of lawsone. For the other colored hennas, the active ingredient was not detected in any case (Figure 4b). The absence of lawsone in henna samples has already been reported [9,10]; however, in those studies the proportion of samples containing HNQ was around 70-80%, while here it scarcely reaches 10% of the total. It is also important to highlight that the samples previously reported were black, brown, or red, whereas in this study samples of less typical colors, such as pink, green, or blue, were also considered, which suggests that these colored pigments are not based on natural henna.
The situation was quite different for the jagua samples. The average results for jagua tattoos are summarized in Table 4. Genipin was found in all samples; its concentration in the first three samples (JNT-1 to JNT-3) differed from that of the fourth sample, where the genipin concentration was much higher. Geniposide was identified in two jagua samples (Figure 4c,d). As for genipin, the highest concentration was found in the JNT-4 sample in both cases, being very low in the other tattoo preparations. The reason for the higher amounts of both compounds in the JNT-4 sample may be related to its physical state, since it is the only solid sample. In fact, this commercial product was not dissolved in solvents or essential oils, so it could be considered pure jagua. Additionally, the label of this sample claims that it is 100% natural. Comparing the two compounds, the concentrations of the main active ingredient genipin were higher than those of geniposide. Up to now, few works on either analyte have been reported for natural tattoo samples, and only one of them mentions the detection of genipin. In any case, no previous analytical approaches have been reported for the simultaneous quantification of these target compounds. Lastly, with regard to the sample considered an herbaceous plant tattoo (HPT), neither genipin nor geniposide was detected. This implies that its origin is not Lawsonia inermis or Genipa americana L.
In summary, the proposed analytical method is suitable for the identification and determination of the natural coloring agents considered. In addition, although the main objective of this work was the detection and determination of active compounds in henna and jagua semi-permanent tattoos, since PPD is a very popular additive in hennas and some of the jagua samples were labelled as PPD-free, all samples were analyzed by LC-QTOF. PPD was not found in any of the 19 samples studied, indicating that the samples considered have not been adulterated with PPD; however, most of the hennas did not contain the active ingredient either.
Conclusions and Future Trends
In this work, a method based on HPLC-DAD has been proposed to simultaneously evaluate the presence of the active ingredients in formulations of plant pigment-based tattoos. It is worth mentioning that the proposed sample preparation procedure is quite simple, rapid, and easy to handle. The method performance study showed that HPLC-DAD was appropriate, allowing rapid recognition of a sample as natural or fake, depending on the presence or absence of its expected active ingredients. Only one out of the 14 henna samples analyzed contained lawsone. For jaguas, genipin was found in all samples, while geniposide was found in only two. Thus, the determination of the marker compounds was performed by a simple chromatographic method that can easily be applied in the laboratory. However, if the presence of PPD is suspected, it is necessary to confirm it using more sophisticated equipment. In this work, LC-QTOF was applied for the rapid testing of PPD adulteration in the samples.
The growing use of temporary tattoos, driven by their global availability on the Internet, together with the current regulatory situation, may cause an increase in allergic reactions in the near future. Another issue is the lack of information about ingredient concentrations, added to incorrect or non-existent labelling. In parallel with the necessary improvements in these areas, the development of analytical methods to control pigments, impurities, and potential undesirable additives is highly important. It is probable that commercial products contain compounds that are prohibited or restricted in cosmetics, as well as dyes added to obtain such attractive colors. In addition, further experiments should include untargeted approaches to better characterize the composition of these beauty products.
|
v3-fos-license
|
2021-03-29T05:24:01.479Z
|
2021-03-01T00:00:00.000
|
232384701
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/22/6/3201/pdf",
"pdf_hash": "76dfe50f8e7e9342de3f6acb52ec535390d05083",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46053",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "76dfe50f8e7e9342de3f6acb52ec535390d05083",
"year": 2021
}
|
pes2o/s2orc
|
Immunoglobulins and Transcription Factors in Otitis Media
The causes of otitis media (OM) involve bacterial and viral infection, anatomo-physiological abnormalities of the Eustachian canal and nasopharynx, allergic rhinitis, group childcare centers, second-hand smoking, obesity, immaturity and defects of the immune system, formula feeding, sex, race, and age. OM is accompanied by complex and diverse interactions among bacteria, viruses, inflammatory cells, immune cells, and epithelial cells. The present study summarizes the antibodies that contribute to immune reactions in all types of otitis media, including acute otitis media, otitis media with effusion, and chronic otitis media with or without cholesteatoma, as well as the transcription factors that induce the production of these antibodies. The types and distribution of B cells; the functions of B cells, especially in otorhinolaryngology; antibody formation in patients with otitis media; and antibodies and related transcription factors are described. B cells have important functions in host defenses, including antigen recognition, antigen presentation, antibody production, and immunomodulation. The phenotypes of B cells in the ear, nose, and throat, especially in patients with otitis media, were shown to be CD5low, CD23high, CD43low, B220high, sIgMlow, sIgDhigh, Mac-1low, CD80(B7.1)low, CD86(B7.2)low, and Syndecam-1low. Of the five major classes of immunoglobulins produced by B cells, three (IgG, IgA, and IgM) are mainly involved in otitis media. Serum concentrations of IgG, IgA, and IgM are lower in patients with OM with effusion (OME) than in subjects without otitis media. Moreover, IgG, IgA, and IgM concentrations in the middle ear cavity are increased during immune responses in patients with otitis media. B cell leukemia/lymphoma-6 (Bcl-6) and paired box gene 5 (Pax-5) suppress antibody production, whereas B lymphocyte inducer of maturation program 1 (Blimp-1) and X-box binding protein 1 (XBP-1) promote antibody production during immune responses in patients with otitis media.
Introduction
Otitis media (OM) refers to all inflammatory phenomena that take place in the middle ear [1]. OM is classified as acute if its duration is less than 3 weeks, subacute when it lasts more than 3 weeks but less than 3 months, and chronic if it lasts more than 3 months. The disease can also be classified based on the presence or absence of perforation in the tympanic membrane and the form of otorrhea/ear discharge. OM with effusion (OME) is defined as an absence of perforation coupled with the accumulation of inflammatory fluid in the middle ear, whereas chronic suppurative OM is defined as the presence of both perforation and suppurative discharge. Chronic OM (COM) can be categorized as COM with or without cholesteatoma depending on the presence of cholesteatoma. In most patients, acute OM heals without complications, but some patients experience a relapse of inflammation or OME. COM may also develop if the inflammation is not treated sufficiently (Figure 1) [1,2]. The causes of acute OM are very diverse and are associated with complex interactions of various factors, including infection with viruses or bacteria, malfunction of the Eustachian tube, allergy, physiological/pathological/immunological factors within the middle ear, and environmental and genetic factors [2]. The factors responsible for the development of chronic inflammation in patients with acute inflammation of the middle ear and mastoid have not yet been clarified.
Acute OM (AOM) and OME can cause structural changes in the tympanic membrane, with histologic alterations observed in the fibrous layers of the lamina propria. These alterations affect the elasticity of the tympanic membrane, creating conditions that can result in retraction or perforation of the tympanic membrane. COM that develops in adulthood may also be the result of AOM. The mechanism underlying the conversion of AOM to COM is not yet clear, but risk factors for the development of AOM and OME may also be risk factors for chronicity. Acute inflammation of the middle ear causes pathological transformation and hyperplasia of the middle ear mucosa. This hyperplasia, as well as the influx of various inflammatory cells into the mucous membrane, is mostly reversible. After the stimulation associated with otitis media disappears, the mucous membrane recovers to its normal shape through de-differentiation. However, the repeated occurrence and chronicity of pathologic conditions, such as hyperplasia of the middle ear mucosa, middle ear effusion due to hyperproliferation reactions, atelectasis, tympanosclerosis, and middle ear cholesteatoma, can cause irreversible structural changes in the middle ear cavity. Although AOM is usually cured without sequelae, some patients may experience recurrent inflammation, resulting in recurrent otitis media or persistent OME, leading to the development of COM [3,4].
There have been significant advances in the treatment of OM due to the development of antibiotics. Although antibiotics have reduced critical complications of OM, they have not reduced the frequency of occurrence, with some patients experiencing serious complications. About 10% of patients with OM develop COM. These patients may develop complications, including conductive/sensorineural hearing loss, tympanic membrane perforation, retraction pocket/atelectasis, tympanosclerosis, ossicular discontinuity/fixation, mastoiditis/petrositis, labyrinthitis, facial nerve paralysis, cholesterol granuloma, infectious eczematoid dermatitis, postauricular abscess, Bezold's abscess, zygomatic abscess, lateral sinus thrombophlebitis, meningitis, extradural abscess, subdural abscess, brain abscess, or otitic hydrocephalus [5,6].
The development of OM involves the interactions of various bacteria; viruses; epithelial, inflammatory, and immune cells; and effusions. Moreover, these factors may respond to each other in a complex manner. Important inflammatory mediators in OM are involved not only in the invasion of immune cells, such as neutrophils, monocytes, and lymphocytes, but also in interaction with local cells such as keratinocytes and mast cells [7]. The present study therefore sought to identify the antibodies related to immune reactions to external antigens in the middle ear and the transcription factors that induce the production of antibodies.
Literature databases, including SCOPUS, PubMed, the Cochrane Library, and EMBASE, were searched for studies published in English. Studies were included if they (1) were prospective and retrospective investigational studies; (2) included patients diagnosed with acute OM, OME, COM without cholesteatoma, or COM with cholesteatoma, while excluding patients with complications of OM; and (3) included human patients only. Keywords searched included OM, acute OM, OME, COM without cholesteatoma, COM with cholesteatoma, immunoglobulin, and antibody.
Types and Functions of B cells
Types and Distribution of B cells
Humans possess five types of immunoglobulin (Ig), IgG, IgA, IgM, IgD, and IgE, although concentrations of IgD and IgE are very low. The biological characteristics of these antibodies include neutralization of toxins, immobilization of bacteria, condensation of bacteria and antigen particles, precipitation of soluble antigens enabling their phagocytosis by macrophages, binding to bacteria facilitating cytolysis by serum complement, and destruction of bacteria by phagocytes and cytotoxic T lymphocytes [8].
Both acute and chronic inflammation of the tympanic cavity result in the production of antibodies by B lymphocytes, cells involved in antigen recognition, antigen presentation, antibody formation, and immunomodulation and that possess receptors for IgM and IgD, and the surface markers cluster of differentiation 19 (CD19), CD20, and CD21. Antibodies are present, but are distributed unevenly, in body fluids, including blood and cerebrospinal fluid (CSF), the spleen, the thymus, and peripheral lymphoid tissues and are involved in humoral immunity. The ratio of B lymphocytes to T lymphocytes differs among tissues and organs. B lymphocytes are rarely found in the thymus, with the ratio of T lymphocytes to B lymphocytes in this organ being 8:1. This ratio is 1:1 in the spleen and 1:3 in the CSF. Flow cytometry has shown that the ratio of B lymphocytes to T lymphocytes in mouse cervical lymph nodes is 1:2.8 ± 0.76 [7].
B lymphocytes can be classified as B-1 and B-2 cells. B-1 cells express the pan-T cell surface glycoprotein CD5, distinguishing them from B-2 cells. In addition to CD5, B-1 cells express surface Ig (sIg)M high, sIgD low, B220 low, CD23 low, and CD43 high on their surfaces, whereas B-2 cells express sIgM low, sIgD high, B220 high, CD23 high, and CD43 low but do not express CD5. Most B lymphocytes in the spleen are B-2 cells, whereas most B lymphocytes in the abdominal cavity and thoracic cavity are B-1 cells [9][10][11].
Morphologically, B-1 cells are larger but less dense than B-2 cells. More than 90% of fetal lymphocytes are B-1 cells, but this percentage decreases with age, with B-1 cells constituting 25~35% of B cells and 0~6% of total lymphocytes in adults [5,6].
B-1 cells constitute 50~80% of the B lymphocytes obtained by perfusion of the abdominal cavity of newborn mice and 20% of the B lymphocytes in the spleen [12]. In comparison, B-1 cells constitute 5% of the B lymphocytes in adult mice, with almost none of these cells present in the lymph nodes [12]. Following CSF transplantation, B-1 cells are the first to proliferate and increase in immunodeficiency states during the immune reconstruction stage, suggesting that B-1 cells play a role as a primitive immune system. The abundance of B-1 cells in newborns is regarded as a primary immune mechanism of early natural immunity, as these cells are produced during early stages of immune system development and are especially increased when newborns are infected [9,13].
Functions of B cells
B cells have several important biological functions, including antigen recognition, antigen presentation, antibody production, and immune regulation. B cells are categorized into CD5-positive B-1 cells and CD5-negative B-2 cells. B-1 cells constitutively secrete IgM, which not only contributes to natural immunity but also reacts with autoantigens and is significantly increased in patients with chronic lymphatic leukemia and several autoimmune diseases. Stimulation of B cells by T cells, by cytokines produced by T cells, or by T-independent antigens results in the proliferation, differentiation, and apoptosis of B cells, which ultimately differentiate into effector cells, either plasma cells or memory cells. Stimulators used most often in basic science research include CD40L, lipopolysaccharide (LPS), and interleukin-4 (IL-4). As T-dependent antigens, CD40L and IL-4 play important roles in homing and localization of B lymphocytes in lymphoid organs through the proliferation and differentiation of B lymphocytes, as well as adherence, the transition of immunoglobulin types, and induction of cytoskeleton activation. LPS, a T-independent antigen, can stimulate the activation, proliferation, and differentiation of B cells without involvement of other cells or cytokines [14,15].
The B-lymphocyte proliferation and differentiation pathways include several maturation phases prior to their final differentiation into plasma cells that produce antibodies. Plasma cells that differentiate from B cells initially produce immunoglobulins M and D, followed by DNA rearrangement and class switching to produce immunoglobulins G, A, or E [16]. B-1 cells and B-2 cells differ in their constitutive production of immunoglobulins. For example, B-2 cells in the spleen do not constitutively produce immunoglobulins, whereas B-1 cells in the abdominal cavity produce IgM in the absence of stimulation. B-1 cells not only secrete IgM constitutively; some of these cells are also precursors of plasma cells that secrete IgA, which is involved in immunity of mucous membranes including that of the intestines. Immunoglobulins derived from B-1 cells have fewer mutations and short nontemplated N-insertions, thus limiting the number of these cells, inasmuch as the immunoglobulins produced by these cells are generally closer to the germline state than the immunoglobulins produced by B-2 cells. Immunoglobulins produced by B-1 cells can distinguish among the factors composing bacterial cell walls, suggesting that B-1 cells are implicated in specific germlines or produce natural antibodies that provide serological defense against microorganisms prior to immune responses induced by microorganisms. Natural immunoglobulins can limit the spread of pathogens and can play a major role in the survival of infected hosts [17,18]. These types of natural immunoglobulins, however, are not produced only by B-1 cells in the abdominal cavity and do not remain confined to the abdominal cavity. Rather, they can migrate to the spleen as their Mac-1 phenotypes are diminished and produce natural IgM at this site. Moreover, splenic B-2 cells stimulated by antigens possess the phenotype of B-1 cells. These cells can move to the abdominal cavity, suggesting that the spleen plays an important role in maintaining balances of B-1a and IgM production [19,20].
B cell proliferation usually begins with primary B cells, which normally remain in the G0 phase of the cell cycle. These cells move into the S phase when the cycle is stimulated by a metabolic change caused by the cross-reaction of an antigen with immunoglobulin receptors on the surface of B cells. Although B-1 cells were found to be generally unresponsive to anti-Ig stimulation, B-2 cells showed active progression through the cell cycle in response to anti-Ig [21]. This difference was attributed to the activation of sufficient phospholipase C in B-2 cells compared with the non-activation of phospholipase C in B-1 cells and problems with regulation of signal transduction mediated by CD5-associated Src homology region 2 domain-containing phosphatase 1 (SHP-1). Nevertheless, phorbol ester stimulation of thymidine incorporation was found to peak after 24~30 h in B-1 cells, whereas phorbol ester did not stimulate the proliferation of B-2 cells. Rather, thymidine incorporation in B-2 cells peaked 54-60 h after the addition of calcium ionophore to phorbol ester, rather than after 24~30 h. This weak or negligible response of B-2 cells to phorbol ester was likely due to the absence of cyclin D2 production and the inability of cyclins D2 and D3 to form a complex with cyclin-dependent kinase 4/6. Further, although phorbol ester is capable of forming a cyclin D3-cdk complex in B-2 cells, it was unable to stimulate the phosphorylation of the retinoblastoma tumor suppressor protein (pRb) [21][22][23].
Given that phospholipase C and CD5-associated SHP-1 are activated in splenic B-2 cells, but these cells do not produce cyclin D2, the proliferative responses of B-2 and B-1 cells in the abdominal cavity differ following the stimulation of surface receptors on B cells.
B cells in Otorhinolaryngologic Fields
Various studies have established that B cells in the ear, nose, and throat are involved in the production of antibodies, including the secretion of IgA. Another study reported, however, that these cells were more involved in the production of IgG, a disparity thought to be due to immune system differences in selected tissues and the functions of these tissues. The distributions of B-1 cells and B-2 cells or the type of stimulation may influence the type of immunoglobulin produced and the amount secreted. IgM and IgG are normally produced in nasal polyps, with IgA occasionally being secreted. Most of the B cells in the middle ear mucosa secrete IgA, with a small number of cells producing IgM [24,25]. In contrast, both CD5+ and CD5− B lymphocytes present in the adenoids produce more IgG than IgA or IgM [26].
A study of murine expression factors involved in the differentiation and proliferation of B cells found that the phenotype of B cells in cervical lymph nodes was CD5 low, CD23 high, CD43 low, B220 high, sIgM low, sIgD high, Mac-1 low, CD80 (B7-1) low, CD86 (B7-2) low, and Syndecam-1 low, as shown by flow cytometry with double immunofluorescent labeling, enzyme-linked immunosorbent assay (ELISA), and [3H]thymidine incorporation assay. Ig was not constitutively produced during cell differentiation. IgM was secreted in response to LPS stimulation, with IgA and IgG observed on Day 5. Active cell proliferation was observed through the S (synthetic) phase on Day 2 of CD40 and anti-CD8 stimulation [27,28].
A flow cytometry study investigating the distribution and frequency CD5+ B cells, γδ T cells, and CD56+ NK cells, which are involved in natural immunity, during idiopathic adenoid and tonsillar hypertrophy, found that more cells were stained with anti-CD5 monoclonal antibodies than with antibodies to γδ T cell receptors and CD56. CD5-positive cells were usually located in interfollicular and subepithelial sections, with some of these cells also observed in follicles, the follicular mantle, and the epithelium. Most CD5-positive cells in the epithelium and subepithelium were located near the stratum basale of the epithelium and at the junction between the epithelium and subepithelium. The numbers of CD5-positive cells differed significantly in sections of the tonsils, with the number being higher in the follicular mantle than in follicular areas (p < 0.01). CD5-positive cells were also present in the epithelium and subepithelial sections of the normal pharyngeal mucosa of posterior pillars, but there were fewer cells at these locations than in tonsillar tissues. The percentages of CD19-positive cells in children that were also CD5-positive were similar in palatine tonsils (19.8 ± 8.7%), adenoids (24.8 ± 14.1%), and blood (21.1 ± 9.6%). In adults, the percentages of CD19-positive cells that were also CD5-positive were also similar in palatine tonsils and blood (15.6 ± 7.2% vs. 19.3 ± 10.6%, p = 0.89). In addition, the mean fraction of CD5-positive cells in blood was similar in adults (19.3 ± 10.6%) and children (21.1 ± 9.6%). Taken together, these findings indicate that CD5+ B cells are abundant in tonsillar tissue [29].
Since the adenoids are located in the upper wall of the nasopharynx, these organs are always in contact with allergy-inducing antigens via air breathed through the nose. Some of these antigens on mucous membranes are transferred to the nasopharynx by ciliary movement and to the lower part of the pharynx by swallowing behavior. The mucosa moves from the nasal and paranasal cavities directly to the adenoids covered by the ciliated epithelium. This trait allows allergy-inducing antigens to be in contact with immune cells, one of the main components of adenoid tissue [30]. In the adenoids, B lymphocytes are mainly distributed in the germinal center and columnar layer, with most B lymphocytes in the germinal center being activated B lymphocytes and most in the columnar layer being stabilized or memory B lymphocytes. Immunohistochemical assays of IgE expression in adenoid samples obtained from patients with and without allergic rhinitis who underwent surgery for adenoid hypertrophy found that IgE was expressed near the germinal centers and submucosal regions in both groups, and that staining intensity and extent in four selected areas did not differ significantly [31].
Antibody Formation in Otitis Media
Five classes of Igs are present in blood: IgG, IgA, IgM, IgD, and IgE, with IgG constituting 75% of Igs in blood. IgG, the only immunoglobulin that can pass through the placenta, can be categorized into four subclasses [32]. The normal proportions of these IgG subtypes are IgG1 60~70%, IgG2 20%, IgG3 10%, and IgG4 5% [33]. IgG is produced starting at birth, with concentrations of IgG1 and IgG4 reaching adult levels at age 7-8 years; and concentrations of IgG3 and IgG2 reaching adult levels at ages 10 and 12 years, respectively [34]. IgG1 is the main subtype that forms antibodies against viral protein antigens, and IgG2 is the main subtype against Streptococcus pneumoniae (Spn), Haemophilus influenzae (Hi), and polysaccharide antigens [33].
IgA constitutes about 15% of Igs and is usually located in mucous membranes of the nasopharynx, providing primary defense against local inflammation. The quantity of IgA present in mucous membranes is greater than that of all other classes of Ig. IgA can be divided into two subclasses, IgA1 and IgA2, and can form secretory IgA (sIgA) dimers, which are present in tears, saliva, sweat, and colostrum, as well as in mucosal secretions of the genitourinary tract, gastrointestinal tract, prostate, and respiratory epithelium. The secretory element of IgA protects these proteins from degradation by protein hydrolases in the gastrointestinal tract environment and from microorganisms that proliferate in body secretions [6]. sIgA can suppress the inflammatory effects of other Igs [35]. sIgA primarily binds to ligands in pathogens and inhibits their binding to epithelial cell receptors [36]. IgA reduction results in lower production of antibodies against pneumococcal polysaccharide, which influences recurrent infection. Moreover, IgA deficiency may occur temporarily in children. IgA concentrations increase relatively slowly in children, with deficiency diagnosed after age 2~3 years. IgA deficiency is defined as 14~159 mg/dl at age <5 years, and 33~236 mg/dl at age 6~10 years.
IgM constitutes about 10% of serum immunoglobulins and is involved in most humoral immune reactions, especially to bacteremia [26]. IgM antibodies are the first to be secreted by B cells following antigen stimulation, including during early stages of infection, and reappear, but at lower concentrations, upon re-exposure to antigen [32]. Unlike IgG, IgM cannot pass through the placenta. IgM is useful in diagnosing infectious diseases, as the presence of IgM in a patient's serum indicates a recent infection.
Igs are among the most important defenses against pathogen invasion and the resulting upper respiratory infection, such as OM. Expression of Ig is closely associated with disease activity, with many studies reporting differences in expression between serum and middle ear fluid (MEF) (Table 1). Acute OM (AOM) is an acute inflammatory disease in the tympanic cavity caused primarily by bacterial or viral infection. Since immunoglobulins are important in defenses against bacterial infections, immunoglobulin expression patterns have been assessed in patients with AOM. A study comparing the levels of expression of IgG, IgM, and IgA in MEF and serum from 255 patients with AOM found that the levels of IgG and IgM were higher in serum, whereas the level of IgA was higher in MEF. These findings suggested that the MEF in these patients primarily represented a secretory response to inflammation rather than a transudate. In addition, infants older than 9 months of age who showed higher concentrations of IgA in MEF were generally culture-negative, but the exact mechanism remained unclear [37]. Similarly, studies have compared concentrations in serum and MEF of antibodies against pathogens such as Hi and Spn, which are considered primary causes of AOM and targets of vaccination. A study comparing the concentrations of pneumococcal antibody serotypes 1, 3, 6, 14, 18, 19, and 23 in MEF and serum of 61 children with AOM found that, during the acute phase of disease, IgG and IgM were predominant in serum, whereas IgG, IgM, and IgA were all detected in MEF. In addition, the concentrations of IgG, IgM, and IgA were increased during the convalescent phase of AOM. The detection of significant concentrations of IgA in MEF during the acute phase of AOM suggests that IgA is involved in the inflammatory process in the middle ear of children with AOM. Moreover, the finding that the three classes of immunoglobulin were increased in serum during the convalescent phase suggests that the systemic immune response is involved in the pathophysiology of AOM [38]. Similar results were observed in 40 children with AOM caused by Hi, with IgG being predominant in serum samples and high ratios of IgG, IgM, and IgA to Hi concentrations in serum and MEF. The concentrations of IgG and IgA in MEF were higher than the concentration of IgM, with the IgA antibody being more frequently observed in MEF of patients lacking the IgA antibody in serum. These findings suggested that young children aged <2 years with OM responded both systemically and locally to Hi by producing specific antibodies [39]. A similar trend was observed in patients with AOM induced by viral pathogens. Although IgG was predominant in serum, the concentration of IgA was more than 4-fold higher than that of IgG in MEF. Additionally, the virus-specific IgA concentration was found to be higher in patients vaccinated against viruses. Taken together, these reports all demonstrate the existence of local immune responses against inflammatory events in the tympanic cavity of patients with AOM.
Moreover, a significant increase in IgA concentrations in vaccinated individuals suggests that vaccines are effective in children with AOM [40]. In another study, children with AOM caused by Hi and Spn were divided into groups with cleared and uncleared MEF, and the concentrations of IgG, IgM, and IgA in MEF were compared. The concentrations of the three classes of immunoglobulins were higher in the cleared MEF than in the uncleared MEF group, suggesting that the clearance of Spn or Hi from MEF was significantly associated with the presence and concentration of specific antibodies in MEF at the time of diagnosis [33].
Another study investigated whether colonization of causative pathogens affects the expression of Igs. That study, which compared serum antibody concentrations against Moraxella catarrhalis (Mcat) in 35 AOM patients and 149 healthy controls, found that specific IgG antibodies against Mcat were detected in all serum samples regardless of AOM occurrence. IgG antibodies against outer membrane proteins (OMP) were significantly higher during the convalescent phase of AOM. In addition, serum concentrations of IgG antibodies against oligopeptide permease (OppA), Moraxella surface protein (Msp)22NL, and hemagglutinin (Hag)5-9 were lower when Mcat colonized the nasopharynx, suggesting that high levels of antibody against these three proteins were correlated with reduced carriage [41].
Another study sought to identify the origins of Igs found in MEF by comparing the concentrations of IgG and IgA in nasal wash (NW) fluid, MEF, and serum of 137 patients with AOM. IgG concentrations were higher in MEF and serum than in NW, whereas IgA concentrations were highest in NW. The similar patterns of expression in serum and MEF suggested that Igs in MEF originate by diffusion from serum rather than by reflux through the Eustachian tubes from the nasopharynx, whereas sIgA in MEF likely derived from local immune responses in the MEF [42]. Another study assessed the serum IgG titer against Spn according to recurrence and response to treatment. In that study, involving 34 patients with AOM, 35 with recurrent AOM (rAOM), and 25 with AOM treatment failure (AOMTF), the serum anti-Spn IgG concentration was lowest in the rAOM group during the acute phase, suggesting that the lower immune response in this group could increase the likelihood of AOM recurrence. The serum anti-Spn IgG concentration was highest in the patients with non-recurrent AOM during the convalescent phase, suggesting that lower levels of production of anti-Spn IgG could increase the risks of recurrence and treatment failure [43]. Similar results were observed in assays of IgG against nontypeable Hi (NTHi). The concentration of serum anti-NTHi IgG was lowest in the rAOM group during the acute phase, whereas the concentration of anti-protein D (anti-PD) was significantly higher only in the non-recurrent AOM group during the convalescent phase. These findings suggested that anti-PD IgG could protect patients with AOM due to NTHi from AOM recurrence and treatment failure [44]. The serum concentrations of total IgG (IgG1, IgG2), IgM, and IgA were found to be significantly lower in children who were prone to recurrent OM than in those who were not. Serum IgA, IgM, IgG, and IgG1 concentrations in each age group were similar in children with and without OM or were higher in those affected by OM due to antigen stimulation, whereas IgG2 concentrations were generally lower in children with OM [34]. Similarly, IgG and IgG1, but not IgG2, titers against pneumococcal polysaccharides were higher in 166 patients with rAOM than in 61 healthy controls, with trends being similar in serum and MEF [45]. Another study found that serum and MEF concentrations of anti-NTHi IgG were higher in patients with rAOM than in healthy controls [46], perhaps because recurrent infection in the rAOM group consistently stimulated the immune system to produce high concentrations of antibodies. In particular, IgG2, which was found at low levels in the rAOM group, is an important primary defense factor against Spn and Hi, as these antibodies opsonize capsular polysaccharides. Therefore, the low concentrations of IgG2 in the rAOM group support the limited ability to defend against pathogens leading to rAOM. However, when serum IgG2 concentrations and the frequency of respiratory tract infection were measured in adults who had shown low IgG2 as children, these adults showed normal levels of IgG2 and no increase in the frequency of respiratory tract infections [47]. Similarly, IgG2 levels are higher than IgG1 levels in healthy adults, whereas IgG1 levels are higher in children. In addition, IgG2 concentrations are lower in children with prone-OME (pOME) than in age-matched healthy controls [48].
These findings suggest that low IgG2 levels in childhood could be associated with an increased frequency of rAOM in children, but that IgG2 concentrations normalize in adulthood through growth and normal age-related increases. Thus, despite these differences in childhood, the two groups show similar patterns of defense against respiratory tract infections as adults.
A study comparing serum and MEF levels of IgG, IgM, and IgA antibodies against Spn and Mcat in children with rAOM and chronic OME (cOME) found no between-group differences in the concentrations of these antibodies, whether in serum or MEF. Moreover, immunoglobulin concentrations were independent of bacterial species. However, only IgG concentrations in serum and MEF were strongly correlated in both the rAOM and cOME groups. These findings suggested that ET function or environmental factors play more important roles than immunoglobulins in the pathophysiology of rAOM and cOME. Thus, immunoglobulins may be less potent in patients with repeated infection and those who develop chronic OME. These discrepancies with previously described studies indicate the need to evaluate additional influences, such as ET function or environmental factors [49].
Studies have also investigated the effects of IgG, IgM, and IgA concentrations in MEF on the chronicity and recurrence of OM. The concentration of IgA in MEF has been reported to be lower in patients with pOME than in those with OME, suggesting that a lower IgA concentration affects the chronicity and recurrence of OME. Bacterial stimulation of immunity in the middle ear of patients with AOM has been found to increase the concentration of IgA. However, defects in secretory Ig production and disorders in local defense mechanisms may decrease IgA concentrations, leading to the recurrence and chronicity of OME [50]. Concentrations of Ig-immune complexes (ICs) in MEF were found to vary when patients with OME were classified as being in the acute, subacute, or chronic phase according to disease activity. The concentration of IgG-ICs was highest in the acute and chronic phases, whereas the concentration of IgA-ICs was highest in the subacute phase. These results suggested that immune complexes in the tympanic cavity may play an important role in the prolonged inflammatory process of OME by activating complement, followed by the chemotaxis of neutrophils [51].
A comparison of IgG, IgM, and IgA concentrations in the serum of cOME patients and normal controls found that the concentrations of all three immunoglobulins were lower in the cOME group than in the control group. The inflammatory reactions in cOME are regarded as chronic rather than acute. Since Ig concentrations were low in the cOME group, OME was regarded as not improving in its initial stage but as persisting in the form of cOME. In addition, analysis of Ig expression in the serum and MEF of cOME patients showed that Ig concentrations in MEF did not correlate with the species of bacteria or with serum Ig concentrations, with serum Ig concentrations being higher in patients in whom bacteria were identified than in those in whom bacteria were not identified [49]. Differences in the patterns of immune responses in MEF and serum may be responsible for differences in responses in effusion fluid and serum samples obtained from individual patients. Thus, the high serum Ig concentration observed when bacteria were positively identified may be due to the effect of systemic immunity in patients with cOME. In contrast, because the Ig concentration in effusion fluid was not affected by bacterial identification, immune reactions in effusion fluid are less influenced by systemic immunity than immune reactions in serum. Alternatively, immune reactions in effusion fluid may be independent of systemic immune reactions. Other studies of serum concentrations of total IgG, IgG subclasses, total IgA, and IgA subclasses in OME patients aged >3 years showed that none of these concentrations differed between the OME group and age-matched normal controls. Regardless of OME, both groups showed similar, age-appropriate antibody responses. Moreover, the contribution of systemic immune reactions to the pathophysiology of OME was likely relatively small in each age group [53].
Ig expression patterns were also compared in MEF and serum. MEF has been categorized into serous and mucoid types, with the sIgA concentration being higher in the mucoid type. Mucoid-type MEF may reflect a stronger immune response than the serous type, with the viscosity of MEF increased by the production of various inflammatory products, including mucin, lysozyme, and IL-8 [54]. A study of Ig expression patterns in the MEF of 59 children with OME showed that expression levels decreased in the order IgG > IgM > sIgA > IgA, with the concentrations of all sampled Igs being higher in MEF than in serum. IgG and IgM were most increased in the acute phase, and sIgA was increased in the subacute or chronic phase. Additionally, serum and MEF IgG were lower in patients with recurrent OME than in patients with non-recurrent OME. Taken together, these findings suggested that the continuation and recurrence of OME were due to reduced IgG in serum and MEF [55]. Similarly, the IgG concentration was higher than those of IgA and IgM in both serum and MEF, and IgA was higher than IgM in MEF. These findings suggested that OME induces local immune responses in the tympanic cavity through the activation of IgA in MEF [56].
Another study compared IgG and IgM concentrations in serum and MEF of patients with acute suppurative OM (aSOM), chronic suppurative OM (cSOM), and a control group. Serum IgG concentrations were higher in the cSOM than in the control group, but lower than in the aSOM group. Serum IgM concentrations were higher in both SOM groups than in the control group. These findings suggested that chronic and repetitive inflammatory responses could enhance the production of serum IgG in patients with cSOM, as well as somewhat increasing the production of serum IgM in patients with aSOM and cSOM, as these enhancements usually indicate recent infection. This hypothesis was supported by findings showing that IgM concentrations in MEF were higher in aSOM than in cSOM patients, whereas IgG concentrations were higher in MEF of patients with cSOM [57]. In addition, serum IgE concentrations were highest in cSOM patients, as well as being somewhat higher in aSOM patients than in controls. MEF IgE concentrations were also higher in cSOM patients, showing significant correlations with serum IgE concentrations. These findings suggested that IgE-related allergy appears to play a contributory role in cSOM and that elevated IgE in MEF is indicative of a likely mucosal response [58].
In summary, the patterns of increase and decrease in Ig concentrations as a function of the type of OM or disease activity are diverse. Moreover, they suggest that local immune responses in the tympanic cavity may be independent of systemic immune responses.
Antibodies and Related Transcription Factors in Otitis Media
Following antigen stimulation, B cells can differentiate into plasma cells in germinal centers, producing high-affinity antibodies and often surviving for several months in the bone marrow. Four transcription factors, B cell leukemia/lymphoma 6 (BCL6), B lymphocyte inducer of maturation program-1 (BLIMP-1), paired box gene 5 (PAX5), and X-box binding protein 1 (XBP1), have been associated with the production of Ig and play major roles in the process by which LPS stimulates B-2 cells to differentiate into plasma cells [59,60].
BCL6 and PAX5 suppress antibody production and the differentiation of germinal center B cells into plasma cells, whereas BLIMP-1 and XBP1 facilitate differentiation by suspending the cell cycle. The transcription repressor BCL6, which is involved in B cell differentiation and is highly expressed in germinal centers, represses the cyclin-dependent kinase inhibitors p27 and p21 to suppress rapid cell differentiation. The main functions of BCL6 are its inhibition of BLIMP-1 and its suppression of the differentiation of B cells into plasma cells.
BLIMP-1 induces the expression of cdk inhibitor 18; the proapoptotic genes GADD45 and GADD153, which are required for differentiation of B cells into plasma cells; and J chain, XBP1, and HSP-70, which are involved in Ig secretion [61].
A study of transcription factors involved in antibody production in MEF obtained during surgery for OME found that the expression of BLIMP-1 and IgA was significantly lower in the OME-prone than in the non-prone OME group, suggesting that the reduction in antibody production in response to reduced BLIMP-1 expression would reduce immunity against pathogens and contribute to the recurrence and chronicity of OME [50].
PAX5 suppresses the differentiation of germinal center B cells into plasma cells and inhibits antibody production. PAX5 possesses dual activities, as it can activate or suppress gene transcription. PAX5 inhibition increases XBP1, which is needed for immunoglobulin heavy-chain secretion. PAX5 elevation suppresses the differentiation of B cells into plasma cells, reducing antibody production [62].
Although defects in XBP1 result in the production of normal T cells and B cells, the formation of normal germinal centers, and normal cytokine production, Ig production is severely impaired. Thus, XBP1 is crucial for the differentiation of B cells into plasma cells. XBP1 expression was significantly lower in patients prone than not prone to OME, as well as being associated with reduced antibody production and increased recurrence of OME [50].
The levels of expression of BLIMP-1 and XBP1 were significantly lower in patients who were prone than in those who were not prone to otitis, whereas expression of BCL6 and PAX5 tended to be higher in the otitis-prone group. These results suggest that BCL6 and PAX5 expression suppresses antibody production, whereas BLIMP-1 and XBP1 expression promotes antibody production in the middle ear, and that impaired production of antibodies against invading pathogens in the tympanic cavity is closely related to the recurrence and chronicity of OM [50].
Conclusions
OM is one of the most common diseases in infants and children, with significant social and economic costs. OM may lead to language development disorders, delayed language acquisition, aprosexia, and behavioral abnormalities. It is also a representative otologic disease, in that recurrent or chronic OM can induce otological symptoms, including hearing loss, otorrhea, tinnitus, and ear fullness at all ages. OM can result in various complications, including those regarding the temporal bone, and requires surgical treatment. Many studies have attempted to identify factors that induce OM, as well as inflammatory responses, inflammatory mediators, and innate and acquired immune responses, enabling the progressive elucidation of the pathogenesis and pathophysiology of OM.
The occurrence of otitis media is accompanied by the production of B cell-related antibodies during acute and chronic inflammation of the middle ear cavity. Of the five classes of immunoglobulins produced by B cells, three (IgG, IgA, and IgM) are produced in patients with otitis media. Phenotypically, B cells in otitis media are B-2 cells that express sIgM^low, sIgD^high, B220^high, CD23^high, and CD43^low. Immunoglobulins are among the important defense mechanisms in upper respiratory infections such as otitis media, and the expression of appropriate immunoglobulins to protect against pathogens invading the body is closely related to disease activity. Immunoglobulin expression patterns, including differences between expression in serum and middle ear fluid, have been found to differ in patients with acute otitis media, otitis media with effusion, and chronic otitis media with or without cholesteatoma. In particular, lack of production of antibodies in serum and middle ear fluid due to otitis media can result in hearing loss, aggravation of symptoms, chronicity of otitis media, and increases in complications.
The present study analyzed published results on antibody production and antibody-related transcription factors in OM. OM alters serum Ig levels and results in the secretion of Igs into the tympanic cavity. Four transcription factors, BCL6, BLIMP-1, PAX5, and XBP1, play important roles in antibody production in the tympanic cavity.
Text Variants and First Person Domain in Author Identification: Hermeneutic versus Computerized Methods
Since a language variety contains shared variants, and since a complete correlation between author and linguistic features is rarely acquired, it is suggested that linguistic features which fall outside the correlational agreement in a variety belong to the author's First Person Domain (FPD). Advances in computerized vocabulary profiling and readability provide useful characterization of features found in Academic English (AE), but they cannot capture the full range of linguistic features in a text. A corpus of about 38 extracts and texts (111.000 words) from local and international authors is analyzed to determine interpersonal and intrapersonal variations. The results show that language variation determines the features of FPD which are crucial for author identification and that computational methods are not adequately sensitive to ensure a hundred percent author identification. Therefore, an epistemological author identity profile (AIP) is suggested to plot alleged texts against the socio-physical and epistemological parameters of alleging authors.
Background
Since a text is normally assumed to be produced by an author, and since texts can be grouped in varieties by using situational parameters and linguistic features they share, it is safe to assume that the study of language variety has theoretical and practical implications for communal and individual use of language, implications which determine what an author observes due to conventions and what he/she can say even within the limits of one variety. The study of lexis, readability, grammar and textuality can highlight the boundary between conventional aspects of a variety, and thus a text in a variety, and individualistic formulation of that text. Aspects of what is found in actual texts and the overlap among texts (intra and inter textual properties) can modify our perception of the notion of text and language variety, especially when studying samples from authors who are not native speakers of the language and who claim to have written texts when textual and circumstantial evidence do not uphold the claim.
Linguistic features of language variety and variation are commonly correlated with situational and geographical factors in terms of use, user and settings; these factors all actively interact with (non-individual) parameters, overlooking any traces of a writer's voice or author identity. It is rather surprising that although Author Identity (AI) i and author attribution of a text are the focal topic for establishing a "Science of Text" for Dressler (1978) and De Beaugrande and Dressler (1981, 2002), there is still no theoretical recognition of distinctive features of the individual writer or author in terms of voice or identity.
Variety Features: the need for Text Variants
Approaches to characterizing language variety are as old as Aristotle's Poetics, but research in linguistics has resorted to levels of linguistic analysis and situational or rhetorical parameters to identify shared features of a given variety. J. R. Firth expounded the "context of situation", a notion which was later developed by his followers in Britain to determine the demarcation of language variety. Hill (1958) was the first linguist to use the term "register" (Ellis, 1965, p. 5). The focus on people takes the discussion to "dialects", while the focus on "conditions" takes the discussion to language use, i.e. variety. However, it was in (1964) that register received elaborate treatment in Halliday et al., who suggested that register identification and classification can be conducted in the form of: 1) field of discourse, 2) tenor of discourse and 3) mode of discourse (Halliday, McIntosh and Strevens, 1964, 90-92), notions which received further elaboration in Ure and Ellis (1977), Sinclair (1972) and Sinclair and Coulthard (1975) among others. Ure and Ellis (1977, pp. 198-201) stipulated "complete" correlation between contextual features of field, formality, mode and tenor, and linguistic features, for the attainment of a language register (ibid. 201). One may find an approximation of this complete correlation in certain cases like application forms, questionnaires and multiple choice test items, i.e. varieties which exemplify limited choices making a "sublanguage" found in ritualistic language and strict conventional formulae (the marriage ceremony, for instance). The room for variation is pre-determined, allowing the applicant to be male or female, married, single or divorced, white, black or colored, and the like. Most texts, however, are not so restrictive, and hence allow for choices at different levels from a wide range of possibilities, especially at the level of the mental lexicon and grammatical structure (Author, 1986 and 2012).
Despite the power of the forces of convention and regularity associated with linguistic use, there remains considerable room on the continuum of linguistic infinity for personal style, idiosyncrasies and identity, features which need to be described and explained by a text theory. The need for developing an "inner voice" in a writer or in a language learner has been recognized and encouraged (Russell, 1999), and the need for seeking empirical method and evidence for author attribution has valid, and even "ethical", grounds, since questions of the identity of author and "text" have direct bearing on questions of academic argumentation, forensic evidence, copyright disputes and literary criticism. But linguistically, the vital and central ground, the object language, the linguistic unit under the spotlight in this search, is the "text". In a corpus-based approach, including variety and variation studies, the text is at stake as a real physical entity, a processing mechanism and a theoretical frame. The two approaches of discourse and text linguistics merge in the shift from focusing on discourse features, variety features, or stylistic features to relocating the "text" in the central paradigm for knowledge and linguistic study. Whether one takes the realization of linguistic formulation (paraphrase, abstracting, plagiarizing, cut and paste), translating, or investigating language variety, style, or discourse, the unit under scrutiny is the text as realization or abstraction.
A text variant is a feature of the language user or of linguistic use. The term is an all-encompassing notion which refers to features of the speaker, the situation, the geographical space, or speaker-specific features of human or non-human speakers, including divine speakers, imaginary entities and creatures assumed to be actual or mythological. Therefore, any of the purposes described in the above section (text and variety description, writer identity, author attribution, or studying one specific text) will be best served by focusing on text variants, and consequently questions of text integrity can be captured and addressed. ii
Writer Identity from Pedagogical and Ethnographic Perspectives
In pedagogical settings, encouraging learners of English as a foreign/second language to express themselves takes various forms, one of which is keeping diaries or writing an autobiography or "autoethnography". Chamcharatsri uses composition classes to examine how second language learners "construct their identities" and how autoethnographical aspects of their personality play a role in identity building by Asian students in American universities (Chamcharatsri, 2009). For Ivanić, learning and teaching about academic writing can revolve around the self and identity, a notion which receives extensive treatment in her (1997) book. Identity involves complex and powerful constructs such as "social identity", "the self" and "discoursal identity"; it helps in developing discourse and academic community (Ivanić, 1997) and it also helps in exploring the "multiple possibilities of self-hood" as an academic writer (Ivanić, 1997). One can say that according to Ivanić's model, the journey of an academic apprentice writer is tenuous and problematic, as he or she has to convey content while giving a representation of the self, in multilayered structures of socio-communal groupings in the wider environment and academia. Ivanić's model positions the "self" and "identity" in the face of socio-cultural settings which can be penetrated only through discoursal compromises between the self and the other.
Catherine Russell (1999) explores the "autobiographical" and "autoethnographical" as two basic foundations of the construction and representation of identity in filmmaking. The deconstruction of the other through the "transformation of 'personal expression' in the avant-garde to a more culturally based theory of identity" (Russell, 1999, p. 25) is the way to vitalize and create the dynamics of identity. This is achieved by harnessing the autobiographical elements and the autoethnic expression in self-expressing discourse. The "inner voice" expressed by the "I" as opposed to "you" finds its echo in the "mental voice" found in expressions of identity in women's intimate relations (Moonwomon-Baird, 2000). In all three perspectives reviewed above, linguistic (or discourse) identity hinges on extra-linguistic factors: biography, community, ethnicity, or libido motivation.
According to Klein and Kirkpatrick (2010), writing can be a tool for "communicating and learning"; they distinguish two types of variables, moderator variables ("gender, previous writing experience") and mediator variables ("genre knowledge and approach to writing"). They found that gender predicted previous writing experience but was not affected by instruction, while instruction affected "genre knowledge" (Klein and Kirkpatrick, 2010). Patchan et al. (2009) studied the writer's identity by comparing comments by students (peer review), a writing instructor and a content instructor, to test the hypothesis that students are capable of rating their peers. Writing instructors' comments were largely evaluative (72%) rather than coaching (20%) or common reading. According to Burke (2010), Korean students in American universities face difficulties as they engage in a "power struggle" in their attempt to construct "their authoritative identities in the U.S. academy - which requires authoritative writer identity" (Burke, 2010, p. 13).
Author Attribution
Universities in the USA prepare student guides to ensure knowledge about violations of copyright and plagiarism offences (see: http://w.w.w.judicial affairs,sa.ucsb.edu/ Academic Integrity, Academic Integrity: A Student's Guide). The implicit assumptions behind such documents reveal that texts and authors have their entity, identity and integrity, terms which provide notional frames that need demarcation, but that also pose questions about the linguistic grounds and implications of these terms.
Studies of Author Attribution (AA) have a long history (see Grieve 2005 for a review), but they have recently experienced a surge with the availability of experimental methods enhanced by computational linguistics. Grieve puts the earliest date for studying AA at 1787, referring to Edmond Malone's work on the three parts of Henry VI, in which Malone used meter and rhyme as features of author attribution (Grieve 2005, p. 4). In his review of AA, Grieve discusses the main issues in the area, including: meter and rhyme, word length, sentence length, punctuation, contractions, vocabulary richness, graphemes, etymology, errors, words, word position, N-grams iii and syntax (as in Svartvik's analysis of forensic evidence: the discrepancies in two written statements, Grieve, 2005, p. 53).
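Most of the surface descriptors surveyed by Grieve (word length, sentence length, punctuation, vocabulary richness) are straightforward to compute. The following Python sketch is only an illustration of these descriptors, not a reimplementation of any of the reviewed studies; the function name, the naive sentence splitter and the toy sample text are assumptions introduced here for demonstration.

```python
import re
from collections import Counter

def stylometric_profile(text: str) -> dict:
    """Compute a few surface descriptors commonly used in authorship
    attribution: average word length, average sentence length,
    punctuation rate and type/token ratio."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = re.findall(r"[a-z']+", text.lower())
    punctuation = re.findall(r"[^\w\s]", text)
    types = set(words)
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_per_100_words": 100 * len(punctuation) / max(len(words), 1),
        "type_token_ratio": len(types) / max(len(words), 1),
        "most_frequent_words": Counter(words).most_common(10),
    }

if __name__ == "__main__":
    sample = "The cat sat on the mat. The dog, however, did not sit; it ran."
    for key, value in stylometric_profile(sample).items():
        print(key, value)
```

In practice each descriptor would be computed per text and compared across candidate authors, which is exactly the kind of comparison attempted in the experiment reported later in this paper.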
With advances in computer and information technology, one can predict the rush towards using and developing available technologies. This trend is felt in the Internet link for Appen Speech Language Technology Inc. (http://www.appen.com.au). Three short quotations will clarify the point: Appen Text Attribution Tool (TAT) was developed under US government funding and sponsorship to meet and identify needs of intelligence and law-enforcement organizations.
[*underlining by researchers] (Appen, 2008a, p. 1) TAT determines the author's age by passing a document's features to a machine classifier (an SVM; SMO as implemented in WEKA [6]). By using features other than surface level, the TAT is able to identify constructs that reveal an author's true age.
[*underlining by researchers] (Appen, 2008a, p. 2) The TAT is intended to support human analysis by identifying candidate material for more assessment. It is not intended to provide definitive analysis. In its law-enforcement and intelligence configurations, the user brief was to provide an investigative profiling tool rather than an evidentiary tool.
[*underlining by researchers] (Appen, 2008b, p. 2) The first quotation declares that TAT is customized to serve intelligence and law-enforcement organizations, and that TAT makes the task too specific for a linguist and operates as an indicator motivated by problem-finding.
In the second quotation, the customer is promised that TAT will definitely identify the "construct that reveals an author's true age", a claim which is made using indirect language "identifying constructs" and which is simply not correct, since "true age" can be illusive and the constructs used including "slang/jargon, specialized vocabulary, or context specific language varieties as text spk (SMS text speak)" are all open to abuse and misuse.
The third quotation provides a disclaimer and a retreat from the position announced earlier, since it states now that the results do not provide "definitive analysis" and that "the developers are told of no legal incrimination against individuals who may be wrongly classified and consequently accused" (Appen, 2008, Internet site).
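The general setup described in these quotations, passing document features to a machine classifier such as an SVM, is the standard supervised-learning recipe. The sketch below is emphatically not Appen's TAT; it is a generic illustration of that recipe using scikit-learn (an assumed, commonly available library), with invented toy texts and labels standing in for whatever categories a real tool would be trained on.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy corpus: a handful of labelled texts. The labels stand in for
# whatever category a real classifier is trained on (author, age band, ...).
train_texts = [
    "The results reported above confirm the initial hypothesis.",
    "We argue that the data do not support such a conclusion.",
    "lol that movie was gr8, u should totally watch it",
    "omg cant believe it, best day ever!!!",
]
train_labels = ["academic", "academic", "informal", "informal"]

# Character n-grams are a common feature choice because they capture
# spelling, punctuation habits and partial morphology without any parsing.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(train_texts, train_labels)

print(model.predict(["The evidence strongly suggests otherwise."]))
print(model.predict(["omg u r so right lol"]))
```

The point of the sketch is simply that the classifier's output is only as trustworthy as its features and training data, which is precisely the reservation raised about the "true age" claim above.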
In another online paper (Appen, 2008b), Appen reveals another tool, the Data Stream Profiling Tool, which "uses biometrics modeling, specifically mathematical abstractions of a user's typing behavior, in order to identify them". One of the three main components, the Keylogger, is a "small software component that is installed (covertly if necessary) on any computer to be monitored" (Appen, 2008b, p. 1). The "covert" option used here is not for protecting the persons being monitored but to put them under surveillance, a matter which raises ethical and legal questions. Technically, the significant factors DSP works with include "typing cadence; duration for which keys are held down; timing transitions between key sequences" (Appen, 2008b, p. 1). It is clear that although the linguistic product of the individual being monitored is in the background (i.e. is being processed), the factors being monitored are extra-linguistic factors relevant to non-verbal behavior. The DSP shows the wide range of issues which may be evoked in what is termed "profiling". Author linguistic identity in this paper is limited to language, and may be most productive when it is limited to one text-type.
Although not all works on text/author identification or attribution are aimed at a specific or narrow band of customers with hidden agendas, by its nature, and unlike pedagogical applications, text identification and author attribution can easily slide into a forensic type of investigation.
Author Identification
The works reviewed below represent a random selection in which the emphasis is on academic questions concerning the possibility of achieving Author Identification (AI) and the type of linguistic features and techniques (tools and methods) employed to achieve AI. Stamatatos et al. (2001) carried out experiments in genre detection, author identification and author verification tasks to test the method they developed. Their technique utilizes one-word and two-word frequency. They maintain that their method and technique are promising and that "distributional lexical measures, i.e. functions of vocabulary richness and frequency of occurrence of the most frequent words", are better than most available methods for author identification (Stamatatos et al., 2001, p. 471). Hoover (2003) questions the "usefulness of vocabulary richness for authorship attribution", rejecting the assumption that "vocabulary richness can capture an author's distinctive style or identity" (Hoover 2003, p. 152). But in Hoover (2006), a large corpus (200.000 words) of American poetry and a large corpus of 46 Victorian novels are used to test the usefulness of the less than 100 most frequent word units, only to conclude that word lists were unable to identify author or style (Hoover, 2006). Hoover is hopeful, however, that refinement in search measures and the large corpora that can be treated today are promising in enabling us to explain "why and how word frequency analysis is able to capture authorship and style" (Hoover, 2006, p. 1).
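The "frequency of occurrence of the most frequent words" with which Stamatatos et al. and Hoover work can be turned into a simple per-text profile: take the N most frequent word types of the whole corpus and record each text's relative frequency of those words. The snippet below is only an illustration of that idea under these assumptions (tiny invented corpus, word-level tokenization), not a reimplementation of either study.

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def top_n_words(corpus_texts, n=50):
    """The n most frequent word types across the whole corpus."""
    counts = Counter()
    for text in corpus_texts:
        counts.update(tokenize(text))
    return [word for word, _ in counts.most_common(n)]

def frequency_profile(text, wordlist):
    """Relative frequency (per 1,000 tokens) of each word in wordlist."""
    tokens = tokenize(text)
    counts = Counter(tokens)
    scale = 1000 / max(len(tokens), 1)
    return [counts[w] * scale for w in wordlist]

corpus = {
    "text_A": "It was the best of times, it was the worst of times.",
    "text_B": "The method was applied to the corpus and the results were clear.",
}
wordlist = top_n_words(corpus.values(), n=10)
for name, text in corpus.items():
    print(name, dict(zip(wordlist, frequency_profile(text, wordlist))))
```

Profiles of this kind are what Burrows-style lexical methods compare across candidate authors; whether they "capture an author's distinctive style" is exactly the point Hoover contests.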
The critical issues of the works of suspected authors and the number of words needed for suspected texts were discussed by Luyckx (2011), who adopted the "traditional number 10.000 per author as a minimum for an authorial set" (Luyckx 2011, p. 35). He clearly illustrates text categorization models in Figure 1:
[Figure 1. Text categorization model, showing the components corpus, training data, feature selection, training instances, machine learning, test data, test instances and labeled test instances (Luyckx and Daelemans, 2011, p. 38).]
Luyckx and Daelemans write:
It is possible that different types of features (e.g. character n-grams or function word distributions) are reliable for small as well as large sets of authors, the specific features may be very different in both conditions. (Luyckx and Daelemans, 2011, p. 42)
Considering forensic investigations in general, one can say that they reveal suspicion of text manipulations which compromise text "integrity" and which hide negative, malicious intentions. Hence, the degree of offensive text manipulation and the degree of compromising text integrity is matched by human involvement in the manipulation, resulting in at least three types of offence:
1) Criminal persona corresponds to a criminal offense in which a text is forged or adapted as part of a crime.
2) Shadow/Ghost author corresponds to an academic offense in which a text is wholly or partially claimed by a person other than the original writer (including cases in which the text is totally or partially commissioned to appear with a name other than that of the original writer).
3) Twilight assistant corresponds to soft offenses in which students or trainees receive assistance and/or lift material from outside sources (including commissioning) to unjustly earn grades, prizes or recognition (but not to the extent of procuring a complete project or thesis). iv
Zhao and Zobel (2006) investigated a literary English corpus of 634 texts by famous authors "to further explore the properties of AA methods", focusing on three linguistic features taken to represent style in the authors under investigation: 1) function words; 2) part-of-speech (POS) tags and POS pairs; and 3) combinations of these (ibid). Their main results show 85% accuracy in positive examples, 95% accuracy in negative examples, 10% accuracy in parts of texts, and 53% accuracy in 10.000-word extracts. Interestingly, the main error (misattribution) originated from translated texts, "suggesting that style" - as measured by Zhao and Zobel - "does not survive the translation process" (Zhao and Zobel, 2006, p. 2). Their conclusions show that the token level is the most reliable discriminating factor, and that token-level measures are more reliable than phrase-level ones. Secondly, texts of less than 1.000 words are less likely to be correctly classified. Thirdly, according to Stamatatos et al., the method achieved a higher accuracy than Burrows' lexical method, which used the fifty most frequent words (see Stamatatos et al., 2001, p. 212).
Stylometry is most often used for the detection of plagiarism, for finding the authors of anonymously published texts, for disputed authorship of literature, or in criminal investigations within the forensic linguistic domain (Stańczyk and Cyran, 2007, p. 151). Stańczyk and Cyran (2007) investigated two Polish writers using nine function words, eight punctuation marks, and combinations of function words and punctuation marks. They reported that the "textual descriptors" they used showed a preliminary advantage for using "syntactic attributes" in author attribution (see Stańczyk and Cyran, 2007, p. 157).
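Descriptors of the Stańczyk and Cyran kind, a small set of function words plus punctuation marks, can be combined into a single feature vector per text, after which any distance measure can be used to compare texts or authors. The snippet below is a generic sketch of such a comparison; the particular word list, punctuation set and Manhattan distance are illustrative choices made here, not the authors' exact procedure.

```python
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it", "as"]
PUNCTUATION = [",", ".", ";", ":", "!", "?", "-", "("]

def descriptor_vector(text):
    """Per-1,000-token rates of selected function words and punctuation marks."""
    tokens = re.findall(r"[a-z']+", text.lower())
    word_counts = Counter(tokens)
    punct_counts = Counter(ch for ch in text if ch in PUNCTUATION)
    scale = 1000 / max(len(tokens), 1)
    return ([word_counts[w] * scale for w in FUNCTION_WORDS] +
            [punct_counts[p] * scale for p in PUNCTUATION])

def manhattan(u, v):
    """City-block distance between two descriptor vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

text_x = "It is clear that the results, as shown, support the claim."
text_y = "The data in the table is presented as it was collected; no claim is made."
print(manhattan(descriptor_vector(text_x), descriptor_vector(text_y)))
```

A smaller distance between two texts is then read as greater stylistic similarity; the open question, as throughout this review, is whether that similarity reflects the author rather than the variety.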
The brief review above outlines three main concerns: 1. identity development in pedagogical and academic settings, represented by Ivanić (1997); 2. the autoethnographical self, represented by Russell (1999); 3. author attribution and author identification, recently represented by numerous researchers (e.g. Grieve, 2005 and Zhao and Zobel, 2006). It is clear from the comments made in relation to each of the reviewed works that the main emphasis is pedagogical, ethnographical or computational, which leaves the role of identity in text interpretation and text making unexplained. Hence, placing the notion of identity in the linguistic network in the form of the IF would hopefully reveal some aspects of interpretation, identity and author identification, by focusing on vocabulary and readability. At the same time, investigating author-specific features will address the features which fall outside the scope of use in variety studies and of user in sociolinguistic and dialect studies, in addition to testing the potential of available computer programs. Author- and text-specific features are investigated through a simple experiment reported in the following paragraphs.
Author Linguistic Identity: Rationale
In order to construct an author identity (AI) outside the boundaries and concerns of traditional variety analysis of register, text-types and genre, and to test some of the stylometric techniques reviewed above, a well-researched sample is needed. The sample needs to be controlled in various ways including size, variety field and author to guarantee a better approximation in the results obtained and to allow comparisons of texts, or parts of texts, by overtly stated alleging authors and anonymous authors whose works are included for the purpose of shedding light on author identity and text integrity. Putting diverse authors in a list will not help in the search for a "possible author" (Grieve, 2005, p. 87).
Method and Sample
The primary method utilized here is observation of details of linguistic behaviour at the level of lexis (Nation's vocabprofile), text (readability scales) and syntax (sentence length and clause type). Observation of details and rigorous testing will enable researchers to obtain viable conclusions which can take the discussion beyond mere description and classification.
Sample Size and Diversity
The size of the sample is limited by considerations of availability of text in electronic forms and capacity of computer programs being used such as Paul Nation's Vocabprofiler and Flesch ease score, which added "human interest" to ease of reading in an attempt to supersede earlier formulas (Flesch, 2006). v Syntactic analysis was manually conducted, a practice which poses its own constraints on the amount that can be handled, but which is preferred to Xiaofei Lu's computerized syntactic complexity analyzer due to the limitations of Lu's classification of clause types. vi The sample is limited to one variety of English, academic English (AE), and within AE, only works from the field of linguistics are included, with one exception in the form of a poem by W. H. Auden for comparison and contrast. The diversity and size of the current sample are specified in Table 1 below.
The sample includes two selections from M.A. dissertations and Ph.D. theses, a sample from the introduction and method, and a sample from the survey of previous works, which allows examining this crucially intertextually mediated part of academic works. There are three types of summaries in the sample: M.A. summaries, Ph.D. summaries and academic paper summaries (columns 5-7, Table 1). Also included in the sample are complete texts of academic papers and a selection from 3 books on Linguistics by two authors.
It is hoped that the sample will give results about various aspects of academic English in the field of language and Linguistics.
The works from which the samples are taken are by Arabic-speaking academic staff specialized in English language, kept anonymous for privacy and ethical reasons, though the third of the three (coded THREE) is the current author. Since the other two are anonymous authors (ONE and TWO) known by the author for a long period of time, the author has first-hand circumstantial knowledge of authors ONE and TWO, knowledge which can be crucial for the purpose of author identity and author attribution. Works from four well-known linguists, John Sinclair, John Swales, M. A. K. Halliday and Noam Chomsky, are included to act as a yardstick against which other works are measured and compared. The poem from Auden acts as a reminder of the semantic possibilities and potential of the language, and it helps in evaluating the various techniques and parameters used, such as lexical density, word frequency, readability and syntactic depth. There is a vast string of topics, issues and concerns which can be addressed by studying the current samples in light of the results obtained from them. But the main issues of immediate interest here can be stated in terms of priority in the questions below: 1. Comparing one author with another: Are there any significant differences among and/or within the works of various authors in terms of vocabulary, readability, and syntactic depth?
2. Comparing variety with another variety: Are there any significant differences among the varieties (theses, summaries, papers or books) in terms of vocabulary, readability, or syntactic depth?
3. Comparing text with another text by the same author: Are there differences between two or more texts by the same author?
4. Comparing one part of a text with another part in the same text: Are there any differences among the parts of academic works of MAs and PhDs in terms of vocabulary, readability or syntactic depth?
The scope covered by investigating text and author identity is both dynamic and open-ended, whereas the features and parameters utilized in computer programs are necessarily fixed, and currently limited. Three such programs have been utilized in the current analysis: 1. Paul Nation's Vocabprofiler, which handles various aspects of vocabulary statistics, including: a. total tokens, b. total types, c. K1, d. K2, e. word frequency.
2. Readability scales, from which the following are obtained: a. number of words, b. number of sentences, c. ease score, d. readability level in terms of (school) grade. (A minimal sketch of how such a vocabulary profile can be computed is given after this list.)
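The vocabulary side of this profiling can be approximated with a short script, as sketched below. The word lists used here are tiny stand-ins (real K1/K2 lists are word-family lists loaded from files), and the simple word-form matching is a deliberate simplification of what Nation's Vocabprofiler actually does.

```python
import re

def vocab_profile(text, k1_words, k2_words):
    """Token/type counts plus K1/K2 coverage, given two word lists.
    Real K1/K2 lists are word-family lists; plain sets of word forms
    are used here as a simplification."""
    tokens = re.findall(r"[a-z']+", text.lower())
    types = set(tokens)
    k1_hits = sum(1 for t in tokens if t in k1_words)
    k2_hits = sum(1 for t in tokens if t in k2_words and t not in k1_words)
    return {
        "tokens": len(tokens),
        "types": len(types),
        "type_token_ratio": round(len(types) / max(len(tokens), 1), 2),
        "K1_percent": round(100 * k1_hits / max(len(tokens), 1), 2),
        "K2_percent": round(100 * k2_hits / max(len(tokens), 1), 2),
    }

# Tiny stand-in lists; in practice the first and second thousand word
# families would be loaded from the published frequency lists.
k1_demo = {"the", "of", "and", "a", "to", "in", "is", "was", "it", "on"}
k2_demo = {"method", "data", "results", "sample"}
print(vocab_profile("The method and the data in the sample were clear.", k1_demo, k2_demo))
```

The readability scales listed under point 2 follow published formulas; a sketch of those is given later, in the discussion of the readability profile.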
Vocabulary, Readability and Syntactic Depth in MA and PhD Theses
Are there specific lexical tendencies or lexical behavior reflective of, or bearing the stamp of, a specific author? Can the impressions or pre-theoretical hunches and assumptions about a specific way of talking, a unique print or a "linguistic DNA" be supported by systematic observation and empirical investigation? The answer cannot be given lightly if one remembers the seriousness of the ethical, practical and material implications of cases of false authorship and identity theft, cases which range from text integrity to plagiarism. But the complexity and sensitivity of the questions are not reasons for delaying tackling them, nor should the present shortage of research tools and lack of effective software stop a preliminary attempt at evaluating currently available methods and techniques and suggesting future directions.
To narrow down possible differences in the results, one language variety is examined at a time, starting with academic theses, where results from the works of three unrevealed authors are reported, including the percentage of K2 words (the 2nd most frequent thousand words in English), which shows reasonable similarity except in the TWO Ph.D. literature survey, which uses 2.66%, almost 50% less than the TWO Ph.D. methodology (4.90%), where the two cases are less than the 5% reported in ONE and THREE. Using 5.60% K2 words in Ph.D. literature is also striking, since circumstantial knowledge of ONE and TWO puts ONE as low in writing ability. The notable results of Ph.D. K2 words point to a clear case of disciplinary deficiency compared with the two other authors as well as with samples from the MA and Ph.D. of the same author, TWO.
Another indicator, type/token ratio, shows similar distribution across the three authors except for TWO Ph.D. literature which is 0.41 compared with 0.37 in ONE and THREE for the same section of the thesis; otherwise, this parameter yields similar results across author comparison.
In readability indicators, the average number of words in a sentence shows a big difference in TWO MA (27.27 words) and in THREE MA (29.62 words), while ONE MA shows the lowest number of words in a sentence and the highest results on the Flesch ease scale, which is not surprising taking into consideration the low writing skill found in ONE. Average words per sentence, Flesch ease score, Flesch-Kincaid Grade level and Readability consensus confirm the weakness of ONE and the surprisingly high scores in TWO MA, putting it as the most difficult and least easy, with the highest number of words per sentence, followed by THREE MA, Method.
Readability scales use three grammatical indicators: number of words, number of sentences and average number of words in a sentence, which means that they leave important significant syntactic features unrevealed. To carry on with the analysis of sentence length, a number of syntactic parameters have been investigated manually, including: 1. Number of words; 2. Number of sentences in text; 3. Number of clauses; 4. Number of clauses per sentence; 5. Number of main (independent) clauses (α); 6. Number of coordinated clauses (coα); 7. Number of subordinate clauses (β); 8. Number of coordinated to subordinate clauses (coβ); 9. Numbers indicating syntactic depth, calculated from total of β and coβs (cf. Lu). The final parameter of syntactic depth is calculated by the number of successive subordinated clauses, taking a coordinated inside a subordinated clause to be the same level of depth. Depth, together with sentence type, in terms of coordination/subordination, may prove to be indicators that distinguish author and/or text. The mechanism of coordination and recursive subordination are fundamentally different from the size of the Mental Lexicon, type/token ratio and lexical density. Ideally, a thorough description of syntactic complexity would take into consideration the degree of nominal modification and the degree of verb-phrase complexity, to account for depth at the level of the phrase as well as the level of the clause (Author, 1989). Essentially, syntactic complexity at the level of the clause in the present analysis may prove to be informative, and hence may point to distinctive features in an author's written texts; it carries identity features.
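Since the clause analysis described here was conducted manually, the computational step is essentially bookkeeping: given per-sentence counts of main, coordinated and subordinate clauses (coded by hand), the derived parameters can be tallied as in the sketch below. The data structure, the example figures and the field order are illustrative assumptions, not the paper's actual coding sheet.

```python
# Each tuple: (main clauses, clauses coordinated to main, subordinate clauses,
#              clauses coordinated to subordinate, maximum chain of successive
#              subordinations observed in the sentence).
coded_sentences = [
    (1, 0, 2, 0, 2),   # e.g. one alpha clause with two nested beta clauses
    (1, 1, 1, 1, 1),
    (1, 0, 0, 0, 0),
]

n_sentences = len(coded_sentences)
total_clauses = sum(a + co_a + b + co_b for a, co_a, b, co_b, _ in coded_sentences)
subordinate = sum(b + co_b for _, _, b, co_b, _ in coded_sentences)

print("clauses per sentence:", total_clauses / n_sentences)
print("percent subordinate (depth-contributing):", 100 * subordinate / total_clauses)
print("max subordination depth:", max(depth for *_, depth in coded_sentences))
```

The same tallies, run over each author's coded sample, yield the clauses-per-sentence and depth percentages discussed in the following paragraphs.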
The results of examining various aspects of clause types and depth show that "clauses per sentence" is quite promising, since it sets THREE higher than ONE and TWO except in TWO. The higher number of clauses per sentence has a direct bearing on syntactic complexity, which confirms the difference of THREE from ONE and TWO, by using more clauses, especially subordinate clauses, in a sentence. ONE is lowest in depth and THREE is highest. But logical connectors, meta-textual deictics, and organizational elements are shown to be more salient as textuality indicators.
One can conclude that the Vocabulary Profiler, readability scales, syntactic complexity and meta-textual connectors have a measured degree of success in distinguishing one author from another when writing in the same academic variety, e.g. MA and Ph.D. theses. Numerous parameters show different results in the same work when two samples, from the methodology and the literature review, are examined. These results require further testing, and the more sensitive (indicative) parameters need to be closely monitored in varieties other than MA/Ph.D. theses and in works from more authors; this is addressed in the following section, where academic abstracts are examined.
Vocabulary, Readability and Syntactic Complexity in Academic Abstracts
The abstract samples are necessarily small in word count, but they include abstracts of papers by the three anonymous authors and by four linguists, one American and three British. Syntactic analysis of the same sample of abstracts examined above shows that readability, which is measured and influenced by sentence length (i.e. the number of words per sentence), is not related to depth. In other words, what is easy to read or suited for higher grade readers on readability scales is not necessarily characterized by syntactic depth.
In the author Textuality profile, it is noticeable that the same text by the same author exhibits consistent use of textual markers of different types or it consistently lacks textual markers, which means that when textual markers are preferred by an author, they appear in different types even in a small sample, Sinclair Abstracts (See Sinclair Papers, THREE Papers and THREE Ph.D., and TWO MA). Hence, textual markers like those currently used seem to be promising in author profiling.
Author Profile: Vocabulary
The vocabulary profile in the sample of academic research articles, academic books, and one poem has the largest number of words, in addition to representing a more advanced stage of scholarship than the sample from MAs and Ph.Ds. Therefore, it presents further evidence from published research claimed by the person whose name appears on the research article in journals, conferences and/or the Internet. One notable result is the high percentage of K2 words in one paper by ONE, all three papers by TWO, one paper by THREE and one paper by Sinclair (above 5.8%, and 6.30% in TWO).
The poem is markedly different, with 9.72% K2 words. One strange result is found in THREE, where in one research paper out of three the percentage of content words is 73.83% compared with 26.79% for function words, with the next highest percentages in ONE (40.54% compared with 36.31% for function words). This may be explained by the presence of foreign words in the translational data used in that paper by THREE (the current author). But the overall picture remains rather mixed, with no clear trend in the distribution of content versus function words.
Type/token ratio does not show any consistency or special trend; the lowest ratios appear spread across the board: ONE (0.18), TWO (0.17), THREE (0.17 and 0.18), Chomsky (0.17). The highest type/token ratios, however, appear in the British/American authors, up to 0.43 in Chomsky Book. But the highest of all type/token ratios is in the poem, recording 0.45, a result which might be influenced by the small number of words in the poem (663 words) compared with 1.450 to 2.000 words in the works in the research papers sample.
The type/token ratio is inversely related to the number of tokens per type, which is lowest in Auden's poem (1.84 tokens per type). Lexical density seems not to be susceptible to the size of the sample or to the author, and hence it ranges in a narrow band between 0.51 in Auden's poem and 0.64 in ONE Papers, which means that lexical density is not significant for determining author identity or profile.
Author Profile: Readability
Moving to readability parameters and scales, one finds that "average words per sentence" does not correspond to the Flesch ease score, as ONE Paper1 and Paper2 show, since sentence lengths of 16.57 and 14.00 words per sentence correspond to almost the same ease scores of 45.5 and 45.8 respectively. The most striking off-the-point ease score is that of Auden's poem, where the average number of words per sentence is 27.35 and the ease score is 62.1, which is, supposedly, the easiest of all authors and texts. This result is both counter-intuitive and incorrect, since even experts on literature face difficulties in interpreting the poem, as seen in the Author's work on contextualization (forthcoming). Another case which deserves some comment in relation to the average number of words per sentence and the ease score is Swales Paper1, in which the average number of words per sentence is 25.81 and the Flesch ease score is as low as 27.7, which is related to sentence length, but contradicts Auden's poem, where the number of words per sentence is very near to that of Swales (27.35) but the ease score is very high (62.1), as observed above. The Flesch-Kincaid Grade level shows a similar trend, assigning Swales Paper1 to grade 17 and Auden's poem to grade 11.4, a trend which is carried over to the Readability score of grade 17 for Swales Paper1 and grade 11 for Auden's poem, which is put at the same level seen in ONE Paper1 (grade 11). The problem is surely not with the poem being so easy, but with the readability indicator, which seems to be made for text types that do not belong to literary genres.
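The two scales discussed here follow published formulas: Flesch Reading Ease = 206.835 - 1.015 x (words/sentences) - 84.6 x (syllables/words), and Flesch-Kincaid Grade = 0.39 x (words/sentences) + 11.8 x (syllables/words) - 15.59. The sketch below implements them with a deliberately crude vowel-group syllable counter (an assumption of this sketch), so its scores will only approximate those of the online tools used in the study, but it makes the poem anomaly easy to see: the formulas reward short words regardless of how difficult the text is to interpret.

```python
import re

def count_syllables(word):
    """Very rough syllable estimate: count vowel groups, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(len(sentences), 1)   # words per sentence
    spw = syllables / max(len(words), 1)        # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

ease, grade = flesch_scores("The readability of a text depends on sentence "
                            "length and on the number of syllables per word.")
print(round(ease, 1), round(grade, 1))
```

Because both formulas see only word and syllable counts, a short-worded but semantically dense poem can score as "easier" than an academic abstract, which is exactly the mismatch observed above.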
Author Profile: Syntactic Depth
Syntactic features show discrepancies among the works of the same author in terms of clauses per sentence; Sinclair's three papers have 6.15, 4.58 and 9.09 clauses per sentence, whereas Auden's poem has a very moderate and comparable number of clauses per sentence (4.61). The number of clauses coordinated to main clauses (coα) is markedly high in Auden's poem (46.66%) compared with 08.59% in Sinclair Paper2 and less than this in the rest of the sample. Naturally, the high percentage of coordinated clauses leads to a low score in syntactic depth (31.66%) in Auden's poem compared with 64.16% and more in the rest of the sample, which shows mixed scores and a relatively narrow range of difference (64.16% lowest and 82.71% highest). In brief, there is no clear-cut trend or differences in syntactic depth among the various works by the various authors, with the significant exception of Auden's poem.
Textual indicators reflect a similar message of mixed usage with no clear predictable trend attached to an author or a text. This leaves us with more questions about the various parameters of vocabulary, readability, syntax and textuality used in the present study (Author forthcoming, on intertextuality). One such question may be posed about the performance of the parameters in relation to the academic text-types of academic theses, abstracts of research articles and research articles, but a thorough author vocabulary profile needs to map out the full range of the mental lexicon (ML) manipulated in the written works of an author, which in the present case should ideally include in the sample all texts by one author. The ultimate purpose is to specify the individual ML and the communal ML, which means in the present context the ML shared by all authors in the sample (Author forthcoming). The aspects of linguistic author identity covered in the current paper leave much to be done at various levels of the language of one individual.
6.6 Vocabulary, Readability, Grammatical Depth and Textual Markers
6.6.1 Text-type Vocabulary Profile
Examining the percentage of K2 words in the three academic text-types of theses, abstracts and articles/books, one finds slight differences among the three text-types, with more differences within the works of the same author. To obtain a reasonably better point of comparison for the K2 results, one needs to shift attention to a completely different variety of English, e.g. poetry. In a short poem by Auden, one finds the percentage of K2 words to be about double the average found in the three academic text-types: 9.72% for Auden's poem compared with 2.80% in Swales' papers and a noticeably high percentage of 6.30% in TWO Paper3.
Type/token ratio shows minor differences among the three academic text-types in the works by ONE, TWO and THREE. With the exception of Halliday's abstract (0.65) and Halliday's paper (0.33), no big difference is observed, even in Auden's poem, whose ratio (0.45) is slightly higher than the rest (but the number of words in the poem is only 463, which may influence the type/token ratio). Even lexical density does not distinguish the poem or any of the three academic varieties: 0.51 for the poem and a range between 0.64 and 0.49 for the three academic text-types. The vocabulary profile has not shown the three academic text-types to be characterized differently; Auden's poem exemplifies a non-academic variety.
6.6.2 Text-type Readability Profile
The Flesch ease scale reveals that ONE Paper Abstracts are markedly less easy (i.e. more difficult) than the texts and abstracts of the other text-types and than other texts, including abstracts, by the same author, a fact which calls for more investigation of this particular text. In fact, most abstracts are less easy (Flesch ease score) than the corresponding papers, which seems reasonable, since the abstract is more condensed and focused than the body of the text.
Comparing the results from the Flesch-Kincaid Grade level does not assign markedly higher grades to ONE Paper abstracts compared with the markedly high grades assigned to other authors. But again, among the works of the same author, grades vary remarkably in the same text-type, e.g. TWO Abstracts are assigned to grades 11.4, 15.9 and 14.9, and TWO Papers are assigned to grades 13.7, 12.1 and 10.6. One noticeable result is Chomsky Abstract, assigned to grade 19.4. The Readability Consensus shows remarkable differences among the authors, but relatively higher grades are assigned to abstracts compared with papers, with the exception of Swales and Sinclair, whose papers are assigned to grades 16/17 and 16/14 and whose abstracts are assigned to grade 15 in both cases. The overall picture is mixed and far from showing a tendency of conformity.
Text-type: Grammar and Syntactic Depth Profile
Clauses per sentence in ONE Theses and Abstracts are similar, but in ONE Papers the figure is slightly higher, whereas TWO and THREE Theses have more clauses per sentence than abstracts and papers. The number of clauses per sentence in the three academic text-types is mixed and does not show a clear tendency.
The number of subordinate clauses shows no significant differences among the three text-types except in Halliday Abstract (only 125 words), which has 83.84% subordinated clauses; the next highest is well below that, at 62.96% in THREE MA Abstract. The more indicative measurement of syntactic depth does not reveal marked differences among the three academic text-types, again with the exception of Halliday Abstract (83.87%) compared with 71.51% for Halliday Paper. Sinclair Paper3 has 82.71% in syntactic depth and only 67.67% for his abstracts, which is the opposite trend to Halliday's short sample. As for textual markers, simple observation of the results shows that the abstracts use fewer textual connectors, whereas all three text-types scarcely use meta-textual organizational markers. In conclusion, it can be said that the three profiles of vocabulary, readability and syntax do not characterize any of the three academic text-types, stamping them as consistently and significantly different.
To specify the size of the ML, communal and individual, case-specific software needs to be developed; that would lead to the creation of a comprehensive vocabulary profile which maps out the entire known/available corpus of a given author.
Conclusion
The search for author-specific features has, on the whole, been approached primarily from the vantage point of surveying the total linguistic features found in a text, which calls for more scrutiny. The first observation comes from the fact that in variety studies it is acknowledged that each variety, regardless of the term used to designate it, is characterized by variety-specific features. The second basic notion relates to the fact that in variation studies, social and geographical factors, whether gender, dialect or jargon, show variation-specific features. If these two types of features found in a text are put together, they may amount to a good percentage of the total linguistic features in a text, depending naturally on the degree of correlation and the development of the text-type in the language under analysis. This means that the FPD features will necessarily belong to the non-variety and non-variation features, a group which can be labeled non-communal factors, among which the author-specific, or FPD, features constitute a prominent component. Author identification should establish these features in the alleging author, and then should move to investigate them in the suspected or alleged text. In the absence of the techniques and data required for isolating the author-specific features in a text, circumstantial evidence can be considered. The best candidate for describing an author identity profile is to establish a method for constructing and reconstructing aspects of the author profile which will help map the alleged text against the detailed profile of the alleging author. The following components can be suggested. In a comprehensive treatment of the author identity profile (AIP), Author states some criteria for applying the above configurations: the reconstruction of the mental-epistemological identity should be congruent with the social/physical parameters of identity. The relationship between the two is judged against the criteria of: 1. consistency (to eliminate mismatch), 2. plausibility, 3. ethical code, 4. claimed pronouncements (texts). Each of these criteria may prove to be necessary in certain cases, and can be more or less relevant depending on the case, i.e. the author and text (Author, forthcoming). For a possible application, a simple profile is attempted for the British linguist M. A. K. Halliday and the American linguist Noam Chomsky by Author (forthcoming).
However, if we leave elaborating on the AIP for future work, the results reviewed in detail above enable us to make a number of remarks, by way of conclusion and observation, concerning what is needed to reach a more decisive position.
1. The results, except for Auden's poem, do not give evidence of coherent, consistent features which mark an author with a distinctive systematic set of features that justifies a "whole" profile typical of a specific author.
2. The parameters used in the study of vocabulary, readability, structure and textuality do not allow any of the three academic varieties of theses, abstracts, papers and books to be assigned variety-specific features.
3. Considering (1) and (2) above, and taking into account the differences at various levels in the three academic text-types, it is logical to assume that a more viable approach to author identification and attribution must take the text as the departure point; the text, rather than the author or the text-type, can be the focus as a linguistic unit.
4. Auden's poem included with academic articles and books, manifests meaningful distinctiveness at certain, but not all, levels of analysis.
5. In the case of anonymous texts, the open question "Who is the author of X?" is far more difficult to address than the narrower question "Can X have been written by Y?", where X is a specific text and Y is a possible (claiming or ghost) author. In other words, with reference to a particular author, it is easier to determine whether "X is or is not written by Y" than to determine whether "Z is the author of X" (where X is a given text, Y a claiming author and Z any possible author). Unlike most cases in forensic linguistics where the speaker/writer is unknown, certain cases of attributing an academic text (not author attribution) can be narrowed down, and subsequently resolved, by applying the narrower question to a particular author. This means that in certain cases of academic texts the question about authorship can be formulated as follows: "Can this individual (the claiming author) have written the text he/she attributes to him/herself?" This question can best be answered by comparing the claimed text(s) with a sample of actual writing by the claiming author, comparing the various claimed texts in the author profile for consistency and for present or absent fingerprints, and comparing the claimed texts with the standard features of a corpus representing the text-type to which the claimed text(s) belong.
In the absence of circumstantial evidence, and in the absence of sample texts written by the "suspected" author, it is difficult to be certain about author identity when the available texts are all in doubt.
It is more difficult to determine authorship when the author is a nonnative speaker and there is no text known to have actually been produced by him/her. For the purpose of author identification (AI), text-type, text, and author should be viewed as living, dynamic beings in the making rather than fixed, definitive entities. AI should incorporate this fact about text and author, and not deny or fight it by taking these two notions to be static.
|
v3-fos-license
|
2021-09-27T13:33:50.926Z
|
2021-08-06T00:00:00.000
|
237638689
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://eurjmedres.biomedcentral.com/track/pdf/10.1186/s40001-021-00590-y",
"pdf_hash": "dbd25abd87fd1e90d0cfe5eedbc19be1171aca3a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46058",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "dbd25abd87fd1e90d0cfe5eedbc19be1171aca3a",
"year": 2021
}
|
pes2o/s2orc
|
Role of fatty liver in coronavirus disease 2019 patients’ disease severity and hospitalization length: a case–control study
Background and purpose Fatty liver is one of the most common pre-existing illnesses; it can cause liver injury, leading to further complications in coronavirus disease 2019 patients. Our goal is to determine if pre-existing fatty liver is more prevalent in hospitalized COVID-19 patients compared to patients admitted before the SARS-CoV-2 pandemic and determine the disease severity among fatty liver patients. Experimental approach This retrospective study involves a case and a control group consisting of 1162 patients; the case group contains hospitalized COVID-19 patients with positive PCR tests and available chest CT-scan; the control group contains patients with available chest CT-scan previous to the COVID-19 pandemic. Patients’ data such as liver Hounsfield unit, hospitalization length, number of affected lobes, and total lungs involvement score were extracted and compared between the patients. Results The findings indicate that 37.9% of hospitalized COVID-19 patients have a pre-existing fatty liver, which is significantly higher (P < 0.001) than the prevalence of pre-existing fatty liver in control group patients (9.02%). In comparison to hospitalized non-fatty liver COVID-19 patients, data from hospitalized COVID-19 patients with fatty liver indicate a longer hospitalization length (6.81 ± 4.76 P = 0.02), a higher total lungs involvement score (8.73 ± 5.28 P < 0.001), and an increased number of affected lobes (4.42 ± 1.2 P < 0.001). Conclusion The statistical analysis shows fatty liver is significantly more prevalent among COVID-19 against non-COVID-19 patients, and they develop more severe disease and tend to be hospitalized for more extended periods.
Introduction
More than a year has passed since December 29, 2019, when the first confirmed SARS-CoV-2 case emerged from the city of Wuhan, China [1], and the disease still has many unknown characteristics. On January 30, 2020, the head of the World Health Organization (WHO) declared the outbreak of COVID-19 to be a public health emergency of international concern and issued a set of temporary recommendations [2]; at the point of writing this study, there have been more than 210 million confirmed cases and more than 4.5 million deaths globally. With almost 5 million confirmed cases and more than 100,000 deaths [3], Iran seems to be a good candidate for analyzing the virus's characteristics. Many researchers have been testing different theories through this rough time to identify possible risk factors that affect the disease's severity and mortality rate, including analyzing pre-existing illnesses. These studies cover systemic, respiratory, gastrointestinal, and cardiovascular symptoms [4].
According to some studies, liver injury has a notable prevalence in coronavirus disease 2019 (COVID-19) patients and can be mild (45%), moderate (21%), or severe (6.4%) [5]. Non-alcoholic fatty liver disease (NAFLD) is currently the most common form of chronic liver disease affecting adults and children [6]. These findings become more crucial when we understand that, according to one study in China, up to 50% of people with SARS-CoV-2 had liver dysfunction at some point during their illness [7]. The most significant modifiable risk factors for a poor prognosis from COVID-19 are obesity and metabolic disease [8,9]. Conditions such as NAFLD cause the activation of inflammatory pathways [10]. This suggests that NAFLD can play a key role as a risk factor in the severity and prognosis of coronavirus disease 2019 patients. According to a meta-analysis conducted in 2016, the prevalence of NAFLD in Iran is 33.95% [11], and, factoring in lifestyle changes, the prevalence can be estimated to have increased slightly over the past years. This study gives a more accurate understanding of disease prognosis in patients with one of the most common pre-existing illnesses.
It should be noted that it is not well understood whether COVID-19 makes pre-existing liver disease worse. However, during the COVID-19 pandemic, many infected patients have been treated with antipyretic agents, mainly containing acetaminophen, a drug recognized to cause significant liver damage or induce liver failure [7]. SARS-CoV-2 binds to target cells through angiotensin-converting enzyme II (ACE-2) and uses ACE-2 as the cellular entry receptor [12]. The ACE2 cellular receptor is highly expressed in human lung tissue, the gastrointestinal tract, and the liver [13]. Liver cells can therefore act as a susceptible target for coronavirus disease 2019; however, this mechanism has not been fully confirmed or validated yet [14].
Attention was brought to this topic because of a high number of fatty liver patients while reporting and evaluating COVID-19 patients' lungs involvement scores. Some papers have studied the severity and prognosis of coronavirus disease 2019 using liver enzymes levels such as alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and viral shedding time [15]. The current study evaluated the severity of COVID-19 patients using different factors such as total lungs involvement score, number of affected lobes, and hospitalization length. The hypothesis was tested to determine if having pre-existing fatty liver can contribute to higher susceptibility, severity, or mortality rate of coronavirus disease 2019.
Ethics
This retrospective study tries to determine if there is a significant correlation between having fatty liver and being more susceptible to COVID-19 and developing more severe disease. The Ethics Committee of the Birjand University of Medical Sciences approved the study (IR.BUMS.REC.1399.187).
Study design
In this study, 1162 patients were included in the case (n = 575) and control (n = 587) groups. Case group patients were selected from Vali-Asr hospital in Birjand, South Khorasan, the main hospital for diagnosing and treating COVID-19 patients. For the control group, data were also selected from the same hospital. A pre-existing fatty liver can be determined by measuring the patient's liver Hounsfield unit (HU) from their imaging data, which reports radiodensity on a quantitative scale. Hounsfield units are mainly used to report the fat content of the liver and to diagnose pre-existing fatty liver. According to references and protocols, patients with an HU of 40 or below are considered fatty liver patients [16]. In this study, patients with a borderline score of 40 were evaluated once more to reduce bias and produce more accurate results.
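As a minimal sketch of the classification rule just described (the column names and helper below are illustrative, not from the study's actual pipeline), the 40 HU cutoff can be applied as follows:

```python
# Illustrative only: label patients as fatty liver when liver attenuation is 40 HU or below,
# mirroring the threshold cited in the text; column names are hypothetical.
import pandas as pd

FATTY_LIVER_HU_CUTOFF = 40  # Hounsfield units

def label_fatty_liver(df: pd.DataFrame, hu_column: str = "liver_hu") -> pd.DataFrame:
    out = df.copy()
    out["fatty_liver"] = out[hu_column] <= FATTY_LIVER_HU_CUTOFF
    return out

patients = pd.DataFrame({"patient_id": [1, 2, 3], "liver_hu": [55, 40, 32]})
# Borderline cases (exactly 40 HU) were re-evaluated in the study; here they are simply labelled.
print(label_fatty_liver(patients))
```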
Chest CT-scan images were taken by Siemens SOMATOM Emotion 16 Slice CT-scan machine.
The data were extracted using the hospital picture archiving and communication system (PACS).
The severity of the disease was evaluated using three factors: 1. Days of hospitalization, 2. The number of affected lobes ranges from 0 to 5, 3. Total lungs involvement (chest severity) ranges from 0 to 20.
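A small sketch of this bookkeeping is given below; the per-lobe scoring range of 0-4 is an assumption consistent with five lobes and a 0-20 total (the actual per-lobe scale comes from the Table 1 guideline discussed next), and the class and field names are illustrative:

```python
# Illustrative severity bookkeeping: number of affected lobes (0-5) and total lungs
# involvement (0-20), assuming each of the five lobes is scored 0-4 per the guideline.
from dataclasses import dataclass, astuple

@dataclass
class LobeScores:
    right_upper: int
    right_middle: int
    right_lower: int
    left_upper: int
    left_lower: int

    def affected_lobes(self) -> int:
        return sum(1 for s in astuple(self) if s > 0)   # ranges 0-5

    def total_involvement(self) -> int:
        return sum(astuple(self))                       # ranges 0-20

scores = LobeScores(right_upper=3, right_middle=2, right_lower=4, left_upper=1, left_lower=2)
print(scores.affected_lobes(), scores.total_involvement())   # -> 5 12
```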
Using the Table 1 guideline, involvement scores were calculated separately for the upper, middle, and lower lobes and individually for the right and left lung. The sum of each lobe's score gives the total lungs involvement. The guideline is on par with the lungs involvement measurement protocols set by the Iran health department and is the primary method used for evaluating lung severity in Iran.

The case group

The case group consisted of hospitalized COVID-19 patients, all of whom had a positive polymerase chain reaction (PCR) test and a chest CT-scan. Each hospitalized patient underwent a CT-scan on the first day of admission. Patients were chosen from March 2020 through November 2020. The evaluated and collected data include liver HU, sex, age, admission date, total lungs involvement score, the number of affected lobes, and the hospitalization length. The data were analyzed to measure the prevalence of fatty liver in hospitalized COVID-19 patients. At first, the prevalence of fatty liver was analyzed based on sex, age, and admission date. The case group was then divided into two groups based on having pre-existing fatty liver. Total lungs involvement score, the number of affected lobes, the hospitalization length, and the mortality rate were compared between these two groups based on different factors, such as sex, age, and month of the year.
The control group
The control group consists of all patients who had a chest CT-scan in the year prior to the COVID-19 outbreak, from March 2019 through the end of November 2019. The chest CT-scans could have been performed for various reasons, but none related to COVID-19, and data on whether these patients were hospitalized are not available. The collected data include liver HU, sex, age, and admission date. Fatty liver prevalence was then measured and analyzed based on sex, age, and date. The control group was then divided into two groups based on having pre-existing fatty liver, and the sex distribution was compared.
For the primary analysis, the prevalence of fatty liver was compared between hospitalized COVID-19 patients (case group) and non-COVID-19 patients (control group).
Inclusion criteria
All patients admitted from March 2020 through November 2020 were included in the case group if they met the criteria that included COVID-19 confirmation using PCR test, admission and hospitalization, available chest spiral CT-scan, and access to their imaging data for liver HU measurement.
All patients from March 2019 through the end of November 2019 were included in the control group if they met the inclusion criteria, which included an available chest spiral CT-scan, admission before December 2019, and available access to their imaging data for measuring liver HU.
The exclusion criteria
Patients aged under 18 years were excluded from both the case and the control groups. If a hospitalized COVID-19 patient had two or more chest CT-scans, only the first imaging data were used to evaluate fatty liver scores.
Statistical analysis
In order to control and balance the heterogeneity between the two groups, the exclusion criteria were kept to a minimum so that control and case group patients would have the same heterogeneity. For statistical analysis, patients were grouped into six age groups: under 30, 30 to 40, 40 to 50, 50 to 60, 60 to 70, and over 70.
Categorical variables were compared using the Chi-squared test, and between-group comparisons were assessed using unpaired t-tests. A scatterplot matrix was used to visualize and give a descriptive analysis of bivariate relationships between combinations of variables. Quantitative data were presented as the mean, standard deviation (SD), or median. A P-value of 0.05 or below was considered statistically significant. Statistical Package for the Social Sciences (SPSS) version 22 software was employed for data analysis. Statistical analysis was performed only on patients with a complete set of data; if a patient's data were incomplete, they were excluded from the analysis.
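The comparisons described above can be sketched as follows (the study used SPSS; the Python translation below is illustrative, the fatty/non-fatty counts are back-calculated from the reported percentages, and the continuous samples are simulated):

```python
# Illustrative re-creation of the two test types described in the text.
import numpy as np
from scipy import stats

# Chi-squared test on a 2x2 table: fatty liver vs. group (COVID-19 cases / pre-pandemic controls).
# Counts are back-calculated from the reported prevalences: 37.9% of 575 and 9.02% of 587.
table = np.array([[218, 575 - 218],
                  [53, 587 - 53]])
chi2, p_categorical, dof, _ = stats.chi2_contingency(table)

# Unpaired t-test on a continuous variable, e.g. hospitalization length, fatty vs. non-fatty group.
rng = np.random.default_rng(0)
fatty = rng.normal(6.81, 4.76, 218)        # mean/SD reported for the fatty-liver group
non_fatty = rng.normal(5.9, 4.3, 357)      # purely illustrative values for the other group
t_stat, p_continuous = stats.ttest_ind(fatty, non_fatty)

print(f"chi-squared p = {p_categorical:.3g}, t-test p = {p_continuous:.3g}")
```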
Fatty liver distribution
The study consists of 1162 patients, and it includes a case group of 575 hospitalized patients with confirmed COVID-19 infection and a control group of 587 patients with chest CT scans in 2019. No patient had missing data. The prevalence of pre-existing fatty liver among hospitalized COVID-19 patients (case group) was significantly higher than the control group patients (37.9% vs. 9.02% P < 0.001).
When the case group is divided based on having pre-existing fatty liver, the percentage of male patients is significantly higher in the group with pre-existing fatty liver than in the group without fatty liver (60.8% vs. 50.7%, P = 0.02). However, there is no significant difference in the distribution of male patients among non-COVID-19 patients when divided by pre-existing fatty liver (42.3% vs. 44.8%, P = 0.77). The distribution of fatty liver in the case group was significantly more concentrated in the 51-60 years age group (P = 0.01).
COVID-19 severity and mortality
The severity of the disease was compared among COVID-19 patients divided into two groups based on having pre-existing fatty liver. The virus seems to affect more lobes (4.42 ± 1.2, P < 0.001), leading to a higher total lungs involvement score (8.73 ± 5.28, P < 0.001), among COVID-19 patients who had pre-existing fatty liver. COVID-19 patients with fatty liver are also hospitalized for more extended periods (6.81 ± 4.76, P = 0.02). Multivariable analysis of the three previous factors showed a total P-value of < 0.001. Interestingly, while the results suggest that COVID-19 patients with fatty liver develop a more severe form of COVID-19, the findings do not show a significantly higher risk of mortality for patients with pre-existing fatty liver (11.5% vs. 10.1%, P = 0.58). However, deceased patients' lungs involvement scores (11.9 ± 6.25, P < 0.001) and number of affected lobes (4.52 ± 1.16, P = 0.005) are significantly higher than those of surviving patients. Elderly patients' mortality was noticeably higher among the case group patients (74.56% ± 11.99, P < 0.001).
According to our results, there is no significant difference between male and female patients' disease severity and mortality. Male patients' total lungs involvement score (7.4 ± 5.05, P = 0.17), number of affected lobes (4.01 ± 1.53, P = 0.13), and hospitalization length (6.39 ± 4.9 vs. 6 ± 4, P = 0.32) are not significantly higher than those of female patients. Men are also not at a significantly higher risk of COVID-19 mortality (60.7%, P = 0.31).
Our study shows that the severity of the disease increased in the autumn months in Iran (September-November), with a higher total lungs involvement score (10.36 ± 4.94, P < 0.001) and a greater number of affected lobes (4.79 ± 0.70, P < 0.001). In contrast, hospitalization lengths are significantly longer in the first month of spring in Iran, which runs from March through April (8.9 ± 6.61, P = 0.005).
Factors correlation
Analysis of bivariate correlations between combinations of variables (Fig. 1), including lungs involvement, number of affected lobes, hospitalization length, and age, shows the following correlations with age: (1) total lungs involvement score (r = 0.24, P < 0.001); (2) number of affected lobes (r = 0.27, P < 0.001); and (3) hospitalization length (r = 0.24, P < 0.01). The analysis shows that elderly patients are more susceptible to COVID-19 infection and develop more severe disease, with a higher total lungs involvement score and a more extended hospitalization period.
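A brief sketch of this kind of bivariate analysis is shown below; the data are simulated and the column names are illustrative, so the printed coefficients will not reproduce the reported values:

```python
# Illustrative Pearson correlations of age with the three severity measures,
# using simulated data in place of the study's records.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 575
age = rng.normal(55, 15, n)
df = pd.DataFrame({
    "age": age,
    "total_involvement": 0.05 * age + rng.normal(0, 5, n),
    "affected_lobes": 0.02 * age + rng.normal(0, 1.2, n),
    "hospital_days": 0.05 * age + rng.normal(0, 4.5, n),
})
print(df.corr(method="pearson")["age"].round(2))   # correlation of each factor with age
```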
Sex distribution
The percentage of male patients in the case group is significantly higher than in the control group (54.5% vs. 44.5%, P < 0.001). In the control group, when divided based on having pre-existing fatty liver, males have a higher average age than females among patients without fatty liver (59.15 ± 18.91, P = 0.03) (Table 2). In the case group, when divided based on having pre-existing fatty liver, females have a higher average age than males among COVID-19 patients with fatty liver (59.29 ± 13.45, P = 0.03), while among COVID-19 patients without fatty liver, the average female age is not significantly higher than that of males. The number of affected lobes (4.61 ± 1.09, P = 0.05) and the total lungs involvement score (9.68 ± 5.70, P = 0.04) are noticeably higher among female COVID-19 patients with fatty liver. There is no significant difference between the sexes' mortality rates among COVID-19 patients with fatty liver. In contrast, the male mortality rate is higher in COVID-19 patients without fatty liver (69.4%, P = 0.02) (Table 3).
Discussion
Our results showed that fatty liver is significantly more prevalent in COVID-19 patients, which is in line with other studies stating that fatty liver occurs at a higher percentage among COVID-19 patients in comparison with non-COVID-19 patients [10,15]. The fatty liver prevalence among hospitalized COVID-19 patients is higher than the prevalence of NAFLD calculated for Iran in 2016 (37.84% vs. 33.95%). Some studies' findings state that increased liver fibrosis in NAFLD might affect the COVID-19 outcome [17]. Our results are also supported by Bramante et al.'s study, which indicates that fatty liver patients have a much higher risk of COVID-19 hospitalization. That study suggested that metabolic syndrome and the available NAFLD/NASH treatments significantly mitigated the risks of COVID-19, and that those with home metformin or glucagon-like peptide-1 receptor agonist (GLP-1 RA) use had non-significantly reduced odds of hospitalization [9]. Our study demonstrates that COVID-19 patients who suffer from fatty liver have to be hospitalized for more extended periods, which is confirmed by the study of Dong Ji and colleagues [15]. Data analysis also shows that patients with fatty liver experience more severe symptoms during the disease. The number of involved lobes and the total lungs involvement scores are higher in patients with pre-existing fatty liver, which is consistent with the finding of extended hospitalization periods. A higher risk of disease progression is also suggested by another study that evaluated the disease severity by different factors [15]. Another study also confirms our finding that fatty liver patients experience a more severe form of the disease [18].
In addition, the results suggest that social awareness should be promoted regarding the negative impact of metabolic diseases such as pre-existing fatty livers on patients with COVID-19 and that health policymakers should promote the use of preventive measures to control obesity and fatty liver.
With increased disease severity, the mortality rate of the coronavirus disease 2019 was expected to be noticeably higher among fatty liver patients. However, from the data analysis results of the current study, it could not be concluded that fatty liver is linked to a higher COVID-19 mortality rate. This is in contrast with another study that concludes liver injury is strongly associated with the COVID-19 mortality risk [9,18].
According to our findings, the severity of COVID-19 is increased in Iran's autumn months, which is from September through October. It can be confirmed by other studies that suggest the emergence of virus mutations could have made the COVID-19 virus more transmissible and infectious [19]. COVID-19 hospitalization length was not linked to autumn; however, it was longer at the beginning of the COVID-19 pandemic. It can be speculated that patients used to be hospitalized for more extended periods because of not fully known treatment and hospitalization protocols. We suggest COVID-19 had a higher disease severity in the autumn; however, it should also be noted that the number of patients drastically increased during the autumn. Therefore, hospitals could only admit patients with more severe symptoms. A newly conducted study also suggests that the increase in the number of COVID-19 patients and severity could be related to the decrease in the individuals' vitamin D levels in the autumn and winter seasons [20]. Previous studies give a clear understanding that there is an essential and direct role for vitamin D in modulating liver inflammation and fibrogenesis [21,22]. Other studies show a clear correlation between COVID-19 and vitamin D deficiency [23,24], which indicates that treatment of fatty liver patients' vitamin D deficiency can reduce the chance of liver injury [24] and ultimately decrease coronavirus disease 2019 severity and mortality [9,25].
According to our findings and similar studies, the percentage of male patients is significantly higher than women in COVID-19 [26]. However, our findings cannot validate the theory that male patients are also more prone to more severe forms of the disease, which is contrary to the study of Kuno et al. [27]. Scatterplot matrix data analysis showed that older adults are more susceptible to developing a more severe form of the disease and have to be hospitalized for more extended periods. Their total score of lungs involvement is significantly higher, which is validated by previous studies. It can also be attributed to pre-existing illnesses [28]. The elderly male mortality rate is higher than expected, and it is validated by other studies [29].
Limitations
Deceased patients' data could only be collected from June through August and October through November of 2020. Hence, the number of deceased patients could not be compared between different months of the year. The deceased patients' data were only used to compare mortality between the COVID-19 patients grouped based on pre-existing fatty liver and to determine whether one gender has a higher risk of mortality. There was no access to each patient's past medical history. Thus, patients could not be accurately categorized as non-alcoholic fatty liver disease patients. Therefore, the term "fatty liver patients" was used in this study. The lack of diagnosis data for control group patients prevented us from removing patients with diseases that could cause fatty liver. However, based on the date of the CT-scan, it was assumed that the chest CT-scan was not related to COVID-19.
Conclusion
The study concludes that fatty liver can play a crucial role in susceptibility to SARS-CoV-2 infection and in the severity of COVID-19. The prevalence of fatty liver in COVID-19 patients is significantly higher than in non-COVID-19 patients. COVID-19 patients with pre-existing fatty liver are hospitalized for more extended periods and have a higher total lungs involvement score. The results also further confirm findings from previous studies that male and elderly patients are more prone to coronavirus disease 2019 infection. In contrast to other studies, our findings show that male and elderly patients are not at a higher risk of disease severity and mortality.
Given this significantly higher risk, treatment of obesity and pre-existing metabolic disease should be a priority. Prospective studies are necessary to determine the exact cause-and-effect relationship between SARS-CoV-2 and fatty liver.
|
v3-fos-license
|
2019-03-07T14:26:51.000Z
|
2018-08-07T00:00:00.000
|
91179012
|
{
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Medicine",
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1099-4300/21/3/260/pdf?version=1551965945",
"pdf_hash": "a8e31d882ba82e1b55e4d1b242d7b5ee6cf2cec3",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46060",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"sha1": "a8e31d882ba82e1b55e4d1b242d7b5ee6cf2cec3",
"year": 2019
}
|
pes2o/s2orc
|
Coherence Depletion in Quantum Algorithms
Besides the superior efficiency compared to their classical counterparts, quantum algorithms known so far are basically task-dependent, and scarcely any common features are shared between them. In this work, however, we show that the depletion of quantum coherence turns out to be a common phenomenon in these algorithms. For all the quantum algorithms that we investigated, including Grover’s algorithm, Deutsch–Jozsa algorithm, and Shor’s algorithm, quantum coherence of the system states reduces to the minimum along with the successful execution of the respective processes. Notably, a similar conclusion cannot be drawn using other quantitative measures such as quantum entanglement. Thus, we expect that coherence depletion as a common feature can be useful for devising new quantum algorithms in the future.
I. INTRODUCTION
The emergence of quantum algorithms that are able to solve problems exponentially faster than any classical algorithms is one of the leading incentives for the rapid development of quantum information science over the last three decades. Especially exciting is the new concept of computing that makes use of quantum fundamental principles, coined quantum computing [1]. In 1992, the Deutsch-Jozsa (DJ) algorithm [2] was first proposed, which can confirm a given function's type with only one single evaluation, compared to at worst 2^(n-1) + 1 (n being the number of two-valued digits) queries by any possible classical algorithm. Moreover, the DJ algorithm is deterministic in the sense that it can always produce the correct answer, which greatly improves the original solution by Deutsch [3] that can only succeed with probability one half. Soon, the basic problem of factoring a large integer was offered a new quantum solution, that is, Shor's algorithm [4]. The exponentially faster speedup over any classical approach could be used to break public-key cryptography schemes such as the widely used Rivest-Shamir-Adleman (RSA) scheme once a quantum computer were built. Then there is Grover's search algorithm [5], which is used to locate a target item in an unsorted database. For this problem, Grover's algorithm runs only quadratically faster compared to any classical algorithm, but it has been proven to be asymptotically optimal [6].
Coincidentally, all the quantum algorithms mentioned above were proposed in the 1990s. Since the dawn of this century, however, no new quantum algorithms have been designed that are comparable in impact with the existing ones. One of the possible reasons lies in the fact that all the quantum algorithms known so far are basically task-dependent; in other words, they share very few common features, if any. Along this line, the series of works by Latorre and coauthors [7][8][9] reported that all known efficient quantum algorithms obey a majorization principle. To be more precise, the time arrow in these algorithms is a majorization arrow, which is conjectured to be a sort of driving force for the respective processes. Besides this one, no other general principles have been discovered since.
In this paper, however, we present a new principle underlying the efficient quantum algorithms in terms of quantum coherence (see Sec. II for a brief review). Specifically, we find that the coherence of the system states reduces to the minimum along with the successful execution of the respective algorithms. In a rough sense, this is a "coherence arrow" in quantum-algorithm design, but with much flexibility. This principle is similar to the majorization principle, a possible reason being that both the concepts of coherence and majorization make use of only partial information of the density matrix and both are basis dependent [10,11]. However, unlike the descriptive nature of majorization, quantum coherence can be defined quantitatively using various coherence measures. In this respect, the general principle that we find with coherence is a more versatile tool than the majorization principle for quantum-algorithm design. On the other hand, a similar conclusion cannot be drawn using other quantitative measures including quantum entanglement, which may be because entanglement is a property of the entire density matrix [12].
Actually, the analysis of quantum algorithms using coherence is not new [13][14][15], but the respective algorithms were considered independently in these works. For instance, in Ref. [13] the author examined the role played by coherence as a resource in the Deutsch-Jozsa and related algorithms, and found that the less coherence there is, the worse the algorithm performs. Although from different perspectives, both Refs. [14,15] reported that the success probability of Grover's algorithm relies on coherence. Nevertheless, the results presented in this paper give a combined view, in terms of coherence, of all the quantum algorithms known so far.
This paper is organized as follows. In Sec. II, we review briefly the resource theory of quantum coherence and introduce the commonly used coherence measures.
Then we start with the investigation of Grover's algorithm in Sec. III, where the evolution of quantum coherence is thoroughly analyzed. Next, we move on to the Deutsch-Jozsa algorithm in Sec. IV and Shor's algorithm in Sec. V. In Sec. VI, the consequences of coherence for quantum-algorithm design are discussed, along with a comparison with other quantitative measures such as quantum entanglement. We close with a short conclusion in Sec. VII.

II. RESOURCE THEORY OF QUANTUM COHERENCE

Along with the rapid development of quantum information science, an alternative way of assessing quantum phenomena as resources has appeared. Consequently, many tasks that were not previously possible within the realm of classical physics may now be exploited with the new approach. This resource-driven viewpoint has motivated the development of a quantitative theory that captures the resource character of physical properties in a mathematically rigorous manner. The formulation of such resource theories was initially pursued with the quantitative theory of entanglement [16,17], but has since spread to encompass many other operational settings, including quantum coherence [18][19][20]; see Ref. [21] for a recent review.
Resource theory provides a unified framework for studying resource quantification and manipulations under restricted operations that are deemed free. For coherence, we are restricted to incoherent operations, so only incoherent states are free. Recall that a state is incoherent if it is diagonal in the reference basis. Recently, it has been demonstrated that coherence can be converted to other quantum resources, such as entanglement and discord, by certain operations [22][23][24]. However, compared to entanglement and discord, evidence shows that coherence may be a potentially more fundamental quantum resource [25]. To quantify coherence, a rigorous framework has been proposed by Baumgratz et al. in Ref. [26]. In this work, we will focus on the two most commonly used coherence measures, namely, the relative entropy of coherence and the l1-norm of coherence.
The relative entropy of coherence [26] is defined as C_r(ρ) = S(ρ_diag) − S(ρ), where S(ρ) = −tr(ρ log2 ρ) is the von Neumann entropy and ρ_diag = Σ_i ρ_ii |i⟩⟨i| denotes the state obtained from ρ by deleting all the off-diagonal elements. For pure states, the von Neumann entropy is 0, so the relative entropy can be simplified to C_r(|ψ⟩) = S(ρ_diag) = −Σ_i |⟨i|ψ⟩|² log2 |⟨i|ψ⟩|². The l1-norm of coherence [26] is defined intuitively as C_l1(ρ) = Σ_{i≠j} |ρ_ij|, which comes from the fact that coherence is tied to the off-diagonal elements of the states. Recently, it was demonstrated by Zhu et al. [27] that the l1-norm of coherence is the analog of negativity in entanglement theory and of sum negativity in the resource theory of magic-state quantum computation. It is worth mentioning that both the relative entropy and the l1-norm are proper measures of quantum coherence.
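As a minimal numerical sketch (not part of the original paper), the two measures can be evaluated directly from a density matrix in the reference basis:

```python
# Illustrative implementation of the relative entropy of coherence C_r and the
# l1-norm of coherence C_l1, both taken with respect to the computational basis.
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                     # discard numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho: np.ndarray) -> float:
    rho_diag = np.diag(np.diag(rho))                 # delete all off-diagonal elements
    return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)

def l1_norm_of_coherence(rho: np.ndarray) -> float:
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

# The maximally coherent single-qubit state |+><+| gives C_r = 1 and C_l1 = 1.
plus = np.full((2, 2), 0.5)
print(relative_entropy_of_coherence(plus), l1_norm_of_coherence(plus))
```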
III. GROVER'S ALGORITHM
We start with Grover's algorithm [5], which is a quantum search algorithm that runs quadratically faster than any equivalent classical algorithm. Given an unsorted database with N items, this algorithm is able to find the target item using only O(√N) steps, compared to at least O(N) steps required by any classical scheme.
Although not offering an exponential speedup, Grover's algorithm has been proven to be asymptotically optimal for the search problem [6]. For convenience, we assume N = 2^n, such that the N entries in the database can be supplied by n qubits. Let f(x) be a function that takes in an index x = 0, 1, ..., N − 1 and outputs f(x) = 1 if x is a solution to the search problem and f(x) = 0 otherwise.
Grover's algorithm begins with the initialized equal superposition state |ψ(0)⟩ = (1/√N) Σ_{x=0}^{N−1} |x⟩, which has the maximal coherence C_r(|ψ(0)⟩) = log2 N = n and C_l1(|ψ(0)⟩) = N − 1. Suppose there are exactly M solutions in the database with 1 ≤ M ≤ N; we can re-express |ψ(0)⟩ as |ψ(0)⟩ = √((N−M)/N) |α⟩ + √(M/N) |β⟩, where |α⟩ represents the group of states that are not solutions to the search problem (marked by x_n below), while |β⟩ represents those that are solutions (marked by x_s). Explicitly, we have |α⟩ = (1/√(N−M)) Σ_{x_n} |x_n⟩ and |β⟩ = (1/√M) Σ_{x_s} |x_s⟩. Then, a subroutine known as the Grover iteration is applied to |ψ(0)⟩ repeatedly. The Grover iteration consists of two basic operations, G = DO, where O is an oracle (a black-box operation) and D is the Grover diffusion operator. After k iterations of applying G, the state becomes |ψ(k)⟩ = cos((2k+1)θ/2) |α⟩ + sin((2k+1)θ/2) |β⟩, where sin(θ/2) = √(M/N). It is not difficult to see that, with high probability, a solution to the search problem can be obtained by having k = k* = [π/(2θ) − 1/2], where [·] denotes the closest integer to the rational number inside, such that sin((2k*+1)θ/2) ≈ 1. Note that the oracle O only marks the solutions by changing the phase of the state |β⟩, i.e., O(a|α⟩ + b|β⟩) = a|α⟩ − b|β⟩, so this operation will not change the coherence. It is the operation D that indeed changes the coherence. Next, we calculate the quantum coherence of the state |ψ(k)⟩, namely C_r^(k) = −cos²((2k+1)θ/2) log2[cos²((2k+1)θ/2)/(N−M)] − sin²((2k+1)θ/2) log2[sin²((2k+1)θ/2)/M] (13a) and C_l1^(k) = [√(N−M) |cos((2k+1)θ/2)| + √M |sin((2k+1)θ/2)|]² − 1 (13b). In Fig. 1, we plot the values of coherence with respect to the number of Grover iterations k for the case of n = 10 qubits. Note that in Fig. 1(b) the quantity plotted is log2(C_l1^(k) + 1). As can be seen, a solution to the search problem is found when the coherence first reaches the minimum value, that is, when k = k*. At this point, the task of Grover's algorithm is actually completed. However, if the Grover iteration is continued, a periodic feature of the coherence appears, such that we will get the solution again around 2k* iterations. This observation is repeated as long as the Grover iteration goes on.
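A small simulation sketch (not the authors' code) reproduces this qualitative behaviour by iterating the oracle and diffusion operators on the state vector and evaluating the pure-state coherences at each step:

```python
# Illustrative simulation: coherence of the Grover state versus the iteration number k.
import numpy as np

def pure_state_coherences(psi: np.ndarray):
    p = np.abs(psi) ** 2
    p = p[p > 1e-15]
    c_r = float(-np.sum(p * np.log2(p)))             # C_r = S(rho_diag) for a pure state
    c_l1 = float(np.sum(np.abs(psi)) ** 2 - 1.0)     # C_l1 = (sum_i |c_i|)^2 - 1
    return c_r, c_l1

def grover_coherence_trace(n_qubits: int = 10, solutions=(0,), iterations: int = 40):
    N = 2 ** n_qubits
    psi = np.full(N, 1.0 / np.sqrt(N))               # equal superposition |psi(0)>
    sol = np.array(solutions)
    trace = [pure_state_coherences(psi)]
    for _ in range(iterations):
        psi[sol] *= -1.0                             # oracle O: phase flip on the solutions
        psi = 2.0 * psi.mean() - psi                 # diffusion D = 2|psi(0)><psi(0)| - I
        trace.append(pure_state_coherences(psi))
    return trace

# The coherence first dips to its minimum near k* (about 25 for n = 10, M = 1),
# where a solution is found, and then revives periodically.
for k, (c_r, c_l1) in enumerate(grover_coherence_trace()):
    print(k, round(c_r, 3), round(np.log2(c_l1 + 1.0), 3))
```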
Another phenomenon from the plots is as follows: with the number of possible solutions increased, not only does the number of Grover iterations needed decrease, but the minimal value of coherence also gets bigger accordingly. This is easy to understand, as several answers (M > 1) make up a superposition state whose coherence is finite; see Fig. 2. To understand it better, let us look at the derivatives of the coherence in Eq. (13) with respect to k. By forcing the derivatives to be zero, we get four different cases:
1. cos((2k+1)θ/2) = 0 corresponds to the minimal values in Fig. 1, namely the solution state |β⟩.
2. cot²((2k+1)θ/2) = cot²(θ/2) corresponds to the maximal values in Fig. 1. Because of the square in this solution, there are actually two peaks close to each other (not quite visible if the number of solutions M is small). The right peak corresponds to the superposition state |ψ(0)⟩, while the left one corresponds to O|ψ(0)⟩.
3. cos((2k+1)θ/2) = ±1 corresponds to the local minimal values between the two peaks in Fig. 1. Because the distance between these two peaks is exactly 1 and we are considering discrete operations, this local valley has no physical meaning.
4. θ = 0 means that there is no solution, i.e., M = 0.
To summarize, one learns that the reduction of quantum coherence can be used as a general principle for the successful execution of Grover's algorithm. We will see later that the same conclusion can be drawn for other quantum algorithms, including the Deutsch-Jozsa algorithm and Shor's algorithm.
IV. DEUTSCH-JOZSA ALGORITHM
Given a function f(x) defined over the variable x = 0, 1, ..., 2^n − 1, with n being the number of dichotomic-valued digits, the Deutsch-Jozsa (DJ) algorithm [2] aims to confirm whether f(x) is constant for all values of x, or else balanced, namely f(x) = 1 for exactly half of all possible x and 0 for the other half. Although of little practical use, the DJ algorithm is deterministic in the sense that it can always produce the correct answer using only one evaluation, whereas at worst 2^(n-1) + 1 queries are required by any possible classical algorithm.
Same as in Grover's algorithm, the DJ algorithm begins by first preparing the equal superposition state of Eq. (4), which has the maximal coherence C_r = n and C_l1 = 2^n − 1. However, unlike Grover's algorithm, no iteration is needed in the DJ algorithm. The next step is an oracle U_f : |x⟩ → (−1)^f(x) |x⟩ that transforms the state to |ψ(1)⟩ = (1/√(2^n)) Σ_x (−1)^f(x) |x⟩, which leaves the coherence unchanged. The final step of the DJ algorithm is to apply the Hadamard gate H, such that the state becomes |ψ(2)⟩ = Σ_y [Σ_x (−1)^(f(x)+x·y) / 2^n] |y⟩, where x·y is the bitwise inner product of x and y. Now, by examining the probability of measuring |0⟩^⊗n, i.e., |Σ_x (−1)^f(x) / 2^n|², one gets 1 if f(x) is constant and 0 if it is balanced. Depending on the function type of f(x), the coherence of |ψ(2)⟩ can have the following two cases. For the constant case, |ψ(2)⟩ = ±|0⟩^⊗n is a basis state, the coherence of which is zero. For the second, balanced case, the coherence has a range instead of a single value, due to the possible different forms of the balanced function. For instance, if f(x) takes the values 01010101... (for more than three qubits), then |ψ(2)⟩ is nothing but a basis state with coherence being zero. But if f(x) takes a sequence such as 01100101..., then |ψ(2)⟩ is a superposition of basis states with nonzero coherence. Notably, the coherence cannot take the maximal value as that in Eq. (17), because the basis state |0⟩ disappears in |ψ(2)⟩ for the balanced case. Therefore, no matter what the function type of f(x) is, we find that the coherence of the system state always decreases once the algorithm stops. Again, coherence reduction can be used as a good signature to signal the success of the DJ algorithm.
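The two cases can be checked with a short sketch (illustrative, not from the paper); it builds the final DJ state for a chosen f and evaluates its l1-norm of coherence:

```python
# Illustrative check: l1-norm of coherence of the final Deutsch-Jozsa state |psi(2)>
# for a constant function and for two different balanced functions (n = 3 qubits).
import numpy as np

def dj_final_state(f_values):
    N = len(f_values)                                    # N = 2^n
    psi = np.zeros(N)
    for y in range(N):
        for x in range(N):
            dot = bin(x & y).count("1") % 2              # bitwise inner product x . y
            psi[y] += (-1) ** (f_values[x] + dot)
    return psi / N                                       # already normalized

def l1_coherence(psi):
    return float(np.sum(np.abs(psi)) ** 2 - 1.0)

constant_f = [0] * 8
balanced_a = [0, 1, 0, 1, 0, 1, 0, 1]                    # the 0101... pattern
balanced_b = [0, 1, 1, 0, 0, 1, 0, 1]                    # the 01100101 pattern
for f in (constant_f, balanced_a, balanced_b):
    print(round(l1_coherence(dj_final_state(f)), 3))
# The constant case and the 0101... balanced case give 0 (a basis state);
# the 01100101 case gives a nonzero but sub-maximal coherence.
```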
V. SHOR'S ALGORITHM / QUANTUM ORDER-FINDING

Shor's algorithm [4] is a particular instance of the family of quantum phase-estimation algorithms [28]. Informally, Shor's algorithm solves the following problem: given an integer N, find its prime factors. The crucial step in Shor's algorithm is the so-called quantum order-finding (QOF) subroutine, which offers the quantum speedup over any classical approach. For two positive integers x and N, the objective of QOF is to determine the order of x modulo N, which is defined as the least integer r > 0 such that x^r = 1 (mod N).
The QOF subroutine begins with t = 2L + 1 + ⌈log(2 + 1/(2ε))⌉ qubits initialized to |0⟩ (the first register) and L qubits initialized to |1⟩ (the second register), where L ≡ ⌈log(N)⌉ denotes the closest integer larger than log(N) and ε is the error tolerance. Application of the Hadamard gate H on the first register transforms the initial state to |ψ(0)⟩ = (1/√(2^t)) Σ_{j=0}^{2^t−1} |j⟩|1⟩, which has the maximal coherence on the first t qubits, i.e., C_r^(0) = t and C_l1^(0) = 2^t − 1. Then a black-box operation U_{x,N} : |j⟩|k⟩ → |j⟩|x^j (mod N)⟩ transforms the state to |ψ(1)⟩ = (1/√(2^t)) Σ_{j=0}^{2^t−1} |j⟩|x^j (mod N)⟩. Although the state |ψ(1)⟩ looks rather different from |ψ(0)⟩, its coherence (on the first t qubits) does not change, namely C_r^(1) = C_r^(0) and C_l1^(1) = C_l1^(0). Because of the periodic nature of the component |x^j (mod N)⟩, the state |ψ(1)⟩ can be approximated as |ψ(1)⟩ ≈ (1/√(r 2^t)) Σ_{s=0}^{r−1} Σ_{j=0}^{2^t−1} e^(2πisj/r) |j⟩|u_s⟩, with |u_s⟩ the eigenstates of multiplication by x modulo N. The period of the phase in |ψ(1)⟩ can be obtained by applying the inverse Fourier transform to the first register, such that |ψ(2)⟩ ≈ (1/√r) Σ_{s=0}^{r−1} |s/r~⟩|u_s⟩, where |s/r~⟩ is a pretty good approximation of the phase s/r. Now, the coherence of the state |ψ(2)⟩ becomes a function of the solution r. Finally, by measuring the first t qubits, the solution r is obtained by applying the continued fractions algorithm [1]. Once again, we find that the coherence of the system state reduces to the minimum by the end of the QOF subroutine, and in turn also in Shor's algorithm.
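For small instances, the order that the QOF subroutine estimates can be checked classically; the sketch below (illustrative only, not part of Shor's quantum routine) computes it by brute force:

```python
# Brute-force classical order finding: the least r > 0 with x^r = 1 (mod N).
from math import gcd

def order(x: int, N: int) -> int:
    if gcd(x, N) != 1:
        raise ValueError("x and N must be coprime for the order to exist")
    r, acc = 1, x % N
    while acc != 1:
        acc = (acc * x) % N
        r += 1
    return r

print(order(7, 15))   # 4, since 7^4 = 2401 = 1 (mod 15)
```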
VI. DISCUSSION
For all the quantum algorithms that we have explored, including Grover's algorithm, the DJ algorithm and Shor's algorithm, we find that quantum coherence plays a consistent role in signaling the completion of the processes. To be more precise, upon successful execution of these algorithms, the coherence of the respective systems reduces to the minimum compared to the initial values. It is thus reasonable to conjecture that coherence reduction can be used as a general principle for quantum-algorithm design. Therefore, besides the majorization principle reported in Refs. [7][8][9], we find yet another general principle for quantum algorithms. However, unlike the descriptive nature of majorization, quantum coherence can be defined quantitatively using various coherence measures. In this respect, the general principle that we find with coherence is a more versatile tool than the majorization principle for quantum-algorithm design.
Specifically, all three quantum algorithms begin with the equal superposition state, which has the maximal coherence. Then, an oracle is applied, which leaves the coherence unchanged. The final step can be seen as an adjustment of the system states, that is, the diffusion operation for Grover's algorithm, the Hadamard operation for the DJ algorithm, and the inverse quantum Fourier transform for Shor's algorithm. It is this final operation that indeed reduces the coherence. As a guide for future quantum-algorithm design, the coherence-reducing operation is suggested to be an indispensable requirement for the relevant processes, without which the quantum algorithm cannot be efficient or even successful.
Then, it is natural to ask whether other quantitative measures such as entanglement may play a similar role to coherence in quantum algorithms. Unfortunately, the answer is negative. Our calculation, and also many previous works, show that a general principle cannot be drawn using entanglement. For instance, in Refs. [29][30][31] the authors thoroughly analyzed the entanglement properties in Shor's algorithm and found that entanglement may vary with different entanglement measures. Similar conclusions were reported in Refs. [32][33][34][35] for the DJ algorithm. One of the possible reasons for the failure of using entanglement as a signature lies in the differences between the definitions of entanglement and coherence (also majorization). Quantum entanglement describes a property of the entire density matrix [12], while coherence and majorization capture only partial information of the density matrix. Moreover, both the concepts of coherence and majorization are basis dependent [10,11].
While there is no doubt that entanglement is a key (but not really sufficient [17]) resource for the quantum speedup in all these algorithms, it cannot be used as a general principle for quantum-algorithm design. Actually, it is an NP-hard problem [36,37] even to detect entanglement if the system size is large, let alone to quantify it. For multipartite quantum systems, entanglement can be classified into different forms depending on how the subsystems are distributed. This complexity further makes any possible entanglement measure hard to compute. Therefore, although essential for the quantum speedup, entanglement is not a good signature to use for quantum algorithms compared to coherence. Moreover, this fact also serves as additional evidence that coherence may be a potentially more fundamental quantum resource than entanglement and discord, as initially argued in Ref. [25].
VII. CONCLUSION
The scarcity of efficient quantum algorithms suggests that some basic principles may be missing for quantum-algorithm design. In this paper, we have explored the possibility of considering the reduction of quantum coherence as a simple yet general principle for quantum-algorithm design. For all three quantum algorithms that we investigated, including Grover's algorithm, the Deutsch-Jozsa algorithm and Shor's algorithm, the quantum coherence of the system states reduces to the minimum along with the successful execution of the respective algorithms. However, a similar principle cannot be drawn using other quantitative measures such as quantum entanglement. Thus, besides the fundamental interest in resource theory, this special feature of quantum coherence is expected to be useful for devising new quantum algorithms in the future.
FIG. 1. In the case of n = 10 qubits, we plot the values of coherence with respect to the number of Grover iterations k: (a) the relative entropy of coherence C_r^(k) in Eq. (13a); (b) log2(C_l1^(k) + 1), with C_l1^(k) being the l1-norm of coherence in Eq. (13b). The plots show the results with M = 1, 2, 4 and 16 solutions, respectively. The minimal values indicate that a solution is found. As can be seen, as the number of possible solutions increases, not only does the number of Grover iterations needed decrease, but the minimal value of coherence also increases accordingly. See text and Fig. 2 for more details.

FIG. 2. Minimal coherence of the system state with respect to the logarithm of the number of solutions, log2 M. As the number of solutions M increases, the minimal value of coherence gets bigger, which clearly indicates a superposition state consisting of more terms.
|
v3-fos-license
|
2016-01-15T18:20:01.362Z
|
2013-07-19T00:00:00.000
|
18178136
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=34970",
"pdf_hash": "9ac5262f6995b84e0c01f37da18f7bd0ab364b12",
"pdf_src": "Crawler",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46062",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "9ac5262f6995b84e0c01f37da18f7bd0ab364b12",
"year": 2013
}
|
pes2o/s2orc
|
Design of Half-ducted Axial Flow Fan Considering Radial Inflow and Outflow —comparison of Half-ducted Design with Ducted Design
Half-ducted fan and ducted fan have been designed and numerically analyzed for investigating the radial flow effect on the overall performance and the three dimensional flow field in design. Based on quasi-three dimensional flow theory, the meridional flow was calculated by adopting the radial balance equations, while the calculation of the blade to blade flow was obtained by 2D cascade data with the correction by a potential flow theory. Two types of axial flow fan were designed. One is the full ducted case as if it was in the straight pipe and another is the half-ducted case with the radial inflow and outflow. The previous experimental results of authors were used to decide the inclinations of both the inflow and outflow. And the circular arc blade with equal thickness was adopted. The numerical results indicate that both of the designed fans can reach the specified efficiency and also the efficiency surpasses more than 11%. Furthermore, the static pressure characteristic of half-ducted fan is much better than that of ducted fan. In comparison of the three dimensional internal flow of these two fans, the improvement of the flow angle at inlet and outlet, the distributions of velocity in the flow field and the pressure distributions on the blade surfaces can be achieved more successfully in accordance with the design intension on consideration of flow angle in design. The conclusion that half-ducted design with considering radial inflow and outflow is feasible and valid in comparison with ducted design for axial flow fans has been obtained at the end of the paper.
Introduction
A lot of axial fans, which are of small size and low pressure rise, are used in our daily life. Some common examples are a room ventilation fan, a radiator fan in a car engine room, a power unit cooling fan of a personal computer, and so on. Many of them are designed by the inverse design method. The inverse methods make a close link between the intention of the designer and the blade geometry. Zangeneh [1] introduced an inverse method for a fully 3D compressible flow for the design of radial and mixed flow turbomachinery blades. The blade shape of an initial guess is obtained by the specified rVt and by assuming uniform velocity. The 3D inverse design method was also applied to design vaned diffusers in a centrifugal compressor and centrifugal and mixed flow impellers, in order to investigate a highly nonuniform exit flow and to analyze and minimize the generation of secondary flows, respectively [2,3]. The theory and application of a novel 3D inverse method for the design of turbomachinery blades in rotating viscous flow were systematically reported [4].
The inverse methods for axial flow fans can be divided into the free vortex design and the controlled vortex design. In the controlled vortex design, the axial velocity can be nonuniform and the designed blade circulation can be specified as nonconstant. Most axial flow fans are designed by the controlled vortex design. Proper blade loading distributions and reduced loss near the tip can be reached by the controlled vortex design [5]. The controlled vortex design also provides a method for multistage machinery with a reasonable distribution of exit flow angle [6]. However, none of these studies take the radial velocity component into account. It is advantageous to consider the radial velocity in non-free vortex design, which was investigated by three-dimensional laser Doppler anemometer measurement [7].
In this paper, the radial flows at both the inflow and the outflow have been considered for the improvement of the design method. Two types of axial flow fan are designed by the controlled vortex design, specifying a constant tangential velocity both at the inlet and the outlet of the rotor. One type is the ducted axial flow fan, which is usually designed to prescribe almost uniform inflow and outflow, as if it were in a straight pipe. However, many axial flow fans are not used in a straight pipe, for example in ventilation and cooling systems without a pipe. Thus it is important to take the real flow situation into account in design. The other type, the half-ducted axial flow fan, was therefore designed to compare with the traditional design of ducted fans by specifying the flow angles according to the previous experimental results of the authors [8,9].
Design and Numerical Method
The quasi-three-dimensional flow theory was applied to investigate the flow of the axial flow fans. The meridional flow and the revolutional flow between blades were calculated by the method of streamline curvature. Based on this theory, the meridional flow was calculated by adopting the radial balance equations [10], while the blade-to-blade flow was obtained from 2D cascade data with a correction by a potential flow theory, so as to consider the axial flow velocity change and the inclination of the flow surface [11].
In the calculation of the meridional flow, a force balance equation was evaluated along the quasi-orthogonal direction on the meridional plane, with the compressibility of the fluid ignored. The arbitrary constant Ci can be obtained from the relation between the mass flow rate and the velocity. The energy per second passing through the outlet of the rotor can be calculated on the assumption of a constant Vt at inlet and outlet (zero at inlet), so that Vt2 can be obtained. The total pressure rise is presumed to be calculable by the Euler equation, which with zero inlet swirl reduces to Δpt = ρ·U·Vt2, where U is the blade speed at the radius considered. Therefore, the meridional velocity and the tangential velocity can be obtained, and the calculation of the meridional flow is complete.
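A brief sketch of this last step is given below; the density, rotational speed, and target pressure rise are assumed example values, not the paper's design figures:

```python
# Illustrative use of the Euler relation with zero inlet swirl: delta_p_t = rho * U * V_t2,
# where U = omega * r is the blade speed at the radius considered.
import math

RHO_AIR = 1.2  # kg/m^3, assumed air density

def blade_speed(rpm: float, radius_m: float) -> float:
    return 2.0 * math.pi * rpm / 60.0 * radius_m

def euler_total_pressure_rise(rpm: float, radius_m: float, v_t2: float, rho: float = RHO_AIR) -> float:
    return rho * blade_speed(rpm, radius_m) * v_t2

def required_v_t2(rpm: float, radius_m: float, delta_p_t: float, rho: float = RHO_AIR) -> float:
    # Invert the relation to get the outlet tangential velocity needed for a target pressure rise.
    return delta_p_t / (rho * blade_speed(rpm, radius_m))

# Example with assumed numbers: a 200 mm diameter rotor (tip radius 0.1 m) at 3000 rpm.
print(round(required_v_t2(rpm=3000, radius_m=0.1, delta_p_t=100.0), 2))  # m/s
```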
The blade profile on the revolutional plane was selected by referring to the circular arc carpet diagram. Thus, the circular arc blade with equal thickness and a quadrilateral blade shape on the meridional plane was adopted.
Two types of axial flow fan were designed in this paper.One is the ducted case as if it was in the straight pipe and another is the half-ducted case with the radial inflow and outflow.The blade shapes on top view were obtained showing in Figure 1 on right side.The highly twisted blades can be avoided by half ducted fan with the controlled vortex design by specifying the flow angles.For the ducted fan, the streamlines on the meridional plane are uniform and parallel to each other, as shown in Table 1 shows the designed parameters of ducted fan and half-ducted fan.All these parameters are specified the same value for both of the designed fans.Besides, the blade shape near the hub and the casing is not straight line but modified into spline curve, which can be seen in Figure 7(a).The flow rate and pressure rise are represented with nondimensional form of flow coefficient and pressure rise coefficient which are defined as: and The designed fan blade profile data were tackled in the commercial software for the analysis of three-dimensional flow.In order to increase the speed of the numerical simulation, the internal fl w fields of the axial flow o and the difference value between them can reach to 22.75 Pa, which is relatively substantial pressure rise for a fan with a diameter of 200 mm.So it can be said that the design of half-ducted fan which is considering the radial velocity inflow and outflow is much better than the design of the ducted fan.In addition to make this assertion amenable, the three-dimensional flow field of half-ducted fan will be described in comparison with that of ducted fan.The following text will analyze the velocity and pressure distributions of internal flow in these two axial flow fans.The discrepancy of flow field caused by these two design method will be clarified by the numerical analysis.
Velocity Field at Fan Inlet and Outlet
Figures 2 and 3 present the distributions of meridional and tangential velocity at inlet and outlet of the rotor respectively for half-ducted fan and ducted fan along the radial direction.The circled lines illustrate the designed value of the tangential velocity and the lines with diamonds denote the calculated results obtained from the circumferentially averaged velocity.The meridional velocity of ducted fan is nearly uniform both at inlet and outlet, however, it is so changeable for the calculated results shown in Figure 2. The meridional velocity for half-ducted fan also doesn't come close so well, the calculated ones are lower than the designed value but they are almost in the same tendency.While the tangential velocity of half-ducted fan in calculation is much closer to designed data both at inlet and at outlet than that of the ducted fan, as shown in Figure 3.The deviations of the tangential velocity at inlet in Figure 3(a) are 2.28, 1.01, and the ones at outlet are 3.21, 2.67 in Figure 3(b), respectively for the ducted fan and half-ducted fan.The tangential velocity especially at outlet has a significant effect on pressure rise according to Euler equation as fan were divided into five periodic segments, the one fifth flow passage was numerically simulated by periodicity method with RNG k-ε viscous model.The numerical calculation also take the mesh dependence into account, the mesh number between 2.06 -3.25 million has been able to obtain the flow in general accuracy so as to the ratio of the energy obtained by the fan and the theoretical power can reach above 0.96.The pressure characteristics and the velocity field of designed halfducted and ducted fans were analyzed in the followed text.
Velocity Field at Fan Inlet and Outlet
Figures 2 and 3 present the distributions of meridional and tangential velocity at the inlet and outlet of the rotor, respectively, for the half-ducted fan and the ducted fan along the radial direction. The circled lines illustrate the designed value of the tangential velocity and the lines with diamonds denote the calculated results obtained from the circumferentially averaged velocity. The designed meridional velocity of the ducted fan is nearly uniform both at inlet and outlet; however, the calculated results shown in Figure 2 vary considerably. The meridional velocity of the half-ducted fan does not agree perfectly either: the calculated values are lower than the designed values, but they follow almost the same tendency. In contrast, the calculated tangential velocity of the half-ducted fan is much closer to the designed data, both at inlet and at outlet, than that of the ducted fan, as shown in Figure 3. The deviations of the tangential velocity at the inlet in Figure 3(a) are 2.28 and 1.01, and those at the outlet in Figure 3(b) are 3.21 and 2.67, for the ducted fan and the half-ducted fan, respectively. The tangential velocity, especially at the outlet, has a significant effect on the pressure rise according to the Euler equation.
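For reference, the Euler turbomachinery relation invoked here (a standard result, not reproduced verbatim from this paper) links the theoretical pressure rise to the change in tangential (swirl) velocity across the rotor:

```latex
\Delta p_{th} = \rho\,\left(u_2\,c_{u2} - u_1\,c_{u1}\right)
```

where $u$ is the blade speed and $c_u$ the tangential velocity at inlet (1) and outlet (2). For an axial rotor with swirl-free inflow this reduces to $\Delta p_{th} = \rho\,u\,c_{u2}$, which is why a deviation of the outlet tangential velocity from its design value translates directly into a deficit in pressure rise.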
Why is the tangential velocity so divergent from the designed data? To clarify the cause, the distributions of the flow angles, defined as the angle between the relative velocity and the meridional plane at inlet and outlet, are investigated as shown in Figure 4. The lines with hollow circles and diamonds present the distributions of the inlet flow angle β1, and those with solid circles and diamonds show the distributions of the outlet flow angle β2. For the half-ducted fan, the deviations of the flow angles at inlet and outlet are below 4.5 degrees except at the points near the hub and the casing at the outlet. Thus the divergence of its tangential velocity from the designed data at inlet and outlet is smaller; at the inlet, in particular, the deviation is only 1.01. The situation is somewhat different for the ducted fan: the inlet flow angles do not differ much from the designed condition of uniform axial inflow, as seen in Figure 4(b), which improves the outflow angles near the hub. In the dominant flow region, however, the outflow angles diverge from the designed data, which gives the tangential velocity at the outlet a deviation of 3.21.
Figures 5 and 6 present the distributions of the velocity vector on the meridional plane and on a section 1 mm away from the blade leading edge. The meridional plane is set between blade and blade, located near the leading edge of the pressure surface and the trailing edge of the adjacent suction surface. The inflow of the half-ducted fan is more uniform than that of the ducted fan. A vortex, marked by arrows in Figures 5(b) and 6, appears in the inlet region of the ducted fan and causes the meridional velocity to decrease there.

Figures 7 and 8 show the distributions of static pressure on the suction surface and the pressure surface, respectively. For the half-ducted fan, the static pressure on the suction surface is negative and increases from the leading edge to the trailing edge. By contrast, the static pressure on the suction surface of the ducted fan increases from the mid part towards the leading edge and suddenly forms a high-pressure center near the tip, as shown in Figure 7(b). Furthermore, the static pressure on the pressure surface of the ducted fan in Figure 8(b) decreases to negative values near the leading edge. These phenomena weaken the suction performance of the suction surface of the ducted fan and make the flow twist in the inlet region, as seen in Figures 5(b) and 6. The half-ducted fan performs better: its static pressure increases uniformly on the pressure surface with increasing radius, which follows the design assumption of Equation (6) well.
The distributions of total pressure of the half-ducted fan and the ducted fan at the outlet along the radial direction were also compared. The total pressure of the half-ducted fan is larger than that of the ducted fan except in the casing area and near the hub region. Furthermore, the meridional velocity dominating the flow field at the outlet makes the mid-span region the most important part of the flow, as seen in Figure 2(b), which is beneficial to the energy acquisition. Therefore, the static pressure rise of the half-ducted fan is able to surpass that of the ducted fan under the same design parameters.
Conclusions
A half-ducted fan and a ducted fan were designed and numerically analyzed to investigate the effect of the design method on the overall performance and the three-dimensional flow field. Highly twisted blades can be avoided by the half-ducted design. The numerical results indicate that both designed fans reach the specified efficiency and surpass it by more than 11%. Furthermore, the static pressure rise of the half-ducted fan is 16.6% more than that of the ducted fan. From the comparison of the three-dimensional internal flow of these two fans, a number of interesting features can be summarized as follows: 1) Compared with the ducted fan, the meridional velocity of the half-ducted fan follows the designed tendency more closely, and the tangential velocity is much closer to the designed data, which has a significant effect on the pressure rise.
2) The distributions of flow angle and velocity are improved by considering the flow angle in the design of the half-ducted fan.
3) The static pressure gradually increases on the suction surface of the half-ducted fan from the leading edge to the trailing edge, and it also uniformly increases on the pressure surface with increasing radius, in accordance with the design assumption. In the case of the ducted fan, by contrast, the pressure distributions on the suction and pressure surfaces are not beneficial for the pressure rise.
As mentioned above, by taking the flow angle into consideration in the design, the improvement of the flow angles at inlet and outlet, of the velocity distributions in the flow field and of the pressure distributions on the blade surfaces is achieved more successfully. Therefore, the half-ducted design considering radial inflow and outflow is feasible and valid in comparison with the ducted design for axial flow fans.
Acknowledgements
The authors gratefully ackn
Figure 1. Outline of meridional flow and top view of blades. (a) Ducted fan; (b) Half-ducted fan.
Figure 6. Distributions of velocity vector on inlet section plane (ducted fan).
|
v3-fos-license
|
2022-06-24T15:12:21.636Z
|
2022-06-21T00:00:00.000
|
249967928
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/22/13/4674/pdf?version=1655872142",
"pdf_hash": "852dc3da48214679ebe9e3eed3a834cc09573171",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46063",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "5c44bc304ea5177696c2392a06f02277554262d3",
"year": 2022
}
|
pes2o/s2orc
|
Machine Learning Assisted Handheld Confocal Raman Micro-Spectroscopy for Identification of Clinically Relevant Atopic Eczema Biomarkers
Atopic dermatitis (AD) is a common chronic inflammatory skin dermatosis condition due to skin barrier dysfunction that causes itchy, red, swollen, and cracked skin. Currently, AD severity clinical scores are subjected to intra- and inter-observer differences. There is a need for an objective scoring method that is sensitive to skin barrier differences. The aim of this study was to evaluate the relevant skin chemical biomarkers in AD patients. We used confocal Raman micro-spectroscopy and advanced machine learning methods as means to classify eczema patients and healthy controls with sufficient sensitivity and specificity. Raman spectra at different skin depths were acquired from subjects’ lower volar forearm location using an in-house developed handheld confocal Raman micro-spectroscopy system. The Raman spectra corresponding to the skin surface from all the subjects were further analyzed through partial least squares discriminant analysis, a binary classification model allowing the classification between eczema and healthy subjects with a sensitivity and specificity of 0.94 and 0.85, respectively, using stratified K-fold (K = 10) cross-validation. The variable importance in the projection score from the partial least squares discriminant analysis classification model further elucidated the role of important stratum corneum proteins and lipids in distinguishing two subject groups.
Introduction
Atopic dermatitis (AD) is a chronic disease with a current prevalence of 10-15% in developed countries. The chronic skin inflammatory disorder affects the quality of life ranging from intense itch to mental health issues such as social impact, anxiety, depression, etc. along with economic healthcare burden in terms of direct and indirect costs [1]. In a recent published survey, it was confirmed that the overall prevalence of AD is 13.1% in a Singaporean population with an even higher prevalence of 20.6% in children [2,3]. It is characterized by human skin epidermal barrier dysfunction culminating in dry skin and immunoglobulin E (IgE)-mediated sensitization to food, external environmental allergens, irritants, etc. [4,5]. AD is primarily a disease of an immune system with cytokines IL-4, IL13, and IL-33 causing skin barrier dysfunction such as increased PH, reduced water retention, regular itch, etc., and hence eczema. The skin of AD patients has been demonstrated to have increased trans-epidermal water loss (TEWL) and defects in the terminal keratinocyte differentiation. The formation and function of the stratum corneum (SC) is controlled by Sensors 2022, 22, 4674 2 of 13 a set of genes, including filaggrin (FLG) which encodes profilaggrin, the precursor of the filament aggregating protein filaggrin [6]. FLG is broken down into amino acids, which are a crucial component in forming the natural moisturizing factor (NMF) and further hydration of the SC [7]. The presence of FLG mutations has been associated with decreased SC hydration and decreased NMF components, as well as increased AD severity [5,6]. Current assessment of eczema severity is subjective and dependent on the clinicians' judgements on localized skin inflammation, TEWL, and other biophysical measurements and does not consider the concentration of skin constituents. Hence, there is a need for an objective eczema severity scoring method that is easy to measure and yet sensitive to change with minimal bias. Confocal Raman micro-spectroscopy (CRM) allows for non-invasive, in vivo measurements of skin biomolecular composition or constituents up to several hundred micrometers below the skin surface to detect important nucleic acids, proteins, lipids, etc. Raman scattering or Raman spectroscopy is a measure of the inelastic scattering of photons by matter, i.e., there is either an increase or decrease in the scattered photon energy. The SC of the human skin consists of corneocytes surrounded by multilamellar lipid membranes that prevent excessive water loss from the body and entrance of chemical and biochemical substances from the outside environment. Major skin barrier lipids are ceramides, fatty acids, and cholesterol, and its dysfunction in terms of quantitative analysis of the molecular composition of skin has been widely studied using CRM [8][9][10].
Caspers et al. has studied the in vivo and in vitro skin characterization of its molecular composition varying with respect to the depth using CRM. Water, NMF, and other amino acids in the stratum corneum were quantified with the help of Raman spectra from synthetic skin constituents and mathematical curve-fitting models. It has been observed that the NMF concentration plays an important role in skin molecular constituents at different depths [11][12][13]. Raman spectroscopy has been extended to understand the changes in skin constituents that are responsible for the pathogenesis of AD. O'Regan et al. discovered that NMF's Raman signature in the SC can be used as a marker of the FLG genotype in patients with moderate-to-severe AD and that NMF content can even further differentiate among them [6]. Miltz et al. measured lipid and water concentrations other than NMF in the stratum corneum using Raman spectroscopy in a French population and genotyped for the major European FLG mutation. It was found that low concentrations of specific skin constituents such as histidine, alanine, glycine, and pyrrolidone-5-carboxylic acid in the SC was associated with FLG mutations with 92% specificity [7]. Janssens et al. first found that an increase in short chain length ceramides led to the aberrant lipid organization in the SC and hence to an impaired skin barrier function in patients with eczema. Later, they reported that the lipid/protein ratio in patients with eczema is related to skin barrier function and it is proportional to the dry SC mass per skin area in lesioned SC of patients with eczema [8,9]. Verzeaux et al. identified a modification of lipid organization and conformation in addition to the decrease in the lipid-to-protein ratio using ordinary partial least square (OPLS) regression in atopic skin [10]. Recently, Ho et al. used a support vector machine (SVM) model to derive an Eczema Biochemical Index (EBI) to further stratify the severity of AD patients based on the skin constituents' content measured using a handheld CRM system [14].
The filaggrin mutation or the loss of skin ceramides resulting in skin barrier dysfunction has been studied extensively. However, in the previously reported eczema clinical studies using CRM, machine learning models were not utilized to understand the underlying chemical biomarker information in terms of their characteristic Raman wavenumber or wavebands. In this work, we report the use of machine learning methods for discrimination or classification analysis between patients with eczema and healthy subjects from the Raman signatures acquired within the fingerprint wavenumber region. Machine learning classification results from multiple supervised machine learning algorithms are reported from the Raman eczema spectral dataset to develop an accurate and robust classification model. For optimum performance of the partial least squares discriminant analysis (PLS-DA) classification model, the minimum required number of latent variables (LVs) were Sensors 2022, 22, 4674 3 of 13 evaluated while preserving the least classification error. Further, the classification model was evaluated using the stratified K-fold cross-validation method and computing classification metrics such as confusion matrix, sensitivity, specificity, etc. The detailed Raman spectra analysis in terms of wavenumber contribution responsible for the discrimination between two subject groups is presented with the help of a variable importance in projection (VIP) score developed using the PLS-DA classification model, which can be further utilized for the variable selection to improve classification accuracy.
Materials and Methods
In this study, patients with atopic dermatitis (n = 52) and healthy subjects (n = 20) with no other known skin disease participated after initial consultation with clinician. The disease severity of the eczema patients was evaluated by using scoring of atopic dermatitis (SCO-RAD) after their routine checkup by the clinician. A total of 8 mild (SCO-RAD < 25), 31 moderate (25 < SCO-RAD < 50), and 13 severe (SCO-RAD > 50) eczema subjects participated in the clinical study. Since the skin pigmentation reduces the signal-to-noise ratio of the acquired Raman signal, volar arm location with visible lesion is preferred for data acquisition. Additionally, subjects with a Fitzpatrick (FP) score either 3 or 4 were preferred for data acquisition. Among all the 52 eczema subjects, 12 subjects with an FP score of 3, 38 subjects with an FP score of 4, and 2 subjects with an FP score of 5 were recruited. On the other hand, among 20 healthy subjects, 11 subjects with an FP score of 3, 5 subjects with an FP score of 4, and 4 subjects with an FP score of 5 were recruited. Skin physiology measurements such as TEWL were acquired and used for power analysis to substantiate each group sample size. The mean ± standard deviation of TEWL for the eczema and healthy subject group was 17.77 ± 9.0891 and 10.58 ± 2.7964, respectively. The two-sample t-test of the respective two group populations' TEWL value confirmed that the sample size of two groups was sufficient to be differentiated with p < 0.001. All participants were above age 21 at the time of recruitment and did not apply any dermatological products to their forearms prior to the clinical study. All the Raman spectroscopy data were processed using open-source Python programming language version 3.8.6 with additional routines and libraries through the Anaconda distribution. The clinical study was approved by the National Healthcare Group Domain Specific Review Board (DSRB reference number 2017/00932) and informed consent was obtained from all the participants. The clinical study involving human participants was performed in accordance as approved by the Institutional Review Board.
In Vivo Non-Invasive Raman Micro-Spectroscopy
An in-house handheld CRM system was developed to conduct the eczema clinical trial by measuring the Raman spectra at the lower volar arm location in vivo as shown in the schematic in Figure 1. A 3D-printed fixture was fabricated to hold and rest the arm while acquiring the data from the handheld CRM system. The skin surface was illuminated using a 785 nm single-mode fiber-coupled laser (Innovative Photonic Solutions U-type, Innovative Photonic Solutions, Inc., Plainsboro, NJ, USA) and a laser line filter (Chroma 49950-RT, Chroma Technology Corp, Bellows Falls, VT, USA) with a laser power of ≈ 25 mW on the skin surface after passing through a microscopic objective (Nikon CFI Plan Fluor 40×/0.75, Nikon Corporation, Tokyo, Japan). In order to avoid direct contact between the microscopic objective and subject's skin, a thin glass window (cover glass) was placed in the handheld CRM system housing. The cover glass came in direct contact with the patient's skin and allowed the z-motion of the microscopic objective. The scattered Raman signal was collected using the same microscopic objective and was focused back to the multimode optical fiber with the help of beam expander and relayed to the spectrograph (Andor Kymera 193i, Andor, Belfast, UK) coupled with a charged coupling device (Andor iDus 416, Andor, Belfast, UK). The spectrograph grating (830 lines/mm blazed at 820 nm) and the back-illuminated deep depleted charge coupling device (CCD) camera were chosen in such a way to have the maximum collective quantum efficiency of ≈ 90% in the wavelength region of 800-900 nm where most of the skin Raman spectra was acquired while achieving spectral resolution of ≈0.3 nm. With the help of the stepper motor embedded inside the CRM system, Raman spectra as a function of depth were acquired in vivo at 10 depths at a step size of 10 µm (Appendix A, Figure A1). The Raman spectra acquired between 400 cm −1 to 1800 cm −1 with unique 1384 intensity data points per subject were further used for spectral preprocessing, normalization, and chemometric machine learning analysis.
Raman Spectra Preprocessing
Endogenous tissue autofluorescence and background scattering adds noise to the acquired Raman spectra, masking the important spectral information related to the tissue under investigation. This unwanted noise signal may cause deviation from the linear relationship between the acquired Raman intensity signal and molecular concentration of skin constituents [15,16], warranting the preprocessing of Raman spectra. In the present work, we used the asymmetric least squares (AsLS) baseline correction method to remove the baseline of Raman spectra acquired at all depths [17]. The baseline-corrected Raman spectra at each depth were further processed with the Savitzky-Golay spectral smoothing algorithm to improve the signal-to-noise ratio [18]. After the spectra baseline correction and smoothing, Raman spectra acquired at different depths above and below the skin surface were further analyzed to observe the skin surface precisely. Keratin is an important (≈80% dry weight) component of the skin's top layer, i.e., SC. It was reported earlier that the keratin contribution of the Raman spectra at 1655 cm −1 can help to decipher skin surface information [19]. In our analysis, the skin surface was determined by locating the position of the maximum of keratin amide I intensity profile at 1655 cm −1 . Thus, the Raman spectra associated with the skin surface were determined from 10 Raman spectra acquired at different depths below the skin surface (Appendix A, Figure A2). The same procedure of preprocessing and skin surface determination was repeated for Raman spectra acquired on all healthy (n = 20) and eczema subjects (n = 52) and further used as a combined Raman spectral dataset.
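As an illustration of this preprocessing pipeline, the sketch below implements AsLS baseline correction, Savitzky-Golay smoothing and keratin-based surface detection. It is a minimal reconstruction assuming generic array inputs, not the authors' actual code; the parameter values (lam, p, window length) are placeholders rather than the values used in the study.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.signal import savgol_filter

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares (AsLS) baseline estimate of a 1-D spectrum."""
    n = len(y)
    diff = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))
    penalty = lam * (diff @ diff.T)          # second-difference smoothness penalty
    w = np.ones(n)
    z = y
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve(W + penalty, w * y)      # weighted, penalized least squares fit
        w = p * (y > z) + (1 - p) * (y < z)  # asymmetric re-weighting of residuals
    return z

def preprocess(spectrum):
    """Baseline-correct and smooth one Raman spectrum (intensity vs. wavenumber)."""
    corrected = spectrum - asls_baseline(spectrum)
    return savgol_filter(corrected, window_length=11, polyorder=3)

def skin_surface_spectrum(depth_spectra, wavenumbers):
    """Pick the depth whose amide I (keratin, 1655 cm^-1) intensity is maximal."""
    processed = np.array([preprocess(s) for s in depth_spectra])
    idx_1655 = np.argmin(np.abs(wavenumbers - 1655.0))
    surface_depth = np.argmax(processed[:, idx_1655])
    return processed[surface_depth]
```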
Machine Learning Methods
Prior to any chemometric analysis, Raman spectra related to each subject's volar arm skin surface were standardized using the standard normal variate normalization method. For dimensionality reduction of the Raman spectral dataset, an unsupervised principal component analysis (PCA) was employed to extract a set of orthogonal principal components (PCs) that accounted for the maximum variance in the Raman spectral dataset. However, the first five PCs accounted for a relatively low explained variance (≈65%), which suggested that the features in the Raman spectral dataset for all the subjects could not be explained with few orthogonal dimensions. These PCs did not show a clear demarcation between the two subject groups and thus, the related PC loadings could not deduce the underlying spectral biomarkers in terms of the wavenumber for subject group class differentiation. Since the variance in the Raman spectral dataset could not be explained in fewer principal components, there is a need for other supervised machine learning methods for Raman spectral dataset dimensionality reduction and its binary classification.
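A minimal sketch of this exploratory step is shown below; it assumes X is the matrix of skin-surface spectra (subjects × wavenumbers), applies standard normal variate normalization per spectrum, and reports how much variance the first few principal components capture, mirroring the ≈65% figure quoted above. It is an illustration, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA

def snv(X):
    """Standard normal variate: center and scale each spectrum individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def explained_variance_report(X, n_components=5):
    """PCA scores and cumulative explained variance of the normalized spectra."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(snv(X))
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return scores, cumulative  # cumulative[-1] ~ 0.65 for the dataset described here
```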
Other supervised machine learning methods such as linear discrimination analysis (LDA), logistic regression, naïve Bayes, K-nearest neighbor (KNN), SVM, and PLS-DA were explored as binary classification models for the preprocessed, normalized Raman spectral dataset. Among all these exploratory supervised machine learning methods, PLS-DA was preferred for further analysis of the Raman eczema spectral dataset because it not only works as a multivariate dimensionality reduction tool like the PCA but also functions as a binary classifier for a dataset with many variables (wavenumbers). The basic principle behind the supervised PLS-DA classifier has been described elsewhere [20][21][22]. In our analysis of the Raman spectral dataset, the correct number of LVs were determined to further enhance the classification accuracy of the PLS-DA classification model to discriminate between the healthy and eczema subject groups. The accuracy of this classification model was verified using the stratified K-fold cross-validation method, as this method allowed us to preserve the percentage of the same sample size from both healthy and eczema groups into two subsets for training and test. Based on this cross-validation, the performance of the classification model was evaluated using binary classification metrics such as classification accuracy, sensitivity, specificity, and the receiver operating characteristics (ROC) curve.
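The sketch below shows one common way to implement PLS-DA as a binary classifier with stratified 10-fold cross-validation using scikit-learn's PLSRegression. It is an illustrative reconstruction under the assumptions noted in the comments (labels 0 = healthy, 1 = eczema, four latent variables, 0.5 decision threshold), not the code used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold

def plsda_cross_validate(X, y, n_components=4, n_splits=10, threshold=0.5):
    """PLS-DA binary classification with stratified K-fold cross-validation.

    X: (n_samples, n_wavenumbers) preprocessed, mean-centered spectra (numpy array).
    y: binary labels (0 = healthy, 1 = eczema) -- an assumed encoding.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    tp = tn = fp = fn = 0
    for train_idx, test_idx in skf.split(X, y):
        pls = PLSRegression(n_components=n_components)
        pls.fit(X[train_idx], y[train_idx])
        y_pred = (pls.predict(X[test_idx]).ravel() >= threshold).astype(int)
        tp += np.sum((y_pred == 1) & (y[test_idx] == 1))
        tn += np.sum((y_pred == 0) & (y[test_idx] == 0))
        fp += np.sum((y_pred == 1) & (y[test_idx] == 0))
        fn += np.sum((y_pred == 0) & (y[test_idx] == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return accuracy, sensitivity, specificity
```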
Variable Importance in Projection
The VIP score is an important parameter that can be evaluated from the PLS-DA classification model and estimates the importance of each variable (wavenumber). For each variable, the VIP score is a weighted sum, over the PLS latent variables, of its squared PLS weights, where each latent variable is weighted by the amount of response variance it explains. Variables with a VIP score value greater than the numerical value of one (1) are considered as important and can further help to optimize the PLS-DA classification model [22,23]. The VIP score helps to identify important wavenumbers or Raman band regions that are significantly different in the two groups under investigation, i.e., the VIP score can be used to discriminate between the two subject groups by selecting certain wavenumbers closely related to underlying skin constituents. With the help of this quantitative VIP scoring through the classification model, the underlying Raman wavebands associated with different skin constituents could be evaluated.
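A compact way to compute VIP scores from a fitted scikit-learn PLSRegression model is sketched below, using the standard VIP formula; treat it as an illustrative implementation rather than the authors' exact procedure, and note that the VIP > 1 cut-off follows the convention quoted above.

```python
import numpy as np

def vip_scores(pls_model):
    """Variable importance in projection (VIP) for a fitted PLSRegression model."""
    t = pls_model.x_scores_      # (n_samples, n_components) latent variable scores
    w = pls_model.x_weights_     # (n_features, n_components) X weights
    q = pls_model.y_loadings_    # (n_targets, n_components) Y loadings
    p, _ = w.shape
    # Response sum of squares explained by each latent variable.
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)
    w_norm = w / np.linalg.norm(w, axis=0, keepdims=True)
    vip = np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())
    return vip  # wavenumbers with VIP > 1 are treated as discriminatory
```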
Results and Discussion
Figure 2 shows the mean ± standard deviation of non-normalized preprocessed skin surface Raman spectra for all healthy (n = 20) and eczema subjects (n = 52) in the fingerprint region (400-1800 cm −1 ). The major differences in the Raman intensity for the two groups appeared within the wavenumber range 850-930 cm −1 , the amide III band (1240-1330 cm −1 ), the intensity variation of the shoulder at 1420 cm −1 , and the amide I band (1640-1680 cm −1 ). This difference was related to the Raman spectra of the uppermost SC layer, which are dominated by the vibrational bands of its structural proteins, amino acids, and lipids. The Raman intensity difference within the wavenumber range 850-930 cm −1 and the shoulder intensity at 1420 cm −1 could be attributed to the molecular composition of the SC in terms of NMF, which primarily consists of amino acids (serine, glycine, alanine, etc.), their derivatives, and pyrrolidone carboxylic acid having a distinctive peak at 885 cm −1 . The Raman intensity difference in the amide III band (1240-1330 cm −1 ) was predominantly due to ceramide III (having one of its distinctive peaks at 1296 cm −1 ), the most abundant lipid in the stratum corneum. The largest mean Raman intensity difference between the two groups was visualized in the amide I band having an intensity peak at 1650 cm −1 , which corresponds to urocanic acid in the stratum corneum [11][12][13].
Figure 2. Preprocessed Raman spectra achieved using CRM system within the fingerprint 400-1800 cm −1 wavenumber range for eczema (n = 52, top) and healthy (n = 20, middle) subjects. The difference between the two spectra is shown in grey at the bottom. Shaded region depicts 1 standard deviation variation in the data while the solid line depicts the means of the spectra.
These differences in the Raman spectral signature acquired for two groups depict the regions of interest as chemical biomarkers for AD. To further understand the underlying quantitative Raman biomarkers, different supervised binary classification methods were tested for the classification of Raman spectra from the two subject groups. The preprocessed, baseline-corrected, and mean-centered Raman spectral dataset for two subject groups with binary group affinities was used with multiple binary classification methods. Table 1 shows a summary of the results from some of the popular binary classification methods such as LDA, naïve Bayes, logistic regression, KNN, and SVM. The results were evaluated in terms of classification metrics such as aggregated classification accuracy, specificity, sensitivity, and mean ROC AUC score through the stratified K-fold cross-validation method. The stratified cross-validation method maintains the class imbalance in different folds of the training and test Raman spectral dataset. It is evident from these results that these binary classification methods demonstrated good classification accuracy but lacked visualization of classification results in terms of LV score or a scatterplot.

The PLS-DA classification method allows one to visualize the classification between two (or more) subject groups in terms of its LVs and allows for the identification of important variables responsible for classification. The mean-centered, baseline-corrected Raman spectral dataset from the two study groups was used as the descriptor (X) matrix, whereas the response (Y) vector was artificially generated to designate group affinities. The PLS-DA determines the fit between the descriptor matrix and class groups by maximizing the covariance. As a result, LVs in terms of the PLS score were determined. In our analysis, the minimum number of LVs was determined based on minimizing the classification error or improving classification accuracy. Figure 3 shows that a minimum of four (4) LVs was required to build an adequate PLS-DA classification model as the average calibration and cross-validation classification error was ≈0% and 8%, respectively, with four LVs. Thus, a minimum number of four (4) LVs was an ideal number to build a classification model that prevents underfitting or overfitting and imparts highly accurate predictions.
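The latent-variable selection described here can be reproduced with a short sweep over the number of components, as sketched below; the decision threshold and random seed are placeholders, and the routine simply reports the cross-validated classification error curve from which the smallest adequate number of LVs (four in this study) can be read off.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

def lv_selection_curve(X, y, max_lv=10, n_splits=10):
    """Cross-validated classification error versus number of PLS latent variables."""
    errors = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for n_lv in range(1, max_lv + 1):
        pls = PLSRegression(n_components=n_lv)
        y_cv = cross_val_predict(pls, X, y, cv=cv).ravel()
        errors.append(np.mean((y_cv >= 0.5).astype(int) != y))
    return np.array(errors)  # choose the smallest n_lv near the error plateau
```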
The performance of the optimized PLS-DA classification model was further evaluated through the stratified K-fold (K = 10) cross-validation method to nullify any ambiguity due to class imbalance. Table 2 shows an aggregated confusion matrix and related classification metrics evaluated through cross-validation. From the PLS-DA classification model with an accuracy of 0.92 ± 0.05, a sensitivity and specificity of 0.94 and 0.85 were achieved, respectively. Figure 4b shows an averaged receiver operating characteristics (ROC) curve from multiple cross-validation folds that demonstrates the capability of the PLS-DA binary classifier to distinguish between eczema and healthy subjects with threshold variation.
Table 2. Classification metrics from the PLS-DA binary classification model using the stratified K-fold (K = 10) cross-validation method for eczema (n = 52) and healthy (n = 20) subjects.

As shown in Figure 2, the Raman difference spectrum demonstrated a direct comparison between the eczema and healthy subject molecular vibration spectra. As described earlier, this difference spectrum shows that there were certain wavenumber regions where there is a difference in the mean Raman spectral intensity acquired between eczema and healthy subjects. Figure 5 shows the VIP score evaluated using the PLS-DA classification model. A VIP score greater than the numerical value of one (1) indicates the spectral bands or wavenumbers that are important for optimal PLS-DA classification model performance, i.e., these Raman bands or wavenumbers are discriminatory between the healthy and eczema subject groups. From this figure, it is evident that the most discriminatory wavenumber bands appeared in the lipid, protein, and nucleic acid band regions (1030 to 1130 cm −1 , 1300 to 1450 cm −1 , and 1620 to 1700 cm −1 ) of the skin biomolecular composition [24]. Thus, the Raman wavenumber bands attributed to lipids (960, 980, 1078, 1379, and 1655 cm −1 ), proteins (618, 755, 855, 980, 1003, 1154, 1207, 1552, and 1655 cm −1 ), amino acids (855, 1420, 1452, 1586, and 1716 cm −1 ), and nucleic acids (787, 1078, and 1452 cm −1 ) could be observed as the most important spectroscopic signatures for the classification between the eczema and healthy subject groups. These Raman wavenumbers and wavebands are also tabulated in Table 3 with the peak assignments.
Conclusions
Non-invasive quantitative analysis of AD in terms of skin molecular composition is important for dermatological diagnosis and treatment. Quantitative knowledge of skin biomolecular composition can be obtained using CRM; however, multivariate analysis is needed to elucidate complex Raman spectroscopic signatures. Here, we presented a method to classify between AD and healthy subjects based on multivariate analysis using PLS-DA and CRM. The PLS-DA classification model presented here is currently limited to binary classification; however, it would be interesting to evaluate multiclass classification based on eczema disease severity. Our approach of using the PLS-DA classification method permitted dimensionality reduction, classification, and variable selection for Raman micro-spectroscopy data. We cross-validated our PLS-DA classification model and achieved a sensitivity and specificity of 0.94 and 0.85, respectively. The classification accuracy of the PLS-DA model can be further enhanced by selecting only the wavenumber bands having a VIP score ≥1. Further, with the help of the VIP score, important Raman spectroscopic signatures in terms of Raman peaks or wavenumber bands for lipids, proteins, and nucleic acids were evaluated that can act as biomarkers to assess the skin condition of eczema subjects in the clinic and to guide further therapeutics. This quantitative analysis of skin inflammatory conditions such as AD using CRM and multivariate analysis may pave the way for next-generation diagnosis, unlike the current subjective scoring assessments used in clinics.
Institutional Review Board Statement:
The clinical study was approved by the National Healthcare Group Domain Specific Review Board (DSRB reference number 2017/00932) and informed consent was obtained from all the participants. The clinical study involving human participants was performed in accordance with the protocol approved by the Institutional Review Board.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Not applicable.
Acknowledgments: Authors would also like to acknowledge Li Xiuting, Ghayathri Balasundaram, Perumal Jayakumar, Wong Chi Lok, Lim Hann Qian, and other colleagues at IBB, A*STAR for their help in this study.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Figure A1. Raman spectra acquired from one of the healthy subject's volar arm locations using the handheld CRM system in the fingerprint region at 10 different depths (10 to 100 µm).

Figure A2. Skin Raman spectra acquired at 10 depths. The skin surface (highlighted) was evaluated by finding the Raman spectrum with maximum peak intensity related to amide I (keratin) at 1655 cm −1 .
|
v3-fos-license
|
2020-10-14T13:04:58.067Z
|
2020-10-01T00:00:00.000
|
222315414
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.201200",
"pdf_hash": "a59610bc4c3d98c587b9278be98474154d4fc847",
"pdf_src": "RoyalSociety",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46068",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "a59610bc4c3d98c587b9278be98474154d4fc847",
"year": 2020
}
|
pes2o/s2orc
|
Adaptation of the carbamoyl-phosphate synthetase enzyme in an extremophile fish
Tetrapods and fish have adapted distinct carbamoyl-phosphate synthase (CPS) enzymes to initiate the ornithine urea cycle during the detoxification of nitrogenous wastes. We report evidence that in the ureotelic subgenus of extremophile fish Oreochromis Alcolapia, CPS III has undergone convergent evolution and adapted its substrate affinity to ammonia, which is typical of terrestrial vertebrate CPS I. Unusually, unlike in other vertebrates, the expression of CPS III in Alcolapia is localized to the skeletal muscle and is activated in the myogenic lineage during early embryonic development with expression remaining in mature fish. We propose that adaptation in Alcolapia included both convergent evolution of CPS function to that of terrestrial vertebrates, as well as changes in development mechanisms redirecting CPS III gene expression to the skeletal muscle.
Introduction
In living organisms, protein metabolism results in the production of nitrogenous wastes which need to be excreted. Most teleosts are ammonotelic, excreting their toxic nitrogenous waste as ammonia across gill tissue by diffusion. As an adaptation to living on land, amphibians and mammals are ureotelic, using liver and kidney tissues to convert waste ammonia into the less toxic and more water-soluble urea, which is then excreted in urine. Other terrestrial animals such as insects, birds and reptiles are uricolotic and convert nitrogenous waste into uric acid, which is eliminated as a paste; a process which requires more energy but wastes less water [1].
While most adult fish are ammonotelic, the larval stages of some teleosts excrete nitrogenous waste as both ammonia and urea before their gills are fully developed [2]. Additionally, some adult fish species such as the gulf toad fish (Opsanus beta; [3]) and the African catfish (Clarias gariepinus; [4]) also excrete a proportion of their nitrogenous waste as urea. This is usually in response to changes in aquatic conditions, such as high alkalinity. It has been shown experimentally that high external pH prevents diffusion of ammonia across gill tissue [5,6]. Unusually, the cichlid fish species in the subgenus Alcolapia (described by some authors as a genus but shown to nest within the genus Oreochromis) [7], which inhabit the highly alkaline soda lakes of Natron (Tanzania) and Magadi (Kenya), are reported to be 100% ureotelic [8,9].
Once part of a single palaeolake, Orolonga [10], Lakes Natron and Magadi are one of the most extreme environments supporting fish life, with water temperatures up to 42.8°C, pH approximately 10.5, fluctuating dissolved oxygen levels, and salt concentrations above 20 parts per thousand [11]. Alcolapia is the only group of fish to survive in these lakes, forming a recent adaptive radiation including the four species: Alcolapia grahami (Lake Magadi) and A. latilabris, A. ndlalani and A. alcalica (Lake Natron) [11,12]. The harsh environment of the soda lakes presents certain physiological challenges that Alcolapia have evolved to overcome, including the basic need to excrete nitrogenous waste. While other species are able to excrete urea in response to extreme conditions, none do so to the level of Alcolapia [13,14], and unlike facultative ureotelic species, the adaptation of urea production and excretion in Alcolapia is considered fixed [15]. Moreover, the heightened metabolic rate in Alcolapia, a by-product from living in such an extreme environment [8,16], requires an efficient method of detoxification.
Alcolapia and ureotelic tetrapods (including humans) detoxify ammonia using the ornithine urea cycle (OUC) where the mitochondrial enzyme carbamoyl-phosphate synthetase (CPS) is essential for the first and rate-limiting step of urea production [17]. This enzyme, together with the accessory enzyme glutamine synthase, provide an important switch regulating the balance between ammonia removal for detoxification and maintaining a source of ammonia for the biosynthesis of amino acids [18]. CPS has evolved into two biochemically distinct proteins: in terrestrial vertebrates CPS I uses ammonia as its preferential nitrogen donor, while in teleosts CPS III accepts glutamine to produce urea during larval stages (reviewed Zimmer et al. [2]). While CPS I/III are mitochondrial enzymes and part of the urea cycle, CPS II is present in the cytosol catalysing the synthesis of carbamoyl phosphate for pyrimidine nucleotide biosynthesis. CPS I/III are syntenic, representing orthologous genes; their somewhat confusing nomenclature is based on the distinct biochemical properties of their proteins. CPS I/III genes from different vertebrate species clade together, separate from CPS II (electronic supplementary material, figure). For simplicity, we will continue to refer to fish, glutamine binding CPS as CPS III and tetrapod, ammonia binding CPS as CPS I. The teleost CPS III binds glutamine in the glutamine amidotransferase (GAT) domain using two amino acid residues [19], subsequently, the nitrogen source provided by the amide group is catalysed by a conserved catalytic triad; Cys-His-Glu [20]. In terrestrial vertebrates CPS I lacks a complete catalytic triad and can only generate carbamoylphosphate in the presence of free ammonia [21]. This change in function from glutamine binding CPS III to ammonia binding CPS I is believed to have evolved in the stem lineage of living tetrapods, first appearing in ancestral amphibians [21].
In tetrapods and most fish, the OUC enzymes are largely localized to the liver [22], the main urogenic organ [23]. Alcolapia are different, and the primary site for urea production in these extremophile fishes is the skeletal muscle [24]. Notably, glutamine synthase activity is reportedly absent in Alcolapia muscle tissue. The kinetic properties of CPS III in Alcolapia, therefore, differ from that of other teleosts in that it preferentially uses ammonia as its primary substrate, having maximal enzymatic rates above that of binding glutamine (although it is still capable of doing so) as opposed to in other species where the use of ammonia yields enzymatic rates of around 10% to that of glutamine [24]. These rates are similar to ureotelic terrestrial species, where CPS I preferentially binds ammonia and is incapable of using glutamine [20].
Here, we report the amino acid sequence of two Alcolapia species (A. alcalica and A. grahami) that reveals a change in CPS III substrate binding site. In addition, we show that the expression of Alcolapia CPS III in skeletal muscle arises early in embryonic development where transcripts are restricted to the somites, the source of skeletal muscle in all vertebrates, and migrating myogenic precursors. We discuss changes to the structure of functional domains and modular gene enhancers that probably underpin evolutionary changes in Alcolapia CPS III substrate binding and the redirection of gene expression from the hepatogenic to myogenic lineage [25]. Our findings point to adaptation in Alcolapia including both convergent evolution of CPS function to that of terrestrial vertebrates, as well as changes in development mechanisms redirecting CPS III gene expression to the skeletal muscle.
Experimental animals
Fieldwork at Lake Natron, Tanzania, was conducted during June and July of 2017 to collect live specimens of the three endemic species in an attempt to produce stable breeding populations of these fishes in the UK. Live fish were all collected from a single spring (site 5 [11,26]) containing all three species found in Lake Natron and identified using morphology as described in Seegers & Tichy [12]. A stand-alone, recirculating aquarium was adapted to house male and female A. alcalica in 10 or 30 l tanks at a constant temperature of 30°C, pH 9 and salt concentration of 3800 µS at the University of York.
Expression of CPS III in adult tissues
Reverse transcription-polymerase chain reaction (RT-PCR) was used to determine the presence of CPS III in different tissues (gill, muscle, liver, brain) of three different adult A. alcalica. RNA was extracted from dissected tissues with TriReagent (Sigma-Aldrich) to the manufacturers' guidelines. For cDNA synthesis, 1 µg of total RNA was reverse transcribed with random hexamers (Thermo Scientific) and superscript IV (Invitrogen). PCR was performed on 2 µl of the above cDNA with Promega PCR master mix and 0.5 mM of each primer (forward: CAGTGGGAGGTCAGATTGC, reverse: CTCACAGCGAAGCACAGGG). Gel electrophoresis of the PCR products determined the presence or absence of CPS III RNA.
In situ hybridization
For the production of antisense probes, complementary to the mRNA of CPS III for use in in situ hybridization, the above 399 bp PCR product was ligated into pGEM-T Easy and transformed into the Escherichia coli strain DH5α. This was linearized and in vitro run-off transcription was used to incorporate a DIG-labelled UTP analogue. To determine the temporal expression of these transcripts in A. alcalica, embryos were collected at different stages of development (2, 4 and 7 days post fertilization (between 15 and 20 for each stage)), fixed for 1 hour in MEMFA (0.1 M MOPS pH 7.4, 2 mM EGTA, 1 mM MgSO 4 , 3.7% formaldehyde) at room temperature and stored at −20°C in 100% methanol. For in situ hybridization, embryos were rehydrated and treated with 10 µg ml −1 proteinase K at room temperature. After post-fixation and a 2 h pre-hybridization, embryos were hybridized with the probe at 68°C in hybridization buffer (50% formamide (Ambion), 1 mg ml −1 total yeast RNA, 5×SSC, 100 µg ml −1 heparin, 1× Denhardt's, 0.1% Tween-20, 0.1% CHAPS, 10 mM EDTA). Embryos were extensively washed at 68°C in 2×SSC + 0.1% Tween-20, 0.2×SSC + 0.1% Tween-20 and maleic acid buffer (MAB; 100 mM maleic acid, 150 mM NaCl, 0.1% Tween-20, pH 7.8). This was replaced with pre-incubation buffer (4× MAB, 10% BMB, 20% heat-treated lamb serum) for 2 h. Embryos were incubated overnight (rolling at 4°C) with fresh pre-incubation buffer and 1/2000 dilution of anti-DIG coupled with alkaline phosphatase (AP) (Roche). These were then visualized by the application of BM purple until staining had occurred.
Sequence analysis of CPS III
cDNA was produced from the RNA extracted from whole embryos using the above method for A. alcalica and A. grahami. Multiple primer pairs (electronic supplementary material, table S1) were used to amplify fragments of CPS III from the cDNA via PCR and the products sent for sequencing. The coding region of CPS III was then constructed using multiple alignments against the CPS I and III from other species. The amino acid sequence was then examined for potential changes which could predict the functional differences seen in Alcolapia. Phylogenetic analysis was also used to confirm the Alcolapia genes analysed here are CPS III (electronic supplementary material, figure S1). To determine potential changes in the promoter region, a 3500 bp section of genome (accession number NCBI: MW014910) upstream of the transcriptional start site of CPS from the A. alcalica (unpublished genome), Oreochromis niloticus (Nile tilapia), Xenopus tropicalis (western clawed frog) and Danio rerio (zebrafish) genomes was accessed on Ensembl, aligned and examined for binding sites specific to the muscle transcription factor Myod1 (E-boxes), which preferentially binds paired E-boxes in the enhancer regions of myogenic genes with the consensus motif CAG(G/C)TG, as well as E-boxes more broadly (CANNTG). The published genomes of O. niloticus, D. rerio and X. tropicalis were accessed using Ensembl, whereas the Alcolapia genome was constructed from whole-genome sequences.
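As an illustration of this promoter scan, the regex-based sketch below finds MyoD-preferred E-boxes (CAG(G/C)TG), generic E-boxes (CANNTG) and closely spaced pairs in an upstream sequence. The function name, pair-spacing window and input sequence are hypothetical placeholders rather than values taken from the study, and for brevity the reverse strand is not scanned separately.

```python
import re

MYOD_EBOX = re.compile(r"CAG[GC]TG")   # MyoD-preferred consensus CAG(G/C)TG
GENERIC_EBOX = re.compile(r"CA..TG")   # generic E-box CANNTG

def find_eboxes(upstream_seq, pair_window=100):
    """Locate E-box motifs and report MyoD-consensus sites occurring in close pairs.

    upstream_seq: promoter/upstream DNA (5'->3'), e.g. ~3500 bp of sequence.
    pair_window: maximum spacing (bp) for two MyoD E-boxes to count as a pair
                 (an illustrative threshold, not a value from the paper).
    """
    seq = upstream_seq.upper()
    myod_sites = [m.start() for m in MYOD_EBOX.finditer(seq)]
    all_sites = [m.start() for m in GENERIC_EBOX.finditer(seq)]
    pairs = [(a, b) for a, b in zip(myod_sites, myod_sites[1:]) if b - a <= pair_window]
    return {"myod_eboxes": myod_sites, "all_eboxes": all_sites, "paired_myod": pairs}
```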
CPS III expression is activated early in the skeletal muscle lineage in A. alcalica
Analysis of gene expression of CPS III in dissected tissues of three adult A. alcalica shows that transcripts were only detected in adult muscle (figure 1a). In situ hybridization methods on A. alcalica embryos at different stages were carried out to investigate whether this restricted muscle expression was established during development ( figure 1b-f ). Blue coloration indicates hybridization of the complementary RNA probe and shows the strongest expression in the developing somites along the body axis (black arrows). Expression was also detected in migratory muscle precursors (MMP; black arrowheads), which go on to form the body wall and limb musculature, and in the developing pectoral fin buds (white arrows). All regions of the embryo that show expression of CPS III are in the muscle lineage indicating that in A. alcalica CPS III expression is restricted to muscle tissues in both adults and the developing embryo.
Many muscle-specific genes are activated during development by the muscle-specific transcription factor MyoD. Consistent with this, the region upstream of CPS III in A. alcalica contains a pair of E-boxes matching the MyoD consensus motif (see Discussion).
Convergent evolution in the adaptive function of CPS III
Sequence analysis of A. alcalica and A. grahami CPS III revealed a discrepancy in the catalytic triad compared to the published sequence for CPS III in A. grahami (accession number NCBI: AF119250). The coding region for A. alcalica and A. grahami was cloned and sequenced (accession numbers NCBI: MT119353, MT119354). Our data confirmed the error in the published sequence of A. grahami CPS III and shows Alcolapia species maintain a catalytic triad essential for catalysing the breakdown of glutamine (red boxes in figure 3). However, similar to terrestrial vertebrate CPS I which lack either one but usually both residues essential for binding glutamine for utilization by the catalytic triad (arrowheads in figure 3), Alcolapia also lack one of these residues (asterisk in figure 3). This amino acid sequence is consistent with a change in function permitting Alcolapia CPS III to bind and catalyse ammonia directly, an activity usually restricted to terrestrial vertebrate CPS I, as elucidated by extensive previous biochemical analyses [20,21].
Discussion
While most teleosts are ammonotelic, larval fish can convert ammonia to urea for excretion and to do so express the genes coding for the enzymes of the OUC, including CPS III [27]. Later these genes are silenced in most fish. In the rare cases where urea is produced in adult fish, the OUC enzymes are expressed in the liver [23]; however, there are some reports of expression in non-hepatic tissues [28,29]. We report here the expression of CPS III in the muscle of adult A. alcalica, which is consistent with the detection of CPS III protein and enzyme activity in muscle of A. grahami [24]. We also find conserved changes to the amino acid sequence which explain the convergent evolution of A. alcalica and A. grahami CPS III function with CPS I in terrestrial vertebrates. This conserved change in both Alcolapia species suggests that the adaptations in the OUC are likely to have evolved in the ancestral species inhabiting palaeolake Orolonga during the period of changing aquatic conditions (over the past 10 000 years) that led to the extreme conditions currently found in Lakes Natron and Magadi.
Activation of CPS III in the myogenic lineage
We find that the expression of CPS III is activated in somites and in migratory muscle precursors that will form body wall and limb musculature (indeed expression is seen in developing limb buds). All skeletal muscle in the vertebrate body is derived from the somites, and these CPS III expression patterns are similar to those of muscle-specific genes like myosin, actins and troponins [30][31][32].
Muscle-specific expression of CPS III in A. alcalica embryos is a remarkable finding as most ureotelic species convert nitrogenous waste to urea in the liver [8,20]. The expression of CPS III, the first enzyme in the OUC, in muscle tissue is probably significant for supporting the high catabolism in a fish species with the highest recorded metabolic rate [33]. There are few reports of some OUC gene expression or enzyme activity in non-hepatic tissue including muscle [28,29,34]; nonetheless, other fish species only evoke the activity of the OUC when exposed to high external pH or during larval stages [13,14,35,36], and even then, urea production is never to the high level of activity occurring in Alcolapia [24]. There is some heterogeneity of the expression patterns of CPS III during the development of different species in the teleost lineage; for example, D. rerio has reported expression in the body [37], Oncorhynchus mykiss (rainbow trout) shows expression in the developing body but not in hepatic tissue [38] and C. gariepinus (African catfish) had CPS III expression detected in the dissected muscle from larvae [4]. The early and sustained expression of CPS III in the muscle lineage is at this point an observation unique to Alcolapia.
Skeletal muscle-specific gene expression is activated in cells of the myogenic lineage by a family of bHLH transcription factors, including MyoD [30]. MyoD binds specifically at paired E-boxes in the enhancers of myogenic genes with a preference for the consensus motif CAG(G/C)TG [39,40]. MyoD is known to require cooperative binding at two E-boxes in close proximity to modulate transcription of myogenic genes [41]. The presence of a pair of E-boxes in Alcolapia, upstream of a gene which has switched to muscle-specific expression, suggests that MyoD drives expression early in development. Enhancer modularity is a known mechanism for selectable variation [42] and, although a single MyoD binding site does not define an enhancer, MyoD is known to interact with pioneer factors and histone deacetylases to open chromatin and activate gene transcription in the muscle lineage [40,43]. Experimental analysis to determine the activity of any regulatory sequences upstream of OUC genes in different species would shed light on the significance of putative transcription factor binding sites. This approach could also address another intriguing question: which elements drive the post-larval silencing of OUC genes in most fish species [37]? This area has received only minimal research attention, especially when compared with the well-characterized promoter regions of mammalian species (for instance, Christoffels et al. [44]). A further instance of an extremophile organism redirecting the expression of a hepatic enzyme to muscle tissue occurs in the crucian carp [45]. Under conditions of anoxia, this species switches to anaerobic metabolism, producing ethanol as the end product of glycolysis [46,47]. This is associated with the expression of alcohol dehydrogenase in muscle [48]. Together with our findings, this potentially reveals an example of convergent evolution whereby the muscle becomes the site for detoxifying by-products of metabolism. Elucidating any mechanisms, including modular enhancers, that facilitate the adaptation of gene regulation in response to changing environmental conditions will be of significant interest.
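To make the E-box logic above concrete, the following minimal sketch scans a DNA sequence for pairs of consensus E-boxes (CAG(G/C)TG) lying within a chosen window. The example promoter fragment and the 200 bp pairing window are illustrative assumptions; the sketch ignores the reverse strand and non-consensus E-boxes and is not a reconstruction of the Alcolapia enhancer analysis.

```python
import re

# Consensus E-box preferred by MyoD, CAG(G/C)TG (see text).
EBOX = re.compile(r"CAG[GC]TG")

def paired_eboxes(sequence, max_gap=200):
    """Return (start1, start2) positions of E-box pairs lying within
    max_gap bp of each other, as a crude proxy for a candidate
    MyoD-responsive element. Reverse strand is ignored for simplicity."""
    hits = [m.start() for m in EBOX.finditer(sequence.upper())]
    return [(a, b) for i, a in enumerate(hits) for b in hits[i + 1:] if b - a <= max_gap]

# Hypothetical promoter fragment (not an Alcolapia sequence).
fragment = "ttagCAGGTGaaacgttccgtaCAGCTGggtta"
print(paired_eboxes(fragment, max_gap=50))   # -> [(4, 22)]
```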
Convergent evolution of adaptive CPS III function
CPS proteins catalyse the production of carbamoyl-phosphate as the first step in nitrogen detoxification by accepting either glutamine or ammonia as a nitrogen donor [17]. Teleost CPS III binds glutamine: the nitrogen source provided by the amide group of glutamine is liberated by the conserved catalytic triad Cys-His-Glu in the glutamine amidotransferase (GAT) domain in the amino-terminal part of CPS [20]. In terrestrial vertebrates, CPS I lacks the catalytic cysteine residue and only generates carbamoyl-phosphate in the presence of free ammonia [21]. Although CPS in Alcolapia shares the most sequence identity with fish CPS III (figure 3), its ammonia binding activity is more similar in function to terrestrial vertebrate CPS I [20,24]. This adaptation to preferentially bind ammonia over glutamine supports efficient waste management in a fish with an exceptionally high metabolic rate [33]. CPS I in terrestrial vertebrates carries amino acid changes in the catalytic triad that explain its binding of ammonia over glutamine; a reduction in glutamine binding capacity drives the use of ammonia [21]. Here, we show that Alcolapia maintain the catalytic triad but (similar to mouse and human) lack one of the two residues required for efficient glutamine binding, weakening the affinity for glutamine and driving the use of ammonia as the primary substrate.
The interesting observation that bullfrog (Rana catesbeiana) CPS I retains the catalytic triad, but lacks the two additional conserved amino acids required for glutamine binding, has led to the suggestion that the change from preferential glutamine to ammonia binding originally evolved in the early tetrapod lineage [21]. A further frog species, the tree frog Litoria caerulea, retains its catalytic triad and only one of the two residues required for glutamine binding has been altered, weakening its affinity for glutamine and allowing for direct catabolism of ammonia [49]. Much the same as in Alcolapia, L. caerulea CPS I is still capable of using glutamine to some extent, which lends further support to the notion that the evolutionary transition from CPS III to CPS I occurred in amphibians and the early tetrapod lineage. The changes in the protein sequence of Alcolapia CPS III represent convergent evolution in this extremophile fish species, with acquired changes in functionally important domains that probably also evolved in early terrestrial vertebrate CPS I.
Conclusion
Alcolapia have acquired multiple adaptations that allow continued excretion of nitrogenous waste in a high pH environment. Among these is the novel expression of CPS in skeletal muscle, as well as the acquisition of mutations that change its function. Sequence evidence indicates that, like terrestrial vertebrates and uniquely among fish, Alcolapia CPS III is capable of binding and catalysing the breakdown of ammonia to carbamoyl-phosphate, a convergent evolution of CPS function. The novel and unique expression of CPS in muscle probably evolved through changes in enhancer regions of A. alcalica and A. grahami that place CPS under the control of muscle regulatory factors, directing its expression in the myogenic lineage during embryonic development. Environmentally driven adaptations have resulted in changes in both the expression and activity of CPS III in Alcolapia that underpin its ability to turn over nitrogenous waste in a challenging environment while maintaining a high metabolic rate.
Ethics. The research was approved by the University of York AWERB and under the Home Office licence for Dr. M.E. Pownall (POF 245945).
Data accessibility. Sequence data have been made available on NCBI (accession numbers: MT119353, MT119354) and in the electronic supplementary material files.
Authors' contributions. Experiments were designed by L.J.W. and M.E.P., work was carried out by L.J.W. and G.S. and the manuscript written and edited by all authors. L.J.W., J.J.D. and A.S. collected the fish from Lake Natron.
Competing interests. We declare we have no competing interests.
Funding. This work was supported by the BBSRC as a studentship to L.J.W. (BB/M011151/1). Additional support for fieldwork was provided by the Fisheries Society of the British Isles (small research grant) and the Genetics Society (Heredity fieldwork grant).
|
v3-fos-license
|
2018-12-17T19:01:41.682Z
|
2013-09-14T00:00:00.000
|
58938170
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://academicjournals.org/journal/AJBM/article-full-text-pdf/383FEAD20877.pdf",
"pdf_hash": "4bb65871d61528fdad18ee73b4b74d1c13edd7cf",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46074",
"s2fieldsofstudy": [
"Business"
],
"sha1": "4bb65871d61528fdad18ee73b4b74d1c13edd7cf",
"year": 2013
}
|
pes2o/s2orc
|
The effect of marketing mix in attracting customers: Case study of Saderat Bank in Kermanshah Province
This study investigated the impact of the marketing mix in attracting customers to Saderat Bank in Kermanshah Province. A questionnaire containing 30 questions was used to collect information. The reliability of the questionnaire was calculated using Cronbach's alpha, and a value of 0.882 was obtained, greater than the 0.7 threshold required for a reliable questionnaire. The population used in this study is the customers of Saderat Bank in Kermanshah Province with at least one account, interest-free loan or savings deposit. 250 questionnaires were collected by stratified random sampling. The work has one main hypothesis and 5 sub-hypotheses, which were tested using the Pearson correlation test. It was established that the factors in the marketing mix have a significant positive effect on attracting customers to the bank.
INTRODUCTION
Progress and transformation in industries, institutions and companies depend on their ability to deal with problems, activities and competitors. Each institution should adopt policies with respect to its long-term vision, mission, goals, opportunities and arrangements, using internal and external facilities to develop comprehensive marketing (Industries, 1384), because today's global business environment is characterized by increasing complexity, rapid change and unexpected developments (Mason, 2007).
With the development of science in all fields, banks and financial markets have become competitive in recent years, as seen in the expansion of their activities, the creation of private banks and financial institutions and the application of marketing techniques and strategies for attracting customers and increasing deposits. By using marketing mix factors such as access to appropriate services, providing services to customers quickly and appropriately, offering a variety of services and advertising to attract customers, financial institutions and banks can grow. Marketing is an area subject to change, owing to market changes in consumption patterns and the tastes of individuals. Population growth, urban expansion, changes in community structure, diversity of products, advances in knowledge and generational changes are factors that will determine market variables (Lavak, 1382).
In each institution, marketing managers have the task of analyzing, planning, implementing and controlling marketing programs effectively in order to develop a superior competitive position in target markets. A marketing plan includes a process designed for predicting future events and determining strategies to achieve the objectives of an institute (Mnty and Trustee).
Institutions should try to obtain an appropriate share of the market by studying the market, applying marketing mix variables, using appropriate methods of distribution and supply of goods and services, and being aware of campaigns and the identification of opportunities. They should attract more resources through scientific creativity and innovation to meet customers' needs, and match resources to increase market share and take care of customers. Strengthening the country's financial markets for its economic development and saving resources for the health of the economy seem to be necessary. The savings rate in the banking and credit system and financial institutions can lead to increased investment and economic growth.
In Iran, the main hub of the financial markets, banks and financial institutions is the main source of capital for buying products and services, granting loans and funding all economic units in the country. In banks and institutions, appropriate activities and the effective use of marketing are very effective for achieving their goals. A significant number of banks and institutions need to make more use of marketing variables in order to attract more resources from customers. Beyond these categories, which are not the focus of this research, institutions have found that applying some of the marketing mix strategies in competition increases their deposits and investments.
Marketing
In the 1960s, the term became common in marketing. It conveys that everything starts with the consumer's needs and demands. Marketing and market management, an important branch of knowledge management, has the main task of understanding people's needs and desires and helping them through the process in which resources are exchanged. Society's needs are increasing today more than ever, especially with the growing shortage of human and other resources. Managers are faced with limited resources to meet demands that are unlimited; knowledge management helps the economy scientifically by providing a set of skills and knowledge for the optimal use of limited resources. Marketing also needs to recognize the efforts involved in the exchange of resources (Venus, 1386). Marketing is a social and managerial process by which the needs and desires of individuals and groups are met through the production, supply and exchange of useful goods (Holm, 2006).
Marketing management can be defined as follows: "the analysis, planning, implementation and monitoring of programs to create, provide and maintain a profitable transaction process with buyers, in order to achieve organizational goals" (Cutler, 2000). Marketing management is the analysis, planning, implementation and control of programs to achieve organizational goals. It involves programs made to establish and maintain beneficial exchanges with buyers (Lavak, 1382).
"Marketing management includes the analysis, planning, implementation, execution and monitoring of programs to establish and maintain favorable exchanges with markets, with the aim of achieving organizational goals." Thus marketing management, or the management of demand and supply, is essential (Alvdary, 1387).
According to the Marketing Association of America, "marketing is the process of planning the realization of an idea, pricing, advertisement and distribution of goods and services, where the exchange makes the objectives of the individual and the organization a reality" (Cutler, 2000; Belch and Belch, 2001). The art and science of marketing is to create or establish favorable conditions between supply and demand. The main task of marketing is to meet the product and service needs of customers and to focus on the target market (Frank, 1994).
Marketing involves activities that provide a comprehensive definition. Marketing experts raise their own vision based on these activities. Some of the definitions of marketing involve a group of activities that take place in the market and others include the ways marketers have to comply with the definition. Table 1 shows some of these definitions.
The art of marketing entails carrying the correct amount and quality of product or service to meet the need of customers at the right place and time, and ensuring that customers benefit from its activities (Arto and Sample, 2005).
Today, advertisement is to be considered as part of marketing territory and all economic activities including manufacturing, distribution of a wide range of services, the management of sales and production and sales of goods and services.
In summary, the designing, manufacturing, packaging, distribution and sale of goods and services to consumers, which ultimately lead to customer satisfaction, play an important role (BolurianTehrani, 1376). In marketing, the field of services marketing is important. Service activities include features such as intangibility, inseparability, heterogeneity and perishability (Pickton and Broderick, 2001). Product marketing and services marketing differ because services, unlike goods, have characteristics such as inseparability, intangibility and lack of storability (Murrar, 1995). In recent years, branches and a wide variety of services in the market across several service centers have become more tangible (Table 2).
Marketing mix
This is a set of controllable elements of marketing tools, and the marketing strategy of a company lies in combining these elements. Cutler says that the marketing mix is the set of controllable marketing variables that companies and institutions combine in their target market to produce the reaction they require (Cutler, 2000).
Table 1. Some definitions of marketing.
Understanding what people want and seek in a market, and the supply and provision of goods and services to meet their needs and achieve goals.
Cohen (1998): The marketing activities such as buying and selling of goods, transport and storage.
Baker (1998): A series of activities called the flow of commercial goods and services from producer to final consumer.
Goharian (1374): The structure of demand for products and services is estimated in order to predict its spread.
Ranjbariyan (1378): Satisfying human needs through a process defined with respect to the market, where buyer and seller meet.
Hosseini (1379): A set of human and economic activities conducted in order to satisfy the needs and demands of the people through the exchange process.
Alvdary (1383): A process in which groups of people produce goods and benefits and exchange them with others to meet their wants and needs.
Events in Iran (1386): Targeted marketing enabling the company to plan and execute the pricing, promotion and distribution of products, services and ideas.
Elements of the marketing mix are a set of marketing tools for achieving the goals of the institute of marketing (HaKansson and Waluszewski, 2005). Marketers, in order to receive favorable responses from their target markets, use many tools. These tools comprise the marketing mix. In fact, it is a set of tools that institutions use to achieve their marketing goals.
McCarthy classified these tools into four major groups, called the 4P's of marketing: product, price, place and promotion (Harrell and Frazier, 1999).Decisions about future marketing by marketers should also affect the final consumer and commercial channels.Thus, despite the decision of institutions concerning a number of variables of the marketing mix and because it requires a long time, little can change in the short term in their marketing mix.Robert's statement to the seller regarding the 4P's vs 4C's of customer is shown in Table 3.
Based on the 4C's, for institutions to meet the needs of consumers, their products should be economical; they should consider comfort, convenience and effective communication; they should take customers' interest into account and try to charge them less.Customers should be expected to benefit from their products.Price should commensurate with the capabilities of the buyer.Their product should be available to customers purchasing it.Finally, promotions should be made available to potential consumers of such products (Mohammadian, 1382).The concept of marketing mix is defined as the organization's performance using a set of controllable variables and uncontrollable factors of the environment (Newson et al., 2000).
The marketing mix of traditional management models has outlasted the dynamic market, alongside other methods such as Anderson's and the parameter theory developed at the University of Copenhagen in Europe. Approaches such as the new-product vision and the functional vision faced a similar fate to the geographical perspective, and just a few of these models were able to maintain their survival against the 4P's (Pourhassan, 1376). The concept of the marketing mix was introduced for the first time in 1950 by Neil Borden and became known as the 4P's (29). McCarthy, in the early 1960s, blended marketing into four variables known as the 4P's classification: product, price, place and promotion (30).
Table 4. Definitions of the four elements of marketing mix.
Product: A physical object that is sold; it has palpable characteristics and offers a complex set of benefits that can be used to meet customer needs.
Price: Includes issues such as discounts, list prices, credit, repayment terms and conditions. Price covers the amount at which the product or service is offered for sale and determines the level of benefits. Price is the only element that does not represent a cost; it is what customers are charged to buy the products they take.
Promotion: Includes issues such as advertising, personal selling, sales promotion, public relations and direct marketing. Distribution channels raise the most important questions about how an organization can optimize the connection between inner and outer channels.
Place: Includes issues such as distribution channels, market coverage, product inventory, transportation and distribution sites.
McCarthy's classification has since created dramatic changes in the marketing mix, and the 4P's is still widely used in the literature as the main concept for coordinating many other aspects of marketing (31). The four elements of the marketing mix are defined in Table 4. The most important element in the marketing mix is the product: what makes our product marketable? Price is a sensitive element of the marketing mix: it is the amount the customer pays to obtain the product. The third element, place (distribution), covers all the activities that aim to deliver the product to the customer. The fourth element of the marketing mix is promotion, which is used to communicate with customers; this communication is intended to encourage customers to buy products. Figure 1 shows the elements of the marketing mix.
History and implementation of marketing mix
Borden (1965) claims to be the first to have used the term "marketing mix" and that it was suggested to him by Culliton's (1948) description of a business executive as a "mixer of ingredients". An executive is "a mixer of ingredients, who sometimes follows a recipe as he goes along, adapts a recipe to the available ingredients and experiments with or invents ingredients no one else has tried" (Mei, 2011).
The early marketing concept is similar to the notion of marketing mix, based on the idea of action parameters presented in the 1930s by Stackelberg (1939).Rasmussen (1955) then developed what became known as parameter theory.
He proposes that the four determinants of competition and sales are price, quality, service and advertising.Mickwitz (1959) applies this theory to the Product Life Cycle Concept.
Borden's original marketing mix had a set of 12 elements, namely: product planning; pricing; branding; channels of distribution; personal selling; advertising; promotions; packaging; display; servicing; physical handling; and fact finding and analysis. Frey (1961) suggests that marketing variables should be divided into two parts: the offering (product, packaging, brand, price and service) and the methods and tools (distribution channels, personal selling, advertising, sales promotion and publicity). On the other hand, Lazer and Kelly (1962) and Lazer et al. (1973) suggested three elements of the marketing mix: the goods and services mix, the distribution mix and the communication mix. McCarthy (1964) refined Borden's (1965) idea further and defined the marketing mix as a combination of all of the factors at a marketing manager's command to satisfy the target market. He regrouped Borden's 12 elements into four elements, or 4Ps, namely product, price, promotion and place at a marketing manager's command to satisfy the target market (Mohammadian, 1382).
From the 1980s onward in particular, a number of researchers have proposed adding new 'P's to the marketing mix. Judd (1987) proposes a fifth P (people). Booms and Bitner (1980) add 3 Ps (participants, physical evidence and process) to the original 4Ps to apply the marketing mix concept to services. Kotler (1986) adds political power and public opinion formation to the Ps concept. Baumgartner (1991) suggests the concept of 15 Ps. MaGrath (1986) suggests the addition of 3 Ps (personnel, physical facilities and process management). Vignalis and Davis (1994) suggest the addition of S (service) to the marketing mix. Goldsmith (1999) suggests that there should be 8 Ps (product, price, place, promotion, participants, physical evidence, process and personalisation). Moller (2006) presents an up-to-date picture of the current standing of the debate around the mix as a marketing paradigm and predominant marketing management tool by reviewing academic views from five marketing management sub-disciplines (consumer marketing, relationship marketing, services marketing, retail marketing and industrial marketing) and an emerging one (e-commerce) (Iranian Events, 1386).
Most researchers and writers reviewed in these domains express serious doubts as to the role of the mix as a marketing management tool in its original form, and therefore propose alternative approaches, either adding new parameters to the original mix or replacing it with alternative frameworks altogether.
Use of marketing mix concept
Like many other concepts, marketing mix concept seems relatively simple, once it has been expressed.Before they were ever tagged with the nomenclature of "concept," the ideas involved were widely understood among marketers as a result of the growing knowledge about marketing and marketing procedures that came during the preceding half century.But once the ideas were reduced to a formal statement with an accompanying visual presentation, the concept of the mix has proved to be a helpful device in teaching, in business problem solving, and, generally, as an aid to thinking about marketing.First of all, it is helpful in giving an answer to the question often raised: "what is marketing?"A chart which shows the elements of the mix and the forces that bear on the mix helps to bring understanding of what marketing is.It helps to explain why in our dynamic world the thinking of management in all its functional areas must be oriented to the market.In recent years, the authors have kept an abbreviated chart showing the elements and forces of the marketing mix in front of their classes at all times.In case discussion, it has proved to be a handy device by which queries were raised as to whether the student has recognized the implications of any recommendation he might have made in the areas of the several elements of the mix.Referring to the forces, we can ask if all the pertinent market forces have been given due consideration.Continual reference to the mix chart makes the authors to feel that the students' understanding of marketing is strengthened.The constant presence and use of the chart leaves a deeper understanding that marketing is the devising of programs that successfully meet the forces of the market.In problem solving the marketing mix chart is a constant reminder of the following (Mei, 2011): 1) The fact that a problem seems to lie in one segment of the mix must be deliberated with constant thought regarding the effect of any change in that sector on other areas of marketing operations.The necessity of integration in marketing thinking is ever present.
2) The need to study carefully the market forces as they might bear on problems in hand.In short, the mix chart provides an ever ready checklist as to which areas to think when considering marketing questions or dealing with marketing problems.
Marketing mix resource allocation and planning challenges
Marketing mix resource allocation and planning has assumed prominence as companies have attempted to optimize spending across all marketing activities.That is no surprise, considering that senior marketing executives are under increasing pressure to help their organizations achieve organic sales growth with tighter, top downdriven budgets and short time horizons to deliver tangible payback on their marketing campaigns.With less influence over the size of their budgets, senior marketers must instead attempt to maximize the impact of the dollars they distribute for programs across multiple products, markets, channels, and specific customers, using an increasingly complex mix of new and traditional media.
As a result, companies have looked toward analytical and modeling techniques in an attempt to better link marketing investments to meaningful and measurable market responses (and, ideally, to one or more financial metrics).Packaged goods and pharmaceutical marketers, in particular, were among the pioneers in exploring marketing mix analytics and data-driven econometric models.Marketing scholars also have contributed to a more sophisticated body of analytical and modeling literature that offers both theoretical and substantive insights for marketing mix resource allocation decisions and planning practices.In many respects, marketing practitioners and researchers were early advocates for bringing analytics to business practice (Hosseini, 1384).
Nevertheless, changing customer dynamics and advances in media technology presents novel challenges.
Nowhere is the challenge more evident than in the domain of new media that originated in and is energized by the digital environment.The rapid and ongoing emergence of new digital channels-from the static online banner ads of the 1990s to the social media and mobile platforms of the current environment-has changed the way people consume information and has left marketers scrambling to address the new digital landscape.
According to a recent report by Hamilton, "digital marketing still lags the shift in consumer behavior" prompted by the Internet (Goharian, 1374). At the same time, the rise of digital communications channels has focused renewed attention on the efficiency and effectiveness of traditional media and the extent to which new media are a complement to or a substitute for television, print, and other established channels, all with an eye toward optimal allocation of marketing mix resources through marketing analytics.
"You have to be able to orchestrate a move toward emerging media," says Greg Welch, head of the CMO practice at Spencer Stuart. "How do you take a traditional media budget and figure out not just how much to allocate to [new] media, but also how to measure it and how to defend it in front of your peer group?" (Iranian Events, 1386). Not surprisingly, many companies have adopted a measured approach to the inclusion of new media in their marketing communication programs until appropriate analytical and modeling techniques can provide better insight into their use. The description of marketing analytics contained in this book offers contemporary perspectives and practices that should provide direction for these marketing mix decisions.
Eighty-plus percent of U.S. consumers are online regularly, and 34% of their media time is spent online.Still, most marketers devote only approximately 5 to 10% of their advertising and promotion dollars to digital media (Murrar, 1995).
What are the likely reasons for the disconnection between consumer media usage and company media spending?Three are most commonly mentioned: (1) modest budgetary and organizational support for media experimentation, (2) limited business experience with and talent necessary to apply marketing analytics to new media, and (3) insufficient metrics and marketing analytics to measure the efficiency and effectiveness of new media alongside traditional media (Mei, 2011).
Hypotheses
The main hypothesis of this study is as follows: the relationship between the marketing mix elements and attracting bank customers is significant. The five sub-hypotheses in this regard are as follows:
1. There is a significant relationship between the income of customers and their deposits in the bank.
2. There is a significant relationship between providing quick and convenient services to customers and their deposits in the bank.
3. There is a significant relationship among the variety of services, the increase in customers' knowledge and the resources attracted to the bank.
4. The use of advertising to attract customers to the bank is significant.
5. Accelerating the transfer of facilities and resources to attract customers to the bank is significant.
METHODS
Since in this study the researchers sought to explore the relationship between the elements of the marketing mix and attracting customers to the bank in Kermanshah Province using a survey method, the research is descriptive.
The population used in this study is the customers of the bank in Kermanshah Province (across 14 cities) with at least one account, interest-free loan or savings deposit. The sample size was calculated from the standard formula n = z²pq/d², where q = 1 − p; when no estimate of p is available, it is set to 0.5. The value of d (the amount of allowable error) was chosen with reference to similar empirical research; when a survey collects data to estimate a proportion, a value of d in the interval of 4 to 7% is acceptable. In this study, using this formula, the selected number of samples is 230.
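As a quick numerical check of the sample-size calculation described above, the sketch below evaluates n = z²pq/d² for p = 0.5 over the stated 4 to 7% range of allowable error; the 95% confidence level (z = 1.96) is an assumption, since the confidence level is not given in the text.

```python
import math

def sample_size(d, p=0.5, z=1.96):
    """Cochran-style sample size n = z^2 * p * q / d^2, with q = 1 - p."""
    q = 1.0 - p
    return math.ceil(z ** 2 * p * q / d ** 2)

# Allowable errors spanning the 4 to 7% interval mentioned in the text.
for d in (0.04, 0.05, 0.06, 0.065, 0.07):
    print(f"d = {d:.3f} -> n = {sample_size(d)}")
# d of about 0.065 gives n around 228, close to the 230 samples reported.
```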
Data for this study were collected using two methods: a library method and a field method.
Library method: This method involves collecting information from literature and history books, dissertations, articles, databases and Internet sources.
Field method: This includes the use of a questionnaire and the collection of statistical information on the relationship between the marketing mix and attracting more customers. The questionnaire consists of three questions relating to the first hypothesis as well as questions for the other hypotheses: the second hypothesis has questions 6, 7, 8, 9, 10, 11 and 12; the third hypothesis, questions 13, 14, 15, 16 and 17; the fourth hypothesis, questions 18, 19, 24, 25 and 30; and the fifth hypothesis, questions 27 and 28, each measured on five levels (very low, low, medium, high and very high). In designing the questionnaire, the opinions of university teachers in the fields of management science and marketing and of financial consultants were used. The questionnaire was first distributed among 30 customers, who were selected by the researcher to assess the appropriateness of the research questions and to help refine them through interviews and discussions on each of the questions. In this study, descriptive and inferential statistics were used to describe and analyze the collected data.
FINDINGS
This study examined the hypotheses in section 6 and the results confirm or reject them.
The first hypothesis test
First hypothesis: the relationship between the income of customers and their deposits in the bank is significant. H0: There is no significant relationship between the income of customers and their deposits in the bank. H1: There is a significant relationship between the income of customers and their deposits in the bank.
Since the Pearson correlation coefficient of 0.878 is close to one, H0 is rejected and H1 is accepted. There is a significant relationship between the income of customers and their deposits in the bank: the higher the income of customers, the more willing they are to invest in the bank.
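The hypothesis tests in this section rely on the Pearson correlation coefficient; the snippet below shows how such a test is typically run in Python, using made-up paired scores for income and deposits rather than the study's survey data.

```python
from scipy import stats

# Hypothetical 5-point Likert scores (not the study's survey responses).
income  = [3, 4, 2, 5, 4, 3, 5, 1, 2, 4]
deposit = [2, 4, 2, 5, 5, 3, 4, 1, 3, 4]

r, p_value = stats.pearsonr(income, deposit)
print(f"r = {r:.3f}, p = {p_value:.4f}")
# H0 (no relationship) is rejected when p < 0.05, i.e. when the test
# statistic exceeds the one-tailed critical value of 1.645 used in the text.
```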
Testing the second hypothesis
Second hypothesis: There is a significant relationship between providing quick and convenient services to the customers and their deposits in the bank. H0: There is no significant relationship between providing quick and convenient services to the customers and their deposits in the bank. H1: There is a significant relationship between providing quick and convenient services to the customers and their deposits in the bank.
Since the test statistic obtained is greater than 1.645, H0 is rejected and H1 is accepted.
Testing the third hypothesis
Third hypothesis: There is a significant relationship among the diversity of customer services, increased awareness and the resources attracted to the bank. H0: There is no significant relationship among the diversity of customer services, increased awareness and the resources attracted to the bank. H1: There is a significant relationship among the diversity of customer services, increased awareness and the resources attracted to the bank.
Since the test statistic obtained (17.674) is larger than 1.645, H0 is rejected and H1 is accepted.
The fourth hypothesis test
The fourth hypothesis: The use of advertising to attract customers to the bank is significant. H0: The use of advertising to attract customers to the bank is not significant. H1: The use of advertising to attract customers to the bank is significant. Since the test statistic obtained (26.844) is larger than 1.645, H0 is rejected and H1 is accepted.
Testing the fifth hypothesis
The fifth hypothesis: Accelerating the transfer of facilities and resources to attract customers to the bank is significant.
H0: Accelerating the transfer of facilities and resources to attract customers to the bank is not significant. H1: Accelerating the transfer of facilities and resources to attract customers to the bank is significant.
Conclusion
Marketing involves a number of activities. To begin with, an organization may decide which target group of customers is to be served. Once the target group is decided, the product is placed in the market by providing the appropriate product, price, place and promotion. These are to be combined or mixed in an appropriate proportion so as to achieve the marketing goal. Such a mix of product, price, distribution and promotional efforts is known as the 'marketing mix' (Mei, 2011).
According to Kotler, "marketing mix is the set of controllable variables that the firm can use to influence buyers' response". The controllable variables in this context refer to the 4P's [product, price, place (distribution) and promotion]. Each firm strives to build up a composition of the 4P's that can create the highest level of consumer satisfaction and at the same time meet its organizational objectives. Thus, this mix is assembled keeping in mind the needs of target customers, and it varies from one organization to another depending upon its available resources and marketing objectives (Iranian Events, 1386).
The major objective of this study is to investigate the effect of some of the marketing mix variables in attracting customers to the bank in Kermanshah Province, based on the primary hypothesis and the 5 secondary hypotheses proposed.
Based on the results of the first hypothesis, it is concluded that customers with higher incomes are more likely to deposit in the institution. Management and employees of the institution should implement plans for high-income customers and provide more benefits, such as low-cost facilities, in order to attract more resources.
According to the analysis and result of the second hypothesis, one can say that with the increasing competition between banks and financial institutions, management and employees should endeavor to shorten the time required to provide services to their customers. They should increase the number of staff in the more crowded branches and try to accelerate the delivery of services as far as possible, including the payment of electricity and water bills. Payment facilities should be provided so that the need for customers' physical presence is reduced.
The third hypothesis shows there is a significant relationship among the diversity of customer services, increased awareness and the resources attracted to the bank. Banks should also offer as many banking services to other banks as possible, which is now mostly done by the institute's banking services. This leads to higher customer deposits and a particular variety of facilities, and reduces the profits of the banks. This in turn would make bank customers consider adopting appropriate policies for the various facilities, so that they are not forced to stop working with the institute.
Based on the result of the fourth hypothesis, the banks can diversify and expand their services through publicity and advertising, television messages and the installation of banners at various sites to attract more customers. Some institutions are not aware that this is why customers with higher deposits stay with them.
The fifth hypothesis result indicates that accelerating the transfer of facilities and resources to attract customers to the bank is significant.
SUGGESTIONS
From the assumptions and conclusions of this study based on the hypotheses, we conclude that there is a significant positive relationship to advance the goals of the bank.In connection with the study, the following suggestions are offered.
(1) Managers and policymakers of the agency should offer customers proposals such as increased profits, deposits and low-cost facilities, so that more resources can be attracted to the institution.
(2) The time of operations should be kept short, and the number and expertise of customer service staff over the counter at each branch should be increased, to offer faster service to customers.
(3) Institutions such as banks can provide complete management services for clients.
(4) The institute should run a broader campaign to promote its variety of services and be more aware of its actual and potential customers. Advertising, television messages and banners installed at various sites should be used to attract more customers, because many customers will benefit from the variety of deposits, payment services and facilities.
(5) The bank should direct its major facilities toward its most important target customers. The type of facilities and their benefits should be made clear to customers, thereby encouraging customers to deposit more.
(6) Considering that one of the important goals of the bank is the transfer of accounts, credit facilities should be given to the 55% of respondents who have an account in the bank, in order to attract more resources.
LIMITATION
Although all of our hypotheses are supported, this study has a few limitations that present opportunities for further research. First, our survey respondents were chosen from a convenience sample and the representativeness of our sample may be questioned. Second, this model was tested for validity and reliability only in the context of luxury restaurants in Iran. Ideally, national relationship quality indexing should be conducted in different sectors simultaneously, and the model should be tested periodically. Only then can the results be compared with other countries' relationship quality indices.
Table 3. Component Model of 4Ps and 4Cs.
|
v3-fos-license
|
2021-02-24T06:16:39.741Z
|
2021-02-22T00:00:00.000
|
232018008
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1021/acs.jafc.0c06680",
"pdf_hash": "5d77d906c0884313291ec1b39615c29fcd1d2483",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46075",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"sha1": "f98ee31baeeb1bc236d900b18c48d986ca5dc4ca",
"year": 2021
}
|
pes2o/s2orc
|
Build-Up of a 3D Organogel Network within the Bilayer Shell of Nanoliposomes. A Novel Delivery System for Vitamin D3: Preparation, Characterization, and Physicochemical Stability
The inherent thermodynamic instability of liposomes during production and storage has limited their widespread applications. Therefore, a novel structure of food-grade nanoliposomes stabilized by a 3D organogel network within the bilayer shell was developed through the extrusion process and successfully applied to encapsulate vitamin D3. A huge flocculation and a significant reduction of zeta potential (−17 mV) were observed in control nanoliposomes (without the organogel shell) after 2 months of storage at 4 °C, while the sample with a gelled bilayer showed excellent stability with a particle diameter of 105 nm and a high negative zeta potential (−63.4 mV), even after 3 months. The development of spherical vesicles was confirmed by TEM. Interestingly, the gelled bilayer shell led to improved stability against osmotically active divalent salt ions. Electron paramagnetic resonance confirmed the higher rigidity of the shell bilayer upon gelation. The novel liposome offered a dramatic increase in encapsulation efficiency and loading of vitamin D3 compared to those of control.
■ INTRODUCTION
Liposomes are vesicular structures composed of phospholipids that self-assemble into one or more concentric bilayers by dispersing in an aqueous medium. 1 Among the lipid-based delivery systems, liposomes are considered as the most efficient carriers in the formulation of pharmaceutical and cosmetic products to encapsulate unstable active multi-components, to enhance the oral bioavailability of poorly water-soluble compounds, to provide a controlled release, and to extend the circulation lifetime of compounds. 2,3 This fact can be attributed to their structural flexibility, particle size, chemical composition, and fluidity or permeability of the lipid bilayer versatility. Moreover, liposomes are biodegradable and biocompatible due to the structure and physicochemical similarity to the cell membrane phospholipids, causing no harmful effects on the human health. 4 However, the application of liposomes in the food industry is still limited by their poor stability over a long storage period due to degradation, fusion, aggregation, or sedimentation and high tendency to lose entrapped compounds during storage as a function of osmotic pressure in the presence of food components or additives, such as sugars or salts. 5−7 The physical stability of liposomes strongly depends on molecular ordering, packing, and dynamics of acyl chains in the bilayer, charge intensity, as well as the physical state−gel or fluid−and composition of lipids. 8−10 Therefore, many efforts have been made to overcome these challenging tasks, including liposome coating with hydrogel networks, 11 surface modification of vesicles with polymeric matrices using an electrostatic layer-by-layer technique, 12,13 and compositional change of bilayer membranes by sterols, 14 polyethylene glycol, 15 and emulsifiers. 16 Moreover, the same type and concentration of lipid materials with different liposome preparation methods lead to different properties, such as storage stability, encapsulation efficiency, and bilayer permeability. Therefore, many disadvantages still exist preventing further application and industrialization of food fortification with liposome structures. Thus, designing alternative types of liposomes that can make them appropriate for food formulations is still of crucial demand.
On the other hand, organogels have been considered promising types of gel structures known as novel delivery systems over the past few years. 17,18 Organogels are selfstanding, thermoreversible, and viscoelastic 3D networks which are developed by the self-assembly of gelator molecules that immobilize the continuous organic phase through hydrogen bonding, hydrophobic interactions, van der Waals forces, ionic interactions, or covalent bonding. 19,20 The most commonly used approach for creating organogels is direct dispersion of gelator molecules in an organic liquid at temperatures above their melting points, followed by cooling to lower temperatures. 21 Organogels exhibit inherent physical and chemical stability properties which are beneficial for longer shelf-life requirements such as delivery of bioactive agents compared to other polymer gels. Moreover, their lipid medium is well suitable for improving the bioavailability of lipid-soluble bioactive materials, and their gel network could offer a sustained release behavior and a desirable protection for the encapsulated compounds. 22 Although there are several works on the potential of organogels for delivery applications in pharmaceutical and cosmetic applications, 23,24 there are only a few examples of organogel applications in bulk or emulsified forms to increase the bioaccessibility of poorly water-soluble bioactive components in food systems. 25,26 Vitamin D is a fatsoluble vitamin that can be produced in the skin by sunlight exposure. Vitamin D 3 (cholecalciferol) , as an active form of vitamin D, is essential to control calcium and phosphorus absorption in the human body. 27,28 The deficiency of vitamin D 3 is a worldwide concern that increases the risk of diabetes, obesity, cardiovascular diseases, and cancers. 29 Therefore, food fortification by this micronutrient has gained increasing attention recently. Vitamins D 3 is easily susceptible to isomerization and degradation into its inactive forms due to light, oxygen, high temperatures, and acid exposure. This fact leads to a significant reduction of vitamin D 3 functionality and biological properties. 30 For this purpose, several colloidal delivery systems (i.e., emulsion, solid lipid nanoparticles, nanostructured lipid carriers, and so forth) have been proposed for protecting vitamin D 3 toward various harmful conditions, improving the oral bioavailability and delivery efficiency of vitamin D 3 , and making it soluble in aqueous systems. 31−36 However, many disadvantages still exist, which inhibits further application and industrialization of food fortification using encapsulated vitamin D 3. These limitations may include lowloading capacity, poor long-term stability, and loss of stability under certain ionic compositions and ingredient interactions. 37 We hypothesized that the incorporation of an organogel network in the liposome structure would offer a highly stable delivery system of hydrophobic bioactive components. To the best of our knowledge, there are no published data on the build-up of gel structures inside bilayers for enhancing the liposome stability and encapsulation efficiency. Therefore, the aim of this work was to assess the effect of oleogelation within the lipid bilayer shell of liposomes looking for high encapsulation efficiency and improved physical stability to apply in the food formulations. In this work, nanoliposomes were prepared by an extrusion process combined with the thinfilm hydration method. 
Vitamin D 3 was used as a model hydrophobic bioactive agent and 3-palmitoyl-sn-glycerol was used as a low molecular weight organogelator due to its limited solubility in water and its manufacturing considerations.
Nanoliposome Production. Nanoliposomes were produced by a thin-film hydration method modified from Lopes et al. 38 In brief, control liposomes were prepared by dissolving PC (300 mg) and CHO (7.5 mg) in 10 mL of chloroform. For encapsulation efficiency and in vitro digestion experiments, vitamin D 3 (0.6 mg/L) was added to the mixture. For EPR experiments, a spin probe was added to the solution at a concentration of 1%. Then, the organic solvent was removed using a rotary evaporator at a temperature of 60°C until a thin lipid film was formed. This process was followed by 10 min of vacuum treatment at controlled reduced pressure and under a nitrogen stream for 2 min to make sure that no trace of organic solvent remained. For the development of an organogel network within the bilayer shell of nanoliposomes, soybean oil (8.5 mg) and 3palmitoyl-sn-glycerol (1.5 mg) were added to the mixture, and the other processes were the same as the preparation of control liposomes. To keep the properties of the liposome intact and prevent the oxidation of phospholipids until their use, they were stored in a freezer at a temperature of −80°C. Subsequently, the lipid bilayers were hydrated by adding 10 mL of Milli-Q water and then a rotary evaporator (without vacuum) was used to form multilamellar large vesicles (MLVs). To obtain homogeneous small unilamellar vesicles (SUVs), the sample was subjected to extrusion by passing the suspension through the 400, 200, and 100 nm polycarbonate membranes sequentially. After enough extrusion steps through the membrane with the help of a Thermobarrel Extruder (Northern Lipids, Vancouver, Canada) under nitrogen pressures up to 55 bar at 60°C, a normal unimodal distribution was obtained. The obtained nanoliposomes were stored at 4°C prior to further use.
Physical Properties of the Bulk Organogel. To determine the mechanical and viscoelastic properties of the organogel developed in the lipid bilayer of nanoliposomes, a sample of organogel was prepared by dissolving 3-palmitoyl-sn-glycerol (15% w/w) in soybean oil at 60°C, followed by cooling to room temperature. A texture analyzer (Texture Analyzer, TA Plus, Stable Microsystems, Surrey, UK) with a load cell of 30 kg and a cylindrical probe was used to determine the mechanical properties of the organogel after 24 h of storage at 25°C, as described by Giacomozzi et al. 39 with some modifications. Viscosity and dynamic rheological measurements were also performed using an MCR 302 controlled stress/strain rheometer (Anton Paar, Graz, Austria) equipped with a parallel plate geometry. 26 The strain sweep test from 0.002 to 1% at a constant frequency of 1 Hz was performed to determine the linear viscoelastic region (LVR) of the organogel. Then, the frequency sweep (0.01−10 Hz) and the temperature ramp from 5 to 80°C at the rate of 2°C/min and 1 Hz frequency were carried out inside the LVR region.
Particle Size. The particle size and polydispersity index (PDI) were obtained by means of a dynamic light scattering (DLS) device (Zeta Sizer Nano, ZS-90 Malvern Instruments Ltd., UK). The experiments were performed at 25°C in quasi-backscattering configuration (scattering angle, θ = 173°) using the radiation from the red line of a He−Ne laser (wavelength, λ = 632 nm), using a refractive index of 1.459. Samples were diluted at a 1:50 ratio in Milli-Q water, and the obtained results were reported as intensity-weighted.
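For context on the DLS measurement just described, the hydrodynamic diameter reported by such instruments follows from the Stokes-Einstein relation; the sketch below illustrates the conversion, where the diffusion coefficient and the water viscosity at 25°C are assumed example values rather than data from this study.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter_nm(diff_coeff_m2_s, temp_k=298.15, viscosity_pa_s=8.9e-4):
    """Stokes-Einstein relation underlying DLS sizing:
    d_H = k_B * T / (3 * pi * eta * D_t), returned in nanometres."""
    d_h = K_B * temp_k / (3.0 * math.pi * viscosity_pa_s * diff_coeff_m2_s)
    return d_h * 1e9

# A translational diffusion coefficient of ~4.8e-12 m^2/s in water at 25 C
# corresponds to ~102 nm, the vesicle diameter range reported in this work.
print(hydrodynamic_diameter_nm(4.8e-12))
```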
Zeta Potential Measurement. The zeta potential (ZP) experiment was carried out using the laser Doppler velocimetry (LDV) technique in a Zetasizer Nano device (ZS-90 Malvern Instruments Ltd., UK) that measures the electrophoretic mobility of the sample from the speed of the particles. A DTS 1060 cuvette with a polycarbonate capillary was used, and the measurements were made at a constant temperature of 25°C after dilution of the liposomes in Milli-Q water (ratio, 1:50).
Stability Measurement. Storage Stability. The mean vesicle size, PDI, and ZP of empty nanoliposomes were determined at scheduled time intervals (day 1, 6, 18, 36, 48, 60, and 90) during a 3 month storage period at 4°C. 40−42 Their physical stability was also monitored through observation for any visual instabilities, such as fusion and aggregation.
Salt and Sugar Stability. The stability of nanoliposomes against food ingredients or additives, such as salts and sugars, is a crucial aspect for their food applications as delivery systems. For this reason, 0−20% (w/v) sucrose and glucose, as well as 0−5% chloride salts of sodium, potassium, and calcium ions, were explored for their effects on vesicle size and macroscopic stability. 5
Lipid Bilayer Fluidity. In order to study the membrane fluidity, nanoliposome samples were labeled using the spin label 4-palmitamido-TEMPO, which is located in the middle part of the bilayer. EPR spectra were recorded in the temperature range from 15 to 60°C and at the X-band microwave frequency of 9.85 GHz using a Bruker EMX-Plus spectrometer (Germany) with temperature control by nitrogen circulation. W0, in Gauss (G), and the heights of the mid- and high-field lines, h0 and h−1, respectively, were obtained from each absorption spectrum. The rotational correlation time (τR) was calculated according to ref 43.
Morphological Studies. Transmission electron microscopy (TEM) was used for determining the nanoliposome microstructure. One drop (10 μL) of each sample was deposited on a carbon-coated copper grid and allowed to dry for 60 s. Then, the grids were stained with a drop of 2% uranyl acetate solution for 50 s and the excess stain was wicked away with a piece of filter paper. The air-dried samples were observed using a TEM (Jeol JEM-1400, Jeol Ltd., Tokyo, Japan) at an acceleration voltage of 120 kV.
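The EPR analysis above cites ref 43 for the rotational correlation time without reproducing the equation; a commonly used fast-motion approximation for nitroxide spin probes is τR = 6.5 × 10⁻¹⁰ W0 (√(h0/h−1) − 1) seconds, which is assumed in the sketch below and may differ in detail from the expression actually used in the study.

```python
import math

def rotational_correlation_time(w0_gauss, h0, h_minus1):
    """Fast-motion estimate of the rotational correlation time (s) of a
    nitroxide spin probe: tau_R = 6.5e-10 * W0 * (sqrt(h0/h-1) - 1),
    with W0 the mid-field line width in Gauss and h0, h-1 the mid- and
    high-field line heights."""
    return 6.5e-10 * w0_gauss * (math.sqrt(h0 / h_minus1) - 1.0)

# Hypothetical spectral parameters: a more fluid vs. a more rigid bilayer.
print(rotational_correlation_time(1.5, 100.0, 80.0))  # smaller tau_R, faster motion
print(rotational_correlation_time(2.0, 100.0, 55.0))  # larger tau_R, slower motion
```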
Encapsulation Efficiency and Loading Capacity. Vitamin D 3 is a hydrophobic molecule, hence the concept was that during encapsulation in nanoliposomes, it would be deposited within the lipid bilayer. To measure the encapsulation efficiency (EE, %), a certain amount of each kind of loaded nanoliposomes was washed three times with phosphate-buffered saline (PBS) to make sure that free vitamin D 3 was not detected in the supernatant. The remaining pellets (loaded liposomes) were dissolved in ethanol to promote liposomal membrane lysis, and then the suspension was studied by UV spectrophotometry using a UV−vis spectrophotometer (Jasco, V-630, Japan) at 264 nm. Unloaded nanoliposomes were also investigated as controls. 33 The respective EE and loading capacity (%) were calculated using the corresponding equations.
Statistical Analysis. Each experiment was performed at least in triplicate. Statistical analysis was conducted using SAS software (ver. 9.1.3, SAS Institute Inc., Cary, NC, US). One-way analysis of variance (ANOVA) was performed. The results are expressed as mean values ± SD. The significance level was set at P ≤ 0.05.
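The EE and loading-capacity equations referred to above are not reproduced in the extracted text; the sketch below implements the definitions most commonly used for liposomal carriers (encapsulated vitamin relative to total vitamin added, and encapsulated vitamin relative to total lipid), which is an assumption, and the numbers are purely illustrative.

```python
def encapsulation_efficiency(encapsulated_mg, total_added_mg):
    """EE (%) = encapsulated vitamin D3 / total vitamin D3 added * 100."""
    return 100.0 * encapsulated_mg / total_added_mg

def loading_capacity(encapsulated_mg, total_lipid_mg):
    """Loading (%) = encapsulated vitamin D3 / total lipid * 100."""
    return 100.0 * encapsulated_mg / total_lipid_mg

# Illustrative values only (not measurements from this study).
print(encapsulation_efficiency(0.54, 0.60))   # 90.0 %
print(loading_capacity(0.54, 307.5))          # ~0.18 %
```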
■ RESULTS AND DISCUSSION
Mechanical and Viscoelastic Properties of the Bulk Organogel. According to the texture studies, hardness, adhesiveness, and cohesiveness of the organogel network were 141.32 ± 2.83 g, 684.99 g.s, and 0.38, respectively. As shown in Figure 1a, G′ values were always higher than the G″ values in the whole frequency range applied, and the plateau region of the mechanical spectrum is always noticed in the LVR region, indicating the predominant elastic gel-like behavior of the organogel. This finding was in accordance with the result reported by Rocha et al., 44 who found such elastic behavior for the organogel developed from sugarcane or candelilla wax in soybean oil. According to the temperature ramp test, at temperatures lower than 60°C, G′ was greater than G″, which confirmed the presence of a strong gel network. Increasing the temperature led to noticeably changed viscoelastic properties as the loss modulus was higher than the storage modulus, indicating the predominant viscous behavior of the sample due to the melting of the threedimensional network organogel. Moreover, the evolution of viscosity with shear rate (Figure 1c) showed the pronounced non-Newtonian shear-thinning nature. 45 The high values of viscosity confirmed the successful development of a stable gel structure as a consequence of the intermolecular junction zones through non-covalent interactions.
Build-Up of the Supramolecular 3-D Gel Structure within the Lipid Bilayer Shell of Nanoliposomes. The aim of this work was to build up a 3-D supramolecular gel structure within the lipid bilayers, generating a nanoliposome with a gelled shell, which could improve the loading and long-term stability of loaded liposomes. It is generally accepted that exposure of liposomes to detergents near the critical micellar concentration (CMC) leads to disruption and instantaneous destabilization of liposomes, with subsequent leakage and loss of encapsulated materials into the water phase. 46 Our expectation was that the organogel structure developed within the bilayers would remain stable upon detergent treatment, leading to complete or partial preservation of the lipid membranes. To test this hypothesis, we added Triton X-100 at concentrations of 20 and 30 mM to the nanoliposome dispersions and then monitored the changes in particle size immediately and about 3 days after exposure. Regardless of the Triton concentration and exposure time (Figure 2), the control nanoliposome dispersion was not able to withstand the intercalation of detergent into the lipid bilayer, resulting in complete disruption of the lipid bilayers and a marked decrease in vesicle diameter from the initial size (102.5 nm). Upon destabilization of the control liposomes, only one particle size population, around 7−8 nm, corresponding to Triton X-100 micelles filled with phospholipids, was observed at both Triton concentrations and exposure times studied, as shown in Figure 2. These changes imply that the addition of detergent promotes opening up and fragmentation of the vesicles, leading to the formation of Triton−phospholipid micelles and, finally, complete solubilization of the bilayers by the detergent micelles. 47 Upon addition of 20 mM Triton X-100 to the nanoliposome sample with a gelled shell structure, both small (∼10−20 nm) and intermediate (60−100 nm) particles were observed (Figure 2a). The small particles corresponded to mixed lipid−detergent micelles, and the intermediate-sized population represented unsolubilized membrane vesicles. The same trend was observed at the higher Triton X-100 concentration (30 mM) (Figure 2b). Regardless of the Triton X-100 concentration, the intermediate vesicle population remained stable after 3 days of exposure in the liposome sample stabilized by the organogel structure within the bilayer, as shown in Figure 2c,d. These results confirmed the greater long-term stability of liposomes possessing a gelled lipid shell. This effect can be explained by the organogel network forming a strong scaffold within the lipid bilayer, which makes the nanoliposomes more resistant to detergent-induced fragmentation than the controls.
Characterization and Storage Stability of Nanoliposomes. The DLS technique evaluates the apparent hydrodynamic diameter of particles in a colloidal system. Figure 3 shows the intensity-weighted diameter and the PDI of empty nanoliposomes. Fresh control samples and those stabilized with a gelled lipid shell both showed mean diameters around 102 nm. Thus, the presence of the organogel network within the bilayer shell did not significantly (P ≥ 0.05) affect the mean hydrodynamic diameter of fresh samples. The PDI values, which reflect the degree of size homogeneity, were 0.085 and 0.084 for the control sample and the sample with a gelled lipid bilayer shell, respectively. These small PDI values (<0.3) indicate a very narrow size distribution of nanoliposomes 48 in both fresh samples. These results are similar to those previously reported by Kakami et al. 49 and Yusuf and Casey, 50 who obtained nanoliposomes of 128 and 140 nm, respectively, by the extrusion process. According to Figure 3, storage time did not significantly affect the size and PDI values of the control sample for up to 2 months. However, clear evidence of agglomerated and flocculated particles was observed from day 64 in the control liposomes (Figure 4), which made further sampling impossible, as DLS measurement of such samples does not provide reliable results owing to the high level of inhomogeneity. This visual instability was in good agreement with the zeta potential results discussed in detail below. In the case of nanoliposomes with a gelled bilayer shell, the particle size and PDI did not change during the first month of storage. Although the particle size and PDI increased slightly after 38 days, this sample remained physically stable for more than 3 months with no color change and no obvious agglomeration or flocculation. The small PDI value at the end of the storage time (0.114) also indicated a monodisperse distribution and a high level of homogeneity, and hence excellent liposome stability (Figure 3).
Zeta potential is an important indicator for predicting the physical stability of colloidal suspensions. 10 In this regard, the range of −30 to +30 mV indicates colloidal instability, and the degree of instability rises as the zeta potential approaches zero. 8 The zeta potential values of fresh control samples and those with the gelled shell structure were −52 and −65 mV, respectively (Figure 5). This can be explained by the fact that the PC used in this work was negatively charged in its original state. Moreover, the possible presence of FFAs in the organogelator and soybean oil might contribute to the more negative charge observed in nanoliposomes stabilized by the organogel network within the lipid bilayer shell. 26 These highly negative surface charges indicated strong stability of the freshly prepared liposomes. As clearly seen in Figure 5, the electronegative zeta potential values remained relatively constant for the control sample for up to 60 days. However, during the next 10 days, the zeta potential decreased sharply to −17 mV and then reached −9 mV at the end of storage (90 days). This trend toward neutralization led to agglomeration of large particles, followed by breakdown of the system, as discussed previously. On the contrary, the zeta potential of liposomes stabilized with the gel structure within the lipid bilayer remained highly negative (−63 mV) during the entire storage time (Figure 5), indicating high electrostatic repulsion and excellent stability of the vesicle structures. Our findings demonstrated that the development of a supramolecular organogel structure within the lipid bilayers can improve the long-term stability of liposomes, in line with the visual observations and particle size measurements.
Salt and Sugar Stability of Nanoliposomes. One of the main limitations of liposomes as carriers in food applications is their poor stability and the relatively high semi-permeability of their membranes under osmotic pressure from food components or additives such as sugars or salts. Thus, the effect of different salts and sugars on the stability of the nanoliposome suspensions was examined by incubating them at room temperature for 120 min, as shown in Figure 6a,b. In control liposomes, the presence of 0.1% of the monovalent potassium and sodium cation salts already led to small reductions in liposomal size (<10%), which increased up to a salt concentration of 1.5% (Figure 6a). A similar trend, but with a smaller size reduction, was observed in the sample containing an organogel structure within the bilayer shell. This small decrease in vesicle size upon addition of NaCl or KCl is in agreement with the findings of Frenzel and Steffen-Heins. 5 There are two hypotheses to explain the salt effect. First, cations adsorb onto the surface of the bilayer, changing the head group charge and, through electrostatic interactions, the curvature of the bilayer, thereby reducing liposome size. 5,51 Second, as the osmotic gradient across the liposome membranes increases, some water molecules are transferred from the inner aqueous phase to the outer aqueous phase to balance the external excess ion concentration, resulting in liposome shrinkage and hence a decrease in size. 9,52 In addition, head group dehydration in the presence of low concentrations of salt ions causes an imbalance of the hydrophobic and electrostatic attractions and the interfacial tension that are responsible for membrane stability, leading to compression of the alkyl chains and a decrease in liposome size. 5 For salt concentrations higher than 2%, the particle diameter increased (Figure 6a), indicating some liposome aggregation due to screening of the electrostatic repulsion between vesicles by the cations. 53 Other researchers have observed a similar phenomenon with other types of liposomes. 51,54 It should be noted that the size increase was not large enough to produce visible clusters; therefore, both liposome samples remained stable over the whole range of monovalent salt concentrations. However, after 1 day of storage at 4 °C, a white sediment appeared at the bottom of the control samples, whereas only a slight increase in turbidity was observed for the sample stabilized with an organogel structure within the bilayer. These observations indicate the relatively better stability of the latter sample, owing to the lower permeability of ions through the membrane stabilized with an organogel network. As shown in Figure 6a, even small concentrations of magnesium cations resulted in immediate breakdown of the control liposomes. In contrast, the liposome sample stabilized with an organogel network within the bilayer shell remained completely stable in the presence of magnesium salts up to 5%, clearly indicating that the gel network prevented divalent cation interaction with the phosphate residues of the bilayers. This property permits the application of this novel liposome structure in dairy products containing high divalent ion concentrations.
In the presence of sugars, the changes in liposome size were similar for both samples. Sucrose and glucose reduced the particle size by around 10%, and the size then remained constant up to a 20% sugar concentration. This phenomenon is related to the osmotic activity of sugars, as discussed above for salts.

Lipid Bilayer Fluidity. The lipid bilayer fluidity of liposomes describes the molecular ordering and dynamics of the phospholipid alkyl chains in the membrane, which generally depend on its composition. 55 EPR is a useful spectroscopic technique for detecting changes in the membrane fluidity of liposomes. 56,57 In EPR, there is a direct connection between the mobility of the spin label and the viscosity of its surrounding area, which distinguishes the gel (low mobility) and liquid−crystalline (high mobility) phases. 58 Solubilization of the spin probe did not change the zeta potential or the particle size of the liposomes (data not shown). The experimental EPR spectra of 4-palmitamido-TEMPO at different temperatures, ranging between 15 and 60 °C, are shown in Figure 7a. In the presence of a gelled bilayer shell in the nanoliposomes (Figure 7a, right side), much broader and more anisotropic spectra were obtained, demonstrating a large reduction in the rotational motion of the probe. Moreover, the separation between the two extreme positions increased as the temperature decreased, implying that the presence of the organogel network in the bilayers restricted the mobility of the phospholipid chains in the membranes. Figure 7b presents the rotational correlation time, which is sensitive to the rotational freedom close to the polar head groups and hence to the viscosity of the hydrophobic region of the bilayer. Since T R is inversely related to fluidity, a significant decrease in this parameter was observed for both samples with increasing temperature (Figure 7b). As previously reported, the membrane fluidity of liposomes likewise increases as the temperature rises. 5,59 In addition, the control samples showed a significant increase in mobility at temperatures above the phase transition of PC (35 °C). As shown in Figure 7b (right side), the liposome samples stabilized with the organogel network within the bilayers had a higher T R , indicating higher rigidity and slower rotational motion of the alkyl chains in the hydrophobic part of the bilayer than in the control sample. This effect is related to the formation of a 3D gel network within the bilayer shell through hydrogen bonding between the free hydroxyl groups of 3-palmitoyl-sn-glycerol and the ester carbonyl groups of soybean oil, leading to tight packing of the phospholipid chains in the gel phase.
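For illustration, the sketch below computes the rotational correlation time from the spectral parameters defined in the Methods (W 0 , h 0 , and h −1 ). The paper takes its expression from ref 43, which is not reproduced here, so the widely used fast-motion approximation and its numerical constant are assumptions, and the example line parameters are hypothetical.

```python
import numpy as np

# Fast-motion approximation commonly used for nitroxide probes such as
# 4-palmitamido-TEMPO (the paper's exact expression comes from its ref 43 and is
# not reproduced here, so the constant below is an assumption):
#   T_R = 6.5e-10 * W0 * (sqrt(h0 / h_minus1) - 1)   [seconds]
def rotational_correlation_time(w0_gauss: float, h0: float, h_minus1: float) -> float:
    """T_R in seconds from the line width W0 (G) and the mid-/high-field line heights."""
    return 6.5e-10 * w0_gauss * (np.sqrt(h0 / h_minus1) - 1.0)

# Hypothetical line parameters read from spectra of the two samples at one temperature
t_control = rotational_correlation_time(w0_gauss=1.5, h0=100.0, h_minus1=70.0)
t_gelled  = rotational_correlation_time(w0_gauss=1.9, h0=100.0, h_minus1=45.0)
print(f"control: T_R ≈ {t_control:.2e} s;  gelled shell: T_R ≈ {t_gelled:.2e} s")
# A larger T_R (slower probe rotation) corresponds to a more rigid, less fluid bilayer.
```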
Morphological Studies. Because TEM examines only a very small amount of sample, general trends cannot easily be deduced from it; nevertheless, it provides useful evidence about the morphology, size, integrity, and homogeneity of liposomes, which are important for tailoring liposome characteristics for delivery applications. 60 The microstructures of the nanoliposome suspensions are presented in Figure 8. In fresh control samples, spherical- to elliptical-shaped particles with a rather narrow size distribution were seen, although there was evidence of a tendency of the vesicles to aggregate (Figure 8a). Spherical, well-separated unilamellar vesicles were also observed in fresh samples stabilized with the organogel network within the lipid bilayer (Figure 8b). It seems that the presence of a gel structure may have altered the optimum curvature of the lipid bilayer, thereby favoring the formation of equally stable vesicles. According to TEM images taken before staining (data not shown), the bilayer thicknesses were 3.2 and 3.6 nm for the control sample and the sample stabilized with an organogel structure within the lipid bilayer shell, respectively. Therefore, the development of a 3D supramolecular gel network within the bilayer can increase the membrane thickness while keeping the vesicle size constant. In TEM micrographs of control liposomes after 3 months of storage (Figure 8c), agglomerated particles and evidence of membrane fusion were observed, which are likely attributable to the high fluidity of the membranes. As clearly seen in Figure 8d, no significant differences were observed over time in the microstructure of liposomes stabilized with an organogel network within the lipid bilayer, indicating their longer storage stability compared with the control liposomes.
In brief, the TEM results confirmed the particle size measurements obtained by DLS. There were, however, some differences between the liposome sizes determined by TEM and DLS, which can be attributed to differences in the sample preparation methods (e.g., staining and drying) as well as to the different physical principles of the two techniques. 26

The EE of vitamin D 3 was 36 and 71% for the control sample and the sample incorporating the gel network within the bilayer, respectively. This difference may result from the higher fluidity of the membranes in the control sample, which led to easier membrane fusion and vitamin D 3 leakage. In contrast, the more ordered, more rigid structure of the lipid membranes in the presence of an organogel network within the bilayer shell contributed to improved encapsulation of vitamin D 3 within the hydrophobic region of the liposomes. The high efficiency of gelled lipid structures in encapsulating bioactive components has also been reported previously for colloidal dispersions. 26 Compared with a previous report on vitamin D 3 encapsulation in liposomes using a thin-film hydration−sonication method at the same concentrations of PC and CHO, which gave an EE of around 57%, 33 the EE obtained in the present work for the novel liposome structure stabilized with the organogel network was 1.25-fold greater. Although Mohammadi et al. 62 reported more than 93% EE for vitamin D 3 in nanoliposomes prepared by the thin-film hydration−sonication method, which is comparable to that obtained in this study, their fabrication method comprised three stages (thin-film hydration, homogenization, and sonication), and the resulting nanoliposomes had lower physical stability during storage.
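As a rough illustration of how the encapsulation metrics reported here are obtained, the sketch below applies the standard definitions of EE and loading capacity. The paper's own equations did not survive extraction, so these formulas, and the example masses chosen to land near the reported 71% EE and 0.68% effective load, are assumptions.

```python
# Standard definitions assumed here because the paper's equations were not preserved.
def encapsulation_efficiency(encapsulated_mg: float, total_added_mg: float) -> float:
    """EE (%) = encapsulated vitamin D3 / total vitamin D3 added x 100."""
    return 100.0 * encapsulated_mg / total_added_mg

def loading_capacity(encapsulated_mg: float, total_formulation_mg: float) -> float:
    """Loading (%) = encapsulated vitamin D3 / total weight of the carrier x 100."""
    return 100.0 * encapsulated_mg / total_formulation_mg

# Hypothetical masses chosen so the outputs land near the reported values
print(encapsulation_efficiency(7.1, 10.0))     # ~71 %, gelled-shell sample
print(encapsulation_efficiency(3.6, 10.0))     # ~36 %, control sample
print(loading_capacity(7.1, 1044.0))           # ~0.68 %, effective load of the gelled sample
```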
Moreover, the high effective load (0.68%) of vitamin D 3 in liposomes stabilized with the organogel structure between the bilayer (Table 1) also confirmed the significant potential of the 3D organogel network to protect and embed the hydrophobic molecules. In fact, the presence of a gel network between the bilayer shell resulted in less fluid membranes and higher molecular packing density, leading to an increase in membrane thickness and loading capacity. These results showed the promising potential of stabilized nanoliposomes with an organogel shell structure to encapsulate hydrophobic bioactive materials with high encapsulation efficiency, which was obtained by complex interactions including hydrogen bonding and hydrophobic interactions. 63 Consequently, this approach of formulating food-grade phospholipid nanostructures could be potentially valuable for a wide range of applications in the efficient delivery of food and pharmaceutical bioactive compounds.
| v3-fos-license | 2018-04-03T02:21:40.346Z | 2017-12-11T00:00:00.000 | 33265530 | {"extfieldsofstudy": ["Biology", "Medicine"], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0189316&type=printable", "pdf_hash": "6c1073b94a42ac52b0bb37ed59279ab9a9f4054c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46079", "s2fieldsofstudy": ["Biology"], "sha1": "6c1073b94a42ac52b0bb37ed59279ab9a9f4054c", "year": 2017} | pes2o/s2orc |
A novel indel variant in LDLR responsible for familial hypercholesterolemia in a Chinese family
Familial hypercholesterolemia (FH) is an inherited disorder characterized by elevation of serum cholesterol bound to low-density lipoprotein. Mutations in LDLR are the major factors responsible for FH. In this study, we recruited a four-generation Chinese family with FH and documented the clinical features of hypercholesterolemia. All affected individuals shared a novel indel mutation (c.1885_1889delinsGATCATCAACC) in exon 13 of LDLR, and the mutation segregated with the hypercholesterolemia phenotype in the family. To analyze the function of the indel, we established stable clones expressing mutant or wild-type LDLR in Hep G2 cells. The mutant LDLR was retained in the endoplasmic reticulum (ER) and failed to undergo Golgi-mediated glycosylation. Moreover, the amount of LDLR at the cell membrane was reduced, and the mutant receptor lost the ability to take up LDL. Our data also expand the spectrum of known LDLR mutations.
Introduction
Familial hypercholesterolemia (FH, MIM 143890) is an inherited disorder characterized by elevated serum low-density lipoprotein (LDL) cholesterol levels, which result in excess deposition of cholesterol in tissues, leading to accelerated atherosclerosis and increased risk of premature coronary heart disease [1]. FH is primarily an autosomal dominant disorder, commonly caused by mutations in the low-density lipoprotein receptor (LDLR), its ligand apoB (APOB), or proprotein convertase subtilisin-kexin type 9 (PCSK9) genes [2-4]. Recently, an apolipoprotein E (APOE) mutation was found to be associated with dominant FH [5,6]. An autosomal recessive form of FH is usually caused by loss-of-function mutations in LDLRAP1, which encodes a protein required for clathrin-mediated internalization of the LDL receptor [7]. The prevalence of heterozygous FH is estimated to be ~1:200-300, and that of homozygous FH is about 1:160,000-300,000 [8]. Without appropriate preventive efforts, approximately 85% of males and 50% of females with FH will suffer a coronary event before they reach 65 years of age [9]. Mutations in the LDLR gene have been reported across the world. They are the major cause of FH, which has an autosomal dominant inheritance pattern [10]. LDLR, located at chromosome 19p13.2, is composed of 18 exons spanning 45 kb. The LDLR transcript is 5.3 kb long and encodes a peptide of 860 residues [11]. The LDL receptor mediates the endocytosis of cholesterol-rich LDL and thus maintains the plasma level of LDL [12]. Based on the LDLR protein domain in which a mutation is located, mutations are grouped into five broad classes; among these, class 2 mutations prevent proper transport to the Golgi body, which is needed for modifications to the receptor [1].
We recruited a four-generation Chinese family with FH and, by direct sequencing, identified a novel indel variant, c.1885_1889delinsGATCATCAACC, in exon 13 of LDLR. This change produces the mutant protein p.Phe629_Ser630delinsAspHisGlnPro and behaves as a class 2 mutation. The mutant LDLR failed to be transported to the Golgi body, the amount of LDLR at the cell membrane was reduced, and the mutant receptor lost the ability to take up LDL.
Study subjects
We recruited a four-generation Han Chinese family with FH from the Affiliated Hospital of Qingdao University in 2015. In total, 14 family members participated in this study, including 4 affected and 10 unaffected individuals (Fig 1). In the proband, the wall of the carotid artery was thickened and the lumen was narrowed. His heart was enlarged, the thickness of each ventricular wall was decreased, and the systolic function of the heart had declined markedly; the left ventricular ejection fraction was 0.40. Although the proband had taken statins and ezetimibe, his plasma cholesterol and LDL-C still did not reach the target values.
We also recruited 100 unrelated Chinese individuals as study controls. Hypercholesterolemia-related examinations were administered by the Affiliated Hospital of Qingdao University. Peripheral blood samples were collected for DNA analysis. Informed consent was obtained from all participants. The research was consistent with the tenets of the Declaration of Helsinki and approved by the Affiliated Hospital of Qingdao University. Written informed consent was obtained from each patient, and we obtained written informed consent from the guardians on behalf of the children (Fig 1, individual IV-3) enrolled in our study.
Mutation screening and sequence analysis
We extracted genomic DNA from 500 μL of peripheral blood using a TIANamp Blood DNA Midi Kit (Tiangen, Beijing, China). After performing genomic polymerase chain reaction (PCR), we sequenced the coding exons and their flanking intronic sequences of LDLR (GenBank NM_000527.4) to search for pathogenic mutations in the family members. The primers used in PCR have been described in an earlier report [13]. We screened for mutations in LDLR by direct sequencing. To verify the mutation, we separated the heterozygous alleles by cloning the affected fragment into an EGFP-N1 vector. The fragment was amplified by PCR using forward primer 5′-TGAAATCTCGATGGAGTGGGTCCCATC-3′ and reverse primer 5′-CTGTAGCTAGACCAAAATCACCTATTTTTACTG-3′ and then cloned into the EGFP-N1 vector. Plasmids were extracted from colonies and sequenced using the same primer. To confirm the novel mutation in LDLR, we also examined the 100 unrelated controls for the mutation. We performed the analysis of amino acid conservation around the mutation site using the CLC DNA Workbench (QIAGEN Bioinformatics, Germany).
Lentivirus construction and infection
We amplified the LDLR cDNA by PCR using the forward primer 5′-GCAGGTCGACTCTAGAGGATCGCCACCATGGGGCCCTGGGGCTGGAAATTG-3′ and reverse primer 5′-ATAGCGCTACCCGGGGATCCCGCCACGTCATCCTCCAGACTGAC-3′. The LDLR fragment was cloned into the BamHI site of the GV416 lentivirus vector (Genechem, Shanghai, China) using an In-Fusion Cloning Kit (Takara, Dalian, China). The mutant construct was obtained by site-directed mutagenesis. Recombinant lentivirus carrying the LDLR coding sequence was produced by cotransfection of 293T cells with the plasmids PSPAX2 and PMD2G using Lipofectamine 3000 (Invitrogen, Carlsbad, CA, USA). Lentivirus-containing supernatant was harvested 72 h after transfection and filtered through 0.22 μm cellulose acetate filters (Millipore, Billerica, MA, USA). We concentrated the recombinant lentiviruses by ultracentrifugation (2 h at 50,000 × g).
Lentivirus was transduced into the hepatoma cell line Hep G2 using the cationic polymer, Polybrene (8 mg/ml; Sigma, St. Louis, MO, USA). We obtained stable clones using antibiotic selection for two weeks. The control lentiviral transfer vector, designated Lenti-GFP, stably expressed GFP, whereas the LDLR lentiviral transfer vector, Lenti-GFP-LDLR, stably expressed GFP and LDLR. In our experiment, all cells were divided into three groups and designated control (transduced with GFP only), WT (transduced with GFP and wild type LDLR), and Mut groups (transduced with GFP and mutant LDLR).
Membrane protein analysis
Briefly, we washed the cells twice with phosphate-buffered saline (PBS) containing 1 mM magnesium chloride and 1.3 mM calcium chloride (PBS2+) and incubated them for 1 h at 4˚C with 0.25 mg/ml Sulfo-NHS-SS-Biotin (Pierce, Dallas, USA) diluted in PBS2+. The cells were harvested and lysed in RIPA buffer (1% NP-40, 0.5% deoxycholate, 0.1% SDS) (Beyotime, Shanghai, China) containing a protease inhibitor cocktail (Roche, Basel, Switzerland) with brief sonication. After centrifugation at 12,000 × g for 10 min, we incubated the supernatant for 1 h with streptavidin agarose (Pierce) at room temperature, followed by washing and incubation for another 1 h in 20% 2-mercaptoethanol diluted in sample buffer (2% SDS, 62.5 mM Tris-Cl, pH 6.8, 10% glycerol). We then detected the surface LDLR and the LDLR in the whole-cell lysate (total LDLR) by Western blot using an anti-FLAG antibody (Sigma). β-Actin was detected as an internal control using an anti-β-actin antibody (Sigma).
Subcellular localization
Hep G2 cells stably expressing wild-type or mutant LDLR and grown on glass coverslips were fixed with 4% paraformaldehyde (Sigma Aldrich, Shanghai, China), permeabilized with 0.1% Triton X-100, and blocked with 5% bovine serum albumin (BSA). Incubations with mouse anti-FLAG antibody (M2 antibody, Sigma) and rabbit anti-calnexin antibody (CST) were carried out in 5% BSA at a dilution of 1:200. After incubation, we washed the cells five times with PBS and then incubated them with Cy5-conjugated anti-mouse immunoglobulin G (IgG) and cyanine dye 3 (Cy3)-conjugated anti-rabbit IgG (Jackson ImmunoResearch, West Grove, PA, USA). After six final washes with PBS, the coverslips were mounted in 60% glycerol. All samples were imaged using a Leica SPE confocal microscope (Buffalo Grove, IL, USA).
LDL uptake analysis
We incubated Hep G2 cells with 10 μg/ml DiI-LDL in medium for 2 h. After incubation, we washed the cells three times with DPBS + 0.3% BSA and then fixed them with 4% paraformaldehyde (Sigma Aldrich). All samples were imaged using the Leica SPE confocal microscope, and the fluorescence signal was quantified using ImageJ software (https://imagej.nih.gov/ij/).
Clinical features
The proband was a 52-year-old man (Fig 1, individual III-1) who was diagnosed with hypercholesterolemia upon suffering a myocardial infarction 15 years ago. A fasting plasma test showed high levels of total cholesterol (TC) (9.34 mmol/L) and LDL-C (7.88 mmol/L). He also had tendinous xanthomata and corneal arcus. He had taken statins since his myocardial infarction. His mother died at the age of 41 and had a history of chest pain; she also had tendinous xanthomata, a frequent clinical feature of FH. No other detailed information was available. The proband's aunt and one of his female cousins also had tendinous xanthomata and hypercholesterolemia. One of his nephews (Fig 1, individual IV-3) had tendinous xanthomata on his elbow and hypercholesterolemia lasting one year (Table 1).
Mutation confirmation in LDLR
A novel variant, c.1885_1889delinsGATCATCAACC, was found in exon 13 of LDLR by direct sequencing of the proband (Fig 2A). To confirm the mutation, we separated the heterozygous alleles by cloning into the EGFP-N1 vector and sequencing (Fig 2A). The indel gives rise to the mutant protein p.Phe629_Ser630delinsAspHisGlnPro, which lies in a phylogenetically conserved region (Fig 2B). The only other variants detected were several nonpathogenic SNPs. The mutation was confirmed in all affected individuals but was not detected in unaffected family members or in the 100 unrelated Chinese controls. There are no hot-spot mutations in LDLR, although the proportion of variants in exon 4 is relatively high. The mutation identified in the present study is located in the sixth class B repeat and is a class 2 mutation responsible for FH (Fig 2C). The functional consequences of this LDLR mutation were investigated in the experiments described below.
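The relationship between the DNA change and the reported protein change can be checked with simple codon arithmetic, as in the sketch below. Only the inserted bases and the residue numbering come from the study; the identity of the retained third base of the original Ser630 codon (c.1890) is not given here, so the script tries all four possibilities (proline is encoded by CCN, so the outcome is the same in every case).

```python
# Codon-level check of the reported consequence of c.1885_1889delinsGATCATCAACC.
CODON_TABLE = {"GAT": "Asp", "CAT": "His", "CAA": "Gln",
               "CCA": "Pro", "CCC": "Pro", "CCG": "Pro", "CCT": "Pro"}

inserted = "GATCATCAACC"                 # 11 nt inserted
deleted_len = 1889 - 1885 + 1            # 5 nt deleted
assert (len(inserted) - deleted_len) % 3 == 0   # net +6 nt: the reading frame is preserved

# Codon 629 spans c.1885-1887 and codon 630 spans c.1888-1890, so the deletion
# leaves only c.1890 from those two codons; 11 inserted nt + 1 retained nt = 4 new codons.
for retained_c1890 in "ACGT":
    codons = [(inserted + retained_c1890)[i:i + 3] for i in range(0, 12, 3)]
    print(retained_c1890, "->", "-".join(CODON_TABLE[c] for c in codons))
# Every case yields Asp-His-Gln-Pro, i.e. p.Phe629_Ser630delinsAspHisGlnPro.
```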
The indel mutation causes a defect in LDLR trafficking
To characterize the novel class 2 mutation in LDLR, we established lentivirus constructs of LDLR. FLAG-tagged wild-type and mutant LDLR were expressed in Hep G2 cells and detected using an anti-FLAG antibody. Western blotting showed that the molecular weight of the recombinant mutant LDLR was 40 kDa less than that of the wild type (Fig 3A). The amount of mutant LDLR in the membrane was also reduced compared with the wild type (Fig 3A, Fig 3B). This suggested that the mutation may have affected the glycosylation of LDLR. We also investigated the subcellular localization of the recombinant LDLR. The results showed that the mutant LDLR was retained in the ER and failed to move to the cell membrane (Fig 3C). Taken together, our findings suggest that the mutant LDLR could not be glycosylated in the Golgi apparatus and was instead retained in the ER. As a result, the molecular weight of the mutant LDLR was lower than that of the wild type and less protein was transported to the cell membrane.
Mutant LDLR depresses the uptake of LDL
To analyze the pathogenicity of the mutation p.Phe629_Ser630delinsAspHisGlnPro, we performed an LDL uptake assay to assess the function of the mutant LDLR. The mutation in FLAG-tagged LDLR markedly disrupted the uptake of LDL (Fig 4A): the mutant retained only 30% of the activity of the wild type (Fig 4B). These results confirmed that this novel loss-of-function mutation in LDLR is the pathogenic cause of FH in the Chinese family.
Discussion
FH is usually considered an autosomal dominant disorder [1,14]. It occurs in two clinical forms: homozygous and heterozygous. Homozygous FH accounts for only a small portion of cases. Among the pathogenic genes, mutations in LDLR are the major causes of FH [15,16].
To date, more than 1,288 different variants have been reported in patients with FH. Most of the mutations are exonic substitutions, and no hot spots have been found [17]. About 20% of mutations are not located in exonic regions. In this paper, we report a novel FH-associated indel mutation in LDLR that disrupts trafficking of the LDL receptor. The mutant LDLR was retained in the ER, and the amount of protein transported to the membrane was reduced; as a result, the mutant LDLR lost its LDL-uptake function. LDLR is a cell surface receptor mainly expressed in bronchial epithelial cells and in adrenal gland and cortex tissue [18]. In liver cells, LDLR is inserted into the cell membrane and regulates plasma LDL by endocytosis of cholesterol-rich LDL particles [19,20]. LDLR undergoes posttranslational modification in the Golgi apparatus, whereby O-linked sugars are added; as a result, the molecular weight of LDLR increases from 120 to 160 kDa [21]. The indel mutation identified in the present study resulted in the production of an approximately 120 kDa mutant protein. We speculate that the mutant protein fails to undergo this glycosylation because of a defect in ER-to-Golgi trafficking.
LDLR mutations can be divided into five classes based on biochemical and functional studies of LDLR [22]. Class 1 mutations prevent the synthesis of LDLR, so that no detectable receptor is produced. Class 2 mutations cause the mutant LDLR to be completely (class 2A) or partially (class 2B) blocked in the ER. Class 3 mutations cause the LDLR to fail to bind LDL. Class 4 mutations result in mutant LDLR that cannot internalize LDL. LDLR carrying class 5 mutations fails to release LDL into the endosome [23,24]. The mutation in the present study encoded an LDLR protein with defective, partially blocked transport from the ER to the Golgi apparatus (Fig 3C). It decreased the amount of LDLR in the membrane and hence the endocytosis of LDL.
The proband showed only moderately high levels of cholesterol, for which he had taken Atorvastatin at 40 mg per day and Ezetimibe at 10 mg per day. He also followed a special diet and gave up smoking. The proband, the proband's aunt, one of his female cousins, and one of his nephews all had tendinous xanthomata on their elbows. Except for the proband, none of them had a corneal arcus.
In conclusion, we identified a novel indel mutation, c.1885_1889delinsGATCATCAACC, in a family with FH. It is a class 2 mutation that causes the mutant LDLR to be blocked in the ER, thereby reducing the endocytosis of LDL. Our findings expand the spectrum of LDLR mutations, and this indel should be considered a novel candidate mutation causing FH.
| v3-fos-license | 2017-04-19T08:35:22.255Z | 2017-04-18T00:00:00.000 | 1947113 | {"extfieldsofstudy": ["Medicine"], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/AA679B9E1E3BC1FB09E85C5B9F6E95C6/S2048679017000106a.pdf/div-class-title-effect-of-enzyme-supplements-on-macronutrient-digestibility-by-healthy-adult-dogs-div.pdf", "pdf_hash": "b27f050740053a669ffaf3c5a49235b2723b3204", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46080", "s2fieldsofstudy": ["Agricultural and Food Sciences"], "sha1": "9a4b175d650c510a7fd0df1a366dbc3abec9132f", "year": 2017} | pes2o/s2orc |
Effect of enzyme supplements on macronutrient digestibility by healthy adult dogs
Some enzyme supplement products claim benefits for healthy dogs to compensate for alleged suboptimal production of endogenous enzymes and the loss of enzymes in commercial pet foods secondary to processing. The objective of the current study was to determine macronutrient and energy digestibility by healthy adult dogs fed a commercial maintenance diet with or without supplementation with plant- and animal-origin enzyme products at the dosage recommended by their respective manufacturers. A group of fourteen healthy neutered adult Beagle dogs (average age 8 years) was divided into two equal groups and fed the basal diet alone and then with either the plant- or animal-origin enzyme supplement in three consecutive 10-d periods; the treatment groups received the opposite enzyme supplement in the third period. Digestibility in each period was determined by the total faecal collection method. Serum trypsin-like immunoreactivity (TLI) was measured at the end of each trial. Data were analysed by repeated measures and the α level of significance was set at 0·05. There were no differences in energy and nutrient digestibility between enzyme treatments. When comparing basal with enzyme supplementation, fat digestibility was higher for the basal diet compared with the animal-origin enzyme treatment, which could be a period effect and was not biologically significant (94·7 v. 93·5 %). Serum TLI was not affected by supplementation with either enzyme product. Exogenous enzyme supplementation did not significantly increase digestibility of a typical commercial dry diet in healthy adult dogs and routine use of such products is not recommended.
Digestive enzyme replacement therapy is an efficacious, evidence-based treatment for dogs and cats with exocrine pancreatic insufficiency (EPI) (1). Most clinicians recommend only the use of animal-origin digestive enzyme replacement products that are mixtures of amylase, protease and lipase sourced from porcine pancreas; however, owners sometimes use plant-origin products as a substitute due to cost. In addition, the use of both animal- and plant-origin enzyme supplement products for all pets, including those without EPI, is advocated by some veterinarians and is a common practice among pet owners. A frequent claim is that this practice will compensate for the alleged suboptimal production of endogenous enzymes by the pancreas of healthy dogs as well as the loss of enzymes present in commercial pet foods secondary to processing and cooking. Several benefits are claimed by advocates of this practice, ranging from improvement in digestibility of nutrients to support of the immune system. Although some enzyme products of animal origin have been approved as drugs by the United States Food and Drug Administration, plant-origin enzyme products are typically sold as nutritional supplements for which drug claims for direct health benefits are disallowed. In that case, label claims are therefore usually vague, and regulations that apply to medications are not relevant; thus, there are no studies required to prove safety or efficacy (2). Abbreviations: AAFCO, Association of American Feed Control Officials; CP, crude protein; EE, ether extract; EPI, exocrine pancreatic insufficiency; GE, gross energy; TLI, trypsin-like immunoreactivity.
To our knowledge, there are no data available regarding the impact of the use of products containing lipase, protease and amylase, of either plant or animal origin, on nutrient digestibility by healthy dogs. While reports of the efficacy of digestive enzyme therapy in dogs with EPI have been published, the control groups in these studies have consisted of dogs with uncontrolled EPI rather than dogs with normal exocrine pancreatic function (3-7). Further, the effect of oral enzyme supplements on serum concentrations of trypsin-like immunoreactivity (TLI) has not been documented. This test is used to diagnose EPI in animals with compatible clinical signs, and is sensitive and specific for exocrine pancreatic function (8). Thus, it is important to establish whether there is an effect of these products on serum TLI concentrations to clarify interpretation of the test results in patients receiving these supplements.
The objective of the study is to measure macronutrient and energy digestibility by healthy dogs fed a commercial dry canine diet with or without supplementation with exogenous digestive enzymes (of both plant and animal origin) at the dosage recommended by their respective manufacturers, and to determine the effect of enzyme supplementation on serum TLI concentrations. We hypothesise that there will be no effect of exogenous enzyme supplementation on macronutrient and energy digestibility or TLI values.
Animals and design
The study was approved by the Universitat Autonoma de Barcelona ethics committee (CEEAH 1467).
A total of fourteen healthy neutered adult dogs (eight males and six females; median age 8 years, range 4-11 years, weight 13·2 (SD 2·92) kg) were used for this study, divided into two groups of seven dogs according to their sex and body weight. The dogs were kept at the experimental kennels of the Facultat de Veterinaria, Universitat Autònoma de Barcelona and underwent veterinary examination before and after the trial.
The dogs were housed individually in protected covered runs with free access to clean and fresh water; their energy requirements are known from previous trials and food was supplied in adequate amounts to satisfy these requirements and maintain a stable body weight. Three 10-d experimental periods were carried out. In the first period a basal digestibility trial of the commercial dry maintenance canine diet (Nestlé Purina Pro Plan Medium Adult Chicken and Rice, Canine, dry; Nestlé Purina) without enzyme supplement added was conducted including all dogs. In periods 2 and 3, dogs were divided into two groups and received the plant-origin enzyme supplement (product A (Prozyme All-Natural Enzyme Supplement Original Formula for Dogs and Cats; PBI/ Gordon Corporation); α-amylase from Aspergillus oryzae 2000 SKB/g, cellulase from A. niger 50 CU/g, lipase from A. niger 30 FIP/g, bromelain from pineapple stem and fruit 8 GDU/g) or the animal-origin enzyme supplement (product B (Pancrezyme, Virbac Animal Health); lipase from porcine pancreas 71 400 USP units/2·8 g, protease from porcine pancreas 388 000 USP units/2·8 g, amylase from porcine pancreas 460 000 USP units/2·8 g) at the doses recommended by the manufacturer (1/2 teaspoon (2·5 g) per cup of food and 1 teaspoon (2·8 g) per meal for products A and B, respectively). In period 2, one group of dogs received product A and the other group received product B and in period 3 the treatments were switched. At the beginning of the study and at the end of each digestibility period, dogs were weighed and fasted for 12 h and blood was collected, processed and submitted to a commercial laboratory (IDEXX Laboratorios, Barcelona, Spain) for analysis of serum TLI.
Digestibility trial protocol
The digestibility protocol is adapted from the official method of the Association of American Feed Control Officials (AAFCO) Dog and Cat Food Metabolizable Energy Protocols (9) . Each 10-d digestibility trial included 5 d for adaptation and 5 d for total collection of faeces. Daily food intake was recorded. The same batch of diet and enzymes was used for all trials. The dogs were fed once daily at the same time of day. During the collection period, the faeces were collected twice daily, weighed and frozen. After the 5-d collection period, the faeces were weighed again and dried in an oven at 50-60°C until constant weight was reached (3-5 d) to determine DM. After drying, the faeces were ground and mixed and a representative sample of each was taken and frozen at −20°C until analysis. A representative sample of the basal diet was collected on days 1, 5 and 10 of each trial and ground and stored at 5°C prior to analysis.
Sample analysis
The chemical composition of the diet was determined according to the following methods of the AOAC (10) : DM (934.01), ash (942.05), CP (988.05), crude fibre (950.02) and hydrolysed EE (920.39). Hydrolysed EE, DM and CP were also analysed in the faeces. GE was determined in food and excreta using an adiabatic bomb calorimeter (IKA-Kalorimeter system C4000; Janke-Kunkel). To calculate digestible energy, the GE digestibility percentage was multiplied by the GE of the food. The metabolisable energy (ME) of the experimental diet was calculated from the digestible energy and the CP content of the diet according to the National Research Council (11) proposed equation: ME (kcal/g) = DE -(1·04 × g CP).
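A minimal sketch of the energy calculations described above follows. The apparent digestibility formula is the standard total-collection definition (an assumption, since the protocol does not spell it out), while the ME equation is the NRC equation quoted in the text; the example inputs are hypothetical.

```python
# Sketch of the digestibility and energy calculations; inputs are hypothetical.
def apparent_digestibility(intake: float, faecal_output: float) -> float:
    """Apparent total tract digestibility (%) of a nutrient or of GE (standard definition, assumed)."""
    return 100.0 * (intake - faecal_output) / intake

def metabolisable_energy(ge_food_kcal_per_g: float, ge_digestibility_pct: float,
                         cp_g_per_g_food: float) -> float:
    """ME (kcal/g) = DE - 1.04 x g CP per g of food (NRC equation quoted above)."""
    de = ge_food_kcal_per_g * ge_digestibility_pct / 100.0   # digestible energy
    return de - 1.04 * cp_g_per_g_food

# Example: 500 g GE-equivalent intake vs 75 g excreted; 4.8 kcal/g GE food, 27 % CP
ge_dig = apparent_digestibility(intake=500.0, faecal_output=75.0)   # 85 %
print(f"GE digestibility = {ge_dig:.1f} %")
print(f"ME = {metabolisable_energy(4.8, ge_dig, 0.27):.2f} kcal/g")
```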
Statistical analysis
The number of animals per group was chosen according to AAFCO Dog and Cat Food Metabolizable Energy Protocols (9) , which require at least six dogs to perform a digestibility test. Statistical analysis was done using SAS 9.3 (SAS Institute, Inc.). Digestibility values and serum TLI were compared for basal diets and product A v. B using repeated measures. The model included treatment as a fixed effect and dog as a random effect. A model including period as a fixed effect and dog as a random effect was also used to analyse the effect of period. Data are presented as means with their standard errors unless otherwise stated. The α level of significance was set at 0·05. Mean separation for multiple comparisons was done using Tukey's correction.
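The paper fits its repeated-measures model in SAS 9.3; purely to illustrate the model structure (treatment as a fixed effect, dog as a random effect), the sketch below sets up an analogous linear mixed model in Python. The column names and data are hypothetical, and the output is not meant to reproduce the paper's results.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one digestibility value per dog and treatment
data = pd.DataFrame({
    "dog":       ["d1"] * 3 + ["d2"] * 3 + ["d3"] * 3 + ["d4"] * 3,
    "treatment": ["basal", "A", "B"] * 4,
    "fat_dig":   [94.7, 94.1, 93.5, 94.9, 94.0, 93.6,
                  94.5, 93.8, 93.4, 94.8, 94.2, 93.7],
})

# Treatment as a fixed effect, dog as a random intercept (analogous model structure)
model = smf.mixedlm("fat_dig ~ treatment", data, groups=data["dog"])
result = model.fit()
print(result.summary())
```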
Results
The dogs maintained stable body weights throughout the experiment (13·4 (SE 2·97) kg). The chemical composition and energy content for the basal diet (averaged from three samples) are presented in Table 1. Total tract apparent digestibility coefficients of DM, organic matter, CP, EE and GE are presented in Table 2. There were no differences in macronutrient and energy digestibility between enzyme treatments. When comparing enzyme-supplemented v. basal diet digestibility coefficients, EE (crude fat) digestibility was higher for the basal diet compared with the animal-origin enzyme treatment. Serum TLI was not affected by supplementation with either enzyme product (25·1, 23·3 and 24·8 µg/l for basal diet and products A and B, respectively, P = 0·682) and was within normal reference ranges (2·5-50 µg/l) at all times.
Discussion
This study is the first to report comparisons of the digestibility coefficients of energy and macronutrients of a maintenance diet with or without supplementation with two different enzyme products (one plant and one animal origin) by adult healthy dogs.
Data regarding exogenous digestive enzyme supplementation in healthy dogs are scarce, especially compared with other species such as pigs and poultry, where their use is routine (12). Most of the published studies in dogs have assessed the use of carbohydrases (rather than mixtures of amylase, protease and lipase) when using ingredients with antinutritional factors and high fibre content, in order to improve nutrient availability. The authors of one study compared diets based on rice, sorghum and maize with or without an enzyme mixture at 1 ml per ton (including xylanase, α-amylase, β-glucanase, hemicellulase, pectinase and endoglucanase) in dogs and showed no impact of enzyme treatment on digestibility of protein, fat and energy (13). Similarly, Pacheco et al. (14), when comparing diets with different amounts of full-fat rice bran, found that the inclusion of a mixture of carbohydrases, phytase and protease (0·4 and 0·8 g/kg of diet) did not affect digestibility values. Sá et al. (15) added a mixture of carbohydrases and phytase to canine diets including wheat bran, pre- and post-extrusion, and found no significant effect at the inclusion levels used. Another group reported no effects of exogenous protease and cellulase on canine digestibility of diets based on poultry meal v. soyabean meal (16); however, Félix et al. (17), comparing diets also based on poultry and soyabean meals, found that the inclusion of mannanase at 0·01 % resulted in improved macronutrient digestibility. Overall, digestive enzyme supplementation (mainly carbohydrases) in feed does not seem to have a marked effect on canine nutrient and energy digestibility.
This is similar to our findings, although our study used commercial products providing mixtures of α-amylase, cellulase, lipase and bromelain (plant-origin product) or lipase, protease and amylase (animal-origin product). The dosages used were those recommended by the manufacturer: half a teaspoon (2·5 g) per cup for senior dogs (they recommend a quarter teaspoon for young adults) for the plant-origin product and one teaspoon (2·8 g) per meal for the animal-based one, without incubation time (as per instructions of the manufacturer). The only effect documented was a slightly higher crude fat digestibility for the basal diet compared with the animal-origin enzyme treatment, which is probably a period effect due to slightly lower values in period 2, and not biologically significant (94·7 v. 93·5 %). This result was expected, since the basal diet used is comparable with many maintenance canine diets in its formulation and processing which help ensure adequate digestibility and bioavailability of nutrients.
Improved digestibility with the use of animal-based digestive enzymes in dogs with EPI, in comparison with untreated dogs with EPI, has been documented (3-7); however, our results do not support their efficacy in healthy pets at the recommended dosages. It is unknown if higher dosages or a different protocol (i.e. pre-incubation) would have resulted in a positive effect.
Our study is the first to assess the effect of exogenous digestive enzyme supplementation on serum TLI in dogs. At the dosages recommended by the manufacturer, the inclusion of either plant- or animal-origin enzyme supplements did not affect TLI values, which remained normal throughout the study. Veterinarians might prescribe these enzymes before any testing is performed (especially in dogs with severe disease) and some pet owners may pre-emptively utilise them for their pets with non-specific diarrhoea. Our results show that TLI is unaffected by enzyme supplementation at the recommended dose; thus, this test can be reliable in patients receiving these supplements, especially if they do not have EPI. The effect of enzyme supplementation on TLI in dogs that do have EPI is still unknown.
In conclusion, supplementation of a maintenance canine dry diet with the recommended doses of exogenous digestive enzymes, of either plant or animal origin, does not improve the digestibility of protein, fat or energy in healthy adult dogs and does not affect serum TLI concentrations in these individuals. Thus, their routine use in healthy pets is not recommended.
| v3-fos-license | 2021-11-06T06:16:41.771Z | 2021-11-04T00:00:00.000 | 233835087 | {"extfieldsofstudy": ["Medicine"], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-256665/latest.pdf", "pdf_hash": "a7a1a9ac56612d091f4c403198ef539331ff38a0", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46085", "s2fieldsofstudy": ["Environmental Science"], "sha1": "5e187679e96ba867b7eefce211737a5cb573ee63", "year": 2021} | pes2o/s2orc |
Heavy Metals in the Liver, Kidney, Brain, and Muscle: Health Risk Assessment for the Consumption of Edible Parts of Birds from the Chahnimeh Reservoirs Sistan (Iran)
The concentrations of four heavy metals, zinc (Zn), lead (Pb), nickel (Ni), and cadmium (Cd), were determined in the liver, kidney, muscle, and brain of nine species of birds from the Chahnimeh reservoirs of Sistan, Iran, to assess metal levels and the potential risk to the birds and to the people who eat them. Significantly higher levels of all metals were found in the brain than in the other tissues. There were no significant differences in heavy metal levels between the sexes in any tissue. The levels of Pb, Cd, Ni, and Zn in the liver and kidney varied as a function of feeding habits; median levels were significantly higher in invertebrate predators than in fish predators and omnivorous species. Short-distance migrant birds had significantly higher median levels of heavy metals in the liver and kidney than long-distance migrants. Ni levels in the liver and kidney tissues of 56% of the birds were higher than the critical threshold levels for effects in birds. Our data indicate that environmental exposure to metals was higher in the wintering bird populations of the Chahnimeh reservoirs of Sistan, Iran, than elsewhere. Concentrations of Zn, Pb, and Cd were above toxicity levels in only a small percentage of birds, whereas 56% of liver and kidney samples exceeded toxicity levels for nickel. Based on the exposure frequency and estimated daily intake, the hazard quotients for the edible tissues (kidney, liver, and muscle) of these birds indicate that their consumption may pose a health risk to the people who eat them.
Introduction
Aquatic environments accumulate pollutants from runoff and atmospheric deposition. Although aquatic habitats are dynamic, they have a limited capacity to accept man-made waste without adverse effects on biota. With further technological advancement and the continued development of industries, the volume of waste discharged into water bodies will likely increase. Heavy metals are pollutants of concern because of their toxicity, persistence, and accumulation in the tissues of living organisms. Generally, the main sources of heavy metals of concern in the environment are pesticides, chemical fertilizers, electroplating, paint manufacture, coal production, oil combustion, pigments, batteries, photovoltaic cells, greenhouse gas-producing processes, vehicles, synthetic plastics, mining and foundries, leather products, urban waste incinerators, and industrial waste [1]. Besides heavy metals derived from industrial and agricultural sources, rocks and volcanoes are an additional source [2]. The increase of heavy metals in the biotic and abiotic environment is of great concern because of their adverse effects on human health [3]. Even small quantities of heavy metals such as lead, cadmium, and chromium, and high concentrations of essential elements such as copper and zinc, in living tissues are a major concern because of their serious health effects in birds [4].
Birds are well suited for biomonitoring because their biology is well-known, they have a relatively long lifespan (up to a dozen or more years), and they feed on different levels of the food chain, depending on the species. Birds are therefore one of the best indicators for evaluating heavy metals in the environment [4,5]. Birds are exposed to environmental pollutants from direct contact with contaminated water and food. Studies show that heavy metals accumulate in the organs of birds, especially waterfowl and other bird species that depend on rivers and other aquatic habitats to collect their food. High levels can be harmful and toxic to their reproduction and survival [6]. Also, birds are used as an indicator of environmental pollution on local, regional, and global scales [5]. Local species (that feed locally) can be compared with those that migrate in (and therefore represent contamination over a larger geographical area) [7].
The bioaccumulation of heavy metals in birds is a complex process influenced by many factors, including climate, geographical conditions, physicochemical differences, and the mobility and bioavailability of the metals [8]. Behavioral factors such as migration, foraging methods, grit collection, and position in the food chain also influence exposure [9-11]. Metals absorbed in the body enter the blood circulation and then reach different levels in different tissues depending on their affinity for lipids, their solubility, and their transport into specific cell types [9]. The distribution and concentration of metals in various organs and tissues are also influenced by host characteristics such as nutritional status, body weight, size, sex, genetically determined homeostatic mechanisms, and interactions with nutrients or micronutrients [9,10,12].
Because of the key role that the liver and kidney play in detoxification processes, these organs are frequently analyzed; heavy metals such as cadmium (Cd), lead (Pb), nickel (Ni), and mercury (Hg) have been studied most extensively because of their toxicity [13,14]. Pb levels are often examined in bone or brain because Pb accumulates over a lifetime and affects the nervous system [15,16]. In recent years, human activities that increase the levels of heavy metals, such as intensive agriculture, leakage of contaminated water into groundwater sources, drainage, and hunting, have posed a serious threat to wildlife [17].
Increased anthropogenic pollution has resulted in increased levels of organic matter, nutrients, and heavy metals in water, sediment [18-21], and fishes [22,23] from Chahnimeh, Iran. Some of the pollutants coming from agricultural and industrial activities in Iran and Afghanistan have run off into the Helmand River, which supplies water to the Hamoun International Wetland and to human-used wells [22]. The amount of heavy metal contamination in birds in this area has not been studied.
The objective of this study is to assess heavy metal levels in birds wintering in the Chahnimeh reservoirs of the Sistan region in eastern Iran. We determined the levels of Cd, Pb, Ni, and Zn in the brain, liver, kidney, and muscle from nine species of birds in Chahnimeh, in the Sistan region in Eastern Iran. We examined metal differences as a function of migration, sex, species, and feeding habits using the liver, kidney, brain, and muscle samples. We also compared the levels to those published in the literature and examined the risk of metals for endangered species of waterfowl in the Chahnimeh of Sistan. These birds were given to us for studies after the Environmental Protection Agency removed them from fishermen who had hunted them illegally. Although sample sizes per species are low, this represents the first metal data of its kind from this region and provides the first risk assessment for humans eating these birds.
Analytical Procedure
Birds were thawed, and liver, kidney, brain, and pectoral muscle tissues were collected. Samples (1-3 g wet weight) were placed into 150 mL Erlenmeyer flasks, 10 mL of 65% HNO 3 (Suprapure, Merck, Darmstadt, Germany) was added, and, after the addition of 5 mL of 70% HClO 4 (Suprapure, Merck, Darmstadt, Germany), the samples were slowly digested overnight [24]. In the first step of the digestion, a hot plate (sand bath) was used at 200 °C for about 6 h, or until the solutions were clear. After cooling, in the second step, each sample was transferred to a polyethylene bottle and deionized water was added to a final volume of 25 mL. In each set of eight samples, one control sample was prepared and analyzed. Each solution was then filtered through a 0.45-µm nitrocellulose membrane filter. A Shimadzu AA 680 flame atomic absorption spectrophotometer was used to determine the concentrations of heavy metals. The detection limits for Cd, Pb, Ni, and Zn were 0.09, 0.04, 0.06, and 0.09 µg/g, respectively, and the recoveries for Cd, Pb, Ni, and Zn averaged 88%, 90%, 95%, and 105%, respectively.
Quality Control
Procedural blanks and certified reference materials (CRMs) — DOLT-2 (fish liver) and DORM-2 (fish muscle) from the National Research Council Canada — were included in each sample batch, and for each matrix three blank samples and three reagent blanks were analyzed. The detection limit of each heavy metal was determined by injecting blank samples three times and taking 3 times the standard deviation of the procedural blanks (0.08, 0.05, 0.07, and 0.1 μg/g dw for Cd, Ni, Pb, and Zn, respectively). The precision and accuracy of the analytical method were assessed using the CRMs, sample blanks, standard blanks, and three analytical duplicates at a concentration of 1.2 μg/g, for which the mean and its 95% confidence interval were calculated. Quantification was based on a multi-level calibration at concentrations of 0.1, 0.5, 3, 15, 50, and 100 µg/g, and the standard calibration curve was drawn with 99% accuracy. The certified values of the reference materials were Zn = 87 ± 2.5, Pb = 0.24 ± 0.3, Cd = 21.8 ± 5, and Ni = 1.3 ± 0.12, and the measured values were Zn = 88 ± 60, Pb = 0.23 ± 0.4, Cd = 21.58 ± 3, and Ni = 1.2 ± 0.13 (6 replicates of 0.8 g samples, with recoveries between 88 and 105%); these results were a good estimate of the certified values and confirmed the digestion efficiency and measurement accuracy. The method's precision, expressed as the relative standard deviation (RSD) of multiple analyses of the same sample, was within 8%. All concentrations are expressed in µg/g dw.
Statistical Analysis
For data analysis, we used SPSS (version 20.0). The normality and homogeneity of variance of the heavy metal levels in the tissue samples were tested with the Kolmogorov-Smirnov test; because the data were not normally distributed, they were log-transformed (log10), after which parametric statistics were used. To test differences in heavy metal levels among groups, we performed a one-way ANOVA followed by Duncan's post hoc test. Spearman's rank correlation coefficients were used to test for correlations among the heavy metals in the birds. A P value < 0.05 indicated statistical significance.
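As an illustration of this workflow outside SPSS, the following minimal Python sketch performs the same steps (log10 transformation, one-way ANOVA across groups, and Spearman rank correlation). The input file and column names are hypothetical placeholders, and Duncan's post hoc test is omitted because it is not available in SciPy.

```python
# Minimal sketch of the statistical workflow (not the authors' SPSS analysis).
# "bird_metals.csv", "Pb", "Cd", and "diet_group" are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("bird_metals.csv")

# Log10-transform a right-skewed metal concentration before parametric tests.
df["log_pb"] = np.log10(df["Pb"])

# One-way ANOVA: do liver Pb levels differ among diet groups?
groups = [g["log_pb"].values for _, g in df.groupby("diet_group")]
f_stat, p_val = stats.f_oneway(*groups)

# Spearman rank correlation between two metals within the same organ.
rho, p_rho = stats.spearmanr(df["Pb"], df["Cd"])

print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")
print(f"Spearman: rho = {rho:.2f}, p = {p_rho:.3f}")
```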
Risk Assessment
To assess the health effects and compare them with standards, we converted concentrations from µg/g dry weight to µg/g wet weight; the dry weight/wet weight ratio was assumed to be approximately 0.3 for all species [25,26]. The target hazard quotient (THQ) was computed according to the guidelines of the US Environmental Protection Agency, and the absorbed dose of heavy metals was considered equal to the ingested dose (assuming that cooking does not affect the level of metals) (USEPA 1989). Furthermore, because an oral reference dose (RfDo) for Pb is lacking, the permissible tolerable daily intake (PTDI) suggested by the Joint FAO/WHO Expert Committee on Food Additives (JECFA, 2013) was used instead.
In this study, we calculated the THQ from the equation given below. The THQ is the ratio between exposure and the reference dose, and when the THQ is > 1, systemic effects may occur [27]. The reference dose (RfDo) (µg/g/day) is an estimate, with uncertainty, of the daily exposure of human populations, including sensitive subgroups, that is without an appreciable risk of deleterious effects during a lifetime. The RfDo values used in this study were 0.001, 0.02, 0.004, and 0.3 for Cd, Ni, Pb, and Zn, respectively. The exposure frequency (EF) was 182.5 days/year, the exposure duration (ED) was 72 years, and the meal size (MS) was 95 g for muscle [28] and 20 g for kidney and liver [29]. C is the metal concentration (µg/g wet weight) [30,31], the body weight (BW) is 70 kg [32], and the average time (AT) was taken as EF × ED [33].
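Written in terms of the parameters defined above, a standard form of the USEPA target hazard quotient is the following; the factor of 10^{-3} converts µg to mg and is an assumption about the unit convention used here.

$$\mathrm{THQ} = \frac{EF \times ED \times MS \times C \times 10^{-3}}{RfD_{o} \times BW \times AT}$$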
We also calculated the estimated daily intake (EDI) and estimated weekly intake (EWI) based on daily and weekly consumption of birds (muscle, liver, and kidney), as follows:
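A common formulation of these intakes, consistent with the parameters defined above (treating the weekly intake as seven times the daily intake, which is an assumption), is:

$$\mathrm{EDI} = \frac{C \times MS}{BW}, \qquad \mathrm{EWI} = 7 \times \mathrm{EDI}$$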
Total Heavy Metal Concentrations in the Liver, Kidney, Brain, and Muscle of Wild Birds from Iran
The levels of heavy metals in the brain, liver, kidney, and pectoral muscle are shown in Tables 1 and 2. The highest median toxic concentrations were of Ni, followed by Pb and Cd; the kidney and liver had the highest levels of Ni. The brain had the highest concentration of Pb (2.7 µg/g dw). For Zn (an essential element), the levels were the highest in the brain (34.50 µg/g dw), followed by the kidney (21.30 µg/g dw), liver (7.30 µg/g dw), and muscle (7 µg/g dw). Studies have shown that there is a homeostatic regulation of the intracellular essential metals in birds [34][35][36][37][38][39].
We considered < 6 µg/g dw Pb in the liver and/or kidney to be indicative of "background" Pb exposure; individuals were considered "Pb exposed" when concentrations exceeded 6 µg/g dw in the liver or kidney and "Pb poisoned" when kidney levels exceeded 20 µg/g dw or liver levels exceeded 30 µg/g dw [52]. Birds such as shovelers (Anas clypeata), greylag geese (Anser anser), snow geese, brant geese (Branta bernicla), mallards, and black ducks from Northern California, USA [53], Canada (goslings) [54], four wetlands in Spain [55], and northern Idaho, USA [56], had liver Pb levels above the exposure threshold. Liver Pb levels in birds from the Kanibarazan wetland [43] and the Miyankaleh and Gomishan wetlands [45] in Iran, Eastern Poland [47], Donana National Park, Spain [36], the Illinois River [57], and Eastern Austria [51] were > 5 µg/g dw, indicating possible Pb toxicity.
Concentrations of Cd > 3 μg/g dw in the liver and > 8 μg/g dw in the kidney suggest toxic exposure [59], and levels greater than 40 μg/g dw and 100 μg/g dw in the liver and kidney, respectively, indicate toxicity [60]. In this study, except for one black-winged stilt, Cd concentrations in the livers were far below the estimated toxic threshold, and Cd concentrations in one moorhen and one marsh sandpiper were also far below the toxicity level [59]. In birds from Iran, the mean cadmium concentrations were 0.43-3.94 µg/g dw in the liver and 0.47-7.47 µg/g dw in the kidney. The concentrations of Cd in the liver were (1) similar to those found in birds from the Ebro Delta, Spain [55], Lake Biwa and the Mie Izum coast, Japan [61], and the Chesapeake Bay, USA [35]; (2) much lower than those observed in Pacific northwest Canada [34] and Chaun, Northeast Siberia, Russia [49]; and (3) much higher than those observed in Zator and Milicz, Poland [62], the Mississippi flyway [63], Eastern Poland [47], and the Illinois River [57]. The concentrations of Cd in the kidney were similar to those found in birds from Donana National Park, Spain [36], and the Illinois River, USA [57]; lower than those from Zator and Milicz, Poland [62], Chaun, Northeast Siberia, Russia [49], and Pacific Northwest Canada [34]; and higher than those from Lake Biwa and the Mie Izum coast, Japan [61], a wetland in Northwestern Poland [58], the Kanibarazan wetland, Iran [43], and Gomishan and Miyankaleh, Iran [45].
According to studies, Ni concentrations > 10 μg/g dw in the kidney and > 3 μg/g dw in the liver are toxic in wild birds [64]. In this study, 56% of Ni concentrations in the liver and 56% of Ni concentrations in the kidneys were higher than the toxicity level. In birds, Ni concentrations in the liver and kidney are seldom studied. Concentrations of Ni in the livers of birds in this study were higher than those from Connecticut, USA [65]; Gdansk Bay, Poland [66]; San Francisco Bay, USA [67]; Jamaica Bay, USA [68]; Wrangel Island, Russia (Hui 1998); and Florida Lake, South Africa [69]. Concentrations of Ni in the kidneys of birds in this study were higher than those from the Southwest Atlantic coast, France [2]; Gdansk Bay of the Baltic Sea, Poland [70]; and a wetland in Northwestern Poland [58].

Table 1 The concentrations of trace metals (µg/g dw) in the brain, liver, kidney, and muscle of waterfowl from the Chahnimeh of Sistan

In birds, Pb concentrations in the brain > 5 µg/g dw are indicative of poisoning [15], and concentrations > 16 µg/g dw indicate an advanced state of exposure [71]. In this study, none of the levels of Pb in the brains was higher than the toxic threshold.
Variation Among Organs
In this study, the levels of heavy metals in muscle tissue were lower than in other tissues, and our results agree with other studies that reported that muscle tissue was not an active tissue for accumulating these heavy metals. Also, in this study, the brains of birds had the highest concentration of metals, except for Ni (P < 0.05). The level of metal a body absorbs and accumulates depends on the level of exposure, the chemical form of an element, the interaction with other elements, and physiological factors of the bird species (Gochfeld and Burger 1987). The accumulation of pollutants in the internal organs of their bodies is affected by the contaminant level of the food and water ingested. Although the liver and kidney are sites of detoxification, they reflect long-term bioaccumulation [5], while the muscle and brain are sites of accumulation but not of detoxification [72].
Birds exposed to high concentrations of Pb and Cd, such as white-tailed eagles and scavenging gulls, accumulate these elements at high concentrations in the brain, and brain tissue levels are related to dietary contamination [70,73]. Relatively low lead (Pb) levels (up to 0.4 ppm wet wt), but not cadmium (Cd), were recorded in the brains of pelagic seabirds [74,75]. Red-knobbed coots (Fulica cristata) from industrialized and polluted regions of South Africa had Pb levels in the brain of up to 25 ppm dw, 2 and 4 times as much as in the kidneys and liver [69]. Studies on the accumulation of heavy metals in the brains of birds should be further compared across both the same and other species. Different adaptations of birds to the environment, as well as the reaction and function of the brain to different contaminants, may be among the factors affecting the absorption of contaminants into birds' brains. There are very few studies of heavy metal levels in the brain tissue of birds. The levels of heavy metals in brain tissue in this study were higher than those reported from other parts of the world, including Zator and Milicz, Poland [62]; a wetland in Northwestern Poland [58]; Gdansk Bay, Baltic Sea, Poland [70]; Nilgiris, Tamil Nadu, India [76]; the lagoon of Marano, Italy [77]; Bjørøya and Jan Mayen in the Arctic [78]; and Pomeranian Bay, Poland [79].
The highest Ni levels were found in the kidneys, the liver and muscles showed slightly lower levels, and the lowest levels were found in the brain (Figures 1 and 2). A significant difference was observed in Ni levels between the kidney and the liver, brain, and muscles (P < 0.05).
Relationship Between Metal Levels, Feeding Habits, and Migration Status
The most important factors affecting metal concentrations among species are diet and feeding habits [80]. Diet varies between bird species depending on foraging strategies and dietary preferences. Key pathways for metals to enter the body of birds are food, water, and the ingestion of sediment, lead shot, and grit (nonfood items). Direct consumption of metal-contaminated soil is a major cause of increased contamination in their bodies, even if the contaminant levels in plants or their prey have not increased [11].
In our study, birds were divided into four groups (invertebrate predators, fish predators, fish and crab predators, and omnivores) to examine the effect of food type on metal levels, using published data [80,81]. The fourth group contained only the Eurasian spoonbill (n = 2), so it was excluded from the statistical tests. Diet type had a significant effect on the levels of Zn, Pb, Cd, and Ni in the kidney and liver, with invertebrate predators having higher concentrations than fish predators and omnivores (P < 0.05). There were no statistically significant differences in brain and muscle levels for any of the metals examined.
In a study on mercury pollution in three species of waders in the Shadegan wetland, Iran, black-winged stilts had higher levels of mercury in the feathers, liver, kidneys, and muscles than the other birds in the study [82]. The reason for the higher mercury in this bird was that its long legs allowed access to deeper water, so stilts could hunt prey larger than invertebrates. Similarly, other authors found higher heavy metal levels in larger species that had access to deeper sections of the water and could hunt larger prey [83]. In the present study, the reason for the higher metal levels in the various organs of black-winged stilt, marsh sandpiper, and northern lapwing was that they fed on agricultural lands irrigated by farmers (and thus were exposed to contaminants in the water). We, and others [81], suggest that these species feed more on agricultural lands than other species do, remaining on the water for several days, rather than on the shores of the Chahnimeh of Sistan. Perhaps the use of chemical fertilizers and pesticides on agricultural lands has increased the exposure of birds to metals. This difference in metal concentration is most likely due to metal biogeochemical behavior, diet, and accidental ingestion of fine soil and sediment particles. However, it is impossible to separate soil selection/soil ingestion from diet; certainly, these two exposure pathways are very effective in concentrating these metals, because other metals are correlated with accidental ingestion of fine soil and sediment particles [84]. In our study, birds that are invertebrate predators, compared to birds that are predators at higher trophic levels, had higher concentrations of heavy metals in the liver [85][86][87]. The birds of the Chahnimeh reservoirs were divided into two groups: long-distance migrants, and local migrants that only move to the northern rivers and wetlands of Iran and do not leave the country. It is noteworthy that there were differences in metal levels between the two groups in the kidney and liver for all four elements studied, but there was no statistically significant difference between the two groups for brain and muscle tissue (Table 3). The birds in the southern wetlands of Iran migrate to northern wetlands in the provinces of Gilan and Mazandaran on the southern Caspian Sea to avoid the hot summer months in south and southeast Iran [81,88]. Heavy metal levels are high in this region of Iran (the Caspian Sea) in fishes, macroalgae, sediment, and water [89][90][91][92]. High levels of heavy metals in the south Caspian Sea might explain the high levels of these heavy metals in local migrants.

Table 2 The concentrations of trace metals (µg/g dw) in the brain, liver, kidney, and muscle of waterfowl from the Chahnimeh of Sistan and the effect of feeding habitat. *Significant difference between the concentrations of Zn, Pb, Cd, and Ni in the liver and kidney of invertebrate predators versus omnivores and fish predators (P < 0.05)
Lower median concentrations of heavy metals (Cd, Pb, Ni, and Zn) in the liver and kidney were detected in the long-distance migrants than in the local migrants (P < 0.05) (Figure 1). The lower use of heavy metals and pesticides in the breeding regions of these birds (Siberia or Eastern Europe) [88], which migrate out of Iran, might explain their lower heavy metal levels.
Correlations Among Heavy Metals
All four elements in this study were positively correlated with each other within organs (P < 0.001, r > 0.603), but none of the elements was positively correlated with the other elements across tissues. This indicates that the pathways and sources of entry for the elements studied are similar, but that the pathways of accumulation and the responses of different organs to these elements differ considerably. A positive correlation between levels of Zn and Cd in the body of birds may protect them from the effects of increasing Cd [38,48]. Positive correlations of Pb or Cd with other elements in tissues have been reported in birds from Korea [93,94]; Cory's shearwaters (Calonectris diomedea) and black-backed gulls (Larus fuscus) from England [95]; seabirds from Chaun, northeast Siberia, Russia [49]; and feral pigeons (Columba livia) from Korea [61].
Health Risk Thresholds
Pb is a non-essential element that can cause neurotoxicity, nephrotoxicity, and other health effects [96]. Both the Spanish legislation and the Australian National Health and Medical Research Council (ANHMRC) propose 2.0 µg/g ww as the maximum permitted level of Pb in food [97,98]. The median level of Pb in muscle tissue in six species of birds (all except Eurasian spoonbill, great crested grebe, and moorhen) was lower than the Spanish and ANHMRC guidelines. The median level of Pb in the liver and kidney of birds was higher than the levels allowed by the Spanish legislation and ANHMRC guidelines (except for the cormorant); the Eurasian spoonbill also had a higher level in the kidney than these guidelines (Fig. 3). The action level for human health is 1.7 µg/g ww Pb [99] (Fig. 3). In contrast to these maximum permitted levels for Pb, the Institute of Turkish Standards for Food (ITSF) and the European Commission (EC) set permissible threshold levels of 0.1 and 0.5 µg/g ww, respectively [100,101]. The median level in the flesh muscle, liver, and kidney of all birds in this study was clearly higher than these guidelines, and according to these two guidelines, the health of the people of this region is endangered by consuming the muscle and especially the liver of these birds. The maximum permitted Cd levels of the ANHMRC, USFDA, and Western Australian authorities are 2, 3.7, and 5.5 µg/g ww, respectively. In our study, none of the birds exceeded these levels for median Cd in muscle, but the levels of Cd in the liver of northern lapwing, moorhen, marsh sandpiper, and black-winged stilt were higher than the thresholds suggested by the ANHMRC, USFDA, and Western Australian authorities [97,98]. Cadmium levels in the kidney were higher than the ANHMRC threshold in all birds except the cormorant and the Eurasian spoonbill. The great crested grebe, with a Cd level of 4 µg/g ww, exceeded both the ANHMRC and USFDA guidance, and the remaining birds exceeded all three guidelines (ANHMRC, USFDA, and Western Australian authorities) (Fig. 3). In contrast to these maximum permitted levels, the Spanish legislation and EC thresholds are 1 and 0.05 µg/g ww, respectively [97,98]. In this study, the levels of Cd in the muscle, liver, and kidney of all birds were greater than these thresholds.

Table 3 The concentrations of trace metals (µg/g dw) in the brain, liver, kidney, and muscle of waterfowl from the Chahnimeh of Sistan and the effect of migration. *Significant difference between the concentrations of Zn, Pb, Cd, and Ni in the liver and kidney of long-distance migrants and local migrants (P < 0.05)
The permissible limit of Ni in food set by the US Food and Drug Administration is 10 µg/g ww [99]. According to this guideline, the levels of Ni in the muscle, liver, and kidney of all birds, except the cormorant, were higher than the permissible limit. The FAO permissible limit for Ni in food is 13 µg/g ww [102]; the levels of Ni in the muscle of birds were lower than this limit, but the levels in the liver of black-winged stilt, marsh sandpiper, moorhen, and northern lapwing (13 µg/g ww) and in the kidney of all birds (except cormorant and Eurasian spoonbill) were higher than the FAO guideline (Fig. 3). The Food and Nutrition Board (FNB) [103] set the permissible limit of Ni at 4 µg/g ww. Accordingly, the levels in the muscle, liver, and kidney of all birds in the present study were higher than this limit, and the consumption of the edible parts of all birds poses a threat to the health of people in this region.
The ANHMRC and WHO have set an acceptable limit of 1000 µg/g ww for Zn in food [104,105]. The levels of Zn in the muscle, liver, and kidney of all birds from the Zabol Chahnimeh reservoirs were below this toxic threshold [97,106] (Fig. 3).
Health Risk from Consuming Birds in Chahnimeh Reservoir
In our study, the HQ of each metal in muscle was < 1 for most birds, but the ∑HQ was > 1 in the moorhen (Table 4). For the liver, the HQ of Pb was > 1 in some birds but not in others, and the ∑HQ did not exceed 1 in any other bird (Table 4). For the edible parts, the HQ values were high, and in all birds except the cormorant the ∑HQ was between 1.24 and 4, which was due to the high HQ values in the kidneys and muscle of birds in this region (Fig. 4). The ∑HQ summed over the metals we examined was > 1, suggesting that people would experience health risks from the consumption of birds from the Chahnimeh reservoirs (Fig. 4); values of the ∑HQ index for total exposure above 1 indicate that the estimated exposure is a major health concern. Studies in the wetlands of northern Iran showed that the pochard is not suitable for consumption [107].
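As a concrete illustration of how the HQ and ∑HQ values in Table 4 and Fig. 4 are obtained, the short Python sketch below applies the THQ equation from the Methods to a single hypothetical set of muscle concentrations; the concentration values are illustrative only and are not measurements from this study.

```python
# Hypothetical worked example of the THQ and sum-HQ calculation described in the Methods.
# Concentration values are illustrative placeholders, not results from this study.
EF, ED, BW = 182.5, 72, 70                       # exposure frequency (d/yr), duration (yr), body weight (kg)
AT = EF * ED                                     # average time
MS = {"muscle": 95, "liver": 20, "kidney": 20}   # meal size (g/day)
RFD = {"Cd": 0.001, "Ni": 0.02, "Pb": 0.004, "Zn": 0.3}  # oral reference doses (µg/g/day = mg/kg/day)

def thq(c_wet, metal, organ):
    """Target hazard quotient for one metal in one organ (c_wet in µg/g wet weight)."""
    return (EF * ED * MS[organ] * c_wet * 1e-3) / (RFD[metal] * BW * AT)

conc_muscle = {"Cd": 0.5, "Ni": 3.0, "Pb": 1.0, "Zn": 10.0}  # hypothetical µg/g ww
hq = {metal: thq(c, metal, "muscle") for metal, c in conc_muscle.items()}
print({m: round(v, 2) for m, v in hq.items()}, "sum HQ =", round(sum(hq.values()), 2))
```

With these illustrative numbers, each individual HQ is below 1 but the ∑HQ exceeds 1, which is the same pattern reported above for the moorhen.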
Estimated Human Daily and Weekly Toxic Elements Intake from Birds
Different metals at different concentrations have different effects on organisms, and some metals can show toxic effects even at low concentrations [29]. In this study, we compared the estimated intakes with the provisional tolerable daily and weekly intakes (PTDI and PTWI) reported in the literature [108]. The PTWI according to the FAO/WHO (2011) guidelines is 35 and 7000 µg/kg body weight/week for nickel and zinc, equaling 2450 and 490,000 µg/week for a 70 kg person, respectively [109].
According to Table 4, none of the bird organs had levels of Zn, Pb, Cd, or Ni higher than the PTWI70. The EWI of Pb in the edible parts of birds B, MS, M, N, and E was higher than the PTWI, owing to the high EWI in the liver of these birds, while the level of lead in the muscle tissue of all birds was within the allowable range for the PTDI, PTWI, and PTWI70.
For Cd, the EWI in the edible parts was higher than the PTWI in all birds, and the EDI in the edible parts of birds G, B, MS, M, and N was higher than the PTDI, owing to the high EWI and EDI in the muscle and kidneys of these birds (Table 4).
For Ni, the EWI in the muscle and edible parts was higher than the PTWI in all birds, and the EWI in birds B, MS, M, and N was higher than the PTDI. The EDI in birds B, N, M, and MS was higher than the PTWI. Except for C and E, the EWI in the kidney of all birds was higher than the PTWI, and the EWI of Ni in the liver of birds B, MS, N, and M was also higher than the PTWI (Table 4). The EDI of Ni in the edible parts of B, MS, N, and M was higher than the PTWI, the level of Ni in the kidney of M and MS was higher than the PTWI, and the EWI in the edible parts of B, MS, M, and N was higher than the PTDI (Table 4). The results of this study show that people in this area should not consume the "edible" parts of the birds examined, and the use of wild birds as daily and weekly food is a serious threat to the inhabitants of this area. This is contrary to the results obtained for birds in the wetlands of northern Iran, where the EDI and EWI were within the permissible range and did not pose a threat to the people of the region [107,110].
Conclusion
In this study, the levels of Cd, Pb, Ni, and Zn were investigated in birds of the Chahnimeh of Sistan, Iran. The levels of all heavy metals (except nickel) in the brains of birds were higher than the levels in other tissues. Differences in metal levels as a function of feeding habitat and migration were observed only in the kidney and liver tissues of birds. The levels of heavy metals in some birds were higher than the effect level threshold; 56% of the liver and kidney samples of these birds were above the threat level. The results of this study show that birds in the Chahnimeh of Sistan pose a risk to humans from heavy metal contamination. The data (EDI, EWI, and HQ) show that human consumption of the edible tissues of these birds is not advisable: people of the region should avoid eating the edible tissues of wild birds and should particularly avoid eating kidney and liver tissue.

Fig. 4 Estimated potential health risks for Zn, Pb, Cd, and Ni via consumption of the liver, kidney, and muscles (edible parts). Hazard quotients (HQ) and ∑HQ = HQ Pb + HQ Cd + HQ Ni + HQ Zn
|
v3-fos-license
|
2020-08-23T12:18:39.854Z
|
2020-08-19T00:00:00.000
|
221244223
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ccr.20200403.20.pdf",
"pdf_hash": "a2456b3014a9b2476afddb96a2a1acc7b236293f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46086",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a2456b3014a9b2476afddb96a2a1acc7b236293f",
"year": 2020
}
|
pes2o/s2orc
|
Role of the Ghanaian Clinical Pharmacist in Providing Evidence-based Pharmacotherapy for Heart Failure Patients: The Way Forward
Background: Heart failure is extensively characterized as a disorder arising from a complex interaction between impaired ventricular performance and neurohormonal activation. In order to achieve optimal therapeutic outcomes, all heart failure patients must be managed by a multidisciplinary team of healthcare providers using evidence-based Pharmacotherapy. Purpose: The aim of this article is to assess the clinical role of Ghanaian Pharmacists in optimizing Pharmacotherapy for heart failure patients, based on internationally established clinical roles of Pharmacists. Methods: A literature search was conducted via Google Scholar using the search terms "Pharmacist," "Clinical Role," and "Heart Failure" to identify all studies published in English. The search revealed a total of 98 studies. Only studies that discussed the role of clinical Pharmacists specific to heart failure, or generally for patients with cardiovascular diseases, were included; all other studies were excluded. A total of 54 studies were used for data analysis. Clinical Pharmacists who are involved in the management of heart failure patients were interviewed to ascertain their roles as members of a multidisciplinary team, and their responses were documented. Conclusions: A multidisciplinary team approach, including a Clinical Pharmacist with expertise in cardiovascular therapeutics, is required in the management of heart failure patients in order to improve therapeutic outcomes. The current clinical role of the Ghanaian Pharmacist in the management of heart failure patients is substandard.
Introduction
Heart failure is a low cardiac output disease that can manifest either acutely or chronically and is characterized as a disorder arising from a complex interaction between impaired ventricular performance and neurohormonal activation [1]. Clinical Pharmacists possess advanced training, certification, and experience in a specific practice setting and/or disease, and are usually important members of a multidisciplinary team of healthcare providers [2][3][4][5]. Clinical Pharmacists caring for heart failure patients provide optimized, evidence-based Pharmacotherapy, which improves therapeutic outcomes [6]. Despite available evidence supporting the mortality benefits provided by some prescribed therapies for heart failure, it is well documented that these therapeutic options are not optimally prescribed in real clinical practice [7]. This creates an opportunity for Clinical Pharmacists to recommend drug therapy interventions that will maximize outcomes. Clinical Pharmacists' responsibilities in the management of heart failure patients are diverse and well documented in the literature [8]. Although each practice environment creates a particular opportunity for different types of clinical pharmacist interventions, a few important aspects of these services appear to be consistently performed across different practice settings [9]. The aim of this article is to assess the clinical role of the Ghanaian Pharmacist in optimizing Pharmacotherapy for heart failure patients, based on the internationally established clinical roles for Pharmacists.
Methods
A literature search was conducted via Google Scholar using the search terms "Pharmacist," "Clinical Role," and "Heart Failure" to identify all studies published in English. The search revealed a total of 98 studies. Only the studies that discussed the role of clinical Pharmacists specific to heart failure, or generally in patients with cardiovascular diseases, were included; all other studies were excluded. A total of 54 studies were used for data analysis. Clinical Pharmacists who are involved in the management of heart failure patients were interviewed to ascertain their roles as members of a multidisciplinary team, and their responses were documented.
Results
As is common for clinical Pharmacists in general, the aim of the heart failure clinical Pharmacist is to identify and resolve any drug therapy problems associated with anti-heart failure Pharmacotherapy. Table 1 summarizes the internationally established general drug therapy problem categories, with pertinent examples in heart failure patients [10][11][12][13]. Several studies have been conducted to assess the impact of clinical Pharmacist interventions on outcomes in the management of heart failure patients. These studies described the role of Pharmacists in the management of patients with heart failure and discussed the various services performed by Pharmacists across a diverse spectrum of practice settings, using several outcome measures. Table 2 summarizes some of the relevant trials evaluating Pharmacist interventions in heart failure [13][14][15][16][17][18][19][20][21][22][23][24][25][26]. For example, Jain et al. [15] evaluated an outpatient clinic service in a before-and-after comparison in which heart failure medication doses were titrated according to a protocol by a pharmacist or nurse, and reported improvements in drug prescribing rates, target dose achievement, and symptoms.
Internationally established roles of the clinical Pharmacist in the management of heart failure patients
Clinical Pharmacists Provide evidence-based Pharmaceutical care for patients with heart failure through numerous drug therapy interventions [27].
Medication reconciliation and education
Medication reconciliation is the process of comparing a patient's medication orders with all of the medications the patient has been taking, in order to avoid errors such as omissions, duplications, dosing errors, and interactions, especially during transitions of care [28]. Medication reconciliation and education constitute two major responsibilities of clinical Pharmacists that are now established to positively impact the clinical outcomes of patients with different diseases, including heart failure patients [29].
Complex medication regimens for heart failure, coupled with other comorbidities, increase the likelihood of medication reconciliation discrepancies. Clinical Pharmacists leading the medication reconciliation process perform medication reviews, communicate prescribing errors to the cardiologist, prepare written overviews of discharge medications, and communicate with community pharmacists and patients' primary care physicians about their medications in order to establish a continuum of care and thereby significantly reduce medication discrepancies [30][31][32][33][34][35][36][37][38][39][40].
Medication initiation, dosage titration, adjustment and monitoring.
Although several evidence-based clinical practice guidelines have established that treatment of heart failure patients with certain drug therapies reduces mortality [41][42][43][44][45], these therapies are suboptimally prescribed. Under these circumstances, Clinical Pharmacists can seize the opportunity to initiate therapies that have been omitted, recommend titration of improper dosages, recommend dosage adjustments for certain therapies, and implement therapeutic drug monitoring protocols for some therapies, all based on the functional integrity of certain vital organs [46]. This approach optimizes therapeutic outcomes.
Post-hospital discharge follow-up clinic or home visit
The involvement of clinical Pharmacists in the management of heart failure patients in the "outpatient" or "post-discharge" setting is perhaps the most researched and documented [47]. During post-hospital discharge follow-up visits, Clinical Pharmacists assess the patient's knowledge of prescribed medications and screen for possible interactions, adverse drug reactions, and ease of access to prescribed heart failure medications. This type of patient-centered care has the potential to reduce the rate of hospital readmission due to decompensated heart failure, promote patient compliance, and enhance medication safety and effectiveness.
Assessment of the current role of the Ghanaian Clinical Pharmacists in the management of heart failure patients
Currently there are a very limited number of Ghanaian clinical Pharmacists with expertise in cardiovascular therapeutics providing evidence-based Pharmacotherapy for heart failure patients. The depth of clinical services provided by Ghanaian Pharmacists to heart failure patients is considered substandard (20%) as compared to international standards (Table 3). This very limited clinical role will definitely not yield any clinically meaningful and measurable therapeutic outcomes.
Limitations
Since we limited our search to studies published in English, it is highly likely that we missed pertinent studies published in other languages that could potentially have added scientific value to the content of this manuscript. Also, the nature of this research did not allow us to assess the impact of the substandard role of the Ghanaian clinical Pharmacist on morbidity and mortality in heart failure patients. Further research is required in this area.
Conclusion
A multidisciplinary team approach, including a Clinical Pharmacist with expertise in cardiovascular therapeutics, is required in the management of heart failure patients in order to improve therapeutic outcomes. The current clinical role of the Ghanaian Pharmacist in the management of heart failure patients is substandard.
Source of Funding
Not applicable.
Data Availability Statement
Not applicable.
Authors Contributions
The research idea was conceived by MMDM and accepted by all authors after the topic was put through a re-wording analysis. BBA and KA conducted a comprehensive literature search, and the selected studies were reviewed, synthesized, and analyzed by all authors. The manuscript was written by MMDM and revised and approved by all authors.
Declaration of Conflict of Interest
The authors declare that they have no competing interests.
|
v3-fos-license
|
2018-04-03T00:23:08.597Z
|
1997-08-08T00:00:00.000
|
11319676
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/272/32/20146.full.pdf",
"pdf_hash": "dbb13a9faef84d791afa82938847a412417fe3a1",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46087",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "ed87b8a51e3966dc95b0a53d1291ab77cad582a0",
"year": 1997
}
|
pes2o/s2orc
|
Biosynthesis of Archaeosine, a Novel Derivative of 7-Deazaguanosine Specific to Archaeal tRNA, Proceeds via a Pathway Involving Base Replacement on the tRNA Polynucleotide Chain*
Archaeosine is a novel derivative of 7-deazaguanosine found in transfer RNAs of most organisms exclusively in the archaeal phylogenetic lineage and is present in the D-loop at position 15. We show that this modification is formed by a posttranscriptional base replacement reaction, catalyzed by a new tRNA-guanine transglycosylase (TGT), which has been isolated fromHaloferax volcanii and purified nearly to homogeneity. The molecular weight of the enzyme was estimated to be 78 kDa by SDS-gel electrophoresis. The enzyme can insert free 7-cyano-7-deazaguanine (preQ0 base) in vitro at position 15 of anH. volcanii tRNA T7 transcript, replacing the guanine originally located at that position without breakage of the phosphodiester backbone. Since archaeosine base and 7-aminomethyl-7-deazaguanine (preQ1 base) were not incorporated into tRNA by this enzyme, preQ0 base appears to be the actual substrate for the TGT of H. volcanii, a conclusion supported by characterization of preQ0 base in an acid-soluble extract of H. volcanii cells. Thus, this novel TGT in H. volcanii is a key enzyme for the biosynthetic pathway leading to archaeosine in archaeal tRNAs.
A variety of modified nucleosides has been found in tRNA (1,2), but their functions and, in particular, their biosynthetic pathways are still largely unknown (3). Many modified nucleosides are highly conserved with respect to their sequence locations in tRNA (4), and some are characteristic of the evolutionary origin (2,5), namely, archaea, bacteria, or eukarya (6). Perhaps the most phylogenetically specific nucleoside in tRNA is archaeosine, which occurs only in archaeal tRNA at position 15, a site that is not modified in tRNAs from the other two primary domains (7). Archaeosine was first discovered by Kilpatrick and Walker (8) during sequencing of tRNA from Thermoplasma acidophilum, and it was subsequently shown to be present in many archaeal species (9); in the most extensively studied archaeal tRNA, from Haloferax volcanii, archaeosine occurs in tRNAs specifying more than 15 amino acids (10). Subsequently, the structure of archaeosine was determined to be the non-purine, non-pyrimidine nucleoside 7-formamidino-7-deazaguanosine ( Fig. 1A) (11).
The only other known examples of tRNA nucleosides with 7-deazaguanosine structures are the members of the Q 1 nucleoside (12) (Fig. 1E) family (13), which includes precursors in its biosynthesis, such as 7-cyano-7-deazaguanine (preQ 0 ; Fig. 1D) (14), 7-aminomethyl-7-deazaguanine (preQ 1 ; Fig. 1C) (15), and oQ (16) from bacterial tRNAs, and mannosyl and galactosyl derivatives of Q (17,18) from mammalian tRNAs. In contrast to archaeosine, members of the Q nucleoside family are located at the first position of the anticodon (position 34) in bacterial and eukaryotic tRNAs that are specific for only four amino acids (Tyr, His, Asp, and Asn) (19). The key enzyme in the biosynthesis of the Q nucleoside in tRNA is tRNA-guanine transglycosylase (TGT; EC 2.4.2.29), which catalyzes a base-exchange reaction by cleavage of the N-C glycosidic bond at position 34 (20). In bacteria, TGT catalyzes the exchange of guanine at position 34 in tRNA with either guanine base, preQ 1 base, or preQ 0 base (20,21). preQ 1 base is presumed to be synthesized de novo from GTP (1) and was identified as the physiological substrate of Escherichia coli TGT (21). After incorporation of preQ 1 into tRNA, it is further modified to oQ by transfer of the ribosyl moiety from S-adenosylmethionine (22), then finally to yield Q in the polynucleotide chain (23). In contrast, in eukarya, TGT can incorporate fully modified Q base into the first position of the anticodon by a base-replacement reaction (24,25). Animals cannot synthesize Q-related compounds de novo and must obtain Q base as a nutrient from their diet or gut flora (26,27).
Here we report the isolation of a new type of TGT from H. volcanii; it catalyzes the incorporation of preQ 0 base into position 15 of tRNA, replacing guanine originally located at that site. Further, we have demonstrated that free preQ 0 base is present in H. volcanii cells, implying that TGT utilizes preQ 0 as a substrate leading to the biosynthesis of archaeosine in archaeal tRNAs.
Assay of Guanine Exchange Reaction-Exchange between guanine and various 7-deazaguanine analogues, catalyzed by TGT, was assayed as described previously (20) except that the final ionic condition of the reaction mixture was 1.5 M KCl and 1.5 M NaCl. The 7-deazaguanines were synthesized as described previously: preQ 0 (28), preQ 1 (29), and archaeosine base (30).
Purification of H. volcanii tRNA-Guanine Transglycosylase-Frozen H. volcanii cells (100 g) were suspended in 200 ml of buffer A (50 mM Hepes (pH 7.5), 10% glycerol, 1.0 mM dithiothreitol, and 0.5 mM phenylmethylsulfonyl fluoride) plus DNase I (2.5 µg/ml), and were broken by sonication. The S-100 fraction was obtained by centrifugation at 105,000 × g for 1 h, dialyzed against buffer A, and then adsorbed onto a DEAE-Sepharose FF column (2.5 × 20 cm) (Pharmacia Biotech Inc.), which was eluted by a linear gradient of NaCl from 0.02 to 0.5 M in buffer A. The eluate containing the active fraction was brought to 40% ammonium sulfate and then applied to a Butyl-Sepharose FF column (2.5 × 20 cm) (Pharmacia), which was eluted with a linear gradient of ammonium sulfate from 40 to 0% in buffer A. The active fraction was next applied to a Butyl-Sepharose 4B column (1.5 × 15 cm) (Pharmacia) and eluted as described above for the Butyl-Sepharose FF column. The active fraction was then applied to a Superdex 200 column (1.6 cm × 60 cm) (Pharmacia), and then eluted with buffer A containing 300 mM NaCl. Finally, the TGT fraction was applied to a Mono Q column (0.50 × 5 cm) (Pharmacia) and eluted with a linear gradient of NaCl from 300 mM to 1 M. This TGT fraction was stable for at least 1 month when stored at 4°C. The activity of the enzyme was monitored by incorporation of [8-14 C]guanine into unfractionated E. coli tRNA (20). Amino acid sequences of peptide fragments generated by digestion with lysylpeptidase were determined as described previously (31).
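The recovery and purification factors reported for such a scheme (Table I) follow from simple bookkeeping of total activity and total protein at each step; the sketch below illustrates that arithmetic with hypothetical numbers that are not the values of Table I.

```python
# Hypothetical illustration of how purification-table entries are derived.
# Activities and protein amounts are placeholders, not the values of Table I.
steps = {
    # step name: (total activity in units, total protein in mg)
    "S-100":          (1000.0, 5000.0),
    "DEAE-Sepharose": ( 800.0,  400.0),
    "Mono Q":         ( 200.0,    2.0),
}

start_activity, start_protein = steps["S-100"]
start_specific_activity = start_activity / start_protein

for name, (activity, protein) in steps.items():
    specific_activity = activity / protein                    # units/mg
    fold_purification = specific_activity / start_specific_activity
    recovery = 100.0 * activity / start_activity              # % yield
    print(f"{name:15s} SA = {specific_activity:8.3f} units/mg  "
          f"fold = {fold_purification:7.1f}  yield = {recovery:5.1f}%")
```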
Construction of a Plasmid Clone Containing the Gene for H. volcanii tRNA Lys (CUU) and Preparation of its T7 Transcript-Two synthetic DNA oligomers, namely Lys-FOR (5′-TAATACGACTCACTATAGGGCCGGTAGCTCAGTTAGGCAGAGCGTCTGACTCTT-3′) and Lys-REV (5′-TGGTGGGCCGGACGCGATTTGAACACGCGACCGTCTGATTAAGAGTCAGACGCTCTGCCTA-3′), were annealed via the complementary region, and both of the 3′ ends were extended by Tth DNA polymerase (Toyobo). After extension, two synthetic DNA primers, namely T7 (5′-TAATACGACTCACTATA-3′) and Halo-Lys3′ (5′-CCTGGTGGGCCGGACGCGATTT-3′), were added, and a polymerase chain reaction was performed to yield the gene for H. volcanii tRNA Lys (CUU) containing the promoter sequence for T7 RNA polymerase (Takara). We cloned the product of the polymerase chain reaction in pUC19; digestion of plasmid DNA with MvaI generated a CCA end for the tRNA gene, which was transcribed in vitro using T7 RNA polymerase (32).
Sequencing of the T7 Transcript into Which preQ 0 Base Was Incorporated-For preparation of the T7 transcript into which preQ 0 base was incorporated, 200 µl of a reaction mixture containing 300 pmol of T7 transcript, 20 µl of TGT (15 units) (20), and 5 nmol of preQ 0 base (under the ionic conditions of the guanine exchange reaction; see above) were incubated at 37°C for 1.5 h. The sequence of the RNA was determined as described elsewhere (33,34).
Characterization of Modified Nucleotides by Post-labeling-A reaction mixture containing T7 transcript and TGT in the presence of preQ 0 base or an aliquot of acid-soluble extract of H. volcanii was incubated at 37°C for 1.5 h. After digestion of the T7 transcript by RNase T2, the preQ 0 nucleotide was analyzed by post-labeling using T4 polynucleotide kinase and [γ-32 P]ATP (21,35). The enzymes used (RNase T2, T4 polynucleotide kinase, and yeast hexokinase) were inactivated by phenol extraction instead of boiling. After incubation with nuclease P1, the digestion product was applied to a cellulose thin layer plate (20 × 20 cm) and was subjected to two-dimensional chromatography (15).
Preparation of an Acid Extract of H. volcanii Cells for Detection of preQ 0 Base-H. volcanii cells were suspended in a solution of 0.2 M formic acid and shaken for 2 h at 4°C. After centrifugation, the supernatant was filtered through a Millipore filter. After neutralization with NaOH, soluble substances were extracted with tetrahydrofuran. The organic phase was evaporated, and the material was used for the identification of preQ 0 base.
RESULTS
Purification of tRNA-Guanine Transglycosylase from H. volcanii-E. coli TGT can be assayed by its ability to incorporate [8-14 C]guanine into Q-unmodified tRNAs (typically unfractionated yeast tRNA, which constitutively lacks Q, is used) by replacing the guanine base located at the first position of the anticodon (20). By analogy with E. coli TGT, we searched for such an enzymatic activity in a crude extract of H. volcanii using E. coli tRNA as a substrate (see below). H. volcanii TGT was purified to near homogeneity following successive column chromatographies. Table I shows the recovery and the purification factor at each step, and Fig. 2 shows the pattern of SDS-polyacrylamide gel electrophoresis at each step. The molecular mass of the enzyme was deduced to be 78 kDa from a profile of the gel (Fig. 2, lane 6). Like E. coli and eukaryotic TGT, H. volcanii TGT does not require ATP for the base replacement reaction. High salt concentration (approximately 2.4 M) was required for the enzyme activity. To identify the site of guanine incorporation, we constructed a plasmid containing the gene for H. volcanii tRNA Lys (CUU) and that of the T7 promoter upstream of the gene. Its T7 transcript (Fig. 3A) was found to be a good substrate for the enzyme (Fig. 3B). The labeled T7 transcript was isolated, and the site at which [8-14 C]guanine had been incorporated was determined by RNA sequencing to be position 15, the exclusive location of archaeosine nucleotide in archaeal tRNA. This result suggested that the enzymatic activity is involved in the biosynthesis of archaeosine nucleotide in tRNA. Unfractionated tRNA from E. coli was also found to be a good TGT substrate, whereas unfractionated H. volcanii, yeast, and bovine tRNAs were not (Fig. 3B), although we did not quantitatively measure the efficiency of unfractionated E. coli tRNA and of the T7 Lys transcript as substrates. These results further suggest that position 15 of H. volcanii tRNAs is fully modified to archaeosine nucleotide.
preQ 0 Base May Be the Physiological Substrate for H. volcanii tRNA-Guanine Transglycosylase-The ability of various bases to serve as substrates for incorporation into tRNA by H. volcanii TGT was examined using the procedure of Okada et al. (21). First, the T7 transcript was labeled with [8-14 C]guanine by incubation with TGT. To a reaction mixture that contained this 8-14 C-labeled tRNA and the TGT enzyme, we added various 7-deazaguanine bases and monitored the decrease in acid-insoluble radioactivity of the tRNA due to release of [8-14 C]guanine by replacement with the added base (Fig. 4). Unexpectedly, neither archaeosine base itself (Fig. 1B) nor preQ 1 base (Fig. 1C), which is the physiological substrate for E. coli TGT (21), was incorporated into the tRNA transcript. Among 7-deazaguanine derivatives, only preQ 0 base (Fig. 1D) was efficiently incorporated. We attribute the small amount of apparent archaeosine base incorporation into tRNA to preQ 0 base, and not archaeosine base, since approximately 20% of archaeosine base is chemically converted to preQ 0 base after incubation of the reaction mixture under the conditions used. Furthermore, the nucleotide at position 15 of the tRNA product after incubation with archaeosine base was found to be preQ 0 nucleotide by RNA sequencing (see "Discussion").

(Legend to Fig. 3, continued) An aliquot (100 µl) was taken at the times specified, and the radioactivity of its acid-insoluble precipitate was measured. A control experiment was performed without a tRNA substrate.

FIG. 4. Substrate specificity for bases monitored by release of [8-14 C]guanine from labeled tRNA Lys (CUU) by H. volcanii tRNA-guanine transglycosylase. The reaction mixture contained 300 pmol of [8-14 C]guanine-labeled T7 transcript and the enzyme (15 units), with or without 6 nmol of each base, in a final volume of 1,500 µl. After incubation at 37°C, an aliquot of 350 µl was taken at the times specified and the radioactivity of the acid-insoluble precipitate was measured (symbols: control; guanine; preQ 0 base; preQ 1 base; archaeosine base).
preQ 0 Base Is Incorporated at Position 15 of tRNA-To investigate whether preQ 0 base is directly incorporated into tRNA, as well as whether incorporation occurs at position 15 in the D-loop, the sequence of the D-loop region in the T7 transcript after incubation with preQ 0 base was determined by the post-labeling method (33,34). The RNA was subjected to partial digestion with alkali, and the 5′ ends of the resultant RNA fragments were labeled by using polynucleotide kinase and [γ-32 P]ATP, followed by separation by electrophoresis in a polyacrylamide gel (Fig. 5A). RNA was extracted from each band in the gel and digested with nuclease P1. The resultant 32 P-labeled nucleotide 5′-monophosphate was analyzed by thin-layer chromatography. Fig. 5B shows clearly that preQ 0 base was incorporated at position 15 of the tRNA, and also shows that more than 90% of the nucleotide at position 15 is a preQ 0 nucleotide, indicating that the base-replacement reaction by H. volcanii TGT was efficient under the present conditions.
Evidence for the Occurrence of Free preQ 0 Base in H. volcanii Cells-If preQ 0 base is the physiological substrate for H. volcanii TGT, free preQ 0 base could be present in H. volcanii cells. To test this hypothesis, we prepared an acid-soluble extract of H. volcanii and incubated an aliquot of the extract with the T7 transcript of H. volcanii tRNA Lys (CUU) and H. volcanii TGT under the same conditions described in Fig. 4. After the reaction, we analyzed modified nucleotides in the treated tRNA using the post-labeling method (21,35). As shown in Fig. 6, preQ 0 5′-monophosphate was detected in the tRNA transcript following incubation in the presence of the acid-soluble extract (Fig. 6B), but it was not detected following incubation with the enzyme alone (Fig. 6C). Further, similar acid treatment of isolated H. volcanii tRNA did not release preQ 0 , by the criterion of failure of the T7 transcript to incorporate preQ 0 when incubated with the extract and TGT. Although archaeosine base is unstable under conditions of high temperature and high salt (see above), archaeosine appears stable when present as a nucleotide in intact tRNA (10). These results suggest that free preQ 0 base is present in H. volcanii cells and that it may serve as the physiological substrate for H. volcanii TGT (see "Discussion"; Fig. 7A).
The normal growth medium for H. volcanii (10) contains Tryptone, which, as a whole meat extract, is a source of Q nucleoside and, therefore, a potential source of preQ 0 . To rule out the possibility that H. volcanii may not synthesize archaeosine de novo, tRNA was isolated from cells grown in a chemically defined (Q-free) medium (36) and analyzed for archaeosine; the archaeosine content in tRNA from cells grown in the normal growth medium and in the chemically defined growth medium was identical.

H. volcanii and E. coli tRNA-Guanine Transglycosylases Are Evolutionarily Related-Recently, the complete genome sequence of the methanogenic archaeon Methanococcus jannaschii has been reported (37). Among the 1738 protein-coding genes predicted is a putative M. jannaschii TGT gene (MJ#0436) that exhibits 30% identity to E. coli TGT (38). We determined the amino acid sequences of three peptide fragments, generated from purified H. volcanii TGT by digestion with lysylpeptidase, and compared them with the sequence of the putative M. jannaschii TGT. As shown in Fig. 8, fragments 1 and 2 from H. volcanii TGT appear to be closely related to the M. jannaschii sequence, with identities of 53.5 and 38.5%, respectively, although the C-terminal portion of fragment 3 diverges from that in M. jannaschii. These results suggest that the H. volcanii tRNA-guanine transglycosylase characterized here is the counterpart of the putative TGT whose sequence is present in M. jannaschii (37).
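For reference, percent identity between two aligned peptide fragments is simply the fraction of non-gap aligned positions with identical residues; the short sketch below illustrates the calculation with placeholder sequences, not the actual fragments shown in Fig. 8.

```python
# Percent identity between two aligned peptide fragments.
# The sequences below are hypothetical placeholders, not the fragments of Fig. 8.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Identity over aligned positions, ignoring gap characters ('-')."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

fragment_hvo = "DELKAGVTRLF-GDKPA"   # placeholder H. volcanii fragment
fragment_mja = "DELRAGITKLFEGSKPA"   # placeholder M. jannaschii region
print(f"identity: {percent_identity(fragment_hvo, fragment_mja):.1f}%")
```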
DISCUSSION

tRNA-Guanine Transglycosylase in H. volcanii Has Different Substrate Specificities from That of E. coli-It is well established that TGT is involved in biosynthesis of Q nucleotide in E. coli (Fig. 1E) by exchange of guanine at position 34 by preQ 1 base in tRNAs specific for Tyr, Asp, Asn, and His ((20, 21); see Introduction). The resultant preQ 1 nucleotide in tRNA is then modified to the epoxide oQ by the S-adenosylmethionine-requiring enzyme QueA (22), and finally, oQ is converted to Q by an unknown vitamin B 12 -dependent enzyme (23). These processes are schematically represented in Fig. 7B. In the present study, we provide evidence that, in contrast with the primary substrate of bacterial TGT (preQ 1 ), preQ 0 base is the normal substrate for H. volcanii TGT. Presumably, the incorporated preQ 0 base then is further converted to archaeosine by (net) addition of ammonia, at the polynucleotide level (Fig. 7A). Therefore, both E. coli and H. volcanii TGTs catalyze a very similar reaction, namely, the exchange of guanine base in a polynucleotide chain with a free 7-deazaguanine derivative; however, their actual substrates (in terms of base, tRNAs, and the site of replacement in tRNA) are different.
Functional Implications of 7-Deazaguanosine Nucleosides-Archaeosine is present at position 15 (D-loop) in most archaeal tRNAs (7), whereas Q and its derivatives are present at position 34 (first position of the anticodon) of four specific tRNAs in bacteria and eukarya (19) (see Introduction). Accordingly, these conserved differences in structure and sequence location suggest differences in function. Q has been proposed to be involved in codon recognition (39) and has been shown to prevent stop codon readthrough in tobacco mosaic virus RNA in a codon context-dependent manner (40). A correlation between the presence of Q-undermodified tRNAs and frameshifts of some retroviruses including human immunodeficiency virus was proposed (41). Other functional implications of Q, such as in virulence of Shigella (42), signal transduction (43), ubiquitindependent proteolytic pathway (44), and tumor differentiation (45)(46)(47)(48), have also been suggested.
The functional role of archaeosine has not been established, but has been proposed to involve enhanced stabilization of tRNA tertiary structure as a consequence of the unique charged imidino side chain (11). Earlier work has demonstrated that hydrogen bonding interactions between G-15 in the D-loop and C-48 in the T-loop, stabilized by stacking with purine-59, constitute a generally conserved mechanism for stabilization of the universal folded L-shape of tRNA (49,50). These structural features (G-15, C-48, purine-59) are basically met by nearly all reported archaeosine-containing tRNA sequences (7), to which would be added the strong potential for electrostatic interactions between phosphate and the "arginine fork" imidino side chain of archaeosine.
Interestingly, precursor bases used as substrates for both bacterial and archaeal TGTs participate in analogous biosynthetic pathways. Free preQ 1 base has been isolated from E. coli (21), and here we provide evidence for the presence of free preQ 0 base in H. volcanii. We believe that this free preQ 0 base is likely to be the precursor exchanged into tRNA in the normal biosynthetic pathway leading to archaeosine, although, at present, we cannot strictly exclude the possibility that free preQ 0 base detected is instead derived from archaeosine in tRNA. In E. coli, preQ 1 base is synthesized from GTP (13), possibly via preQ 0 (51), although there is presently no direct evidence for any precursor-product relationship between these two 7-deazaguanine bases. Presumably a similar pathway is present in H. volcanii for biosynthesis of preQ 0 base from GTP. The key substrates following base replacement at the tRNA level, then, are preQ 1 nucleotide (leading to queuosine in E. coli) and preQ 0 nucleotide (leading to archaeosine in H. volcanii). It is noted that preQ 0 nucleoside is present in tRNA of certain mutants of E. coli (51), the meaning of which has not yet been rationalized (14,21). The occurrence of these 7-deazaguanine precursor bases in both primary phylogenetic domains, archaea and bacteria, prompts us to speculate a more general role for them in cellular functions. In this respect, more detailed characterization of free preQ 0 base (and possibly free preQ 1 base) in H. volcanii cells is required.
Structural Requirements of Bacterial and Archaeal TGT Enzymes for tRNA Substrates and Their Evolutionary Implications-tRNA structural requirements for enzyme recognition remain to be identified. Preliminary experiments 2 showed that an 18-nucleotide minihelix containing the D-loop and D-stem of H. volcanii tRNA Lys (CUU) does not serve as a substrate for H. volcanii TGT, implying the existence of higher-order recognition elements for the archaeal TGT. By contrast, bacterial TGT recognizes the anticodon loop sequence U33-G34-U35, which is the minimum requirement for recognition by the enzyme, and minihelices containing this triplet sequence are good substrates for the enzyme (52,53). By x-ray crystallography, the tRNA-guanine transglycosylase from Zymomonas mobilis has been determined to be an irregular (β/α)8 barrel with a tightly attached C-terminal zinc-containing subdomain (54). Furthermore, the structure of Z. mobilis TGT in complex with preQ1 suggests a binding mode for tRNA in which the phosphate backbone interacts with the zinc subdomain and the U33-G34-U35 sequence is recognized by the barrel. The zinc binding motif (CXCX2CX25H) is highly conserved in the prokaryotic TGTs known so far (52), and the homologous region in M. jannaschii is CXCX2CX22H. These results demonstrate a structural and functional conservation of the archaeal and bacterial/eukaryotic TGT binding mode with tRNA, despite the archaeal modification of the D-loop and the bacterial/eukaryotic modification of the anticodon loop. The utilization of 7-deazaguanine derivatives for tRNA processing by interrelated TGT enzymes suggests an evolutionarily fundamental role for 7-deazaguanine.
In contrast to bacterial TGT (52, 55), productive recognition of tRNA by eukaryotic TGT requires not only the U33-G34-U35 sequence of the anticodon loop but also a correctly folded tRNA architecture (56). In addition, eukaryotic TGT is believed to be a heterodimer, although this is not conclusive at present (44,57). More detailed examination of the substrate recognition properties of TGTs from archaea, bacteria, and eukaryotes will help elucidate the domain structures of these proteins, including their tRNA binding sites, and further define their evolutionary relationship.
|
v3-fos-license
|
2022-07-09T15:22:47.955Z
|
2022-07-07T00:00:00.000
|
250363194
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3417/12/14/6873/pdf?version=1657277636",
"pdf_hash": "29c392b8ad3c21012e62e365fc4ec0bbbee0e0cc",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46088",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "d6f236802ee3468d87f6af2d6f6b3e65493e800c",
"year": 2022
}
|
pes2o/s2orc
|
Fast 3D Analytical Affine Transformation for Polygon-Based Computer-Generated Holograms
We present a fast 3D analytical affine transformation (F3DAAT) method to obtain polygon-based computer-generated holograms (CGHs). CGHs consisting of tens of thousands of triangles from 3D objects are obtained by this method. We have developed a revised method based on previous 3D affine transformation methods. In order to improve computational efficiency, we have derived and analyzed our proposed affine transformation matrix, and we show that the computational efficiency is further increased compared with previous affine methods. We have also added flat shading to improve the reconstructed image quality. A 3D object captured by a 3D camera is reconstructed holographically in numerical and optical experiments.
Introduction
With the rapid development of display and computer technology, 3D display technology has made great progress [1,2]. Current 3D display technologies include binocular parallax displays [3], volumetric 3D displays [4], light field displays [5], and holographic 3D displays. Holographic 3D display is a technology that reconstructs the 3D wavefront by using light-wave diffraction based on the principles of wave optics. Because the wavefront information of the 3D scene is reconstructed, it can provide all the depth cues required by the human eye [6-8]. The wavefront recording process of holography can be simulated by a computer to generate so-called computer-generated holograms (CGHs). The use of CGHs avoids the complicated optical setups of optical holography [9,10]: as long as a mathematical description of the 3D scene is available, it can be transformed into the wavefront distribution in the hologram plane by an algorithm. Therefore, the algorithm for calculating CGHs is key, because it directly determines the computational efficiency of the hologram and the quality of the reconstructed image [11,12].
Indeed, the mathematical description of a 3D scene can be expressed in many forms. According to the geometric information of its surface, the scene can be discretely represented as a collection of point sources, giving the so-called point-based method [13]. The wavefront of the whole 3D object is obtained by calculating and then summing the light field distribution of the spherical wave emitted by each point source in the hologram plane. However, the required number of point sources is usually as high as millions, leading to computational bottlenecks. By using look-up tables (LUTs) [14-16] and graphics processing units (GPUs) [17,18], the calculations can be accelerated greatly. Another popular CGH calculation method uses polygons (usually represented by triangle meshes) to approximate the surface of a 3D scene, giving the polygon-based method [19-32]. Compared with the point-based method, the number of polygons can be greatly reduced, and the mature theory of computer graphics can be used to make the reconstructed scene more realistic. There are other ways to decompose 3D objects, such as the layer-based [33,34] and line-segment-based [35] methods.
In polygon-based methods, the propagation of the light field from each polygon can be calculated using angular spectrum theory if the polygon is parallel to the hologram plane [6]. The essence of polygon-based methods is therefore the diffraction calculation between non-parallel planes. Polygon-based methods can be divided into two categories. The first is the traditional method based on sampling [8]. This method needs to sample each tilted polygon that is not parallel to the hologram in both the spatial and frequency domains. Because of the polygon rotation, sampling in the spatial and frequency domains creates uneven sampling intervals between the two domains, and time-consuming interpolation is needed to alleviate the sampling distortion. The second is the analytical method [8]. Unlike the traditional method, sampling in the spatial domain to obtain the spectrum of a tilted polygon is not needed. Instead, the spectrum of a tilted polygon can be expressed in terms of the spectrum of a unit triangle (also called the primitive triangle), which is known analytically, through a 2D affine transform [23]. Therefore, sampling is needed only in the frequency domain for each polygon, bypassing the interpolation required by the traditional method. Compared with the traditional method, the analytical method effectively reduces the amount of calculation. However, shading and texture mapping are not as easily included in analytical methods as in traditional methods [8].
Pan et al. [25,26] developed an analytical method utilizing a 3D affine transformation. They defined a pseudo-inverse matrix to map the spatial triangle to a three-dimensional right primitive triangle (which has an analytical spectrum expression). The major issue with the technique is the inaccuracy caused by the inversion of the pseudo matrix. Zhang et al. [28,29] derived a correct analytical expression in the context of the 2D affine transformation [23] and proposed a method to achieve the 3D transformation from an arbitrary triangle to a primitive triangle through a 3D rotation of the arbitrary triangle followed by a 2D affine matrix [29]. Although this method avoids the time-consuming calculation of the pseudo-inverse matrix, the process is rather complex. Zhang et al. [8] also proposed a method called the fast 3D affine transformation (F3DAT) method, which translates the primitive triangle to avoid the use of the pseudo-inverse matrix and improve the computational efficiency.
The wave-optics-based approach can provide accurate depth cues; however, rendering view-dependent properties requires additional calculations. Rendering technology from computer graphics makes the reconstructed 3D scene more realistic [36-38]. Matsushima et al. have discussed methods of shading and texturing [20-22] and successfully created a large-scale full-color CGH [39]. Subsequently, many methods have been proposed to improve the quality of reconstructed images through texturing [36,37], shading [22,38], and resolution improvement [40]. Additionally, the use of the silhouette method for hidden surface removal has been described [41].
In this study, based on the three-dimensional affine theory [25,29], we present a fast 3D analytical affine transformation (F3DAAT) method to obtain a full-analytical spectrum of a spatial triangle. We obtain the analytical expression of a 3D affine matrix algebraically, and the spectrum of tilted triangles can be obtained directly. Compared with previous methods, we show improved computational efficiency. In addition, in order to improve the image quality, we add flat shading to make the reconstructed image more realistic. We also demonstrate reconstructed 3D objects composed of tens of thousands of polygons numerically as well as the use of a spatial light modulator (SLM) for optical reconstruction.
In Section 2, we briefly introduce the basic principle of the polygon-based method. In Section 3, we present the theory of F3DAAT. In Section 4, we demonstrate the reconstruction of 3D objects numerically and optically, compare the computational efficiency with that of previous methods, and illustrate the results of adding flat shading.

Conventional Polygon-Based Method

Figure 1 shows the basic principle of the polygon-based method. The surface of a 3D object is discretized into many polygons (usually triangles). The hologram is assumed to lie in the plane z = 0. The total complex field distribution on the hologram plane, U(x, y, 0), can be expressed as the superposition of the polygon fields u_i(x, y) from each polygon:

U(x, y, 0) = \sum_{i=1}^{N} u_i(x, y),    (1)
where N is the number of polygons and u_i(x, y) is the complex field on the hologram plane from the i-th polygon. The complex field distribution of each triangle on the hologram can be expressed as the inverse Fourier transform of its spectrum:

u_i(x, y) = \mathcal{F}^{-1}\{ G_i(f_x, f_y) \},    (2)

where \mathcal{F}^{-1}\{\cdot\} represents the inverse Fourier transform and G_i(f_x, f_y) represents the spectrum of the i-th polygon on the hologram. Therefore, the key to the polygon-based method is how to obtain the spectrum of each triangle on the hologram plane. As shown in Figure 2, a polygon in the source coordinate system (x_s, y_s, z_s) is not necessarily parallel to the hologram plane (x, y); one needs to rotate the polygon into the parallel local coordinate system (x_p, y_p, z_p), which is parallel to the hologram plane, in order to calculate the diffracted field toward the hologram through standard diffraction theory [6]. The spatial frequencies (f_{xs}, f_{ys}) can be expressed as

f_{xs} = a_1 f_{xp} + a_2 f_{yp} + a_3 f_{zp},  f_{ys} = a_4 f_{xp} + a_5 f_{yp} + a_6 f_{zp},    (3a, b)

where a_1, ..., a_6 are elements of the rotation matrix, (f_{xs}, f_{ys}) (corresponding to the source-frequency variables in Equation (10a,b) of Ref. [8]) are the spatial frequencies corresponding to the source coordinates (x_s, y_s, z_s), and (f_{xp}, f_{yp}, f_{zp}) (corresponding to the local-frequency variables in Equation (10a,b) of Ref. [8]) are the spatial frequencies corresponding to the parallel local coordinates (x_p, y_p, z_p). Taking the differential of Equation (3a), we have

\Delta f_{xs} = a_1 \Delta f_{xp} + a_2 \Delta f_{yp} + a_3 \Delta f_{zp},    (4)

where f_{zp} = \sqrt{1/\lambda^2 - f_{xp}^2 - f_{yp}^2}, with \lambda being the wavelength of the light source. Since the spatial frequencies (f_{xp}, f_{yp}) are uniformly distributed, \Delta f_{xp} and \Delta f_{yp} are constant. For simplicity, letting \Delta f_{xp} = 1 and \Delta f_{yp} = 1, Equation (4) can then be rewritten as

\Delta f_{xs} = a_1 + a_2 + a_3 \Delta f_{zp}.    (5)

Figure 2 (caption). Coordinate systems of the traditional polygon-based method: source coordinate system (x_s, y_s, z_s), parallel local coordinate system (x_p, y_p, z_p), and hologram plane (x, y). Adapted from Zhang et al. [8].
Since \Delta f_{zp} (the derivative of f_{zp}) is not constant, Equation (5) indicates that for an arbitrary set of spatial frequencies (f_{xp}, f_{yp}) corresponding to the parallel local coordinates, the spatial frequencies (f_{xs}, f_{ys}) obtained after rotation are highly nonlinear. Because uniform sampling is necessary for the FFT to work correctly in the traditional method, an interpolation process is required for the rotated spatial frequencies (f_{xs}, f_{ys}). This procedure adds substantially to the computational time [30]. Compared with the traditional method, the analytical method avoids the use of the FFT to obtain the spectrum of the polygon in the source coordinates, so the interpolation process is not required. The analytical method obtains the analytical expression of the tilted-triangle spectrum directly by using an affine transformation together with a given spectrum expression of a primitive triangle. In this paper, we solve for the affine matrix analytically, which avoids the most time-consuming steps in previous methods [8,25,29] and further improves the computational efficiency over those methods.
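To make the sampling problem concrete, the short sketch below (an illustration under assumed parameters, not code from this paper or from Ref. [8]) maps a uniform grid of local spatial frequencies through an arbitrary rotation and prints the resulting sample spacing, which is no longer uniform and would therefore require interpolation in the traditional method.

```python
import numpy as np

# Illustrative sketch (not the authors' code): map a uniform grid of local
# spatial frequencies (f_xp, f_yp) into source-coordinate frequencies using
# an arbitrary rotation matrix, and inspect the resulting sample spacing.

wavelength = 532e-9                      # green laser, as in the optical setup
theta = np.deg2rad(30.0)                 # assumed tilt angle of the polygon
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],   # rotation about the y-axis
              [0.0,            1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# Uniform sampling of the local-plane frequencies (arbitrary range/step).
f_xp = np.linspace(-2e4, 2e4, 9)
f_yp = np.zeros_like(f_xp)
f_zp = np.sqrt(np.maximum(1.0 / wavelength**2 - f_xp**2 - f_yp**2, 0.0))

# Equation (3a,b)-style mapping: source frequencies are linear combinations
# of the local frequencies with rotation-matrix elements as coefficients.
f_xs = R[0, 0] * f_xp + R[0, 1] * f_yp + R[0, 2] * f_zp
f_ys = R[1, 0] * f_xp + R[1, 1] * f_yp + R[1, 2] * f_zp

print("uniform  df_xp:", np.diff(f_xp))   # constant spacing
print("rotated  df_xs:", np.diff(f_xs))   # spacing varies -> interpolation needed
```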
Theory
The aim of the polygon-based method is the calculation of the polygon field in the plane of the hologram. However, we cannot obtain the polygon field or its spectrum directly by using standard diffraction theory, which is valid only between parallel planes. One conventional approach to this problem is to map the desired light field or its spectrum using affine relations. The traditional affine transformation method is based on procedures such as rotation and translation to establish the relationship between the input and output coordinates, with the output represented by a set of known inputs and affine relations. Hence, the affine transformation method is in essence a mapping method, and its core problem is to find the affine matrix. In our proposed theory, we find the affine matrix algebraically in a universal way.
In the traditional 3D affine transformation algorithm, there is a global coordinate system (x, y, z) serving as the output coordinate system, as shown in Figure 3. The hologram plane (x, y) is located at z = 0, and the tilted triangle \Pi with vertices (x_1, y_1, z_1), (x_2, y_2, z_2), and (x_3, y_3, z_3) is located in the global coordinate system. We now define the affine coordinates (x_a, y_a, z_a) as our input coordinates, as shown in Figure 4. A primitive triangle \Delta with vertices (x_{a1}, y_{a1}, z_{a1}), (x_{a2}, y_{a2}, z_{a2}), and (x_{a3}, y_{a3}, z_{a3}) lies in the affine coordinate system, and the analytical expression of the primitive-triangle spectrum is obtained by a two-dimensional Fourier transform on the plane z_a = 0. The spectrum of the tilted triangle is finally obtained from the affine relationship between the primitive triangle \Delta and the tilted triangle \Pi together with the analytical expression of the primitive-triangle spectrum derived from the Fourier transform. Here, \vec{r}_a = (x_a, y_a, z_a)^T is the coordinate vector of the primitive triangle \Delta in the affine coordinates, where the superscript T denotes the transpose operation.
The affine relation is \vec{r} = A\vec{r}_a + \vec{b}, where A is a 3 × 3 matrix and \vec{b} is a 3 × 1 vector. We let (x_{a1}, y_{a1}, z_{a1}) = (0, 0, 0), (x_{a2}, y_{a2}, z_{a2}) = (1, 0, 0), and (x_{a3}, y_{a3}, z_{a3}) = (0, 1, 0). Therefore, we can write the relation in terms of matrix multiplication as shown in Equation (6),

P = M Q,    (6)

where M is the affine transformation matrix and P and Q represent matrices consisting of the vertices of the tilted triangle \Pi and the primitive triangle \Delta, respectively. By calculating the twelve parameters of the affine matrix M, we can uniquely determine the 3D affine transformation. The key to the affine transformation algorithm is to find the elements of the affine matrix M. There is no inverse for the 4 × 3 matrix Q, because an inverse exists only for square matrices. The affine matrix has previously been found by using the pseudo-inverse matrix of Q [25]. The accurate approach is to avoid the use of pseudo matrices and to find the affine transformation matrix through a direct matrix inversion. There are twelve unknown elements in the affine matrix M, so twelve equations are needed to determine them. However, P and Q each contain only three vertices in Equation (6), giving only nine equations for nine elements; the remaining elements of the affine matrix cannot be determined. In light of this, we introduce a new vertex (x_{a4}, y_{a4}, z_{a4}) into matrix Q and a corresponding 4 × 1 homogeneous column (x_4, y_4, z_4, 1)^T into matrix P to determine all twelve unknown elements of the affine matrix. Therefore, we extend P and Q to 4 × 4 matrices \tilde{P} and \tilde{Q}, respectively, and Equation (6) becomes

\tilde{P} = \tilde{M} \tilde{Q}.    (7)

In order to simplify the calculations and improve the efficiency, a convenient choice is to take (x_{a4}, y_{a4}, z_{a4}) = (0, 0, 1); \tilde{Q} then takes the simple explicit form given in Equation (8). Through a simple calculation we obtain |det \tilde{Q}| = 1, indicating that the inverse matrix of \tilde{Q} exists, so the affine transformation matrix can be found as \tilde{M} = \tilde{P}\tilde{Q}^{-1}. Note that the vertex (x_{a4}, y_{a4}, z_{a4}) only represents a point located in the primitive coordinate system, independent of the primitive triangle, and its value must be chosen such that det \tilde{Q} ≠ 0 so that the inverse of \tilde{Q} exists. The added point (x_4, y_4, z_4) can take any value; different values produce a different affine transformation matrix, and the Jacobian determinant changes at the same time. Again, we have chosen Equation (8) for ease of calculation, and this choice has no effect on the final results. According to the above discussion, the affine transformation matrix is obtained from Equation (7):

\tilde{M} = \tilde{P} \tilde{Q}^{-1}.    (9)

For the primitive triangle \Delta, assuming that its surface function has a strength of one within the triangle and zero outside, its analytical spectrum expression G_\Delta(f_{xa}, f_{ya}) can be obtained (Equation (10); see Appendix A), where f_{xa} and f_{ya} are the spatial frequencies corresponding to the affine coordinates x_a and y_a. The singular points, where f_{xa}, f_{ya}, or a combination of them equals zero, are discussed in Appendix A.
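As an illustration of this construction, the following sketch builds the 4 × 4 homogeneous vertex matrices and recovers the affine matrix by a direct inversion. It is not the authors' code; in particular, the choice of the auxiliary fourth point of the tilted triangle (here v1 plus the triangle normal) is our own assumption, which the text allows since any non-coplanar choice works and the Jacobian compensates.

```python
import numpy as np

def affine_matrix_from_triangle(v1, v2, v3):
    """Illustrative sketch: build the 3D affine map that sends the primitive
    triangle with vertices (0,0,0), (1,0,0), (0,1,0) onto the tilted triangle
    (v1, v2, v3). A fourth auxiliary point (0,0,1) is added so that the
    homogeneous vertex matrix becomes square and invertible."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))

    # Auxiliary fourth point of the tilted triangle: assumed here to be
    # v1 + n, where n is the (unnormalized) triangle normal. Any choice that
    # keeps the four points non-coplanar works.
    n = np.cross(v2 - v1, v3 - v1)
    v4 = v1 + n

    # Homogeneous vertex matrices (columns = points, last row = 1).
    P = np.column_stack([np.append(v, 1.0) for v in (v1, v2, v3, v4)])
    Qa = np.array([[0.0, 1.0, 0.0, 0.0],     # primitive vertices and the
                   [0.0, 0.0, 1.0, 0.0],     # auxiliary point (0, 0, 1),
                   [0.0, 0.0, 0.0, 1.0],     # written as homogeneous columns
                   [1.0, 1.0, 1.0, 1.0]])

    M = P @ np.linalg.inv(Qa)                # Equation (7): P = M Qa
    return M

# Usage: map the primitive vertices and check they land on the tilted triangle.
M = affine_matrix_from_triangle([0.1, 0.2, 0.5], [0.4, 0.1, 0.7], [0.2, 0.6, 0.6])
prim = np.array([[0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 0, 1]], dtype=float).T
print((M @ prim)[:3].T)   # rows approximately equal the three tilted-triangle vertices
```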
Through the affine transformation, the spectrum distribution of the tilted triangle \Pi on the hologram plane is obtained (Equation (11); see the derivation in Appendix B) as the product of the primitive-triangle spectrum evaluated at the affine-transformed frequencies, the Jacobian determinant of the transformation, and an exponential phase factor involving \vec{b}, where (f_x, f_y, f_z) are again the spatial frequencies of the global coordinate system, a_\Pi(x, y, z) is the surface function of the tilted triangle, \vec{f} = (f_x, f_y, f_z), \vec{b} is the translation vector of the affine transformation, and J is the Jacobian determinant. The spectrum distribution on the hologram, G_\Pi(f_x, f_y), then follows as Equation (12), which has been derived in Appendix C. Note that for an arbitrary tilted polygon, we can choose an arbitrary point instead of the auxiliary point (x_4, y_4, z_4) of the polygon. A different choice changes the affine transformation matrix, but the Jacobian determinant in Equation (12) changes at the same time, giving the same result in Equation (12). As shown in Figure 5a, the zero frequency in the global coordinate system is (f_x, f_y, f_z) = (0, 0, 1/\lambda). According to the spatial frequency relationship given by Equation (28) in Appendix C, as shown in Figure 5b, the zero frequency in the affine coordinate system (f_{xa}, f_{ya}, f_{za}) is shifted to a point (f_{xa0}, f_{ya0}, f_{za0}) determined by the affine matrix (Equation (13)). From this equation, we can see that the zero frequency in the affine coordinate system has a frequency offset, and this offset appears in the reconstructed image as a phase factor. To eliminate it, as shown in Figure 5c, we can subtract the frequency offset (\Delta f_{xa}, \Delta f_{ya}, \Delta f_{za}) according to Equation (14), where the offsets are determined by the corresponding affine-matrix elements (such as m_{33}) and 1/\lambda. Therefore, the spatial frequencies in the "offset" affine coordinate system can be rewritten as in Equation (15). The spectrum of a tilted triangle on the hologram plane has now been completely analyzed, and we can obtain the spectrum distribution directly from the initial parameters. For a single tilted triangle, we have u_i(x, y) = \mathcal{F}^{-1}\{G_i(f_x, f_y)\}, as in Equation (2), and for a 3D object we use Equation (1). The complex field distribution of the 3D object reconstructed by the hologram can be expressed as the superposition of the tilted-triangle complex fields on the reconstructed image plane z = d:

U(x, y; d) = \mathcal{F}^{-1}\Big\{ \sum_{i=1}^{N} G_i(f_x, f_y)\, e^{\,j 2\pi f_z d} \Big\},    (16)

where d is the distance between the reconstructed image plane and the hologram plane z = 0 and f_z = \sqrt{1/\lambda^2 - f_x^2 - f_y^2}. Through the above steps, we can obtain the reconstructed complex field distribution of the 3D object with only one Fourier transform. In the next section, we will verify our proposed method through numerical simulations and optical experiments.
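A minimal sketch of the overall synthesis and reconstruction loop implied by Equations (1), (2), and (16) is given below. The helper tilted_triangle_spectrum is a hypothetical placeholder standing in for the analytical spectrum of Equations (10)-(15) and the appendices; the grid size, pixel pitch, wavelength, and reconstruction distance are assumptions loosely matched to the experimental parameters reported later.

```python
import numpy as np

# Illustrative sketch of the per-triangle loop described above (not the
# authors' code). `tilted_triangle_spectrum` is a hypothetical helper that
# would implement the analytical spectrum of one tilted triangle; here it is
# only a placeholder returning zeros so that the sketch runs end to end.

def tilted_triangle_spectrum(vertices, fx, fy, wavelength):
    # A real implementation would evaluate the primitive-triangle spectrum at
    # the affine-transformed frequencies and apply the Jacobian and the phase
    # factor (Equations (10)-(15), Appendices A-C).
    return np.zeros_like(fx, dtype=complex)

def reconstruct(triangles, n=1024, pitch=8e-6, wavelength=532e-9, d=0.15):
    f = np.fft.fftfreq(n, d=pitch)                     # uniform frequency grid
    fx, fy = np.meshgrid(f, f)
    fz = np.sqrt(np.maximum(1.0 / wavelength**2 - fx**2 - fy**2, 0.0))

    G = np.zeros((n, n), dtype=complex)
    for tri in triangles:                              # Equation (1): superpose spectra
        G += tilted_triangle_spectrum(tri, fx, fy, wavelength)

    # Angular-spectrum propagation to the image plane z = d, then a single
    # inverse FFT gives the reconstructed complex field (cf. Equation (16)).
    return np.fft.ifft2(G * np.exp(1j * 2 * np.pi * fz * d))
```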
Numerical Reconstruction
Based on the 3D mesh in Figure 1, the Stanford bunny consists of 59,996 polygons, and we have reconstructed the bunny using our proposed method. The actual size of the bunny was 3.11 × 3.08 × 2.41 mm3. We increased its size to 6.56 × 6.49 × 5.08 mm3 before generating the hologram. In order to improve the computational efficiency and image quality, we implemented back-face culling by testing the triangle normals. The normal vector of the hologram is \vec{n}_H, and \vec{n} is the normal vector of a tilted triangle. If \vec{n} \cdot \vec{n}_H > 0, the tilted triangle is calculated, and tilted triangles that do not meet the condition are discarded.
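A minimal sketch of this culling test (illustrative only; the orientation convention for the normals, and hence the sign of the dot-product test, is an assumption):

```python
import numpy as np

def backface_cull(triangles, n_hologram=np.array([0.0, 0.0, 1.0])):
    """Keep only triangles whose normal has a positive component along the
    hologram normal (assumed sign convention); cf. the culling test above."""
    kept = []
    for v1, v2, v3 in triangles:
        n = np.cross(np.subtract(v2, v1), np.subtract(v3, v1))  # triangle normal
        if np.dot(n, n_hologram) > 0:
            kept.append((v1, v2, v3))
    return kept
```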
After back-face culling, the bunny contains only 31,724 polygons. However, the result of Equation (12) in Section 3 is based on the assumption that the amplitude distribution of a triangle is a unit constant, so the reconstructed results lack realism. As shown in Figure 6, in order to reproduce the details of the 3D object, we assign the surface function a_\Pi of each tilted triangle according to Equation (17), which combines an ambient term with a directional term built from the triangle normal: n_x, n_y, and n_z are the components of the unit normal vector \vec{n} along the x-, y-, and z-axes of the global coordinate system, k_a represents the ambient reflected light, and \cos\alpha, \cos\beta, and \cos\gamma are the direction cosines of the illumination direction. In our case, the values used are 0.2, 60°, 60°, and 5°. The surface function usually refers to the strength information, so the amplitude is expressed as \sqrt{a_\Pi}. Since the surface function a_\Pi is a constant for each tilted triangle according to Equation (17), we can let a_\Pi = I (a constant) and use the linearity of the Fourier transform, \mathcal{F}\{\sqrt{I}\,u(x, y)\} = \sqrt{I}\,\mathcal{F}\{u(x, y)\}; the spectrum on the hologram of a tilted triangle with flat shading added and with the "offset" eliminated then follows from Equation (12) together with Equation (15), giving Equation (18). Figure 7 shows the numerical reconstructions of the bunny based on our proposed F3DAAT method. Figure 7a is the result of calculating Equation (12); because the amplitude of each mesh is the same constant, the reconstruction lacks realism. Additionally, the bunny has self-occlusion, so back-face culling based on the normal test makes errors for some polygons. Because of these wrong judgments, visible and invisible polygons are superimposed in part of the reconstructed image, as shown in the red box in Figure 7a. Figure 7b,c show the reconstruction results based on Equation (18), and from these two reconstructed images we can clearly see the details of various parts of the bunny: Figure 7b focuses on the bunny's leg (yellow box), and Figure 7c focuses on the bunny's ear (yellow box). We have also scanned a human face, called "Alex", with a 3D camera; the 3D mesh of "Alex" is shown in Figure 8a and consists of 49,272 meshes. In this case, there is no need for back-face culling, because the data come from an actual surface scan. We calculated the CGH of "Alex", and its holographic reconstruction is shown in Figure 8b.
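The per-triangle shading weight can be sketched as follows. This is an illustration only: the exact combination of ambient and directional terms in Equation (17) is assumed here to be a simple Lambertian form, and the parameter names and the choice gamma = 45 degrees (so that the direction cosines form a unit vector) are ours rather than the paper's.

```python
import numpy as np

def flat_shade(v1, v2, v3, k_a=0.2, alpha=60.0, beta=60.0, gamma=45.0):
    """Illustrative flat-shading weight for one triangle: ambient term plus a
    Lambertian term from the unit normal and the illumination direction
    cosines (angles in degrees)."""
    n = np.cross(np.subtract(v2, v1), np.subtract(v3, v1))
    n = n / np.linalg.norm(n)                          # unit normal (nx, ny, nz)
    light = np.cos(np.deg2rad([alpha, beta, gamma]))   # illumination direction cosines
    return k_a + max(float(np.dot(n, light)), 0.0)     # clamp back-facing light to 0
```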
Comparison with Previous Methods
The Pseudo Inverse Matrix Method by Pan et al. [25,26] sets up an affine transformation matrix that contains all the information on the transformation. However, they defined the primitive triangle to lie in the z = 0 plane of the source local coordinate system, which makes the vertex matrix non-invertible and leads to the use of a pseudo-inverse matrix to perform the inversion.
The introduction of the pseudo-inverse matrix produces calculation errors and slows down the calculation. Zhang et al. [29] introduced a Full Analytical 3D Affine Method to avoid the use of the pseudo-inverse matrix. The method includes three core steps: a rotation transformation of the tilted triangle until it is parallel to the hologram plane, a 2D affine transform of the rotated triangle, and finally the computation of the field distribution on the hologram by using the angular spectrum (AS) method for diffraction. Zhang et al. [8] also proposed a Fast 3-D Affine Transformation (F3DAT) method based on the 3D affine transformation of Pan et al. to improve the computational efficiency. In that method, they defined the primitive triangle to be translated away from the z = 0 plane, allowing the affine matrix to be fully inverted. The F3DAT method provides a faster calculation time than the pseudo-inverse matrix method and the full analytical 3D affine method.
In the present fast 3D analytical affine transformation (F3DAAT) method, we obtain the affine matrix directly and derive an analytical expression for the spectrum of the primitive triangle. The more meshes that are calculated, the more time F3DAAT saves. We generated holograms with a resolution of 1024 × 1024; the hardware was an Intel Core i7-11700 @ 4.8 GHz with 16 GB of RAM, running MATLAB 2018b.
The Stanford bunny, consisting of 31,724 meshes (after back-face culling), takes 893 s to calculate, and the calculation of "Alex", with 49,272 meshes, takes about 1288 s (see Figure 9). Additionally, as shown in Figure 9, where we have used "Alex", the computational efficiency of F3DAAT is almost twice that of the previous methods. The calculations for the four methods were performed under the same hardware conditions, using only the CPU. In one of the most recent studies, Wang et al. [43] proposed a polygon-based method using look-up tables (LUTs) with principal component analysis to speed up the calculation of CGHs. However, that method still needs to solve a pseudo-inverse matrix when pre-computing the affine matrix, whereas our proposed method is more general and efficient for solving the mapping relationship between the two coordinate systems, the global and the affine coordinate systems.
Optical Experiment
The optical experiment is shown in Figure 10. We reconstructed the 3D objects by loading the phase of the CGHs from the computer onto a spatial light modulator (SLM). The SLM in our experiment is a HOLOEYE PLUTO2 (NIR-011) phase-only SLM with a resolution of 1920 × 1080 (Full HD 1080p), a pixel pitch of 8 μm, and an active area of 15.36 mm × 8.64 mm. The laser is a green laser with a wavelength of 532 nm. The spatial filter is used to generate collimated light for illuminating the hologram, and the polarizer adjusts the polarization state of the light to match the phase-only SLM. A camera (MMRY UC900C charge-coupled device) records the reconstructed image in the image plane of the imaging lens (focal length 150 mm). Optical reconstructions of the bunny and the face of "Alex" are shown in Figure 11a,b, respectively.
Conclusions
In conclusion, we have proposed an improved algorithm to obtain a full-analytical spectrum of a tilted triangle based on 3D affine transformation. Our method avoids the time-consuming steps such as the need to solve for the pseudo inverse matrix or the complex process of 3D rotation and transformation. We have verified our method by calculating complex 3D objects composed of tens of thousands of meshes. In addition, we have added flat shading for realistic image presentation. We have successfully obtained the reconstructed images by numerical and optical reconstructions. Through comparison, it is found that our method improves the computational efficiency by about two times compared with the previous affine methods.
Appendix C. Derivation of the Analytical Spectrum Expression of Tilted Triangle on the Hologram
According to the affine transformation, \vec{r} = A\vec{r}_a + \vec{b}. Therefore, the coordinate relationships between the global coordinate system (x, y, z) and the affine coordinate system (x_a, y_a, z_a) are

x = a_{11} x_a + a_{12} y_a + a_{13} z_a + b_1,  y = a_{21} x_a + a_{22} y_a + a_{23} z_a + b_2,  z = a_{31} x_a + a_{32} y_a + a_{33} z_a + b_3.    (A9)

Additionally, the relationships between the spatial frequencies of the global coordinate system (f_x, f_y, f_z) and those of the affine coordinate system (f_{xa}, f_{ya}, f_{za}) can be written as

f_{xa} = a_{11} f_x + a_{21} f_y + a_{31} f_z,  f_{ya} = a_{12} f_x + a_{22} f_y + a_{32} f_z,  f_{za} = a_{13} f_x + a_{23} f_y + a_{33} f_z,

that is, the affine-coordinate frequencies are obtained from the global-coordinate frequencies through the transpose of A.
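A quick numerical check of this frequency relation (illustrative, with randomly chosen matrices) confirms that the phase of a plane wave is preserved when frequencies transform through the transpose of A:

```python
import numpy as np

# For r = A @ ra + b, a plane wave exp(j*2*pi*f.r) expressed in affine
# coordinates has frequency fa = A.T @ f, up to the constant phase from b.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
b = rng.normal(size=3)
f = rng.normal(size=3)
ra = rng.normal(size=3)

lhs = np.dot(f, A @ ra + b)                 # phase in global coordinates
rhs = np.dot(A.T @ f, ra) + np.dot(f, b)    # same phase via affine frequencies
print(np.isclose(lhs, rhs))                 # True
```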
|
v3-fos-license
|
2021-06-27T13:22:42.823Z
|
2021-06-23T00:00:00.000
|
235650149
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "5c3197da203888dcbe5899c62aee43a4a6fe543c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46089",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "b069bbc7b7fbd0fb1e952ad8d5ac70951c259ed2",
"year": 2022
}
|
pes2o/s2orc
|
Structural Insight into Molecular Inhibitory Mechanism of InsP6 on African Swine Fever Virus mRNA-Decapping Enzyme g5Rp
ABSTRACT Removal of the 5′ cap on cellular mRNAs by the African swine fever virus (ASFV) decapping enzyme g5R protein (g5Rp) is beneficial to viral gene expression during the early stages of infection. As the only nucleoside diphosphate-linked moiety X (Nudix) decapping enzyme encoded in the ASFV genome, g5Rp works in both the degradation of cellular mRNA and the hydrolysis of diphosphoinositol polyphosphates. Here, we report the structures of dimeric g5Rp and its complex with inositol hexakisphosphate (InsP6). The two g5Rp protomers interact head to head to form a dimer, and the dimeric interface is formed by extensive polar and nonpolar interactions. Each protomer is composed of a unique N-terminal helical domain and a C-terminal classic Nudix domain. As g5Rp is an mRNA-decapping enzyme, we identified key residues, including K8, K94, K95, K98, K175, R221, and K243, located on the substrate RNA binding interfaces of g5Rp which are important to RNA binding and decapping enzyme activity. Furthermore, g5Rp-mediated mRNA decapping was inhibited by InsP6. The g5Rp-InsP6 complex structure showed that the InsP6 molecules occupy the same regions that primarily mediate the g5Rp-RNA interaction, elucidating the role of InsP6 in the regulation of the viral decapping activity of g5Rp in mRNA degradation. Collectively, these results provide the structural basis of the interaction between RNA and g5Rp and highlight the inhibitory mechanism of InsP6 on mRNA decapping by g5Rp. IMPORTANCE ASF is a highly contagious hemorrhagic viral disease of domestic pigs with high mortality. Currently, there are still no effective vaccines or specific drugs available against this particular virus. The protein g5Rp is the only viral mRNA-decapping enzyme, playing an essential role in the machinery assembly of mRNA regulation and translation initiation. In this study, we solved the crystal structures of the g5Rp dimer and its complex with InsP6. Structure-based mutagenesis studies revealed critical residues involved in a candidate RNA binding region, which also play pivotal roles in the complex with InsP6. Notably, InsP6 can inhibit g5Rp activity by competitively blocking the binding of substrate mRNA to the enzyme. Our structure-function studies provide a basis for potential anti-ASFV inhibitor design targeting this critical enzyme.
the sole member of the Asfarviridae, a family of African swine fever-like viruses that are relatively independent of the host cell transcriptional machinery for viral replication (3,4). The ASFV infection of domestic swine can result in various disease forms, ranging from highly lethal to subclinical depending on the contributing viral and host factors (5). Since 2018, ASFV has spread into China and led to a high mortality rate in domestic pigs (6,7). Currently, there are still no effective vaccines or specific drugs available against this particular virus (8,9).
During an ASFV infection, protein synthesis in the host cell is inhibited as a result of a massive degradation of host cellular mRNAs in the cytoplasm of infected cells (10,11). As part of its strategy to inhibit host cellular translation and promote viral protein synthesis instead, the virus targets the mRNAs of the host cell using specific enzymes (12). Hydrolysis of the 5′ cap structure (m7GpppN) on eukaryotic mRNAs, a process known as decapping, is considered to be a crucial and highly regulated step in the degradation of mRNA (13). Some viruses, including ASFV and vaccinia virus (VACV), harbor decapping enzymes for control of viral and cellular gene expression (14). Two poxvirus Nudix hydrolases, D9 and D10, have been confirmed to have intrinsic mRNA-decapping activity, although the two decapping enzymes appear to have some differences in substrate recognition (15,16).
Nudix hydrolases (nucleoside diphosphate-linked moiety X hydrolases) are widely present in bacteria, archaea, and eukarya, where they belong to a superfamily of hydrolytic enzymes that catalyze the cleavage of nucleoside diphosphates and the decapping of the 5′ cap of mRNAs, the latter of which plays a pivotal role in mRNA metabolism (17,18). Mammalian cells have about 30 different genes with Nudix motifs, including Dcp2, Nudt16, and NUDT3/DIPP1, which cleave mRNA caps during mRNA degradation by the 5′-3′ decay pathway in vivo (19-21). The mRNA-decapping enzyme g5R protein (g5Rp), which is the only Nudix hydrolase in ASFV, shares sequence similarity with the mRNA-decapping enzyme Dcp2 of Schizosaccharomyces pombe and with D9 and D10 of VACV (22-24). However, g5Rp and its Nudix homologs D9 and D10 exhibit higher hydrolytic activity toward diphosphoinositol polyphosphates and dinucleotide polyphosphates than toward cap analogs (25,26). Similar to Dcp2, these Nudix hydrolases cleave the mRNA cap attached to an RNA moiety, indicating that RNA binding is crucial for their mRNA-decapping activity (16). Recently, a structural study confirmed that the Nudix protein CFIm25 has sequence-specific RNA binding capability (27). The requirement of RNA binding for the majority of the Nudix decapping enzymes suggests that these members of the Nudix family are also RNA binding proteins.
The viral mRNA-decapping enzyme g5Rp is expressed in the endoplasmic reticulum from the early stage of ASFV infection and accumulates throughout the infection process, playing an essential role in the machinery assembly of mRNA regulation and translation initiation (23). Like other members of the Nudix family, g5Rp has a broader range of nucleotide substrate specificity, including that for a variety of guanine and adenine nucleotides and dinucleotide polyphosphates (25). Generally, g5Rp has two distinct enzymatic activities in vitro (viz., diphosphoinositol polyphosphate hydrolase activity and mRNA-decapping activity), implying that it plays roles in viral membrane morphogenesis and mRNA regulation during viral infections (28). In light of these biochemical observations, the elucidation of the structure of g5Rp is of fundamental importance for our understanding of the molecular mechanisms through which it degrades cellular RNAs and regulates viral gene expression.
Here, we report the crystal structure of g5Rp and its complex structure with InsP6. Combined with biochemical experiments, the structures show that the dimeric form of g5Rp and three RNA binding surfaces on each protomer are critical to the substrate RNA binding of g5Rp. The g5Rp-InsP6 complex structure shows that two of the RNA binding surfaces are occupied by InsP6, indicating that InsP6 may inhibit the RNA binding activity of g5Rp. Meanwhile, we evaluated the inhibitory effect of InsP6 on the mRNA-decapping enzyme activity of g5Rp. We therefore propose that this inhibition is caused by the competition of InsP6 with substrate mRNA for binding to g5Rp. Furthermore, we show in detail how InsP6 inhibits g5Rp activity by occupying the RNA binding interfaces on g5Rp, thereby competitively blocking the binding of substrate mRNA to the enzyme. These results suggest that InsP6 or its structural analogs may be involved in the manipulation of the mRNA-decapping process during viral infection and provide an essential structural basis for the development of ASFV chemotherapies in the future.
RESULTS
Characterization of recombinant ASFV g5Rp. Recombinant wild-type (WT) ASFV g5Rp (residues 1 to 250) was expressed in Escherichia coli with an N-terminal His6 tag. The purified g5Rp eluted from a Superdex 200 column (GE Healthcare) at a major elution volume of 15.6 mL, indicating an approximate molecular weight of 32.1 kDa (Fig. 1A). The fractions were further analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (Fig. 1B). A cross-linking assay confirmed that g5Rp exists as a stable homodimer in solution (Fig. 1C).
We first characterized the nucleic acid binding ability of g5Rp with single-stranded RNAs of different lengths (12-mer and 26-mer ssRNA). Electrophoretic mobility shift assay (EMSA) results demonstrated that g5Rp binds ssRNA (0.25 mM) at a concentration as low as 0.5 mM (Fig. 1D and E). Furthermore, we measured the binding affinity of wild-type (WT) g5Rp for ssRNA by surface plasmon resonance (SPR) (Fig. 1G and H). The enzyme exhibited strong binding affinity for the ssRNAs, with the following equilibrium dissociation constants: 12-mer KD = 164.0 nM and 26-mer KD = 44.8 nM. The kinetic analysis of the binding experiments is shown in Table 1. These results indicate that g5Rp possesses a higher affinity for long ssRNA. Next, we reevaluated the decapping activity of recombinant g5Rp by incubating the protein with a 32P-cap-labeled RNA substrate. The products of the reaction were resolved by polyethyleneimine (PEI)-cellulose thin-layer chromatography (TLC) and detected by autoradiography (23). As shown in Fig. 1F, recombinant g5Rp efficiently released the 7-methylguanosine cap product (m7GDP) in the decapping reaction. In contrast, the 32P-cap-labeled RNA substrate used as a control remained at the origin of the plate. These results suggest that recombinant g5Rp possesses efficient mRNA-decapping enzyme activity.
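For reference, the equilibrium constants quoted above follow from the kinetic rate constants in the usual way, K_D = k_off / k_on. The sketch below uses hypothetical rate constants (not values from Table 1, which is not reproduced here), chosen only to land on the same scale as the 26-mer result.

```python
# Illustrative only: hypothetical association/dissociation rate constants,
# not the values reported in Table 1.
k_on = 2.0e5    # 1/(M*s), assumed
k_off = 9.0e-3  # 1/s, assumed

K_D = k_off / k_on          # equilibrium dissociation constant, in M
print(f"K_D = {K_D:.1e} M = {K_D * 1e9:.0f} nM")   # 4.5e-08 M = 45 nM
```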
Overview of the ASFV g5Rp structure. To investigate structural insights into the catalytic mechanism of g5Rp, we determined its dimeric structure by single-wavelength anomalous diffraction (SAD) phasing using selenomethionine (SeMet)-labeled protein. As shown in Fig. 2A, the g5Rp dimer is composed of two protomers that each adopt a "boxing glove" shape with a distinct helical domain and Nudix domain (Fig. 2D). The helical domain (residues 36 to 124) forms a globin-fold-like feature composed of six α-helices (α1 to α6) that connects to the Nudix domain by two hinge linkers (linker I, residues 32 to 35; linker II, residues 119 to 139). The Nudix domain (residues 1 to 35 and 125 to 250) consists of a central curved β-sheet (β1, β2, β3, β4) surrounded by five α-helices (α7 to α11) and several loops, thereby forming a classic α-β-α sandwich structure. Linker II splits the top of the β-sheet to connect α6 and α7 (Fig. 2E). The Nudix motif located in the center of the Nudix domain is highly conserved and comprises a loop-helix-loop architecture containing the Nudix signature sequence, which extends over residues 132 to 154 (GKPKEDESDLTCAIREFEEETGI) in g5Rp (Fig. 2F). The sequence of the g5Rp Nudix motif matches the classic pattern of the Nudix motif in the Nudix hydrolase superfamily, that is, GX5EX7REUXEEXGU, where X is any residue and U is Ile, Leu, or Val (29,30). Using the Dali server (31), we compared the structure of g5Rp with that of other proteins in the Protein Data Bank (PDB), whereupon 46 structures were found to be likely homologous to the enzyme, with Z-scores in the range of 8 to 20 (data not shown). However, all the listed protein structures shared high architectural similarity only with the Nudix domain located in the C terminus of g5Rp. Therefore, a search on the Dali server was carried out for the helical domain alone, whereupon no homologous structure with a Z-score above 4 was found, suggesting that the helical domain of g5Rp adopts a novel fold. Compared with the structures of Dcp2 in a number of different conformations, g5Rp shows a unique globin-fold-like domain (Fig. 2B and C).
A previous study showed that the helical domain of g5Rp is the major mediator of RNA interaction (28). However, the positively charged surface of the g5Rp structure spans both the helical domain and the Nudix domain, either of which may exhibit RNA binding activity (Fig. 3A). We proposed that both positively charged regions could contribute to the g5Rp-RNA interaction. To test this hypothesis, we measured the binding of the truncation variants g5RpΔC (helical domain, residues 36 to 124) and g5RpΔN (Nudix domain, connecting residues 1 to 35 and 125 to 250 directly) to the ssRNAs (12-mer and 26-mer), respectively. Our EMSA results showed that both the helical domain (g5RpΔC) and the Nudix domain (g5RpΔN) of g5Rp are involved in ssRNA interaction (Fig. 3B). The helical domain exhibited KD values of 39.0 and 50.7 nM for the surface-immobilized 12- and 26-mer ssRNAs measured by SPR, respectively (Fig. 3C and D). In comparison, the KD values of wild-type g5Rp for the ssRNAs (12-mer KD = 164.0 nM, 26-mer KD = 44.8 nM) indicate that both full-length and truncated g5Rp associate with RNA with high affinity.
The dimeric structure of g5Rp. When recombinant g5Rp was subjected to gel filtration chromatography to estimate its molecular weight, it migrated as a single population of molecules at a molecular mass consistent with a monomer. However, g5Rp dimerization was consistent with the cross-linking experiments (Fig. 1A and C). To obtain more information about the interfaces and likely biological assemblies of g5Rp, we analyzed its structure using the PDB-related interactive tool Proteins, Interfaces, Structures and Assemblies (PDBePISA) (32). The results suggested that g5Rp forms a stable symmetric dimer in the crystal packing. The dimer is composed of two protomers (A and B) positioned in an orientation similar to two boxing gloves stuck together back to back (Fig. 2A). The dimer interfaces are stabilized mainly by hydrophobic interactions, and a network of hydrogen bonds confers additional stability on the interface. One interface is composed of four α-helices (α3 and α4 from each of the A and B protomers) from the N terminus of each protomer. To determine the multimeric state of g5Rp in solution and to examine which of its termini is critical for dimerization, we measured the multimerization of two g5Rp truncation variants (g5RpΔN and g5RpΔC) using cross-linking experiments. The results showed that the wild type, the N terminus, and the C terminus of g5Rp all formed a dimeric conformation in solution (Fig. 4B and C). The g5Rp mutant I84A/I116A/L200A/I206A/F222A, designed to dissociate the dimeric form of g5Rp, successfully shifted the protein to a monomeric state, even though the dimer buries a total area of 3,050 Å2. Wild-type g5Rp and the mutants were subjected to gel filtration chromatography, showing that the mutant I84A/I116A/L200A/I206A/F222A has a larger retention volume, corresponding to a lower molecular weight (Fig. 5A). The protein cross-linking experiment showed that the dimeric conformation was significantly reduced in solution for this mutant (Fig. 5B). The ssRNA binding ability of the monomeric mutant was measured by SPR and EMSA. The monomeric mutant, at a series of analyte concentrations, was passed over the immobilized ssRNA. The resultant sensorgrams are shown in Fig. 5C and D, the kinetic analysis is shown in Table 1, and the EMSA data are shown in Fig. 5E. Both measurements produced consistent results indicating that the g5Rp mutant I84A/I116A/L200A/I206A/F222A is partially impaired in RNA binding. Therefore, we propose that dimeric g5Rp is preferred for efficient RNA binding. Meanwhile, mRNA-decapping assays showed that the decapping activity of mutant I84A/I116A/L200A/I206A/F222A dropped greatly (Fig. 5F).
Structure of the g5Rp-InsP6 complex. g5Rp was originally characterized through its ability to dephosphorylate 5-PP-InsP5 (InsP7) to produce InsP6 (25). We were surprised to find a tight interaction between InsP6 and g5Rp by microscale thermophoresis (MST) (Fig. 6A). To gain insight into the molecular basis of the interaction, we determined the crystal structure of the g5Rp-InsP6 complex and found that each asymmetric unit contained one g5Rp-InsP6 complex in space group P4122. PDBePISA analysis revealed that an identical dimeric conformation exists in the g5Rp-InsP6 complex structure (Fig. 6B). Two InsP6 molecules are situated on the edge of the β1 strand of each g5Rp protomer through interactions with residues Gln6, Lys8, and Lys133 (Fig. 6C). Because of the 2-fold symmetry in the crystal, each g5Rp protomer shares two InsP6 molecules (InsP6 and InsP6asym) with its neighboring g5Rp protomer in the crystal lattice. Besides the InsP6 bound at the β1 strand on the edge of the Nudix domain, an extra InsP6 molecule from the neighboring molecule also interacts with g5Rp through residues Lys94 and Lys98 on the α5 helix in the helical domain (Fig. 6C). In this way, each InsP6 molecule is surrounded by four Lys residues in the complex structure. The solvent-accessible surface of the InsP6 binding region of g5Rp was calculated according to the electrostatic potential. It is apparent that both InsP6 molecules sit on the highly positively charged area located in the protein cleft between the helical domain and the Nudix domain of g5Rp (Fig. 7A). The local conformational changes of g5Rp in the complex structure induced by its interaction with InsP6 are illustrated in Fig. 7B. In the complex structure, the β1, β3, and β5 strands located in the Nudix domain have moved closer to the helical domain, and α2 is pushed away from the InsP6 binding sites. These changes render the g5Rp conformation more stable in the complex.
To assess their relative importance in the g5Rp-InsP6 interaction, amino acid residues involved in the InsP6 binding pockets were replaced by single point mutations (Q6A, K8A, K94A, K98A, K133A). Each mutant was tested for its binding affinity for InsP6 by MST. Figure 6D shows that the mutations resulted in a notable decrease in the g5Rp-InsP6 interaction; furthermore, the quintuple mutant Q6A/K8A/K94A/K98A/K133A completely lost the ability to bind InsP6. Taken together, the mutagenesis work indicates that the positively charged residues Lys8, Lys94, Lys98, and Lys133 form a cluster that mediates the g5Rp-InsP6 interaction.
Analysis of residues involved in g5Rp-RNA interfaces. To characterize the RNA binding surface on g5Rp, we analyzed the electrostatic potential at the surface of g5Rp, which indicated that three highly positively charged areas (areas I to III) may play roles in the g5Rp-RNA interaction (Fig. 3A). Area I is located on the helical domain and contains residues Lys94, Lys95, Lys98, Arg100, and Lys101 on helix α5. Area II is composed of residues Lys8, Lys131, Lys133, Lys135, Arg146, Lys175, Lys179, and His180, mostly located on the β1 and β3 strands, which are close to the Nudix motif. Area III is located at the very end of the C terminus of g5Rp, comprising residues Arg221, Lys225, Arg226, Lys243, and Lys247 on helices α10 and α11 (Fig. 8A). To further identify the mRNA binding surfaces on g5Rp, the residues listed above in the three positively charged areas were mutated, respectively. The EMSA pattern showed that several of these mutants, including those of Lys8, reduced the RNA binding affinity of g5Rp (Fig. 8B and C), implying that the g5Rp-RNA interaction interfaces are mainly located in areas I, II, and III. (Figure legend: The binding ability between various g5Rp mutants and InsP6 was measured by MST; the dissociation constants between g5Rp mutants and InsP6 were calculated from three independent replicates, shown as mean ± standard deviation. InsP6 is shown as a stick model.) These results also agree with our hypothesis that residues Lys8, Lys94, Lys98, and Lys133 of g5Rp are involved in both RNA and InsP6 interaction. We further explored whether these key residues are responsible for cap cleavage in a manner dependent on the interaction with the RNA moiety. Mutant proteins including Q6E/K8E, K94E, K95E, K98E, K175E, R221D, and K243E were expressed and purified. Consistent with our previous data, incubation of the 32P-cap-labeled RNA substrate with wild-type g5Rp resulted in cap cleavage, as observed by m7GDP release. When equivalent amounts of the g5Rp mutants were included in the decapping reaction, the amount of m7GDP released was reduced to various extents in each lane; mutant K95E decreased the decapping activity by almost 50% (Fig. 8D and E), indicating that these residues of g5Rp play a pivotal role in mRNA decapping by interacting with the substrate mRNA.
Residues Gly132, Lys133, and Glu147 in the Nudix motif impact the decapping activity. The Nudix motif of hydrolases contains crucial residues involved in catalytic activity. However, the residues forming the catalytic pocket of g5Rp have remained elusive from a structural viewpoint. To elucidate the function of the key residues in g5Rp, three substrate-bound structures from the Nudix superfamily were selected to identify homologous domains with high similarity at the potential catalytic pockets (Fig. 9A to C), as shown in Table 2: Ap4A hydrolase (Aquifex aeolicus, PDB accession no. 3I7V) (33), Nudix hydrolase DR1025 (Deinococcus radiodurans, PDB accession no. 1SZ3), and MTH1 (Mus musculus, PDB accession no. 5MZE) (34,35), all belonging to the Nudix superfamily. Superposition of the C terminus of g5Rp with that of MTH1, Ap4A hydrolase, and Nudix hydrolase DR1025 resulted in Cα backbone root mean square deviation values of 0.50, 3.08, and 5.6 Å, respectively, despite the low sequence identities among these proteins (Fig. 9D). Therefore, the potential substrate binding site of g5Rp was proposed on the basis of the superposition of these substrate-bound structures of the Nudix superfamily. Residues Gly132, Lys133, and Glu147 located on the Nudix motif of g5Rp may be responsible for cap cleavage.
To investigate the potential roles of these key residues located on the Nudix motif in the decapping activity, we replaced g5Rp residues G132, K133, and E147 of the Nudix motif (Fig. 9D and Fig. 10A) with Ala, Glu, and Gln, respectively. As expected, the replacement of residue K133 with glutamate resulted in a 30% decrease in the decapping activity, and the replacement of residues G132 and E147 with alanine and glutamine, respectively, completely inactivated the decapping function of g5Rp (Fig. 10B and C). No m7GDP was observed when these two mutants of g5Rp were included in the decapping reaction, validating that the decapping activity depends on these two key residues located in the Nudix hydrolase motif. Interestingly, EMSA results showed that the K133E mutation reduces the binding affinity of g5Rp for RNA, which suggests that the loop region of the Nudix motif takes part in substrate mRNA binding (Fig. 10D and E).
InsP6 inhibits the decapping activity by disrupting the g5Rp-mRNA interaction. The finding that residues located on the mRNA binding regions of g5Rp also play pivotal roles in the g5Rp-InsP6 interaction suggests that InsP6 may inhibit the g5Rp decapping activity by preventing g5Rp from binding to its mRNA substrate (Fig. 11A). This prediction was confirmed by decapping assays and EMSAs using recombinant g5Rp, InsP6, and RNA substrates in vitro. Increasing amounts of InsP6 were added to the decapping reactions to analyze the effect on RNA decapping by g5Rp. As shown in Fig. 11B, the addition of InsP6 significantly affected g5Rp cleavage, suggesting that InsP6 can inhibit the decapping activity of g5Rp in vitro. To investigate whether this inhibitory mechanism is due to the inositol phosphate competitively inhibiting mRNA binding to g5Rp, we further measured the competition between InsP6 and nucleic acids for binding to g5Rp by EMSA. As expected, the amount of free single-stranded nucleic acid increased with increasing concentrations of InsP6, demonstrating that InsP6 interrupts the g5Rp-mRNA interaction by binding directly to g5Rp (Fig. 11C and D). In addition, all residues involved in the InsP6 interaction in g5Rp were mutated to alanine simultaneously. The quintuple mutant (Q6A/K8A/K94A/K98A/K133A) of g5Rp lost most of its ability to bind both InsP6 and RNA (Fig. 6D and see Fig. 13A). These mutations also significantly affected the catalytic ability of g5Rp in vitro (see Fig. 13C and D), suggesting that InsP6 inhibits the mRNA-decapping activity of g5Rp by competing for the substrate mRNA binding surface on g5Rp.
Transient expression of g5Rp decreases the levels of mRNA substrates in 293T cells. The above data provide strong in vitro evidence that the g5Rp-mRNA interaction is a critical step in the decapping process. To determine whether changes in the g5Rp-mRNA interaction were directly related to the stability of cellular mRNAs in vivo, the levels of representative cellular mRNAs (eIF4E, eIF4EA, and TP53) were tested by quantitative real-time PCR (RT-qPCR) in cells. Flag-tagged g5Rp and the g5Rp mutants (K8E, K94E, K95E, K98E, G132A, K133E, E147Q, K175E, R221D, and K243E) were overexpressed in 293T cells, respectively. As shown in Fig. 12A, the g5Rp-WT and mutant proteins were detected by Western blotting. The mRNA levels of the target genes (eIF4E, eIF4EA, and TP53) were decreased in 293T cells overexpressing g5Rp-WT. There were no obvious changes in mRNA levels with the catalytically inactive mutants G132A and E147Q. Overexpression of the truncated version g5Rp-ΔN and of the mutants K95E and R221D, which significantly lost RNA binding ability in vitro, had no effect on the mRNA levels of the target genes in 293T cells. Mutants K8E and K133E, which had reduced RNA binding in vitro, showed various degrees of increase in target mRNA levels compared with the g5Rp-WT group. However, the changes in the mRNA levels of the target genes observed for mutants K94E, K98E, K175E, and K243E were not statistically different from those for g5Rp-WT (Fig. 12B to D). Taken together, these results suggest that the key residues K8, K95, K133, and R221, which play pivotal roles in the g5Rp-RNA interaction, are also important for g5Rp-mediated cellular RNA degradation in vivo.
DISCUSSION
Given that an ASFV outbreak in China would potentially devastate the world's largest pork producer, significant efforts have been made to determine the structures and functions of essential viral proteins that may be used as targets for new anti-ASFV drugs. Several structures of ASFV-encoded enzymes and associated proteins involved in viral transcription and replication have been reported, including AP endonuclease (36), the histone-like protein pA104R (37), the pS273R protease (38), DNA ligase (39), and dUTPase (40,41). However, the structures and functions of some critical ASFV proteins remain elusive, including those of g5Rp, a decapping enzyme that is crucial for viral infection (23). Our structures of g5Rp alone and in complex with InsP6 provide the molecular basis for g5Rp substrate recognition and reveal that inositol phosphate is involved in the regulation of cellular mRNA degradation through direct interaction with the ASFV decapping enzyme g5Rp. Three potential RNA binding regions are identified, including a novel fold located in the helical domain of g5Rp and the Nudix motif at its C terminus. More importantly, identification of the major nucleic acid binding surfaces, as well as the InsP6 binding pocket on g5Rp, provides important structural information and a novel strategy for future anti-ASFV drug design.
To explore the nucleic acid binding properties of g5Rp, we conducted a series of nucleic acid binding experiments. The results indicated that an intact dimeric interface is required for efficient g5Rp-RNA interaction, and that the helical domain and Nudix domain of g5Rp are both involved in ssRNA binding. Our EMSA and SPR measurements show that the helical domain of g5Rp can bind ssRNA with affinity as high as that of the full-length protein. Six α-helices form a globin-fold-like helical domain, which differs from traditional RNA binding domains that prefer alpha/beta topologies (42-45). According to the g5Rp structure, the surface electrostatic potential of the N terminus presents a highly positively charged area on helix α5. Single point mutations of positively charged residues in the N terminus significantly reduced the ssRNA binding activity of g5Rp (Fig. 8B and C). Furthermore, there are two positive areas located on the C terminus of g5Rp, including the Nudix motif, that participate in substrate RNA interaction. We mutated these two positively charged regions (K8A/K131A/K133A/K135A and R221A/K225A/R226A/K243A/R247A) located in the Nudix domain; the EMSA data showed that the nucleic acid binding ability of these two mutants was significantly reduced (Fig. 13B), and the data in Fig. 13E and F showed a substantial decline in the capacity of the K8A/K131A/K133A/K135A mutant to remove the m7Gppp RNA cap. These results suggest that the Nudix motif of g5Rp contributes to substrate selectivity at the step of mRNA binding. Previous studies revealed that the Nudix motif (residues 132 to 154) is an essential component of the α-β-α sandwich in the catalytic center of g5Rp. Several of the conserved catalytic amino acids and glutamate residues (E147, E149, E150, and E151) located on the α-helix of the Nudix motif of g5Rp have been found to be important for the activity of Nudix hydrolases (23,28). However, the function of the loop region within the Nudix motif has remained elusive, leading us to predict that the loop region may contribute to mRNA binding. Therefore, we mutated several residues in this loop region, including K133E and G132A, and examined the effects on the protein's interaction with single-stranded nucleic acids. It is interesting to find that substitutions of the conserved residues K133 and G132 strongly affect the g5Rp-RNA interaction. Whereas the glutamate residues located on the α-helix of the Nudix motif of g5Rp are involved in the interaction with the mRNA cap structure, residues K133 and G132 are important for binding the RNA moiety of the substrate. In this way, we demonstrated that the short loop in the Nudix motif is required for the g5Rp-RNA interaction. Including the Nudix motif, three positively charged patches on the g5Rp surface were mapped as mRNA binding regions. Furthermore, we also investigated the importance of the residues involved in mRNA interaction for g5Rp-mediated decapping. The g5Rp mutants K8E, K94E, K95E, K98E, K175E, and R221D showed a strong reduction in decapping activity, demonstrating the importance of the mRNA binding residues for catalysis. The dimeric form of g5Rp is also important for the decapping activity. We constructed the mutant g5Rp-I84A/I116A/L200A/I206A/F222A, in which the dimerization surface was destroyed; mRNA-decapping assays showed that the decapping activity of this mutant decreased drastically (Fig. 5F).
It will be of profound interest to elucidate the structural basis of the enzymatic activity of g5Rp by solving the structure of g5Rp in complex with mRNA in the future.
The other important finding of this study is that InsP 6 is able to inhibit the decapping activity of g5Rp. InsP 6 is widespread in cells and has diverse biological functions (46)(47)(48)(49). Here, we found that InsP 6 competes with mRNA substrates for binding to g5Rp and inhibits its decapping activity. A previous study reported that g5Rp is a diphosphoinositol polyphosphate phosphohydrolase (DIPP), which preferentially removes the 5-β-phosphate from InsP 7 to produce InsP 6 , with unclear functional significance (25). Later, Parrish and colleagues showed that g5Rp can hydrolyze the mRNA cap when it is tethered to an RNA moiety in vitro (23). Our results show that InsP 6 , the product of g5Rp acting as a DIPP, can directly inhibit the mRNA-decapping activity of g5Rp. To illustrate the structural basis of this inhibition, we solved the structure of g5Rp in complex with InsP 6 , which also represents an enzyme-product complex within the Nudix superfamily. To our surprise, InsP 6 is located on the mRNA binding region rather than in the catalytic center of g5Rp. Furthermore, we superposed the catalytic domain of the g5Rp-InsP 6 complex with structures of human DIPP1 in complex with the substrate InsP 7 (50,51). The superposition shows that the substrate InsP 7 sits in the catalytic center of DIPP1, unlike InsP 6 , which sits on the edge of the catalytic domain of g5Rp (Fig. 14A). Therefore, the structure of the g5Rp-InsP 6 complex may represent an intermediate in the release of the product of the enzymatic reaction (52). We also noticed that InsP 6 decreased the temperature factors (B factors) around its binding sites compared with the same regions of the wild-type g5Rp structure, suggesting that the flexible loop close to the catalytic center is locked in place by InsP 6 (Fig. 14B). InsP 6 itself was refined with a correspondingly high B factor that exceeded the average B factor of the protein in the complex. Considering that the g5Rp-InsP 6 interaction has a dissociation constant (K d ) in the 22.5 mM range, the ligand achieves only a reasonable occupancy of 70% (53). To avoid overenthusiastic interpretation of the ligand density, we tested the InsP 6 binding site by single point mutations. Replacing residues on the InsP 6 binding surface of g5Rp with alanine (Q6A/K8A/K94A/K98A/K133A) reduced both InsP 6 binding and RNA interaction, indicating that disruption of the InsP 6 binding site also abolishes the substrate RNA binding ability of g5Rp (Fig. 13A, C, and D).
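The reported K d and the partial InsP 6 occupancy can be related through the standard single-site binding equation; the short calculation below is only a worked illustration, and the free-ligand concentration used is an assumed value rather than one reported in this study.

```python
# Fractional occupancy for single-site binding: theta = [L] / (Kd + [L]).
# Shown only to illustrate that ~70% occupancy corresponds to a free-ligand
# concentration a few-fold above Kd; the ligand concentration is assumed.
def occupancy(ligand_conc, kd):
    return ligand_conc / (kd + ligand_conc)

kd = 22.5            # dissociation constant (arbitrary units)
ligand = 2.33 * kd   # an assumed free-ligand concentration ~2.3x Kd
print(f"theta = {occupancy(ligand, kd):.2f}")  # ~0.70
```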
Our study raises the possibility that g5Rp hydrolyzes InsP 7 to upregulate the level of InsP 6 , which is a key regulator of g5Rp-mediated mRNA decapping during ASFV infection in vivo (54). Very recently, Sahu and colleagues reported that InsP 7 regulates NUDT3-mediated mRNA decapping and also observed that InsP 6 inhibits mRNA decapping by NUDT3 (54). There are emerging signs that the functions of InsP 6 are associated with mRNA transport and degradation in ASFV-infected cells. Further studies on the function of InsP 6 and its regulatory mechanism within the inositol-based cell signaling family during viral infection are required.
Plasmid construction, protein expression, and purification. The gene encoding ASFV g5Rp (D250R) was synthesized and subcloned into pSMART-1 and pcDNA3.1, respectively. The amino acid sequence of Table 3. The recombinant plasmids were confirmed by sequencing (Sangon Biotech, China) before being introduced into E. coli BL21(DE3) (Invitrogen, USA) or human 293T cells. The bacterial cells were cultured in Luria broth medium at 35°C until the optical density at 600 nm reached 0.6 to 0.8. Protein expression was then induced by the addition of isopropyl-β-D-1-thiogalactopyranoside for 16 h at 16°C. The g5Rp protein was purified by Ni-nitrilotriacetic acid (NTA) (Qiagen, Germany) affinity chromatography, followed by heparin affinity chromatography (GE Healthcare, USA). The peak fractions containing the target protein were pooled, concentrated to 1 mL, and loaded onto a Superdex 75 column (GE Healthcare, USA) for further purification and characterization. Selenomethionine-labeled g5Rp (SeMet-g5Rp) was prepared using a previously described protocol (55). The purity of all proteins was above 95% as judged by SDS-PAGE.
Protein crystallization and optimization. The prepared SeMet-g5Rp was concentrated to 12 mg/mL for the crystallization trials. The crystals were grown using the hanging-drop vapor diffusion method at 16°C in a reservoir solution containing 0.1 M sodium citrate tribasic dihydrate (pH 5.8), 0.54 M magnesium formate dihydrate, and 10% (vol/vol) 1,2-butanediol as an additive reagent. The g5Rp-InsP 6 complexes were prepared by mixing g5Rp with InsP 6 at a stoichiometric ratio of 1:3. Then, using the hanging-drop vapor diffusion method, crystals of the complexes were grown from 1 M imidazole (pH 7.0) at 16°C. All crystals were transferred into solutions containing 20% (vol/vol) glycerol prior to being frozen and stored in liquid nitrogen.
Data collection, processing, and structure determination. The single-wavelength anomalous dispersion (SAD) data were collected using synchrotron radiation at a wavelength of 0.98 Å under cryogenic conditions (100 K) at the BL18U1 beamline, Shanghai Synchrotron Radiation Facility. All diffraction data sets, including those for wild-type g5Rp and the complex with InsP 6 , were indexed, integrated, and scaled using the HKL-2000 package (56). The selenium atoms in the asymmetric unit of SeMet-g5Rp were located and refined, and the SAD phases were calculated and substantially improved through solvent flattening with the PHENIX program (57). A model was built manually into the modified experimental electron density using the model-building tool Coot (58) and further refined in PHENIX. The model geometry was verified using the program MolProbity (59). Molecular replacement was used to solve the structure of the g5Rp-InsP 6 complex, using Phaser in the CCP4 program suite with SeMet-g5Rp as the initial search model (60). Structural figures were drawn using PyMOL (DeLano Scientific). The data collection and refinement statistics are shown in Table 1.
|
v3-fos-license
|
2022-12-18T16:04:39.360Z
|
2022-01-01T00:00:00.000
|
254811678
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "abe0908ecb8c3e8677874e0782d3f27b9bcbe5fc",
"pdf_src": "Sage",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46091",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "33dfa936bed12980a4768a0545027855064ae927",
"year": 2022
}
|
pes2o/s2orc
|
The DynAIRx Project Protocol: Artificial Intelligence for dynamic prescribing optimisation and care integration in multimorbidity
Background Structured Medication Reviews (SMRs) are intended to help deliver the NHS Long Term Plan for medicines optimisation in people living with multiple long-term conditions and polypharmacy. It is challenging to gather the information needed for these reviews due to poor integration of health records across providers and there is little guidance on how to identify those patients most urgently requiring review. Objective To extract information from scattered clinical records on how health and medications change over time, apply interpretable artificial intelligence (AI) approaches to predict risks of poor outcomes and overlay this information on care records to inform SMRs. We will pilot this approach in primary care prescribing audit and feedback systems, and co-design future medicines optimisation decision support systems. Design DynAIRx will target potentially problematic polypharmacy in three key multimorbidity groups, namely, people with (a) mental and physical health problems, (b) four or more long-term conditions taking ten or more drugs and (c) older age and frailty. Structured clinical data will be drawn from integrated care records (general practice, hospital, and social care) covering an ∼11m population supplemented with Natural Language Processing (NLP) of unstructured clinical text. AI systems will be trained to identify patterns of conditions, medications, tests, and clinical contacts preceding adverse events in order to identify individuals who might benefit most from an SMR. Discussion By implementing and evaluating an AI-augmented visualisation of care records in an existing prescribing audit and feedback system we will create a learning system for medicines optimisation, co-designed throughout with end-users and patients.
Introduction
The Artificial Intelligence (AI) for dynamic prescribing optimisation and care integration in multimorbidity (DynAIRx) project addresses problematic polypharmacy in multimorbidity (co-existence of ≥2 long-term conditions). The aim is to improve holistic care in multimorbidity by supporting medicines optimisation, in alignment with the UK National Health Service (NHS) Long Term Plan and the 2021 National Overprescribing Review. 1,2 As a population we are living longer, driven by medical advances improving survival at all ages. 3 Age is the dominant risk factor for the acquisition of long-term conditions. The more conditions a patient has, the more associated medications they are likely to take. Polypharmacy describes the use of multiple regular medications by an individual, most often defined as taking more than five daily. Without medicines optimisation, polypharmacy may worsen the prevalence, outcomes, experiences and costs of multimorbidity. 4 The information needed to coordinate care is hard to assemble and understand, particularly in time-constrained consultations. The effective withdrawal of medications to improve outcomes (deprescribing) is hindered by scattered records impeding the integration of care across providers. Holistic medication reviews have enormous potential to benefit those with multimorbidity, yet there is little support for such reviews. The NHS Long Term Plan 1 seeks to optimise prescribing, including by deprescribing. Recent evidence has identified limitations in deprescribing during an acute hospital admission, and a proactive, primary care-based approach may be preferable. 5 Using AI (machine learning for information extraction, dynamic prediction and visualisation), DynAIRx will bring predictive information and longitudinal care summaries together with guidelines in new visualisations to support medicines optimisation. This combined information will be piloted in prescribing audit and feedback systems that clinicians are already using in research and clinical practice. 6
Rationale
Despite the need for deprescribing support, evidence of how to do it systematically is lacking. Three Cochrane reviews [7][8][9] identified various deprescribing interventions, with barriers to implementation leading to inconsistent effectiveness. Primary-care-embedded development of audit and feedback shows promise for improving prescribing, with success depending on how feedback is delivered. 10 Previously, such systems have been limited by data supply. The roll-out of integrated/shared care records is now providing the data for patient-centred, locality-context-sensitive 'learning systems'. 11 DynAIRx will develop and implement statistically principled AI approaches to systematically identify problematic polypharmacy in major multimorbidity groups. To be effective, AI-augmented feedback to clinicians must be co-produced with clinical stakeholders and reviewed iteratively. Therefore, early engagement with clinicians in the form of a needs analysis will enable: 1. Understanding of the requirements of those involved in SMRs (including patients). 2. Defining the barriers and facilitators to implementation of AI-guided SMRs. 3. Iterative refinement of the proposed prescribing feedback to clinicians.
Aim(s)
The overall aim of DynAIRx is to develop new, easy to use, AI tools that support general practitioners (GPs) and pharmacists to find patients living with multimorbidity (two or more long-term health conditions) who might be offered a better combination of medicines. The project will focus on three groups of people at high risk of rapidly worsening health from multimorbidity: 1. People with mental and physical health problems, in whom the prescribing for mental health improvement can lead to adverse physical health consequences. 2. People with complex multimorbidity in the form of four or more long-term health conditions taking ten or more drugs. 3. Older people with frailty as a subgroup of people with multimorbidity at especially high risk of adverse outcomes.
Objective(s)
The objectives of the DynAIRx project are to: 7. Co-design a prototype tool through iterative review and refinement of feedback systems: participating clinicians who undertake SMRs will take part in "think-aloud" studies of the prototype tool and identify its positive and negative features, allowing iterative improvement of the prototype (co-developed with patient and public representatives). 8. Refine the later prototypes through user-group feedback and, through two workshops, explore further the perceived strengths and weaknesses and thus the implementability of the system.
Methods/Design and Analysis
DynAIRx involves a combination of qualitative stakeholder engagement (DynAIRx Qualitative Phase 1, clinical needs analysis), large-scale health informatics (DynAIRx health data) and co-development/iterative analysis (DynAIRx Qualitative Phase 2) to harness linked data across primary, secondary and social care to create visualisations of patient journeys, risk-prediction estimates and prescribing dashboards to support SMRs. DynAIRx will harness the emerging integrated records mandated for NHS Integrated Care Systems to coordinate services across providers. Through statistically robust approaches, it will predict avoidable multimorbidity and harm resulting from medications.
DynAIRx Qualitative Phase 1 -Needs analysis and requirements engineering
Description of study design
The DynAIRx qualitative studies will explore the perceptions of key stakeholders on how SMRs are currently being undertaken and what the barriers and facilitators are to making them effective and efficient. The research adopts a descriptive and exploratory methodology, and is based on qualitative data from participants regarding their current and retrospective experiences of SMRs (figure 1). This includes semi-structured interviews and focus groups.
The qualitative studies will also explore the opinions of key stakeholders on the prototype prescribing audit & feedback tools that are developed to support SMRs, informed by analysis of patient journeys and AI-assisted integration of care records. This will be undertaken through one-to-one think-aloud studies and mixedparticipant workshops.
1. What are the barriers and facilitators to the uptake and utilisation of an AI-augmented prescribing support system for SMRs from the perspective of primary and secondary care clinicians, pharmacists, patients, and commissioners/managers involved in SMR services? 2. What are the features that would make such a resource acceptable and usable?
DynAIRx Qualitative In-depth Interviews
Semi-structured one-to-one interviews will be undertaken with a broad range of representatives from Primary Care Networks, GPs, pharmacists (primary care and chief), clinical pharmacologists, practice managers and patients to understand their priorities for such reviews and potential barriers/facilitators to implementation. Semi-structured interviews will allow us to elicit participant personal feelings, opinions, and experiences, and help the researchers to gain insights into barriers and facilitators to future uptake of the proposed systems.
DynAIRx Qualitative Focus groups
Semi-structured interviews will be followed by broader focus group discussion (1-2 groups) across the 5 stakeholder groups (4-8 participants per focus group). The total number of participants will depend on the themes that emerge and the requirement for further exploration. Focus group interview guides will be co-developed to address the key research questions, enhanced by themes that emerge from the initial one-to-one interviews.
Prescribers' requirements for support with SMRs will be identified across the key groups. This will include insight into data-driven medication reviews, including what clinicians consider high-risk versus high-volume prescribing. Work will also focus on clinical uncertainty, for example exploring prescriber needs in high-risk situations, such as severe mental illness, where stopping an antipsychotic may not be viable, yet the dose could be adjusted to a safer level or an alternative drug with lower cardiovascular risk could be prescribed. Discussions will explore how medications might best be prioritised for older people living with frailty and people with complex multimorbidity, including using the National Institute for Health and Care Excellence (NICE) Database of Treatment Effects and the Scottish Polypharmacy Guidance. The key stakeholder groups will be engaged to provide feedback on the current experience of structured medication reviews and to undertake iterative review of prototype prescribing tools. Task-based workshops/focus groups may be supported by ongoing, semi-structured qualitative interviews with stakeholders, including Clinical Commissioning Group Leads and Chief Pharmacists, alongside patients and carers across our key groups. All workshops/interviews will be audio-recorded, with participant consent, and transcribed for thematic analysis.
Description of sample selection/data collection
Semi-structured interviews (∼10-20 participants across 5 stakeholder groups) and focus groups (1-2 per stakeholder group involving 4-8 participants) will be undertaken via video conference (or telephone for one-to-one interviews). The groups are deliberately small to engage effectively and allow for open discussion and to obtain the views of a broad range of practitioners nationally to examine the current scope of practice. Participants will contribute their expertise from community, primary and secondary care practice.
Inclusion criteria
Patient and carer representatives:
Exclusion criteria
Patient and carer representatives: · Unable to give informed consent to participate. · History of hearing or speech impairment to a degree that would render normal conversation impossible via video interview, where this is their only option; however, all participants with such impairments who wish to participate will be offered the option of a face-to-face meeting, along with necessary adjustments, to ensure inclusivity. · Unable to communicate in English.
Health care professionals:
· Not involved with prescribing.
Patient and carer participants will be recruited via a variety of networks including: · Outpatient clinics for long-term conditions. · Via social media (Facebook, Twitter), email to the qualitative team. · Via networking at events (conferences, public engagement etc). · Sampling the CARE75+ cohort participants with frailty, at the University of Leeds. Frail participants are defined using either the phenotype model, or as having mild/moderate/severe frailty using the electronic Frailty Index. 12 The research team will be provided with restricted details (e.g. name, telephone number, address) of CARE75+ participants who meet the eligibility criteria by the CARE75+ study team. Only CARE75+ participants who have already given consent to be approached for future research studies, including provision of this restricted data, will be approached. Study information will be mailed to the potential participant by the research team, who will subsequently contact the potential participant to discuss the study and whether they are interested in being involved. · Charities including Age UK and Mind. Working in partnership with the charity, patient information sheets will be sent to charities for distribution through their networks. · Mental health directorate expert patient reference groups and patient liaison group to engage service users.
Interview format
All interviews will be audio-recorded, with participant consent, and transcribed for thematic analysis.
· Individual one-on-one semi-structured interviews will be conducted over telephone or video conferencing with 1-2 members per key stakeholder group (GPs, secondary care, commissioning/management of services, pharmacists, patients/carers). · Demographic (name of surgery, Trust or Clinical Commissioning Group, grade of profession) and professional information will be collected from Health Care Professionals (HCPs) (e.g. hospital registrar or consultant, GP registrar, locum, partner, pharmacist years of experience, years undertaking medication reviews) prior to starting the interview. · Prompt questions as per interview guide (Supplementary Appendix 1).
An interview guide (Supplementary Appendix 1) will be co-created with experts by experience (professionals and PPI) focused around key areas of interest including: · What data do prescribers/practices need to undertake effective Structured Medication Reviews efficiently? · How are Structured Medication Reviews currently being undertaken, by whom, where, and how long do they take? · What kind of digital tools and supports will be most useful? · What do participants consider the top priority target medication challenges relating to key multimorbidity groups (older people with frailty; co-existing physical and mental health problems; complex multimorbidity and potentially problematic polypharmacy)? · What are likely barriers/facilitators to uptake, utilisation and sustained use of AI(-augmented) tools?
Focus group format
Initial semi-structured interviews will be followed by broader focus group discussion across the stakeholder groups depending on the numbers attending each focus group, the themes that emerge and the requirement for further exploration. Focus group interview guides will be further developed from the key questions, enhanced by themes that emerge from the initial one-to-one interviews.
· Demographic information will be collected prior to the focus group from participants. · Minimal information will be collected from HCPs (name of surgery, Trust or Clinical Commissioning Group (CCG), years trained, grade (e.g. hospital registrar or consultant, GP registrar, locum, partner), pharmacist years of experience, years undertaking medication reviews) prior to starting the focus group. · The co-produced topic guide will be followed to structure the focus group (Supplementary Appendix 2). · Digitally recorded.
Description of study design
Machine learning algorithms will be used to bring the predictive information and longitudinal care summaries available in integrated care records together with guidelines in new visualisations to support medicines optimisation. This combined information will be piloted in prescribing audit and feedback systems that GPs are using in research and practice. 6 DynAIRx will develop tools to combine information from electronic health and social care records. De-identified patient data obtained from health records will be combined with clinical guidelines and risk-prediction models to ensure that clinicians and patients have the best information to prioritise and support Structured Medication Reviews.
AIs will be developed that combine information from multiple records and guidelines and calculate risks of hospital admissions and other adverse outcomes for our three multimorbidity groups. To ensure this information is easily understandable, visual summaries of patients' journeys will be developed, showing how health conditions, treatments and risks of future adverse outcomes are changing over time. These visual summaries will be tested in general practices across northern England and improved based on feedback from clinicians and patients (described in DynAIRx Qualitative Phase 2).
Description of development of research proposal/questions
Research questions of DynAIRx Health Data
Description of sample selection/data collection and curation
Core research datasets will be curated and maintained from these integrated general practice, hospital, and social care records, where available. Accredited (ISO27001/NHS DSPT) cloud-based TREs support the software, tools, compute, and governance for research access. These federated data sources will feed a minimum core dataset (MCD) for evaluation and deployment, including coded data from general practices as well as Secondary Uses Service data from hospitals and structured community and mental health datasets, where available. The MCDs will be extended, where available, with information extracted from, and tracked across, clinical narratives using NLP-contextualised language models such as BERT. This builds upon existing healthcare NLP applications and annotated datasets, such as WEB-RADR (extracting events related to adverse drug reactions) 13 and AVERT (mining mental health narratives from clinical letters). 14 The data include over four billion annotations over 12 years in a large mental health and community provider trust, plus inputs from other regions. To extract (de)prescribing events, related drugs and their contexts will be identified across narratives. This involves named entity recognition for detecting drug name or label variations; context extraction, such as treating an adverse effect of another drug; and entity mapping across time and/or sources, including extraction of time references for tracking prescribing journeys. These data can then be linked and validated against the routinely collected and integrated care record data.
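As an illustration of the named-entity-recognition step described above, the sketch below runs a transformer token-classification pipeline over a short clinical sentence; the checkpoint name is a placeholder (a biomedical or clinical NER model would need to be substituted) and this is not the DynAIRx implementation.

```python
# Sketch: transformer-based named entity recognition over clinical free text,
# of the kind that could surface drug mentions for (de)prescribing-event
# extraction. The model name is a placeholder, not the project's model.
from transformers import pipeline

MODEL_NAME = "a-clinical-ner-checkpoint"  # placeholder: substitute a real biomedical NER checkpoint

ner = pipeline("ner", model=MODEL_NAME, aggregation_strategy="simple")

note = ("Metformin continued. Started amitriptyline 10 mg at night; "
        "review if dizziness (possible adverse effect) persists.")

for entity in ner(note):
    # Each detected entity carries the matched text span, a label and a confidence score.
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 2))
```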
A data catalogue will be maintained. A federated and open-source approach will be taken to data analyses, sharing all code via a public GitHub repository.
Interventions and comparisons
The structured data that have been curated and processed will be analysed to discover clusters of multimorbidity and polypharmacy with high apparent prescribing harm in the key multimorbidity groups. Machine learning and statistical methods will be used to develop prediction models for adverse outcomes, and to estimate which patients may benefit most from a structured medication review.
Adverse outcomes may include events such as falls in older people with frailty; strokes in people with severe mental illness, diabetes, and hypertension; and hospitalisation for adverse drug reactions or emergency/unplanned hospitalisations. Patterns indicating adverse outcomes or sentinel events, such as prescribing cascades, will be extracted from the curated data. Patient histories will be modelled as temporal graphs capturing clinical events (diagnoses, prescriptions etc.) in their timeline, with patterns extracted using 3D convolutional neural networks. This will exploit recent advances in video and time-series classification to discover temporal patterns and not just sequences of events (as with recurrent neural networks). The output will be a time-series of clinical feature vectors, which can be used to predict outcomes or to define clusters of typical patient trajectories. Soft, temporal clustering algorithms will be used to track a patient's membership of each cluster over their recorded history. For instance, a patient may move gradually from a low-risk cluster to a cluster with a high risk of hospitalisation. The identified patterns/clusters will be visualised and user feedback (described in DynAIRx Qualitative Phase 2) used to refine the AI (e.g., to find clusters that deviate from NICE guidelines).
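A minimal sketch of the soft clustering idea is given below: per-window clinical feature vectors are assigned probabilistic cluster memberships with a Gaussian mixture model, so a patient's membership can be tracked over consecutive windows. The data are synthetic and the mixture model is a stand-in for, not a description of, the project's method.

```python
# Sketch: soft (probabilistic) cluster membership of a patient's timeline.
# Each row of X is a clinical feature vector for one time window; the
# Gaussian mixture is a simple stand-in for the project's temporal clustering.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic cohort: 200 patient-windows, 5 features (e.g. counts of drugs,
# contacts, abnormal tests) - illustrative only.
X = rng.normal(size=(200, 5))

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

# One patient's consecutive windows: membership probabilities per cluster.
patient_windows = rng.normal(size=(6, 5))
memberships = gmm.predict_proba(patient_windows)   # shape (6, 3); each row sums to 1
for t, probs in enumerate(memberships):
    print(f"window {t}: " + ", ".join(f"cluster {k}: {p:.2f}" for k, p in enumerate(probs)))
```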
The distilled patterns/clusters will be used to generate hypotheses, followed by the development of explainable information. To reduce the risk of presenting spurious associations (e.g., confounded relationships) as causal relationships, an expert panel will be consulted. Where available, causal estimates will be derived from randomised controlled trials or other robust external sources such as Mendelian randomisation studies. When required, causal estimates may be derived from the data in hand using g-methods.
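As a concrete illustration of the g-methods mentioned above, the sketch below applies the g-formula (standardisation) for a single point treatment: an outcome model is fitted and then averaged over predictions with the treatment set to 1 versus 0 for every individual. The variable names and data are synthetic and do not reflect the project's datasets.

```python
# Sketch of the g-formula for a single time-point "treatment" (e.g. a
# deprescribing action): fit an outcome model, then average predictions
# under treatment set to 1 vs 0 for everyone. Data and columns are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
confounder = rng.normal(size=n)                          # e.g. a frailty score
treat = rng.binomial(1, 1 / (1 + np.exp(-confounder)))   # treatment depends on the confounder
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * confounder - 0.4 * treat))))

df = pd.DataFrame({"confounder": confounder, "treat": treat, "outcome": outcome})

model = LogisticRegression().fit(df[["confounder", "treat"]], df["outcome"])

# Standardise: predict risk for everyone under treat=1 and treat=0, then average.
risk_if_treated = model.predict_proba(df.assign(treat=1)[["confounder", "treat"]])[:, 1].mean()
risk_if_untreated = model.predict_proba(df.assign(treat=0)[["confounder", "treat"]])[:, 1].mean()
print(f"standardised risk difference: {risk_if_treated - risk_if_untreated:.3f}")
```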
Visualisation and expert clinical and evidence-based reasoning are key in 1) informing the construction of graphical models to represent causal relationships between variables, and 2) weighing the plausibility of identified putative causal relationships. Where a causal relationship is in doubt, it will be examined within the key stakeholder groups (described in table 1), requesting additional data curation as needed.
In parallel, dynamic clinical prediction models will be developed to identify risks of adverse outcomes and expected multimorbidity trajectories. These can be aggregated to practice level to enable identification of clinician/practice outliers and so better guide supportive interventions. The incorporation of causality then enables the identification of clusters/individuals at high risk, and prioritises those where the identified causal pathways suggest that a structured medication review might benefit the patient(s). The models also form a strong basis for future work to identify anticipated benefits (effect sizes) of potential interventions, such as deprescribing, at an individual patient level. In principle, such tools can be used to support clinicians performing medication reviews (as well as suggesting which patients/clusters may benefit from medication reviews, as proposed here), as risks of multiple outcomes can be evaluated and discussed under different intervention strategies. Particular attention will be paid to the explainability of the AI, focusing on feature importance, rule extraction and consistency in individual risk prediction between AI models with comparable population-level performance. In contrast to 'black box' AI approaches to prediction, the methods utilised here are anchored in causal inference, explicitly handling causality. Causal queries are used to generate predictions under hypothetical interventions, which naturally ensures model explainability. Explainability and temporality are also embedded in the clustering approaches (describing the temporal characteristics of individuals within each cluster) and the visual summaries (visualising patients over time and between clusters). Data sparsity is explicitly represented as uncertainty within directed acyclic graphs, prompting requests for further data or experimentation. Counterfactual causal reasoning will be used to identify and minimise possible biases and unfairness in our models.
Mitigation of bias
Bias due to confounding factors (especially socio-economic and demographic) and data quality will be mitigated via a systematic bias assessment as part of the statistical learning and clustering for multimorbidity prediction. The consistency of individual AI predictions across models with comparable population-level performance will be evaluated, and the effects of hyperparameters explored; models with acceptable hyperparameters can still yield varying individual predictions. 15 This methodology will consider how risk predictions vary between clinical sites (as reported for QRISK, a widely used risk prediction tool 16 ).
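The consistency check described here can be illustrated with two off-the-shelf classifiers trained on the same synthetic data: their population-level AUCs may be similar while their individual risk estimates still diverge. This is a sketch, not the project's evaluation pipeline.

```python
# Sketch: two models with comparable population-level discrimination (AUC)
# can still disagree on individual risk estimates. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

m1 = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
m2 = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

p1 = m1.predict_proba(X_te)[:, 1]
p2 = m2.predict_proba(X_te)[:, 1]

print(f"AUC model 1: {roc_auc_score(y_te, p1):.3f}, AUC model 2: {roc_auc_score(y_te, p2):.3f}")
# Individual-level (in)consistency: spread of per-patient risk differences.
print(f"mean |risk difference| per patient: {np.mean(np.abs(p1 - p2)):.3f}")
```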
COMBINED LONGITUDINAL DATA VISUALISATION FOR MEDICATION REVIEWS
Creating visual summaries has four stages. First, implementing functionality to extract and aggregate prescribing/disease events at cohort/patient and longitudinal/cross-sectional/overall granularities using curated data. This will provide a stable application programming interface (API) to connect care record systems to DynAIRx prescriber dashboards, in order to detail the 'chronicles of events' identified by the key stakeholder groups in DynAIRx qualitative.
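A minimal sketch of the extraction/aggregation stage is shown below, using pandas to roll prescribing events up to patient-month and cohort-month summaries of the kind a dashboard API might serve; the column names and events are assumptions for illustration only.

```python
# Sketch: aggregating curated prescribing events at patient and cohort
# granularities, the kind of summary a dashboard API might serve.
# Column names and events are assumptions for illustration.
import pandas as pd

events = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 2, 3],
    "event_date": pd.to_datetime(
        ["2023-01-05", "2023-02-10", "2023-01-20", "2023-01-25", "2023-03-02", "2023-02-14"]),
    "drug": ["amlodipine", "sertraline", "metformin", "ramipril", "ramipril", "furosemide"],
})
events["month"] = events["event_date"].dt.to_period("M")

# Patient-level longitudinal view: distinct drugs per patient per month.
per_patient = events.groupby(["patient_id", "month"])["drug"].nunique().rename("distinct_drugs")

# Cohort-level cross-sectional view: prescriptions per drug per month.
per_cohort = events.groupby(["month", "drug"]).size().rename("n_prescriptions")

print(per_patient)
print(per_cohort)
```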
Second, alternative approaches for presenting interactive visual summaries of prescribing and disease events will be explored. Standard single-screen dashboards will provide a baseline but are unlikely to satisfy GPs' and pharmacists' requirements. Dashboard designs will be explored from two approaches better suited to multifactor, temporally complex data: 1) dashboard networks, where dozens of types of events of interest are summarised in miniature dashboards connected in a network to portray temporal changes between patients/cohorts 17 ; and 2) the QualDash engine, already deployed in cardiology and paediatric intensive care (five hospitals). 18 Stage 3 will compare the pros and cons of the alternative approaches with GP/pharmacist end-users, selecting and then implementing the best approach (detailed in DynAIRx Qualitative Phase 2). This stage will incorporate data generated by statistical learning and clustering to provide visual summaries and drill-down of patient histories in the context of patient clusters, trajectories, drug-drug interactions and clinical guidelines. It will also provide the customisable functionality needed to present the patient event summaries in the context of feature spaces from the statistical learning output, which will be invaluable for: (a) identifying features that distinguish one step from the next in patients' journeys, and clusters of patients from each other, and (b) gaining clinical input about the explainability of the models.
The final stage, evaluation, includes the development of a user guide and quick start tutorial, and hands-on evaluation with GPs/pharmacists performing Structured Medication Review scenarios (covered by DynAIRx qualitative protocol Phase 2).
PRESCRIBER FEEDBACK AND LEARNING SYSTEM -Data analysis
Translation of research findings into daily clinical practice is a major challenge. There is considerable need for clinical decisions to be based on the best available evidence, but often this evidence is not available (no trials conducted) and guidelines are generic and usually relate to single conditions. Evidence also needs to be balanced against clinician and patient/carer choice and preference, affordability according to local formularies, and congruence about goals and management plans between professionals and patients/carers to enhance shared agreement about treatment regimens.
The Learning Healthcare System has been proposed to better integrate research and clinical practice. 19 This approach involves iterative phases including data analytics (data to knowledge), feedback to clinicians (knowledge to performance) and implementation of quality improvement activities by the clinicians (performance to data). The cycle of the Learning Healthcare System then starts again by evaluating the effectiveness of these quality improvement activities. The analytics phase includes a detailed data analysis of the opportunities and challenges in current clinical practice and the local site (including analysis of the effectiveness of current activities). The results of the analysis enable identification of care pathways and conditions ripe for focused targeting for improvement. The second phase involves review of these results by the clinicians, who decide which have sufficient credibility to generate recommendations for change, ideally customised to their own specific circumstances. The third phase involves implementation of these recommendations by clinicians. Cluster trials have reported that data feedback can be effective in optimising prescribing. 20 The effectiveness of data feedback has been found to depend on its content and how the feedback is provided, including visualisations. 10 Feedback on simplistic targets may lack effectiveness (an example is the Quality and Outcomes Framework, which resulted in only small improvements despite major investment). 21 Engagement with clinical stakeholders in the development of feedback prototypes, and iterative reviews, are important in improving feedback effectiveness. The Learning Healthcare System approach can also tailor feedback to individual clinical sites, prioritising the most frequent challenges, as well as tailoring feedback to care practices with the best outcomes as determined by, for example, statistical learning. Furthermore, the technologies that are most successful in optimising professional practice are those that explicitly use behaviour change techniques in their implementation, including peer-to-peer comparisons. 22 Analyses in large research datasets (including >5 million patients aged 65+) are ongoing. AI approaches found that medication patterns were strongly associated with ADR-related hospital admission (odds ratios [OR] of 7) and emergency admission (ORs of 3). Analyses of multiple drug-drug interactions with antibiotics (as listed in the British National Formulary) are providing information on relative as well as excess absolute risks. Analyses of medication reviews in polypharmacy patients found limited changes in prescribing in before-after analyses, highlighting the need for better evidence and support. Techniques such as random forest and gradient boosting methods will be used in this project to identify challenges and higher rates of adverse outcomes in medicine combinations used by our study populations. This will be followed by practice and peer comparisons 23 to identify possible areas of improvement, which could be used in the feedback to practices.
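As an illustration of the gradient-boosting step, the sketch below ranks drug and drug-combination indicator features by their contribution to predicting a simulated adverse outcome; the data, feature names and effect sizes are synthetic and carry no clinical meaning.

```python
# Sketch: ranking drug/drug-combination indicator features by their
# contribution to predicting an adverse outcome (e.g. ADR-related
# admission). Data and feature names are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 3000
X = pd.DataFrame({
    "opioid": rng.binomial(1, 0.2, n),
    "benzodiazepine": rng.binomial(1, 0.15, n),
    "anticholinergic": rng.binomial(1, 0.25, n),
    "diuretic": rng.binomial(1, 0.3, n),
})
X["opioid_plus_benzo"] = X["opioid"] * X["benzodiazepine"]   # combination feature

# Simulated outcome with an excess risk for the combination feature.
logit = -3 + 1.2 * X["opioid_plus_benzo"] + 0.6 * X["anticholinergic"]
y = rng.binomial(1, (1 / (1 + np.exp(-logit))).to_numpy())

gb = GradientBoostingClassifier(random_state=0).fit(X, y)
ranking = pd.Series(gb.feature_importances_, index=X.columns).sort_values(ascending=False)
print(ranking)
```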
PRESCRIBER FEEDBACK AND LEARNING SYSTEM -Dashboard co-development
A recent BRIT2 clinical pharmacist (CP) workshop examined analytics-based input to support Structured Medication Reviews (SMRs) for polypharmacy patients. CPs were interested in analytics which indicate the clinical risk of BNF drug-drug interactions, identify problematic prescribing patterns in the community (e.g. unexpected psychopharmacological effects), and target medication reviews toward high-risk patients. CPs felt that they were currently overloaded with information and pop-ups, as existing systems did not fit with the way they work. They were very clear that any tool would need to be well targeted, user-friendly and have good explainability, which is very important as CPs must rationalise medication changes with other clinicians and patients and cannot 'just trust the data'.
Prescribing dashboards to support SMRs will be co-developed with key stakeholder groups and deployed in an existing prescribing audit and feedback system used by GPs. 24,25 Participating clinicians, who undertake Structured Medication Reviews in Liverpool, Manchester, Leeds and Bradford, will receive novel reports to support reflective practice concerning their patients with notable multimorbidity and polypharmacy issues in our key areas of study. The reports will extend the BRIT2 platform. 11 BRIT2 includes general practices in northern England. Technical specifications have been agreed for embedding/enhancing BRIT2 in the Graphnet Integrated Care Record System as part of the CIPHA expansion programme, which currently covers North West England and parts of the Midlands and South England. Data will be analysed in the TREs and the results fed back to practices via practice-specific dashboards.
Patterns of conditions, medications, tests, and clinical contacts antecedent to the multimorbidity events uncovered, together with the novel visualisations created, will be incorporated into prescriber dashboards. DynAIRx qualitative engagement will help shape this content into forms that clinicians and practices find useful. Variability in multimorbidity-related prescribing across practices/prescribers will be studied as part of this. This will build on BRIT2, which is currently analysing large cohorts of elderly patients with national primary care data extracts (Clinical Practice Research Datalink, Aurum). These results will be used for benchmarking under existing ethics approvals. Each practice population of multimorbid patients will be matched by propensity for adverse outcomes, morbidity cluster and data quality; this matching helps show where a practice deviates from its peers. As part of DynAIRx qualitative engagement, clinicians will be able to comment on dashboards, providing feedback to researchers on the acceptance of the results. The applicability of social-norm, practice/prescriber-level feedback to medicines optimisation in multimorbidity will be studied with key stakeholders, with particular consideration of the scale achievable at low cost through AI.
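The propensity-based matching described above can be sketched as follows: a propensity score for belonging to the index practice is estimated from patient covariates, and each index-practice patient is matched to the nearest peer-practice patient on that score. The covariates and data are synthetic placeholders, not the project's matching specification.

```python
# Sketch: propensity-score style matching so that a practice's multimorbid
# patients are compared against similar patients from peer practices.
# Covariates and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 2000
covariates = rng.normal(size=(n, 4))          # e.g. age, condition count, frailty, deprivation
in_practice = rng.binomial(1, 0.3, n)         # 1 = index practice, 0 = peer practices

# Propensity of belonging to the index practice given the covariates.
ps = LogisticRegression().fit(covariates, in_practice).predict_proba(covariates)[:, 1]

idx_practice = np.where(in_practice == 1)[0]
idx_peers = np.where(in_practice == 0)[0]

# 1-nearest-neighbour matching on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[idx_peers].reshape(-1, 1))
_, matches = nn.kneighbors(ps[idx_practice].reshape(-1, 1))
matched_peers = idx_peers[matches.ravel()]
print(f"{len(idx_practice)} practice patients matched to {len(set(matched_peers))} distinct peers")
```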
Analyses for each iteration of feedback will be prioritised by users (DynAIRx qualitative). A particular focus will be quantification of the absolute risks of interactions and, where possible, presence of effect modifiers (such as level of polypharmacy).
At least two cycles of updating practice-tailored dashboards will be applied (DynAIRx qualitative). The effects of the feedback will be studied within statistical learning and clustering for multimorbidity prediction using interrupted time series models and recurrent neural nets.
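A minimal segmented-regression sketch of the interrupted time series analysis is given below, estimating the immediate level change and the slope change in a monthly prescribing indicator after feedback is introduced; the series is simulated for illustration only.

```python
# Sketch: segmented regression for an interrupted time series evaluating
# the effect of dashboard feedback on a monthly prescribing indicator.
# The series below is simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
months = np.arange(36)
intervention = (months >= 18).astype(int)            # feedback introduced at month 18
time_since = np.where(intervention == 1, months - 18, 0)
rate = 50 + 0.2 * months - 4 * intervention - 0.5 * time_since + rng.normal(0, 1.5, 36)

df = pd.DataFrame({"rate": rate, "time": months,
                   "intervention": intervention, "time_since": time_since})

its = smf.ols("rate ~ time + intervention + time_since", data=df).fit()
# 'intervention' estimates the immediate level change; 'time_since' the slope change.
print(its.params[["intervention", "time_since"]])
```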
Research questions of DynAIRx Qualitative Phase 2 -Prototype iterative analysis 1. What are the strengths and weaknesses of the AI-augmented prototype dashboard and prescriber reports? 2. What improvements could be made to ensure the AI-augmented process achieves maximal clinical utility?
Think-aloud study format
Two rounds of one-to-one 'think-aloud' studies on prototype systems will be undertaken with a small group of clinicians to understand perceived strengths and weaknesses of the prototypes and to iteratively refine them. Participants will be asked to comment on components of the systems, with prompts and questions to elaborate responses. Participants will be encouraged to suggest improvements and explain what they like/dislike, which aspects are (not) intuitive, and how they envisage using such systems in real life. Findings will be shared immediately with dashboard developers to refine prototypes ahead of the next think-aloud study. Approximately 10 think-aloud studies are planned across a variety of potential users. They will be recorded and transcribed, and the transcripts thematically analysed. Data relating to implementation will be conceptualised through a Normalization Process Theory (NPT) lens. Comments will be noted as either positive, where the user liked or identified with what they saw, or negative, where the user disliked or disagreed with what they saw, or where the user suggested improved content, presentation, or interaction.
· Each think-aloud study will consist of one participant, and will take approximately 2 hours. · Approximately 4-6 studies will occur per iteration of the resource. · Participants will be given a brief task sheet for them to work through utilising aspects of the online resource/ dashboard, taking approximately 2 hours. · Participants will be asked to talk through what they are doing as they are completing the task sheet. · Following this, participants will be asked to provide any general thoughts or feedback from their interaction. · Think-aloud studies will be audiotaped and transcribed to ensure no feedback is missed.
Task-based workshops format: Stakeholders will also critique each major new version of the system in two workshop events, one for each development iteration. Emerging findings will be shared with the health data analysts, ensuring that statistical learning and visualisation are informed by clinician, commissioner and patient insights. Following the development of the final DynAIRx prototypes, we shall present them to the wider group for feedback to enable further discussion of perceived strengths and weaknesses and to address future implementability. We will audio record and transcribe the sessions and thematically analyse transcriptions as described earlier.
Think-aloud studies and workshops will be organised face to face, ideally in the practitioner's own place of work where possible and practical, to obtain the most real-world usage data. However, these could also be undertaken remotely if felt appropriate, for example if pandemic restrictions were to be re-introduced or at the preference of participants. Both primary and secondary care practice will be covered.
Table 2 (extract), outputs mapped to objectives and research questions. Output: Novel visualisations of patient journeys enhancing medication reviews. Objective 7: Co-design a prototype tool through iterative review and refinement of feedback systems; participating clinicians who undertake SMRs will take part in "think-aloud" studies of the prototype tool and identify its positive and negative features, allowing iterative improvement of the prototype (co-developed with patient and public representatives). Research question: Can a learning system be created that incorporates the needs of prescribers alongside the key high-risk trajectory indicators? Output: Integration of outputs 1-4 to produce a clinically useful learning healthcare system, co-developed by the end users, supporting the delivery of SMRs by GPs and pharmacists and accessible to patients/carers. Objective 8: Refine the later prototypes through user-group feedback and, through two workshops, explore further the perceived strengths and weaknesses and thus the implementability of the system.
Organising the qualitative data
The recordings of the interviews will be transcribed and anonymised (all names and other identifiable information will be removed). The digital recordings will be held securely at the University of Liverpool or University of Glasgow, with secure file transfers to/from the transcription company. Once the transcripts are checked against the audio files, those audio files will be deleted. The socio-demographic information, including information on HCP roles, of the interview, focus group/workshop and think-aloud study participants will be entered into a spreadsheet and then exported into NVivo software to create case nodes. Tables will be constructed summarising the socio-demographic and role data. The case nodes will facilitate the comparability of themes within and between groups and across the different study contexts.
Thematic analysis of data and normalisation process theory (NPT)
All semi-structured interviews, focus groups, think-aloud studies and task groups will be audio-recorded and transcribed verbatim to form the data for analysis. Transcripts will be read and re-read, and a thematic analysis will be undertaken using Braun and Clarke's six-step framework, which combines elements of deduction and induction, whereby some themes are expected to be found in the data based on the literature or the theoretical framework (in the case of think-alouds and task-based workshops reviewing prototypes, that will be Normalization Process Theory) and others emerge during analysis. 26,27 The six steps are: familiarisation, coding, generating themes, reviewing themes, defining and naming themes, and writing up. 26,28 This approach essentially involves an exploration of the data to identify patterns, themes and/or theoretical constructs. This involves detailed reading of the transcripts and identifying all key issues, concepts and themes, drawing on a priori issues while being alert to new ideas raised by the participants. This work will help us understand stakeholder priorities for SMRs and potential barriers/facilitators to implementation. Once themes are finalised, they will be mapped onto the constructs of NPT: coherence (sense making); cognitive participation (engagement work); collective action (operationalisation work); and reflexive monitoring (appraisal), where appropriate. The data will not be forced to fit the constructs of NPT. NPT will instead be used as a theoretical lens with which to interrogate the findings. 27,29 NPT has been widely used to consider how individuals and groups understand, integrate, and sustain digital or new ways of working (e.g. SMRs) in everyday practice, and has enhanced understanding of implementation processes. 30 Data analysis will be carried out by the DynAIRx clinical researchers and the post-doctoral research assistants (PDRAs). Coding clinics will be undertaken to refine the themes identified and ensure consistency of coding across the team. A common analytical framework will be developed to ensure consistency in analysis across the various study locations. The analytical framework will be flexible and iterative, and continuously refined as the analysis evolves. NVivo software will be employed to organise the data and help manage the data analysis process. All the DynAIRx clinical researchers will be trained in how to use the software. Data analysis will be undertaken in parallel with data collection. This will help the researchers determine whether saturation has been reached on any of the research questions and to identify gaps for further data collection.
Any quotations used in any reports will be anonymised.
Ethics approval and dissemination:
The study has been approved by the Newcastle North Tyneside Research Ethics Committee (REC reference: 22/NE/0088). No safety concerns were identified. Study findings will be presented at public meetings and national and international conferences, and published in peer-reviewed journals.
Discussion and Conclusion
DynAIRx will provide patient benefit by: a) targeting medication reviews/optimisation to those most at risk of harm from problematic polypharmacy and most likely to benefit from an SMR; b) reducing the risks of drug-related harms; c) freeing up clinician time for patient interaction through automated data collection for structured medication reviews; and d) providing a clear, visual summary of disease trajectories to inform clinician/patient discussion. Key outputs from DynAIRx (mapped to objectives and research questions, table 2) include: 1. Evaluation of key challenges and opportunities around medicines optimisation in general practices. 2. A pipeline of structured and unstructured care data into multimorbidity (AI) research. 3. An AI framework for identifying those most at risk of problematic polypharmacy and for discovering disease trajectories that should trigger high-priority SMRs. 4. Novel visualisations of patient journeys enhancing medication reviews. 5. Integration of outputs 1-4 to produce a clinically useful learning healthcare system, co-developed by the end users, supporting the delivery of SMRs by GPs and pharmacists and accessible to patients/carers.
The 2021 NHS Overprescribing Review sets out a plan to reduce overprescribing and improve patient safety. The report identifies a key evidence gap, recommending new research to support safe and appropriate prescribing, and specifying research to ensure that digital systems and records make structured medication reviews a simple task. 2 DynAIRx directly addresses this important evidence gap.
In the longer term (DynAIRx 2) we will build multimorbidity decision support on DynAIRx visualisations and outcome predictions, for use in consultations.
|
v3-fos-license
|
2021-11-17T06:17:58.293Z
|
2021-11-15T00:00:00.000
|
244132563
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-021-01618-3.pdf",
"pdf_hash": "db149bbd58c5f748149ae53ef37d32391fe7be09",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46093",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "8bf5ca9b93fc737cca425090b48b2ea4924e74e3",
"year": 2021
}
|
pes2o/s2orc
|
CyFi-MAP: an interactive pathway-based resource for cystic fibrosis
Cystic fibrosis (CF) is a life-threatening autosomal recessive disease caused by more than 2100 mutations in the CF transmembrane conductance regulator (CFTR) gene, generating variability in disease severity among individuals with CF sharing the same CFTR genotype. Systems biology can assist in the collection and visualization of CF data to extract additional biological significance and find novel therapeutic targets. Here, we present the CyFi-MAP—a disease map repository of CFTR molecular mechanisms and pathways involved in CF. Specifically, we represented the wild-type (wt-CFTR) and the F508del associated processes (F508del-CFTR) in separate submaps, with pathways related to protein biosynthesis, endoplasmic reticulum retention, export, activation/inactivation of channel function, and recycling/degradation after endocytosis. CyFi-MAP is an open-access resource with specific, curated and continuously updated information on CFTR-related pathways available online at https://cysticfibrosismap.github.io/. This tool was developed as a reference CF pathway data repository to be continuously updated and used worldwide in CF research.
Results
Concept and features. CyFi-MAP offers a resource that accurately (i.e., adequately confirmed in the literature) and graphically illustrates CFTR molecular pathways in a manner that is easy for the scientific community to read. Given that CF is caused exclusively by mutations in the CFTR gene that alter multiple cellular functions, the CyFi-MAP information was organized according to the CFTR life cycle, from biogenesis to degradation, and two submaps were developed: (i) one representing wt-CFTR and (ii) one representing F508del-CFTR.
The main differences between these submaps are highlighted by representing some of the key processes side-by-side (available on the website in the Map section), in order to facilitate submap comparison at the molecular mechanism level. Additionally, a scheme is also available depicting CFTR traffic pathways inside the cell, in which the major physical alterations (e.g., traffic impairment and mucus clogging) in wild-type versus mutant airway epithelial cells are shown (Fig. S1). An overview of the key CFTR processes/modules included in CyFi-MAP is given to inform and guide the user to the map content (Fig. 1).
Pathway inclusion strategy.
We included information on CFTR interactors that was confirmed in a minimum of two published references. We focused on those that studied airway epithelial cells and used methodologies that allowed the detection of physical interaction between components (such as immunoprecipitation or nuclear magnetic resonance spectroscopy (NMR)). We also captured information on the confidence and accuracy of each interaction in the map. Specifically, this step resulted in 296 research papers providing physical evidence of interactions between proteins, out of more than 1000 papers reviewed from PubMed (for more details on map curation see "Methods" section).
CyFi-MAP presents features that allow information to be retrieved visually, such as (1) different types of interactions between the entities (i.e., activation, trafficking and inhibition; more details in Fig. S2 in Supplementary material), (2) the glycosylation (i.e., form B or C) and folding status (N-glycosylation) of CFTR, (3) the proteins that bind uniquely to the F508del-CFTR protein, (4) identification of each life cycle step included (additional information in Supplementary material), and (5) cell organelle-specific interactions represented through images, with the entities (i.e., proteins, complexes, ions, and others) adequately located in the biological compartments, differentiating between organelle lumen and the cytosol. CyFi-MAP navigation. Map availability. The source of these schemes is available at the map online repository (https://cysticfibrosismap.github.io/) and the CyFi-MAP can be accessed online and explored interactively via the Molecular Interaction NetwoRks VisuAlization (MINERVA) platform 37.
Online and interactive navigation. In the form of an interactive diagram via the MINERVA platform (Fig. 2), CyFi-MAP provides the capacity to easily follow CFTR interactions from its folding to its degradation, with every intermediate step described (see Supplementary material). The MINERVA platform allows easy navigation and exploration of the CF-related molecular pathways available in CyFi-MAP. The user can zoom in and find various details about proteins of interest (such as the location of their interactions with other biological elements inside a cell, and information on the protein name/alternative names and identifiers in several databases such as Ensembl, human gene nomenclature and UniProt; the annotation information is linked to the biological resources via direct URLs). The user can also filter and extract information regarding the type of interaction through the edge colour (see Fig. S2 in Supplementary Material).
Content of the CyFi-MAP.
The progression of CF disease is driven by the deregulation of multiple cell processes due to the loss of CFTR function. Hence, CyFi-MAP focuses largely on the CFTR proteostasis network, as it encompasses alterations occurring in all those processes. Throughout the map it is possible to observe numbered labels that represent each step of the CFTR life cycle (a brief description of these is given in the Supplementary material), although the order indicated is not deterministic, serving only as an indication of all the processes in which CFTR is involved. The content included in CyFi-MAP can be divided into four main aspects: (i) CFTR synthesis and production, (ii) maintenance at the PM in the functional state, (iii) traffic and (iv) degradation.
CFTR synthesis and production. The folding of CFTR is highly regulated in the ER before the protein is allowed to proceed along the secretory pathway to the PM. This information is represented in the Folding module of both submaps; hence, it is possible to visualize it side-by-side via the GitHub link. Beginning with wt-CFTR, folding starts with the nascent CFTR polypeptide chain being translocated to the ER membrane (step [1]), after which N-glycosylation occurs (step [2]), a post-translational modification during protein synthesis in the ER that is critical for PM expression and function (Fig. 3A) [38][39][40]. In CyFi-MAP, the different glycosylation states of wt-CFTR are represented and can be followed. N-glycosylation starts with the addition of oligosaccharide residues (glucose3-mannose9-N-acetylglucosamine2) to CFTR (Fig. 3A, step [2]). At this step, several chaperones and co-chaperones bind to wt-CFTR to assist with folding. The folding process includes at least four ER quality control (ERQC) checkpoints that are involved in assessing correct CFTR folding 41. As the process consists of the subsequent trimming of glucose residues, the glycans are initially identified with three glucoses (G3), the first checkpoint, before binding to calnexin (CANX) with only one (G1), the second checkpoint (Fig. 3A, steps [3][4][5][6]) 42.
F508del-CFTR is mostly targeted to degradation at the first checkpoint (Fig. 3B, step [2]); hence the other ERQC checkpoints are only represented in the wt-CFTR submap. There, it is possible to follow the second checkpoint, where the protein enters the CANX cycle for additional rounds of refolding; the third checkpoint, where specific signals, when exposed, lead to ER retention; and the fourth checkpoint, with the recognition of an export motif to leave the ER (Fig. 3A, steps [7] and [8]) 43. During these checkpoints, wt-CFTR can be recognized as misfolded and moved to degradation, or it achieves an incompletely glycosylated state known as form B, which allows it to proceed to the Golgi. A more detailed description of these checkpoints is given in the degradation chapter. Although insertion in the ER membrane and the ERQC checkpoints that lead to degradation are identified as different steps, there is evidence that co-translational folding and degradation occur.
After CFTR folding has been successfully achieved, the protein is ready to proceed along the secretory pathway to the Golgi apparatus, where its oligosaccharide structure is further modified by multiple glycosylation events, generating its mature form, known as form C, which is transported to the PM 44. Because F508del-CFTR is highly degraded at the ER, from the moment it is depicted outside this organelle it acquires a dark red colour, representing rescued protein, rF508del-CFTR.
Maintenance at the plasma membrane in the functional state. After delivery to the PM, wt-CFTR is regulated at multiple levels, namely: (1) PM stabilization, at specific PM sites; (2) activation/channel shut-down, where phosphorylation/dephosphorylation cycles activate/inactivate the channel; (3) regulation of ion channels and transporters, concerning the regulation of and by other PM proteins; and (4) endocytosis, in either clathrin-coated vesicles (CCVs) or caveolae.
In contrast, rF508del-CFTR is characterized by (1) PM stabilization; (2) channel shut-down; and (3) endocytosis, with consequent degradation. The reduced number of modules at the PM is representative of the instability, loss of function and accelerated endocytosis that characterize this mutated protein.
In the PM stabilization module, Postsynaptic density 95, Disks large, Zonula occludens-1 (PDZ) domain-containing proteins are the main players, responsible for anchoring CFTR to the PM 45. Mechanisms such as cytoskeletal activation are represented, which enable PM anchoring and tethering of wt-CFTR to the PM (Fig. 4A, step [11]). The rF508del-CFTR stabilization module includes a lower number of interactions with PDZ proteins and the acquisition of new interactions when compared with the wt-CFTR module (Fig. 4B, step [5]). PDZK1 (CAP70) is illustrated at the PM in the wt-CFTR submap as able to potentiate CFTR chloride channel activity by clustering two CFTR molecules 46 (Fig. 4A, step [12]).
The Channel shut-down module, present in both submaps, includes the dephosphorylation of CFTR and the proteins involved in triggering it, such as protein phosphatases, receptors, phospholipases, and others 52,53.
The Transporters and ion channel regulation module is only present in the wt-CFTR submap; besides proteins that bind wt-CFTR directly, it also allows visualizing the role of PDZ proteins as intermediates between them, maintaining the proteins in close proximity and leading to changes in their respective functions (Fig. 5B/C, steps [16], [17] and [18]). Proteins such as Solute Carrier Family 26 Member 3 (SLC26A3, also known as DRA), Solute Carrier Family 26 Member 6 (SLC26A6, also known as PAT1), Anoctamin 1 (ANO1, also known as TMEM16A) and the epithelial sodium channel (ENaC) were included in this module [54][55][56].
Traffic. In the secretory pathway, traffic is essential for all processes, from folding/processing and function to degradation. CFTR traffic starts with Coat Protein complex II (COPII) vesicles, responsible for its transport between the ER and the Golgi, from where wt- and rF508del-CFTR reach the PM (wt-CFTR submap, step [9]) 43. rF508del-CFTR traffic between the ER and the Golgi is depicted through the COPII vesicles module with the same mechanisms as wt-CFTR (rF508del-CFTR submap, step [3]) 33,34,57. Although this information is supported only by high-throughput research articles, with only one paper for each interaction with the mutated protein, it was one of the exceptional cases selected for CyFi-MAP, given that it is explained by the functional context provided for wt-CFTR. A list of the exceptions is in the additional information section of the Supplementary material.
CFTR is endocytosed and arrives at the sorting endosome, from which wt-CFTR moves back to the PM, either directly (Recycling module; Fig. 7A, steps [22], [23]) or through the Golgi (Golgi module; Fig. 7A, step [24]), or is sent to degradation (Degradation module; Fig. 7A, step [26]). In these processes, several proteins of the Rab family are represented, as they are essential for CFTR traffic 58. In the case of rF508del-CFTR, at the sorting endosome it is sent to degradation (Fig. 7B, step [10]).
Degradation. During folding and processing, several quality control proteins target misfolded CFTR to degradation, therefore, in each of the organelles-ER and Golgi-there is a module called Degradation.
Discussion
CF is the most common life-threatening autosomal recessive disease in the Caucasian population 63. Caused by an absent/dysfunctional CFTR channel that leads to an impaired balance of ions across the membrane, CF affects several organs, especially the lung 20. CFTR appears to be involved not only in the transport of ions but also in the regulation of other channels, working in a dynamic network that modulates its activity 30. More than 30 years of scientific discoveries, with new milestones achieved every year, have shaped our understanding of the intracellular interactions after CFTR loss of function that control the progression of this disease. The knowledge obtained through this research has enabled diagnosis and the discovery of therapies that have increased life expectancy 20. Yet no curative treatment has yet been developed for CF [64][65][66].
The increasing amount of data available in public databases has led to the improvement of tools to filter and extract the relevant knowledge required for the discovery of therapeutic targets. With this need, disease maps were developed as multilayer, readable networks that allow representing increasingly complex and extensive information in an easily updatable manner 11. The visual representation of the key CFTR processes in a cell can act as a powerful tool to understand and share knowledge. In this work, we built the CyFi-MAP, a manually curated disease map of the available CFTR-related information, as a resource that permits a deeper understanding and interpretation of the disease mechanisms. CyFi-MAP development is motivated by the absence of resources differentiating between wt-CFTR and its variants, and has the objective of concentrating in a single free-access resource the major CF hallmarks, representing data scattered across different platforms/research papers in the form of pathways and interactions. This tool was designed to be useful for CF scientists as a reference source to analyse previous knowledge and to assist with a whole-organism-level perspective as well. F508del-CFTR is trapped in the first steps of the ERQC and is mostly targeted for degradation as soon as the polypeptide is synthesized, indicated in CyFi-MAP by the absence of the other ERQC processes 42,43,68,69. Furthermore, the location and function of CFTR at the PM are affected in the case of rF508del-CFTR, visible by the lack of interactions and by the absence of key processes when compared with wt-CFTR in CyFi-MAP. rF508del-CFTR in CyFi-MAP lacks proteins involved in the activation of the channel and in the regulation of other channels and transporters at the PM. This is a consequence of its instability, where proteins that interact uniquely with rF508del-CFTR, such as Calpain 1 (CAPN1) and Calpain 2 (CAPN2), play a role in its destabilization by impairing its binding to PDZ anchor proteins. Besides that, ubiquitination and subsequent targeting to endocytosis and degradation involve additional proteins, such as Ring Finger And FYVE Like Domain Containing E3 Ubiquitin Protein Ligase (RFFL), TSG101, HGS, CHMP4B and others, that prevent recycling to the PM 62,70,71. Some proteins (e.g. SLC9A3R1, PDZK1 and others) involved in several roles along the CFTR life cycle (including stabilization, anchoring and function at the PM) are present on the map more than once. PDZ proteins are also essential elements that act as intermediates connecting other channels with CFTR. Additionally, PDZK1 is found binding to two CFTR proteins, maintaining the CFTR proteins functioning in close association.
Figure legend (endocytosis modules): In image A, two types of internalization of wt-CFTR are shown, through clathrin-coated vesicles [19] and through caveolae [20]. As represented, in the first, several proteins are involved, from the clathrin triskelion complex to the cytoskeletal F-actin-MYO6 complex and other proteins assisting the process. In rF508del-CFTR, only the Caveolin 1 (CAV1)/Caveolin 2 (CAV2) complex was found in this protein's endocytosis [9], with the assistance of Flotillin 2 (FLOT2).
Figure legend (sorting endosome modules): In image (A), it is possible to see wt-CFTR arriving at the sorting endosome [21] and the possible pathways it can follow, either recycling [22], [23] and [24], or degradation [25].
CyFi-MAP included data
Besides conventional trafficking, unconventional secretion pathways have been described for membrane proteins such as CFTR; these usually involve bypassing the Golgi, a route that was identified by blocking the conventional Golgi-mediated exocytic pathway 72,73. Pathways such as this were not included in this version of CyFi-MAP, as they are unlikely to represent the cell in its physiological state. Notwithstanding, they can be helpful to provide a broader view of the possible interactions, and they may appear in a future version of CyFi-MAP with less stringent inclusion criteria.
CyFi-MAP expansion and future work.
Given that new data are generated continuously and that some CF aspects are yet to be included, CyFi-MAP is constantly being developed with support from the community and funding agencies. CyFi-MAP benefits from major features of the MINERVA platform via its online distribution: comments and suggestions from users regarding changes in the map content (addition, removal, update) can be analysed directly by curators and potentially addressed in the map after further refinement. In this way, users can promote active discussions and knowledge exchange to build an increasingly accurate and continuously updated CF disease map. Of specific interest as a future direction is to include in CyFi-MAP the specific steps in CFTR processes targeted by compounds, thereby depicting the specific target/mechanism on which each of them acts. Additionally, we anticipate including a diagram focusing on the process description layer of the CF molecular processes (e.g., the N-glycosylation of CFTR in the ER) in order to provide a deeper understanding of such interactions. Furthermore, the creation of submaps representing other CFTR mutations would be relevant to study the molecular mechanisms affected.
Altogether, CyFi-MAP represents the first stable milestone towards a robust and reliable CF knowledge base integrating information on key pathways involved in the molecular pathophysiological mechanisms of CF, based on curated literature and domain-expert approval. CyFi-MAP offers an integrative, system-level view of CFTR knowledge. It may support the interpretation of CF progression and may facilitate the development of novel therapeutic targets and strategies. In fact, a better understanding of CFTR mechanisms can not only assist in the design of improved therapies for CF but also help identify factors at work in other lung diseases, such as COPD or disseminated bronchiectasis. Next steps can also involve the integration of the knowledge acquired using CyFi-MAP as a basis for mathematical models, to generate new data through network inference, modelling and the creation of new hypotheses to be tested.
Methods
CyFi-MAP construction. The development of CyFi-MAP follows the disease map development protocol, using primarily Kondratova et al. and Mazein et al. 11,74. Specifically, three main steps entail the construction of CyFi-MAP (Fig. 8): I. The first step consisted of searching relevant CFTR-related information, selecting a total of 297 research papers and more than 1000 reviewed articles. A complete list of the publications consulted for CyFi-MAP development is available at https://cysticfibrosismap.github.io/. The CF disease hallmarks were obtained from peer-reviewed research papers, domain experts' suggestions and advice, previously documented and validated pathways, and curated up-to-date databases (including Reactome 6, KEGG 4 and MetaCore from Clarivate Analytics; see the Content curation subsection for details). This task also involved the analysis of the collected pool of data, followed by the curation of the most relevant CFTR-related knowledge (Fig. 9). II. The second step comprised the actual diagram building, assuring the correct level of detail and the most appropriate and aesthetically pleasing output, to guarantee that the resulting map is as readable and user-friendly as possible. The representation of biological mechanisms follows the Systems Biology Graphical Notation (SBGN) and was built in the yEd Graph Editor using the SBGN Palette (https://yed.yworks.com/support/manual/layout_sbgn.html). The yEd Graph Editor is a freely available graph editor providing functionality to manage large-scale graphs, including: (i) features that considerably facilitate the diagram drawing process, such as a friendly user interface, drawing guides, zooming on the diagram and easy application of specific aesthetics (e.g. the same colour for nodes/edges, curved connectors) to individual or multiple elements; and (ii) algorithms for automatic layout (details on using yEd to automatically lay out SBGN-related diagrams are given in, e.g., 75). The yEd Editor also incorporates the SBGN Palette, which permits the direct representation of SBGN-specific elements in the yEd inner GraphML format. After the CyFi-MAP was developed in yEd, we converted it into the SBGN standard format using the ySBGN converter (a bi-directional converter between the SBGN and yEd GraphML formats, available at https://github.com/sbgn/ySBGN). Further, the CyFi-MAP SBGN diagram was loaded into the MINERVA online platform. The organelle images (developed manually and expert-revised) aim to facilitate visualisation of the mechanisms at the top level; thus, special attention was given to the localization of the interactions in each organelle. III. The third step in the construction of the CyFi-MAP was map exploration via the MINERVA platform 37. In a first approach, the construction focused on the creation of small organelle-specific maps, illustrating CFTR-relevant processes at those locations. The maps included CFTR interactions covering its intracellular and intraorganellar traffic. Later, these were improved upon by the addition of other, more widespread CFTR processes and pathways, which allowed a more effective integration of the existing data. The resulting cell-wide map is expected to evolve continuously with user input and consistent expert curation. The map is available through the web platform MINERVA, which provides interactive and exploratory features.
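As a rough illustration of how a curated interaction table could be turned into a graph file that yEd can open, the following is a minimal sketch using networkx; the table rows and attribute names are assumptions, and the actual CyFi-MAP diagrams were drawn manually with the yEd SBGN Palette and converted with ySBGN as described above.

```python
# Hypothetical sketch: export a small curated interaction table to GraphML,
# the format yEd reads natively. Rows and attributes are illustrative only;
# SBGN styling would still be applied by hand inside yEd.
import networkx as nx

curated_interactions = [
    # (source, target, interaction type, compartment) -- illustrative examples
    ("CFTR", "CANX",     "activation", "ER"),
    ("CFTR", "SLC9A3R1", "activation", "PM"),
    ("RFFL", "CFTR",     "inhibition", "endosome"),
]

g = nx.DiGraph()
for src, dst, kind, compartment in curated_interactions:
    g.add_edge(src, dst, interaction=kind, compartment=compartment)

nx.write_graphml(g, "cyfi_sketch.graphml")
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```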
The current version of CyFi-MAP has been manually curated by CF domain experts. To ensure continuous updating of this resource, both regular expert verification of new information and regular user input are deemed essential to achieve an accurate representation of current CF data. Constant feedback from cell biologists, biochemists, physiologists and bioinformaticians contributed to a comprehensive representation of the various layers of information.
Inside CyFi-MAP, each process comprises pathways that include proteins (as individual entities or as complexes) and different types of chemical species (ions and lipids) interacting among themselves. Nodes represent entities (i.e., proteins, ions or complexes) and edge colours correspond to processes (i.e., activation, inhibition, synthesis, or in some cases movement of entities inside the cell).
CyFi-MAP currently comprises 618 nodes and 420 edges, with 426 nodes and 307 edges in wt-CFTR and 216 nodes and 117 edges in F508del-CFTR. In total, the entities presented in both submaps are classified into 193 proteins, 25 complexes, 5 ions and 5 simple molecules in wt-CFTR, and 98 proteins, 12 complexes, 1 ion and 2 simple molecules in F508del-CFTR.
Content curation.
The data used in CyFi-MAP were obtained by manual search, curation and validation with domain experts from three main sources.
Pathway databases. The curation process started by reviewing previous attempts to summarize CF information in signalling networks. Pathways from MetaCore (Clarivate Analytics), Reactome and KEGG were reviewed in order to assess the pathway availability for CF disease 6,76. Major CF-related pathways were retrieved from these databases and confirmed in the literature for their accuracy.
Literature. The main hallmarks of CF were extensively searched in PubMed. As CFTR is the protein that plays the central role in the map, direct interactions with it were very carefully selected, following strict criteria. Considering that the lung is the most affected organ, the focus was on studies of human airway epithelial cells. The massive number of available CFTR articles and studies was analysed and the particularly relevant ones were selected; results obtained from assays with other relevant cell types (such as intestinal epithelial cells) were also included when validated by review papers, meaning they are accepted by the scientific research community. Priority in the selection process was given to the molecular mechanisms involving protein folding and traffic, as these are the main processes impaired in F508del-CFTR. Most studies regarding this mutant's behaviour at the PM resulted from experiments on rF508del-CFTR, rescued either chemically or by temperature. The inclusion of information from proteomic studies was dependent on the functional context provided by already documented interactions. Although each direct interaction with CFTR in CyFi-MAP was confirmed in a minimum of two papers, some exceptions apply, such as information retrieved from recent articles (2018 onwards) and protein interactions that are part of well-characterized pathways involved in CF and referred to in more than one peer-reviewed research paper. An example of the latter is the interaction between STX3 and CFTR: only one research paper reporting a physical interaction was found, although it is mentioned in peer-reviewed articles 77, and it was hence included as an interaction accepted by the scientific community.
Databases. Among the web resources used for data gathering, the most significant were GeneCards 78, STRING 79, BioGRID 80, UniProt 81 and HGNC (HUGO Gene Nomenclature Committee) 82, which were used to confirm the correct names of proteins/genes, their known functions and their interactors. For each protein, the name was checked in HGNC. UniProt and GeneCards were used to search for alternative names for the same protein so as to find the correct HGNC designation 78,81,82. Often, although a protein complex is known to interact/participate in a CFTR process, the specific proteins that constitute that complex are not described in the original literature report. Accordingly, proteins reported in the literature to interact with CFTR as part of larger complexes were searched for in databases to find the protein components of the complex. During this step, name disambiguation must be considered in order to find all data related to a given protein and also not to duplicate proteins. For instance, when looking for syntaxin 5, names such as Syn5 and STX5 are also available for the same protein. The same happens for the Golgi Associated PDZ And Coiled-Coil Motif Containing protein, known as GOPC, although other names such as CAL and FIG are referred to in research papers and were used to retrieve as much information as possible.
Figure 9. CyFi-MAP curation process. The curation process comprises 5 levels. The 1st level filters the data to studies with proteins that interact directly with CFTR, meaning that only experimental techniques that confirm a direct interaction were considered, such as immunoprecipitation, surface plasmon resonance (SPR), and others. The 2nd level relates to the type of cell culture used in these experiments, focusing on human airway epithelial cells, although other cell types were included when described in review publications. At both levels, a study that does not meet the criteria is rejected. The 3rd level consists of finding the location of the interaction inside the cell (e.g. ER, Golgi, cytosol, PM), followed by the type of interaction (binding or inhibition). The 4th level confers confidence on the interaction, consisting of the search for publications that support the information. The 5th level confirms information related to the protein after it has been selected (e.g. does it belong to a complex? Which pathway does it belong to?). The protein is then manually added in the yEd Graph Editor used to build CyFi-MAP, with the name following the HGNC nomenclature.
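A minimal sketch of the name-disambiguation step described above, using a small hand-made synonym dictionary as a stand-in for the HGNC/UniProt/GeneCards lookups; the dictionary entries are illustrative only.

```python
# Hypothetical sketch: map protein aliases found in papers to a single HGNC
# symbol so the same protein is not added to the map twice. The dictionary is
# a tiny illustrative stand-in for HGNC/UniProt/GeneCards lookups.
ALIAS_TO_HGNC = {
    "syntaxin 5": "STX5", "syn5": "STX5", "stx5": "STX5",
    "cal": "GOPC", "fig": "GOPC", "gopc": "GOPC",
    "cap70": "PDZK1", "pdzk1": "PDZK1",
}

def to_hgnc(name: str) -> str:
    """Return the HGNC symbol for a reported protein name, if known."""
    return ALIAS_TO_HGNC.get(name.strip().lower(), name.strip().upper())

reported = ["Syn5", "CAL", "CAP70", "syntaxin 5"]
print(sorted({to_hgnc(n) for n in reported}))  # ['GOPC', 'PDZK1', 'STX5']
```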
Diagram building. CyFi-MAP was built with the yEd Graph Editor (https://www.yworks.com/) using the SBGN Palette, and the data were represented based on the SBGN standard 12. This notation provides a knowledge representation language used to illustrate molecular pathways and protein interactions and is the standard notation for disease maps 11. It offers three languages that provide different types of knowledge illustration, allowing the level of detail highlighted on the map to be adapted: Activity Flow, to depict interactions with a process direction; Process Description, for detailed specific mechanisms; and Entity Relationship, which describes mechanisms without a sequential process 12.
CyFi-MAP was implemented following the SBGN Activity Flow language, in order to provide a compact, sequential and easy-to-read format for signalling pathways. This language is useful to represent the flow of information in biological sequences/pathways in a way that information can still be captured even when the underlying mechanisms of influence are unknown 12.
Each subcellular organelle (ER, Golgi, endosome, etc.) was drawn manually and added to the map as a background image for graphical representation of the different subcellular compartments. Additionally, each interaction carries its own information. Depending on the selected edge, different types of interactions can be found in CyFi-MAP, namely (see Fig. S1 in the Supplementary material for more details): (1) activation, representing a normal binding; (2) synthesis, when an altered product is released; (3) trafficking, representing movement inside the map; and (4) inhibition, when the interaction inhibits a function.
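As a small illustration of how these four edge categories can be encoded for filtering or styling, the following sketch maps interaction types to display properties; the colour values are assumptions, not the palette actually used in CyFi-MAP.

```python
# Hypothetical sketch: encode the four CyFi-MAP edge categories. The colour
# codes are illustrative placeholders, not the palette actually used.
EDGE_STYLE = {
    "activation":  {"colour": "#2e7d32", "meaning": "normal binding"},
    "synthesis":   {"colour": "#1565c0", "meaning": "an altered product is released"},
    "trafficking": {"colour": "#6a1b9a", "meaning": "movement inside the map"},
    "inhibition":  {"colour": "#c62828", "meaning": "the interaction inhibits a function"},
}

def style_for(edge_type: str) -> dict:
    return EDGE_STYLE.get(edge_type.lower(), {"colour": "#757575", "meaning": "unclassified"})

print(style_for("trafficking")["meaning"])  # movement inside the map
```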
Map exploration via the MINERVA platform. The CyFi-MAP diagrams are available on the MINERVA platform, accessible through GitHub (https://cysticfibrosismap.github.io/). The project description and key processes (shown side-by-side and represented through images to allow comparison between the wt-CFTR and F508del-CFTR submaps of CyFi-MAP) are given on the website. Starting at the cell level, it is possible to identify the main differences between the submaps (Fig. 1). This view is relevant for comparing the cells in the presence of the two proteins, since wt-CFTR is transported across the secretory pathway to the PM and endocytosed to be either degraded or recycled back to the PM, whereas most F508del-CFTR is retained in the ER, from where it is sent for degradation. This impairment leads to the so-called 'CF pathogenesis cascade', which does not occur for wt-CFTR. In the cell-level view it is possible to observe these features, allowing the extraction of relevant knowledge.
Additionally, wt-CFTR and F508del-CFTR submaps were divided into modules, each representing key processes of its life cycle in order to guide the navigation through the map. To compare information in both submaps, images placing the modules folding, stabilization and sorting side-by-side are available. Mutation-specific proteins are highlighted in a different colour to emphasize differences between wild-type and mutated phenotypes.
The interactive web platform MINERVA provides access to an interactive CyFi-MAP in which its molecular networks can be navigated and explored. This tool provides automated content annotation, direct feedback to content curators and an SBGN-compliant format 83. Navigation in CyFi-MAP is similar to navigation in Google Maps: through MINERVA it is possible to search for elements, which are highlighted by markers, and to retrieve additional information on each element in the panel on the left side, which presents several identifier names using HGNC and UniProt as sources.
The zoom feature allows a high-level view of the intracellular organelles and a close view inside each one, providing easier access to the complex and extensive information the map contains. Every CFTR interaction represented in CyFi-MAP is validated by PubMed references. Users can help curate the data by commenting, provided they also supply the respective reference.
The user can contribute to CyFi-MAP by adding comments with questions, corrections or additions to the map. These will be visible to other users and developers. To add a comment to CyFi-MAP during navigation, right-click on the specific location and choose to add a comment. It is possible to link the comment to a specific entity, such as a protein or reaction, or to remain 'general', which links the comment to the location the user chooses. The remaining fields allow the user to fill in a name and email in order to facilitate communication with the developers and to clarify any questions that may emerge. Last, there is a box where the comment text can be added (Fig. S2). Any supporting information provided will help incorporate the changes into the map. After a comment is sent, it cannot be corrected and it will be publicly visible on the map. Details on adding user comments in the underlying MINERVA platform are given at https://minerva.pages.uni.lu/doc/user_manual/v15.0/index/#add-comment.
CyFi-MAP allows exploring the map with or without the comments provided by users, by clicking the Comments checkbox in the map toolbar. These comments allow map users to benefit from the domain knowledge and expertise of researchers and to collect valuable information for the research community. All suggestions will be analysed by curators and CF domain experts in agreement with the pre-established curation process to maintain CyFi-MAP quality and accuracy.
|
v3-fos-license
|
2018-12-18T14:24:21.817Z
|
2010-11-18T00:00:00.000
|
56346592
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://os.copernicus.org/articles/6/949/2010/os-6-949-2010.pdf",
"pdf_hash": "b5d6cfb282d8f056aff84152dd2b4ebc80293bff",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46094",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "7f79e1311d394c3f975b70f1fa42914544d7db30",
"year": 2010
}
|
pes2o/s2orc
|
Thermophysical property anomalies of Baltic seawater
While the thermodynamic properties of Standard Seawater are very well known, the quantitative effect of sea salt composition anomalies on various properties is difficult to estimate since comprehensive lab experiments with the various natural waters are scarce. Coastal and estuarine waters exhibit significant anomalies which also influence to an unknown amount the routine salinity calculation from conductivity measurements. Recent numerical models of multi-component aqueous electrolytes permit the simulation of physical chemical properties of seawater with variable solute composition. In this paper, the FREZCHEM model is used to derive a Gibbs function for Baltic seawater, and the LSEA_DELS model to provide estimates for the conductivity anomaly relative to Standard Seawater. From additional information such as direct density measurements or empirical salinity anomaly parameterisation, the quantitative deviations of properties between Baltic and Standard Seawater are calculated as functions of salinity and temperature. While several quantities show anomalies that are comparable with their measurement uncertainties and do not demand special improvement, others exhibit more significant deviations from Standard Seawater properties. In particular, density and sound speed turn out to be significantly sensitive to the presence of anomalous solute. Suitable general correction methods are suggested to be applied to Baltic Sea samples with known Practical Salinity and, optionally, directly determined density.
Correspondence to: R. Feistel (rainer.feistel@io-warnemuende.de)
Introduction
From Knudsen's "Normalwasser VI" (Knudsen, 1903) to the current IAPSO 1 service, Standard Seawater (SSW) collected from the North Atlantic and processed into sealed bottles has served for the calibration of oceanographic measuring devices for more than a century. This water has also been used to characterise the properties of seawater (Millero et al., 2008). However, the chemical composition of seawater is not exactly constant. Regional deviations of seawater composition and properties were occasionally investigated, in particular in the 1970s (Rohde, 1966; Cox et al., 1967; Kremling, 1969, 1970, 1972; Connors and Kester, 1974; Brewer and Bradshaw, 1975; Millero et al., 1978; Poisson et al., 1981; Millero, 2000), but were generally considered of minor relevance and ignored by previous international oceanographic standards (Forch et al., 1902; Jacobsen and Knudsen, 1940; Lewis, 1981; Millero, 2010). However, the effects of these compositional variations are measurable, and are easily the largest single factor currently limiting the accuracy of empirical formulas for the thermodynamic properties of seawater. It is therefore desirable to investigate the effects of these regional deviations, and to determine how these deviations can be incorporated into routine procedures for obtaining numerical estimates of different seawater properties (Lewis, 1981).
The new TEOS-10 2 formulation of seawater properties (Feistel, 2008; IAPWS, 2008; IOC et al., 2010) supports the analysis of anomalous seawater properties in a first approximation, even though the methods and knowledge available for the description of the related effects are still immature. An important step in this direction was the definition of the Reference Composition (RC) as a standard composition model for sea salt (Millero et al., 2008). The RC can be used to define a Reference Salinity, which represents the actual mass fraction of solute in seawater of Reference Composition. It also defines a baseline relative to which anomalies can be properly quantified in detail. The RC is defined in the form of exact molar fractions, x_a^RC > 0, for 15 major sea salt constituents, a. Deviations of the molar fractions, x_a ≠ x_a^RC, from the RC found in samples of natural or artificial seawater are regarded as composition anomalies. A second step towards an analysis procedure for anomalous seawater has been to define a parameter, the Absolute Salinity, which provides the best estimate of the density of a particular seawater sample whose composition differs from the Reference Composition when used as a numerical input to the TEOS-10 Gibbs function (Wright et al., 2010b). Under this definition, the Absolute Salinity represents the mass fraction of solute in a seawater of Reference Composition with the same density as that of the sample, and can also be called the Density Salinity. It may therefore differ from the actual mass fraction of solute in the sample, which is termed the Solution Absolute Salinity.
In the past, the thermodynamic properties of freshwater and estuarine systems have been found to be approximately described by a heuristic, referred to as "Millero's Rule" here, which states that these properties depend primarily on the mass of solute, and only secondarily on the composition of the solute (Millero, 1975; Chen and Millero, 1984). If this is true for density, then the Density Salinity is a good approximation for the Solution Absolute Salinity, even in the presence of composition anomalies. However, recent analysis (Pawlowicz et al., 2010) suggests that this approximation might have a much narrower range of validity than was previously believed.
The Baltic Sea is an obvious place to study the effects of composition anomalies, since the existence of composition anomalies in Baltic seawater has been known since the formulation of Knudsen's equation of state (Knudsen, 1901; Forch et al., 1902) in the form of its salinity intercept at zero Chlorinity. The details of these anomalies were determined by chemical analysis beginning in the 1960s (Rohde, 1965; Kremling, 1969, 1970, 1972; Feistel et al., 2010a), and some empirical evidence has been gathered on the effects on density (Kremling, 1971; Millero and Kremling, 1976).
The electrical conductivity of the anomalous solute in Baltic seawater is not negligible and has led in the past to various mutually inconsistent empirical relations between Practical Salinity and Chlorinity (Kwiecinski, 1965; Kremling, 1969, 1970, 1972) and to an experimental study of whether Practical Salinity is conservative within its measurement uncertainty (Feistel and Weinreben, 2008). Here, conservative means that the salinity value remains the same when the temperature or pressure of the sample is changing. However, there is little theoretical knowledge of the reasons for the magnitude of the resulting density and conductivity anomalies, and very little is known at all about the quantitative effect of anomalous solutes on the sound speed, the heat capacities, the freezing point, or many other thermodynamic properties (Feistel, 1998).
One drawback of using the Baltic Sea as a test region is that the relative composition of the water is likely not constant with position or depth. The composition variations derive from the inflow of many rivers, which themselves have a wide range of compositions, and these are not well mixed within the Baltic Sea. In addition, these riverine additions are not constant in time and are involved in complex biogeochemical processes during the water residence time of 20-30 years (Feistel et al., 2008b; Reissmann et al., 2009); significant variations apparently occur on at least decadal time scales (Feistel et al., 2010a). Acknowledging this uncertainty, we shall use a highly simplified model of the composition anomaly that represents only the effects arising from the addition of calcium and bicarbonate ions, which dominate the observed anomalies.
In parallel with the development of TEOS-10, numerical models that can be used to investigate the thermodynamic and transport properties of seawaters from a theoretical basis have been developed and tested (Feistel and Marion, 2007; Pawlowicz, 2010). Known as FREZCHEM (Marion and Kargel, 2008) and LSEA_DELS (Pawlowicz, 2009), respectively, these models have been used to extend the range of validity of the thermodynamic Gibbs function to salinities larger and smaller than have been studied experimentally (IAPWS, 2007; Feistel, 2010), and to investigate the effects of composition anomalies resulting from biogeochemical processes on the conductivity and density of seawater (Pawlowicz et al., 2010). In this paper we combine these numerical approaches to study the properties of Baltic Sea water. We create a correction to the TEOS-10 Gibbs function that can be used to determine all the thermodynamic properties of Baltic Sea water, and a correction to the PSS-78 Practical Salinity Scale that can be used to estimate the conductivity of this water. These analytical models are used to study whether the Density Salinity (i.e. the Absolute Salinity as defined by TEOS-10) is in fact a good estimate of the Solution Absolute Salinity (the actual mass fraction of solute), and whether or not the Density Salinity can be used in conjunction with the Gibbs function for SSW to determine other thermodynamic parameters.
The composition anomaly of the Baltic Sea, Fig. 1, is dominated by riverine calcium excess (Rohde, 1965; Millero and Kremling, 1976; Feistel et al., 2010a). The dissolved positive Ca++ ions are charge-balanced mainly by dissolved carbon dioxide, CO2, e.g., in the form of two negative bicarbonate HCO3- ions. Baltic carbonate concentrations depend in a complex way on exchange with the atmosphere, seasonal solubility, biological activity as well as various chemical reactions with the sediment under occasionally anoxic conditions (Thomas and Schneider, 1999; Nausch et al., 2008; Omstedt et al., 2009; Schneider et al., 2010). Additions of solute can cause changes in the equilibrium chemistry (e.g., in pH), and hence can lead to particles of, say, HCO3- being converted into particles of CO3^2- by solute-solvent reactions. Such reactions convert H2O molecules from being part of the solvent to being part of the solute, or vice versa, such as in the case of Eq. (1.1). A full numerical simulation must model these changes as well, and this requires additional assumptions.
Fig. 1: The Baltic Sea is a semi-enclosed estuary with a volume of about 20 000 km^3 and an annual freshwater surplus of about 500 km^3 a^-1; direct precipitation excess accounts for only 10% of the latter value (Feistel et al., 2008b). Baltic seawater (BSW) is a mixture of ocean water (OW) from the North Atlantic with river water (RW) discharged from the large surrounding drainage area. Regionally and temporally, the mixing ratio and the RW solute are highly variable. Collected BSW samples consist of Standard Seawater (SSW) with Reference Composition (RC) plus a small amount of anomalous freshwater solute (FW), which we approximate here to be calcium bicarbonate, Ca(HCO3)2. In dissolved form, depending on ambient temperature and pH, Ca(HCO3)2 is decomposed into the various compounds of the aqueous carbonate system with mutual equilibrium ratios (Cockell, 2008).
In FREZCHEM an "open system" approach is used. Lime (CaCO3) is added, and then the chemical composition is allowed to evolve to an equilibrium state under the restriction that the partial pressure of carbon dioxide (pCO2) and the total alkalinity (TA) are fixed. This is a reasonable approach for laboratory studies in which waters at 25 °C are stirred in contact with air after the addition of a salt, or for wind-mixed river plumes in equilibrium with the atmosphere. In the additions modelled here, a substantial inflow of CO2 gas occurs and increases the mass of anomalous solute, so that the final composition is approximately modelled as an addition of Ca++ and 2 HCO3-, i.e. a reaction of the form (Cockell, 2008):
CaCO3 + CO2 + H2O -> Ca++ + 2 HCO3-.    (1.1)
In LSEA_DELS a "closed system" approach is used. In this case a salt is added, and the chemical composition is allowed to evolve to an equilibrium state under the restriction that the total dissolved inorganic carbon (DIC) is fixed. This is a reasonable approach in situations where a TA and DIC anomaly are known. In the Baltic, these anomalies in TA and DIC are almost equal (Feistel et al., 2010), which indicates that the composition change is approximately modelled as an increase in Ca(HCO3)2. This again is consistent with a reaction of the form (1.1).
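To put a number on how the CO2 inflow increases the mass of anomalous solute in reaction (1.1), the following short worked sketch uses standard molar masses; it is only an illustrative calculation and is not part of either model.

```python
# Worked arithmetic for reaction (1.1): CaCO3 + CO2 + H2O -> Ca++ + 2 HCO3-.
# Standard molar masses in g/mol, rounded.
M_Ca, M_C, M_O, M_H = 40.08, 12.01, 16.00, 1.008

M_CaCO3 = M_Ca + M_C + 3 * M_O        # lime added, about 100.1 g/mol
M_HCO3  = M_H + M_C + 3 * M_O         # bicarbonate, about 61.0 g/mol
M_solute_anomaly = M_Ca + 2 * M_HCO3  # Ca++ + 2 HCO3-, about 162 g/mol

# Per mole of dissolved lime, CO2 uptake raises the anomalous solute mass by:
print(f"added lime: {M_CaCO3:.2f} g/mol")
print(f"anomalous solute: {M_solute_anomaly:.2f} g/mol "
      f"({M_solute_anomaly / M_CaCO3:.2f} x the mass of CaCO3 added)")
```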
Although the different assumptions in the two models are potentially a source of discrepancy between the results of our investigation into thermodynamic properties, which requires FREZCHEM, and the investigation into conductivity properties, which requires LSEA_DELS, there is little difference between the final compositions obtained using the two approaches in this particular case. From another numerical model, referred to as LIMBETA (Pawlowicz et al., 2010), an equilibrium model consistent with LSEA_DELS, density is computed for comparison with FREZCHEM in order to quantify the effect of the different boundary conditions. The difference in the predicted density anomalies for a given Ca anomaly is less than 6 g m^-3, as discussed in Sect. 6.
The FREZCHEM model results are used here to develop a Gibbs function for Baltic seawater in the form of a small correction to TEOS-10. A Gibbs function is a thermodynamic potential in terms of temperature, pressure and particle numbers and is therefore consistent with "closed system" conditions. The proper thermodynamic potential for FREZCHEM is a function which takes chemical potentials rather than particle numbers as independent variables, such as the Landau potential, Ω = -pV, where p and V are pressure and volume (Landau and Lifschitz, 1987; Goodstein, 1975). The Landau potential is related to the Gibbs potential by a Legendre transform (Alberty, 2001; Feistel et al., 2010c). The chemical potential of water in seawater expressed in terms of the Gibbs function is an example of such a Legendre transform. Since the differences between the open and the closed models are small, we refrain from the relatively complicated conversion procedure between Gibbs and Landau potentials in our generalization of the TEOS-10 Gibbs function with respect to an additional salinity variable. The gain expected from this significantly more demanding model would very likely be minor and at this stage does not warrant the additional effort.
Thermodynamic potentials describe unique equilibrium states at given conditions, e.g., in terms of the numbers of atoms of the elements present in the system. These atoms may or may not form mutual bound states, and chemical reactions may occur between those compounds, between the solutes or with the solvent, without affecting the validity of the thermodynamic potential expressed in terms of the system's elementary composition. This very convenient property is evident from the representation of thermodynamic potentials in statistical mechanics, such as the canonical or the grand canonical ensemble. Formally, the atom numbers can also be replaced by suitable fixed stoichiometric combinations, i.e. by numbers of certain molecules as independent variables. Hence, the concentrations of Ca++ and HCO3- ions are sufficient to correctly formulate the Gibbs function for Baltic seawater, regardless of any chemical reactions that in reality occur in the marine carbonate system, and which are modelled correspondingly by FREZCHEM and LIMBETA to determine the particular equilibrium states.
The paper is organised as follows. In Sect. 2, several required composition variables and basic thermodynamic terms are introduced. In Sect. 3, a formal expression for the Gibbs function of Baltic seawater is derived. This expression is used in Sect. 4 to obtain a formulation for the Baltic Sea Gibbs function through an empirical correlation of a specified functional form against results estimated using the FREZCHEM model. This Gibbs function depends on two salinities, the Absolute Salinity of the SSW part and a correction proportional to the anomalous calcium excess. In Sect. 5, selected property anomalies are computed from the Gibbs function for Baltic seawater and compared with a density-salinity approach, taking into account the experimental uncertainty. In Sect. 6, as functions of the two salinity variables, correlation formulas for the conductivity, Practical Salinity and Reference Salinity of Baltic seawater are derived from results based on the LSEA_DELS model. Combining the previous results, Sect. 7 discusses the errors implied by computing seawater properties directly from Practical Salinity readings, and suggests general correction algorithms for error reduction.
Composition variables
Baltic seawater, BSW, is a mixture of ocean water, OW, from the Atlantic plus a riverine freshwater contribution, RW, which may contain a small amount of salt (Fig. 1). The composition of OW is very close to the RC, i.e., to the composition of IAPSO Standard Seawater (SSW). RW contains various salts, with the composition varying strongly in time depending on the different river sources (Perttilä, 2009). On average, the molar ratio of calcium to chloride for RW is significantly higher than for the RC. When RW and OW are mixed to form BSW, the two different origins of the chloride fraction can no longer be distinguished, but a measurable calcium excess remains compared with the concentrations seen in SSW of the same Chlorinity, and this represents the primary composition anomaly associated with RW inputs to the Baltic. Thus, samples collected from the Baltic Sea can reasonably be regarded as a parent solution of pure-water-diluted Standard Seawater, SSW, with Reference Composition, RC, plus a small amount of anomalous freshwater solute, FW, which originates from river discharge and contains mainly the calcium fraction of RW in excess of the expected value based on the Ca/Cl ratio of the RC. Note that the SSW contribution includes pure water plus RC solute from both OW and RW, whereas FW refers only to the anomalous solute derived from riverine inputs.
The SSW and FW fractions of BSW are usually separated by the definition that FW does not contain any halides, i.e., that the Chlorinity of BSW determines the SSW fraction, independent of whether or not some of the river water entering the Baltic carries a relevant halide load. Because the RW component does in fact contain a small fraction of halides, the use of Chlorinity to estimate the SSW fraction will always result in this component including a small contribution from RW of all species in the RC. However, because the halide concentrations in OW are so large, the relative change in their concentration due to RW solute is very small, as is the corresponding error in the concentrations of all species in the RC, and thus can be neglected. Anomalies of BSW, i.e., of the composition of the FW fraction, in chemical species other than calcium and carbonates are neglected in our models. They are less relevant and were also found to vary significantly from author to author and between the analysed samples (Feistel et al., 2010a).
We emphasize that the models considered in this paper are formulated in terms of two independent salinity variables representing the SSW and FW fractions of BSW. In contrast, it is common practice to assume that the FW composition equals that of RW (Millero and Kremling, 1976; Feistel et al., 2010a), which is consistent with the fact that the composition anomaly of BSW increases with decreasing brackish salinity. When results from our models are discussed or compared with observations, we will make use of such empirical salinity-anomaly relations between SSW and FW to conveniently display the typical anomalous properties as functions of a single variable that is routinely observed, the brackish salinity. In particular, the SSW and FW variables of the models will be approximately linked to the OW and RW concentrations, Eq. (2.16). However, it should be noted that the thermophysical equations derived from our models do not rely on any empirical and climatologically varying relation between SSW and FW; they depend separately on the two concentration variables.
In the FREZCHEM and LSEA_DELS models, the FW composition is simplified to consist only of the carbonate equilibrium components that evolve from the dissolution of Ca(HCO3)2 in pure water, neglecting any other solutes such as sulfate or magnesium. The Gibbs function derived from FREZCHEM takes only the mass fraction of Ca(HCO3)2 as the FW input variable, regardless of the chemical equilibrium composition details after its dissolution in water.
To describe the thermodynamic properties of a given BSW sample, we first introduce a number of terms and variables.
A set of independent primary variables (considered as known) is required to describe the composition of the solutions corresponding to a particular water sample: the number of water molecules from OW, N_0^OW, and from the local freshwater input RW, N_0^RW, and the numbers of particles, N_a^OW and N_a^RW, of the related solute species, a. The molar masses of the solvent and of the solute species are denoted by A_0 and A_a, respectively. The number of particles per mole is Avogadro's number, N_A.
When conservative mixing and a neutral precipitation-evaporation balance are assumed, the numbers of water and solute particles in BSW are given by the sums of the OW and RW contributions (Eq. 2.2). Here, the total solute particle numbers of the SSW and the FW fraction, N_S^SSW and N_S^FW, respectively, are chosen so that N_a^SSW = x_a^RC N_S^SSW > 0 for all species of the RC, but N_a^FW = x_a^FW N_S^FW = 0 for most of the RC species in the freshwater fraction. The molar fractions of the Reference Composition, x_a^RC > 0, are defined by Millero et al. (2008), and the molar fractions of the anomalous solute, x_a^FW ≥ 0, are inferred from the simplified dissociation reaction Eq. (1.1) as x_Ca^FW = 1/3 and x_HCO3^FW = 2/3, with x_a^FW = 0 for all other species.
Additional basic quantities are derived from the previous variables to determine the related water properties. These quantities include:
- the mass of salt from the SSW part, M_S^SSW,
- the mass of the FW part, M_S^FW, which consists of the solute only,
- the total mass of solvent, M_0^BSW, which equals the solvent mass of the SSW part,
- the total mass of solute, M_S^BSW = M_S^SSW + M_S^FW,
- the total mass of the SSW solution, M^SSW = M_0^BSW + M_S^SSW (2.8), and
- the total mass of the combined BSW sample, M^BSW = M^SSW + M_S^FW (2.9).
In terms of those basic particle numbers and masses, several other useful properties are defined, such as the total number of water particles, N_0^SSW, in SSW, and of salt, N_S^BSW, in BSW (2.10), and the Absolute Salinity of BSW,
S_A^BSW = M_S^BSW / M^BSW.    (2.11)
The latter consists of the sum of the mass fractions of sea salt from the SSW, S_SSW^BSW, and from the FW, S_FW^BSW, relative to the BSW, in the form
S_SSW^BSW = M_S^SSW / M^BSW,    (2.12)
S_FW^BSW = M_S^FW / M^BSW.    (2.13)
Before mixing, the salinities of the two end members are S_A^OW = M_S^OW / M^OW for the OW part, where M_S^OW is the mass of salt dissolved in the sample mass M^OW, and S_A^RW = M_S^RW / M^RW for the RW part, where M_S^RW is the mass of salt dissolved in the sample mass M^RW. Under the plausible assumption that the SSW solute originates from ocean water OW, M_S^SSW ≈ M_S^OW, and the FW solute from river discharge, RW, M_S^FW ≈ M_S^RW, the relation between the partial salinities before and after the conservative mixing process is given by the mass balance
S_FW^BSW / S_A^RW = 1 - S_SSW^BSW / S_A^OW.    (2.16)
For the estimation of the riverine salinity S_A^RW from density measurements of Baltic Sea samples, this equation is commonly used under the additional assumption that the SSW end member, North Atlantic surface water, has exactly standard-ocean salinity, S_A^OW ≈ S_SO (Millero and Kremling, 1976; Feistel et al., 2010a), which is given in Table A1. The value of S_SSW^BSW can be determined from Chlorinity measurements, since the amount of halides in FW is zero by definition, and the value of S_FW^BSW can then be determined from Eq. (2.11) with the value of S_A^BSW, Eq. (2.26), estimated from density measurements.
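The following short numerical sketch illustrates how the relations above can be combined in practice to estimate the FW salinity and the riverine salinity of a sample; the input values are invented for illustration, and the standard-ocean salinity is taken as the usual value of about 35.165 g/kg.

```python
# Hypothetical worked example of the salinity decomposition of a BSW sample.
S_SO = 35.16504e-3        # kg of solute per kg of solution, standard ocean

S_A_BSW = 7.30e-3         # Absolute (Density) Salinity from a density measurement
S_SSW_BSW = 7.23e-3       # SSW part, derived from Chlorinity via the RC ratios

# The FW contribution is the remainder of the total mass fraction (Eq. 2.11).
S_FW_BSW = S_A_BSW - S_SSW_BSW

# Mass balance (2.16) with S_A^OW ~ S_SO: solve for the riverine salinity.
S_A_RW = S_FW_BSW / (1.0 - S_SSW_BSW / S_SO)

print(f"S_FW^BSW = {1e3 * S_FW_BSW:.3f} g/kg")
print(f"estimated riverine salinity S_A^RW = {1e3 * S_A_RW:.3f} g/kg")
```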
The mean molar masses of the solutes from the SSW and from the FW, respectively, are defined as the composition-weighted averages A^SSW = Σ_a x_a^RC A_a and A^FW = Σ_a x_a^FW A_a (2.17). In the final solution, BSW, the total molality 3 of the solute (2.18) is expressed as the sum of the partial molalities, m_SSW^BSW and m_FW^BSW, of sea salt from the SSW and from the FW contributions to BSW (Eqs. 2.19, 2.20). Compared with the molalities, Eqs. (2.19), (2.20), the salinities, Eqs. (2.12), (2.13), have the disadvantage that the salinity measure S_SSW^BSW of salt present with standard composition changes (slightly) as soon as some anomalous solute, M_S^FW, is added or removed, even if the amount of salt that stems from the SSW, M_S^SSW, and the mass of solvent, M_0^BSW, remain the same.
3 Molality = moles of solute per mass of solvent.
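The conversion between a mass-fraction salinity and the corresponding molality follows directly from these definitions; below is a small sketch, with the mean molar mass of Reference-Composition sea salt taken as approximately 31.40 g/mol and the FW molar mass following from the 1/3 : 2/3 ion fractions of reaction (1.1).

```python
# Sketch: molality from mass-fraction salinity, m = S / ((1 - S) * A), where
# A is the mean molar mass of the solute. Values below are approximate.
A_RC = 31.404e-3                    # kg/mol, mean molar mass of RC sea salt
A_FW = (40.08 + 2 * 61.02) / 3e3    # kg/mol, (Ca + 2 HCO3)/3, about 54 g/mol

def molality(S, A):
    """Molality (mol per kg of solvent) from mass-fraction salinity S (kg/kg)."""
    return S / ((1.0 - S) * A)

print(f"m_SSW for S = 7.23 g/kg: {molality(7.23e-3, A_RC):.4f} mol/kg")
print(f"m_FW  for S = 0.07 g/kg: {molality(0.07e-3, A_FW):.5f} mol/kg")
```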
In general, a formal solute decomposition in the form of Eq. (2.2) is not self-evident. If a seawater sample of a certain molar solute composition x and molality m is given and its original end members are unknown, the decomposition of the solute into a "preformed" part with Reference Composition x^RC and molality m^RC, and a residual anomalous "freshwater" part with a resulting composition x^FW and molality δm, takes the form
x_a m = x_a^RC m^RC + x_a^FW δm.    (2.21)
Here, the molar fractions are normalised, Σ_a x_a^FW = 1. These mass-balance equations for the n species do not possess a unique solution for the (n + 1) unknowns m^RC, δm and x^FW which fully characterise the end members. Consequently, due to this ambiguity of m^RC, the "Preformed Salinity" (Wright et al., 2010a) of an arbitrary seawater sample, Eq. (2.22), may take any desired value unless it is subjected to a specified additional condition. One suitable, physically reasonable condition is that δm takes a minimum non-negative value and that m^RC and all the freshwater fractions x^FW are also non-negative, x_a^FW ≥ 0. In this case, two chemically well-defined and meaningful end members are associated with the given seawater sample. The molar mass A^FW, Eq. (2.17), is positive definite under this condition, and the molality, m_FW^BSW, Eq. (2.20), the salinity, S_FW^BSW, Eq. (2.13), the mass, M_S^FW, and the particle numbers, N_a^FW, of the anomalous solute are non-negative. The ideal-solution part of the Gibbs function of any aqueous solution, Eq. (2.23), possesses a regular and reasonable series expansion with respect to the anomaly if x_a^FW ≥ 0 and 0 ≤ x_a^FW δm ≪ x_a^RC m^RC, and the chemical potentials of the RC and the FW solutes are mathematically valid and physically meaningful expressions, Eq. (3.6). Symbols newly introduced in Eq. (2.23) are specified in the glossary, Appendix B.
Alternatively, if for certain reasons the separation (Eq. 2.21) is formally specified in such a way that at least one of x FW a ≤ 0, x RC a ≤ 0, m RC ≤ 0 or m < m RC is implied, some of the previous convenient properties may no longer be valid and a mathematically more cautious treatment of the thermodynamic perturbation is required. In this respect we can distinguish at least three qualitatively different situations, here referred to as modified, alien, and deficient seawater. The distinction between these cases is necessary only if the anomaly is preferably described in terms of an anomalous solute with thermodynamically well-defined concentration and composition values, i.e., if non-negative molar fractions x FW a and non-negative molalities m RC and δm are relevant for the equations used, and if each of the anomalous concentrations, x FW a δm, is assumed to be small compared to that of the parent solution, x RC a m RC, as exploited in this paper. These conditions are mostly met in the case (a) but partly violated in the cases (b) and (c). Thus, anomalies of the kinds (b) or (c) may require a different Gibbs function approach than the one developed in this paper.

a. Modified seawater is defined by the condition x a > 0 for each dissolved species a in the RC (i.e., for all species with x RC a > 0), and x a = 0 for all species a not included in the RC (i.e., for all species with x RC a = 0). Under these conditions, a non-vanishing anomaly implies that x a ≠ x RC a for at least two of the species. This is the simplest case and it is considered exclusively in this paper. It occurs when, e.g., riverine freshwater or hydrothermal vents increase the concentration of selected species relative to the parent solution with Reference Composition, or if some species are partially precipitated due to supersaturation at high salinity or high temperature, or biologically depleted. If m is the molality of the given sample, the solute can be uniquely separated into a regular part with Reference Composition and the molality m RC < m, and an anomalous part with the molality δm = m − m RC, subject to the conditions x a m − x RC a m RC ≥ 0 for all species a ∈ RC, (2.24) and x k m − x RC k m RC = 0 for at least one species k ∈ RC. (2.25) The species k is regarded as the key species which is not present in the anomalous part; its molality specifies the regular part via the RC ratios. In this study of the Baltic Sea, chloride will serve as the key species. Because of the condition (Eq. 2.24), the anomalous part does not contain species with formally negative concentrations and can be modelled physically/chemically in the form of added salt. Usually, δm ≪ m RC will be assumed; a numerical sketch of this decomposition is given after this classification.
b. Alien seawater is defined by the condition x a > 0 for at least one dissolved species a, the alien species, that is not part of the RC (i.e., x a > 0 for a species for which x RC a = 0).Two examples of this case are when biologically produced silicate or organic compounds are added to seawater at relevant amounts, and when seawater is acidified to prevent precipitation in technical systems.Compared to the Reference Composition, the responsible physical state space dimension must be expanded to cover the alien species, and the representative point for the RC is then located on the boundary of the positive cone of the expanded space rather that in its interior.On the boundary or in its immediate vicinity, thermodynamic properties possess very special properties such as singularities of chemical potentials or electrolytic limiting laws.Thus, alien species cannot be described theoretically by a small linear deviation from a regular point in the phase space; they require specific nonlinear mathematical expressions such as limiting laws.c.Deficient seawater is defined by the condition x a = 0 for at least one species a, the deficient species, that is part of the RC (i.e., for a species with x RC a > 0).The missing constituent may be a volatile or reactive compound such as CO 2 or OH − that has disappeared in a certain physical, chemical or technical environment.Although the resulting composition may be very similar to the RC, a procedure like in case (a) is impossible here since it would formally lead to a zero-molality regular part and an anomalous part that contains all of the solute.In this case it may be more reasonable to specify the anomalous part as a small deviation from the RC concentrations some of which are negative.It is clear that this anomalous part can no longer be considered as an "added salt".
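For case (a), the decomposition (2.24)-(2.25) can be sketched numerically as follows (Python with numpy; an illustrative sketch under the stated non-negativity conditions, not code from the paper). The key species is the one whose anomalous concentration vanishes, chloride in the Baltic application.

```python
import numpy as np

def decompose_modified_seawater(x, x_rc, m):
    """Split a sample with mole fractions x and molality m into a preformed
    part with Reference Composition x_rc and molality m_rc, and an anomalous
    part with molality dm and mole fractions x_fw (all non-negative).
    Illustrative sketch of Eqs. (2.24)-(2.25)."""
    mask = x_rc > 0
    m_rc = np.min(x[mask] * m / x_rc[mask])   # key-species condition, Eq. (2.25)
    dm = m - m_rc                              # anomalous molality
    anom = x * m - x_rc * m_rc                 # non-negative by construction, Eq. (2.24)
    x_fw = anom / dm if dm > 0 else np.zeros_like(x)
    return m_rc, dm, x_fw
```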
As suggested by observational evidence (Feistel et al., 2010a), Baltic seawater is modelled here as modified seawater, as specified under case (a). The related Preformed Salinity, Eq. (2.22), is the Absolute Salinity of the diluted SSW, denoted here by S SSW A. It differs from the OW end-member salinity, S OW A, Eq. (2.14), at least due to the dilution with the pure-water part of the riverine input and possibly, depending on where and when the BSW sample was collected, due to the riverine contributions to the key species, chloride. We will assume that the dilution effect strongly dominates. The resulting brackish SSW part, the parent solution, can properly be described by the TEOS-10 Gibbs function in terms of S SSW A, T and P. An expression for the correction to this Gibbs function, proportional to the anomalous solute molality, δm, is derived from thermodynamic considerations in the following section.
3 Theoretical formulation of the Gibbs function for Baltic seawater
In the Baltic Sea, small amounts of anomalous solutes, N FW a , are added to the brackish water body of dilute standard ocean water which consists of N BSW 0 water molecules and N SSW a solute particles.The Gibbs energy of the diluted, anomalyfree parent solution is the sum of the chemical potentials (Feistel and Marion, 2007), (3.6) The chemical potentials, µ a , required here depend only on the properties of the parent solution, µ a = µ 0 a (T ,P ) + kT ln(m a γ a ). (3.7) Here, γ a (m,T ,P ) is the practical activity coefficient of the species a, which depends on the set m = {m a } of all molalities of the parent solution, (3.8) Symbols newly introduced in Eq. (3.7) are specified in the glossary.The particle numbers of the anomalous solutes can be expressed in terms of their mole fractions and their total molalities, (3.9) In these terms, the Gibbs energy anomaly, Eq. (3.6), reads (3.10) Here, R = N A k is the molar gas constant, and γ id FW , γ id a , related by (3.11) are the limiting values of the activity coefficients at infinite dilution.
Note that Eq. (3.10) is applicable only to anomalous species, x FW a > 0, that are already present in the parent solution, x RC a > 0. Otherwise, in the limit x RC a → 0 with x FW a > 0, Eq. (3.10) possesses a logarithmic singularity for "alien" species a that do not belong to the RC but appear in the anomaly.
Dividing the Gibbs energy by the related mass of the solution, we obtain the expressions for the Gibbs functions of the (diluted) parent solution, and of Baltic seawater, (3.13) Here, g SW S SSW A ,T ,P is the TEOS-10 Gibbs function of seawater as a function of Absolute Salinity, S SSW A , Eq. (2.26), of the "preformed" parent solution with Reference Composition (RC) (Millero et al., 2008;Pawlowicz et al., 2010), Newly introduced symbols are explained in the glossary.From Eqs. (3.12) and (3.13), in linear approximation with respect to the anomalous solute concentration, the Gibbs function anomaly is (3.15) The partial specific Gibbs energy, g FW , of the very dilute anomalous solute in the parent solution is inferred from Eqs. (3.10) and (3.15) to depend only on the parent solution properties, in the form where R FW = R/A FW (Table A1) is the specific gas constant of the anomalous solute.The constant γ id FW is the limiting value of γ FW at infinite dilution and is formally introduced here to keep the arguments of the two logarithmic terms dimensionless after their separation; its numerical value is chosen such that the second term disappears at low concentrations.Note that γ FW is defined only up to an arbitrary constant factor which enters the reference state condition, Eq. (4.12), in combination with µ 0 FW .The partial Absolute Salinity, S SSW A , of the salt fraction with Reference Composition in BSW is related to the given molality, m BSW SSW , by means of Eq. (3.14).The chemical potential, µ 0 FW , of the anomalous solute in pure water at infinite dilution is FW , the salinities associated with the salts from the North Atlantic and from the local riverine inputs.The function g BSW depends on the known Gibbs function of SSW, g SW , and an unknown function, g FW , that represents the FW properties in the compact form of Eq. (3.16), and will be determined empirically from simulated data in the next section.
The partial Absolute Salinity, S BSW FW , Eq. (2.13), of the anomalous solute is related to its molality in BSW, m BSW FW , by In terms of the partial salinities S SSW A and S BSW FW , the Absolute Salinity of BSW, S BSW A , Eq. (2.11), is given by the formula (3.21) The salinity variable S BSW A is computed from the molar masses of all the dissolved species and is denoted by S soln A (the mass fraction of dissolved material in solution) in the nomenclature of Wright et al. (2010a).The function g FW depends on the concentration of the SSW part, S SSW A , and the anomalous composition of the FW part but according to Eq. (3.16) it is independent of the concentration, S BSW FW , of the FW part which is assumed to be very dilute.In the next section, an empirical correlation equation for g FW will be derived from model data computed using FREZCHEM (Marion and Kargel, 2008).
4 Fitting the Baltic Gibbs function to FREZCHEM simulation data
For arbitrary aqueous electrolyte solutions, the related Gibbs function in the form (Feistel and Marion, 2007) g(S A ,T ,P ) = g W (T ,P ) + S A (T ,P ) can be estimated from available Pitzer equations for the constituents using the FREZCHEM model.Here, S A is the Absolute Salinity (mass fraction of dissolved material) of the particular solution, g W is the Gibbs function of pure water, is the partial specific Gibbs energy at infinite dilution, R S is the specific gas constant of the particular solute, and is the activity potential, expressed in terms of the osmotic coefficient, φ, and the mean activity coefficient, γ , of the solution.Infinite dilution is the theoretical asymptotic state of a solution at which the mutual interaction between the solute particles is negligible as the result of their large pairwise separations.Activity coefficients γ are defined only up to an arbitrary constant factor; here, γ id is the limiting value to which the particular γ is normalized at infinite dilution, commonly, γ id = 1 kg mol −1 .Any change of this constant is compensated by the conditions, Eq. ( 4.12), imposed on the freely adjustable coefficients of seawater at the specified reference state (Feistel et al., 2008a).
Using the FREZCHEM model, the absolute salinity, S A = S BSW A , the activity potential, ψ, the specific volume, v = (∂g/∂P ) S A ,T , and the heat capacity, c P = −T (∂ 2 g/∂T 2 ) S A ,P , of Baltic seawater were computed for a number of grid points at given values of T , P , the chloride molality, m Cl (which determines the SSW contribution), and the Calcium molality anomaly, δm Ca (which determines the FW contribution).From these data and Eq.(4.1), an empirical correlation for the partial specific Gibbs energy, g FW , Eq. (3.16), was determined numerically by regression with respect to the anomalies relative to SSW, i.e., relative to δm Ca = 0.
To relate the given molalities, m Cl and δm Ca , to the arguments, m BSW SSW and m BSW FW , of the Gibbs function (3.19), suitable composition models must be specified.For SSW, the Reference Composition model gives Therefore, the SSW composition variable in Eq. (3.19) is obtained from m Cl by Eq. (3.13), In terms of constituents of the RC, the mole fractions of lime dissolved in FW are assumed here to be given by Eq. ( 2.3).
The only purpose of this reaction scheme is its use as a proxy to represent the complex marine carbonate chemistry simulated by FREZCHEM, in order to provide the theoretical Gibbs function model with reasonable molar fractions, Eq. (2.3), and molar masses, Eq. (4.6), of the anomalous solute.The related calcium anomaly of BSW is given by The total calcium molality in BSW is the sum of the SSW and the FW parts, Derived from the structure of the target function of the regression, Eq. (3.16), we use the polynomial expression (Feistel and Marion, 2007), where the dimensionless reduced variables are defined by (Feistel, 2008;IAPWS, 2008), The standard-ocean parameters S SO , T SO and P SO are given in Table A1.Comparing equal powers of T and P of the logarithmic term in Eqs.(3.16) and (4.8) in the limit x → 0, the coefficients r j k are analytically available from the relation to be The coefficients c 000 and c 010 are arbitrary and chosen to satisfy reference state conditions which determine the absolute energy and the absolute entropy of the anomalous solute.
Here we employ the reference state conditions (4.12). From the Gibbs function (3.19) in conjunction with the functional form (4.8) we derive expressions for the available properties v, c P and ψ in terms of the remaining unknown coefficients, c = c ijk. These coefficients are then determined numerically by the requirement to minimise the penalty function (4.13), in which δv i, δc P i and δψ i are property anomalies of Baltic seawater relative to the parent solution at the grid points i of the FREZCHEM simulation results, weighted by estimated uncertainties ω. Selected examples of the data for δv i, δc P i and δψ i are displayed in Figs. 2, 3 and 4. In our Gibbs function, the original complex chemistry implemented in FREZCHEM is represented in the simplified form of the reaction (1.1) in conjunction with the analytical expression (4.8). Since Eq. (4.13) measures the deviation between the two numerical models, the uncertainties ω cover their numerical round-off and mutual misfit rather than any experimental accuracy. In practice, the ω values were suitably chosen to allow a reasonably smooth fit. Experimental uncertainties are irrelevant for the regression considered in this section and will be discussed in the subsequent section, where the properties of the resulting Gibbs function (4.8) are analysed. The scatter of the FREZCHEM points relative to the fitted Gibbs function is shown in Figs. 5, 6 and 7.
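Because the modelled anomalies are linear in the unknown coefficients c ijk, the minimisation of the penalty function (4.13) reduces to weighted linear least squares. The sketch below (Python, numpy) is a schematic stand-in for that regression; the design matrices, data vectors and weights are assumed inputs, not quantities reproduced from the paper.

```python
import numpy as np

def fit_gibbs_coefficients(blocks):
    """blocks: iterable of (A, y, w) for the dv, dcp and dpsi anomalies, where
    A is the design matrix at the FREZCHEM grid points, y the simulated
    anomalies and w their assumed uncertainties.  Returns the coefficients c
    minimising the weighted sum of squared residuals, cf. Eq. (4.13)."""
    rows, rhs = [], []
    for A, y, w in blocks:
        rows.append(np.asarray(A) / np.asarray(w)[:, None])  # scale rows by 1/omega
        rhs.append(np.asarray(y) / np.asarray(w))
    c, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return c
```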
5 Thermodynamic property anomalies
Fig. 7. Scatter of the activity potential anomalies computed from FREZCHEM, δψ i, relative to the activity potential anomalies computed from the Gibbs function, δψ(c), Eq. (4.33), at 1260 given data points. The rms deviation of the fit is 3.1×10−5. Symbols 0-5 indicate the pressures of 0.1 MPa, 1 MPa, 2 MPa, 3 MPa, 4 MPa and 5 MPa, respectively. These residual anomalies should be compared with the total anomalies δψ i shown in Fig. 4.

Various salinity measures, such as Reference Salinity, S R, Absolute Salinity, S A, Density Salinity, S D, or Chlorinity Salinity, S Cl, have the same values for SSW but differ from each other for BSW. The estimate of Density Salinity based on inversion of the expression for density in terms of the Gibbs function for SSW at arbitrary values of temperature and pressure is represented by S D and referred to as "measured" Density Salinity, since it is based on whatever the conditions of the direct density measurement are. It is the Absolute Salinity of SSW (here assumed to have Reference Composition) that has the same density as BSW at the given temperature and pressure, i.e.,

ρ SW (S D, T, P) = ρ BSW (S SSW A, S BSW FW, T, P). (5.1)
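A density-inversion sketch of Eq. (5.1) is given below (Python, scipy); `rho_sw` stands for any SSW density routine, for example one derived from the TEOS-10 Gibbs function, and is an assumed input. Evaluating the same inversion at the fixed state T = 298.15 K, P = 101325 Pa yields the conservative Density Salinity discussed further below.

```python
from scipy.optimize import brentq

def measured_density_salinity(rho_sw, rho_sample, T, P, S_lo=0.0, S_hi=0.045):
    """Invert rho_sw(S, T, P) for the salinity S_D of SSW that matches the
    measured BSW density rho_sample at the measurement conditions (Eq. 5.1).
    Salinity is a mass fraction (kg/kg); rho_sample must lie within the
    bracketed density range for the root finder to succeed."""
    return brentq(lambda S: rho_sw(S, T, P) - rho_sample, S_lo, S_hi)
```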
Millero's Rule asserts that
- Absolute Salinity of anomalous seawater can be computed from its density using the TEOS-10 equation of state, and results in the same value at any temperature or pressure at which the density was measured, as well as that
- the properties of anomalous seawater can be computed from the TEOS-10 Gibbs function if Absolute Salinity is used as the composition variable, and finally, the first two rules combined, that
- the properties of anomalous seawater can be estimated by the TEOS-10 functions in terms of SSW properties evaluated at the same density, temperature and pressure.
In this section, we discuss the validity of Millero's Rule and compare the results derived from the FREZCHEM model with those from the TEOS-10 Gibbs function evaluated at the same Absolute Salinity.In the next section, we again discuss the validity of Millero's Rule and compare the results derived from the fitted Gibbs function of Baltic seawater with those from the TEOS-10 Gibbs function evaluated at the same Absolute Salinity or at the same density.
In Figs. 2, 3 and 4, the simulated FREZCHEM data are compared with those estimated from Millero's Rule, i.e., property differences computed from the already available TEOS-10 Gibbs function at the Absolute Salinities S BSW A and S SSW A .The very good agreement visible in Fig. 2 between the simulated density anomalies and those estimated from Millero's Rule depends on two factors.The first factor is how well the rule estimates the results of the FREZCHEM simulation.In other words, how consistent the rule is with the Pitzer equations for the specific volume in the special case of the Baltic seawater composition.The second factor is how well the simple static composition model of the anomaly, Eq. (1.1), used here for the construction of the Gibbs function with intentionally only two representative conservative composition variables, is capable of approximately covering the underlying complicated dynamic solute chemistry implemented in FREZCHEM.If, for example, results were calculated without allowing for the contribution from atmospheric CO 2 in the reaction (1.1), then a mismatch between Millero's Rule and FREZCHEM of approximately 30% occurs in the modified results corresponding to Fig. 2; this difference results from the smaller molar mass of the solute, A FW , Eq. (2.17), and hence the smaller contribution to salinity from the FW source Eq. (4.6), which changes the value of S BSW A used for Millero's Rule at a specified value of the Calcium molality anomaly.
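In schematic form, Millero's Rule estimates a BSW property anomaly from the SSW property function alone, evaluated at the two Absolute Salinities. The sketch below is generic; `prop_sw` is a placeholder for any TEOS-10 property routine (density, heat capacity, sound speed, ...), not a specific library call.

```python
def millero_rule_anomaly(prop_sw, S_A_bsw, S_A_ssw, T, P):
    """delta_q estimated as q_SW(S_A^BSW, T, P) - q_SW(S_A^SSW, T, P)."""
    return prop_sw(S_A_bsw, T, P) - prop_sw(S_A_ssw, T, P)
```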
The analytical expressions required in Eq. (4.13) for the fit of the anomalous properties are derived from Eqs. (3.15) and (4.8), in the form and (4.15) The required analytical formula for the activity potential anomaly δψ (c) expressed explicitly in terms of the TEOS-10 Gibbs function g SW and the Gibbs function correction, g FW , which depends on the unknown coefficients c, is more complicated to obtain. From the Gibbs function for BSW, g BSW , Eq. (4.1), the activity potential is derived, and similarly that of SSW, (4.17) After some algebraic manipulation of the difference between Eqs. (4.16) and (4.17), the activity potential anomaly, δψ (c), takes the form Here, A BSW is the molar mass of Baltic sea salt, the Gibbs function of pure water is and the partial specific Gibbs energy at infinite dilution is computed from Eq. (4.1) in the mathematical zero-salinity limit, (4.21) Since the TEOS-10 Gibbs function is defined as a series expansion in salinity, in the form (Feistel et al., 2010b), it follows immediately from Eq. (4.21) that the corresponding coefficient for SSW is given by SSW (T, P) ≡ g 2 (T, P). (4.23) The function BSW (T, P) in Eq. (4.18) is the coefficient of the linear salinity term of the Gibbs function g BSW and can be determined by comparison of the two different expressions available for g BSW, on the one hand, Eq. (4.1), in terms of Pitzer equations, and on the other hand, Eq. (3.19), in the form of a linear correction to TEOS-10. Note that g BSW in Eqs. (4.24) and (4.25) represent different approximations of the Gibbs function that we want to determine. The Gibbs function given by Eq. (4.24) is nonlinear in the anomaly. For the composition model given, its activity potential ψ BSW can be computed from complicated systems of Pitzer equations. To derive a simpler correlation function, we estimate ψ BSW here by means of the Gibbs function, Eq. (4.25), which is linear in the anomaly, S BSW FW. We consider the series expansions of Eqs. (4.24), (4.25) with respect to salinity s and require that the coefficients of the terms s 0, s ln s and s 1 are identical in the two equations. As the small expansion parameter we choose s ≡ S BSW A under the condition that the composition ratio r ≡ S BSW FW /S BSW A remains constant in the mathematical limit s → 0.
In terms of s and r, the salinity variables are The truncated series expansions are for Eq.(4.24), for Eq.(4.25), for Eq.(4.22), and for Eq.(3.16), Note that the limiting laws of ψ BSW and ln γ /γ id are of the order O s 1/2 .The combination of Eqs.(4.28), (4.29), (4.30) gives Here we used the specific gas "constant" Note that BSW (T ,P ) depends on the composition of BSW, in particular on the ratio r = S BSW FW /S BSW A of the two independent salinity variables.
In Eq. (4.18), we replace BSW by Eq. (4.32) and get the final formula for the required activity potential anomaly, δψ (c), Here, the saline part of the Gibbs function of SSW is g S S SSW A ,T ,P =g SW S SSW A ,T ,P −g SW (0,T ,P ), (4.34) or, using Eq.(4.22), Similarly, the saline part of the partial Gibbs function of freshwater solute is defined by g F S SSW A ,T ,P = g FW S SSW A ,T ,P − µ 0 FW (T ,P ) or, using Eq.(3.16), Note that in the zero-salinity limit of Eq. (4.33), the singularity lim g F S SSW A ,T ,P of Eq. (4.37) cancels exactly with the corresponding singularity of g S /S SSW A , Eq. (4.35).In Eq. (4.33), all terms are known at the FREZCHEM data points except for g F which depends on the set of coefficients c = c ij k to be adjusted by the regression, Eq. (4.13).After this compilation, the reference state conditions, Eq. (4.12), must be satisfied.After setting c 000 = 0 and c 010 = 0 in g FW , the final values are computed from the equations c 000 = − g FW (S SO ,T SO ,P SO ) The results for the coefficients are given in (5.1) In contrast, the true Density Salinity is defined to be strictly conservative and represented by S dens A in the nomenclature of Wright et al. (2010a).To ensure that it is independent of temperature and pressure, it is computed using Eq. ( 5.1) evaluated at T = 298.15K and P = 101325 Pa, and is by definition the same for the given sample at any other T or P .
Chlorinity Salinity, S Cl, is the Absolute Salinity of SSW that has the same Chlorinity as BSW, Eq. (5.2). Density Salinity and Chlorinity Salinity can be measured in the Baltic Sea; readings are currently related by the approximate empirical relation (Feistel et al., 2010a) in the form of Eq. (2.16), Eq. (5.3).

Fig. 8. Salinity difference computed from Eq. (5.1) for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C. The uncertainty of Density Salinity measurements is 2 g m−3/(βρ) = 2.5 mg kg−1 (Feistel et al., 2010a), indicated by the solid horizontal lines.
The density anomaly of the Baltic Sea is shown in Fig. 9 as the difference between the densities with and without the freshwater solute, i.e., of SSW and BSW with equal chloride molalities (roughly, equal Chlorinities), Eq. (5.6).

Fig. 9. Difference δρ, Eq. (5.6), between the densities with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C. The uncertainty of density measurements is 2 g m−3 (Feistel et al., 2010a), indicated by the solid horizontal line.
Using βρ ≈ 0.8 × (10⁶ g m−3)/(10⁶ mg kg−1), it is seen that division of the numerical values of δρ/(g m−3) in Fig. 9 by 0.8 provides an approximate conversion to the units used in Fig. 8, so that comparison of the results in these two figures reveals that the relative error associated with using S D in place of S BSW A to estimate salinity anomalies due to the addition of calcium carbonate is at most 25%, and only about 2.5% for a typical brackish salinity value of S SSW A ≈ 8 g kg−1. Note that the salinity change associated with the added calcium carbonate solute (S D − S SSW A) is itself a small fraction of the salinity change associated with the addition of fresh water (S SO − S SSW A). Using Eq. (5.4), the ratio is approximated by (S D − S SSW A)/(S SO − S SSW A) ≈ (130 mg kg−1)/S SO ≈ 0.4%.
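The unit conversion and the ratios quoted above can be verified with a few lines of arithmetic (illustrative check only):

```python
beta_rho = 0.8        # g m^-3 per mg kg^-1, approximate conversion factor
print(2.0 / beta_rho)                 # 2 g m^-3 of density corresponds to 2.5 mg kg^-1

S_SO = 35.16504       # g kg^-1, standard-ocean salinity
print(0.130 / S_SO)   # ~0.0037, i.e. the quoted ~0.4 % for a 130 mg kg^-1 intercept
```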
The Baltic Sea anomaly of the thermal expansion coefficient is shown in Fig. 10 as the difference between the coefficients with and without the freshwater solute, i.e., of SSW and BSW with equal chloride molalities (roughly, equal Chlorinities), Eq. (5.7). The uncertainty of the TEOS-10 thermal expansion coefficient is estimated as 0.6 ppm K−1, so the Baltic anomalies are within the uncertainty and can in practice be neglected.

Fig. 10. Difference δα, Eq. (5.7), between the thermal expansion coefficients (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C, in comparison to estimates from Millero's Rule, δα D, based on Density Salinity (dashed lines), Eq. (5.8), and δα A, based on Absolute Salinity (dotted lines, temperatures not labelled), Eq. (5.9). For the latter two, the responsible difference between S BSW A and S D is shown in Fig. 8. The estimated experimental uncertainty of the thermal expansion coefficient is 0.6 ppm K−1 (Feistel and Hagen, 1995; IAPWS, 2008).
For seawater with varying composition, there are several ways to define the haline contraction coefficient, depending on the particular thermodynamic process by which the composition is changing with salinity. Here we consider the anomalous contraction coefficient which provides the density change with respect to the addition of freshwater solute.

Fig. 11. Difference, Eq. (5.12), between the haline contraction coefficients (solid lines) of the parent solution with respect to the addition of FW solute and of SSW solute for Baltic seawater. Values are determined at the standard ocean surface pressure and temperatures between 0 and 25 °C. The standard-ocean value of the haline contraction coefficient is 0.781 = 781 ppm g−1 kg. The haline contraction coefficient associated with the addition of calcium carbonate is within 20% of the haline contraction coefficient for Standard Seawater.
The Baltic Sea anomaly of the isobaric specific heat is shown in Fig. 12 as the difference between the values with and without the freshwater solute, i.e., of SSW and BSW with equal chloride molality (roughly, equal Chlorinity), Eq. (5.13).

Fig. 12. Difference δc P, Eq. (5.13), between the specific isobaric heat capacity (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C, in comparison to estimates from Millero's Rule, δc D P, based on Density Salinity (dashed lines), Eq. (5.14), and δc A P, based on Absolute Salinity (dotted lines, temperatures not labelled), Eq. (5.15). For the latter two, the responsible difference between S BSW A and S D is shown in Fig. 8. The experimental uncertainty of c P relative to pure water is 0.5 J kg−1 K−1, as indicated by the solid horizontal line. A typical value for the heat capacity of water or seawater is 4000 J kg−1 K−1. The changing curvature of the solid curves below 5 g kg−1 is probably a numerical edge effect of the regression.
The anomalies of c P remain within the experimental uncertainty of 0.5 J kg−1 K−1, Fig. 12. The errors associated with using Millero's Rule are similar to those associated with simply neglecting the FW solute and are again negligible.
The sound speed c is computed from the Gibbs function g using the formula

c = g_P [g_TT/(g_TP^2 − g_TT g_PP)]^{1/2}. (5.16)

The Baltic Sea anomaly of the speed of sound is shown in Fig. 13 as the difference between the values with and without the freshwater solute, i.e., of SSW and BSW with equal chloride molalities (roughly, equal Chlorinities), Eq. (5.17).

Fig. 13. Difference δc, Eq. (5.17), between the sound speed (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C, in comparison to estimates from Millero's Rule, δc D, based on Density Salinity (dashed lines), Eq. (5.18), and δc A, based on Absolute Salinity (dotted lines, temperatures not labelled), Eq. (5.19). For the latter two, the responsible difference between S BSW A and S D is shown in Fig. 8. The experimental uncertainty of c is 0.05 m s−1, indicated by the solid horizontal line.
The anomalies of c are much larger than the experimental uncertainty of 0.05 m s −1 , Fig. 13 and poorly approximated by Millero's Rule.Except at very low salinities, use of Millero's Rule is only slightly better than totally neglecting the influence of the FW solute on sound speed estimates.In Eq. (5.16), the largest contribution to the sound speed anomaly comes from the anomaly of the compressibility, g pp , which is of order of magnitude up to 0.07%.Compressibility estimates from FREZCHEM have larger uncertainties than e.g.those of the density or the heat capacity (Feistel and Marion, 2007).
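Equation (5.16) can be evaluated for any Gibbs function by numerical differentiation; the sketch below (Python, numpy) uses central differences. It is an illustration of the formula only, with step sizes that may need tuning for a particular g(S, T, P).

```python
import numpy as np

def sound_speed(gibbs, S, T, P, dT=1e-2, dP=10.0):
    """Sound speed from Eq. (5.16), c = g_P * sqrt(g_TT / (g_TP^2 - g_TT*g_PP)),
    with the derivatives of gibbs(S, T, P) (J/kg, T in K, P in Pa)
    approximated by central finite differences."""
    g_P  = (gibbs(S, T, P + dP) - gibbs(S, T, P - dP)) / (2 * dP)
    g_TT = (gibbs(S, T + dT, P) - 2 * gibbs(S, T, P) + gibbs(S, T - dT, P)) / dT**2
    g_PP = (gibbs(S, T, P + dP) - 2 * gibbs(S, T, P) + gibbs(S, T, P - dP)) / dP**2
    g_TP = (gibbs(S, T + dT, P + dP) - gibbs(S, T + dT, P - dP)
            - gibbs(S, T - dT, P + dP) + gibbs(S, T - dT, P - dP)) / (4 * dT * dP)
    return g_P * np.sqrt(g_TT / (g_TP**2 - g_TT * g_PP))
```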
Because of the freely adjustable constants, only relative enthalpies can reasonably be compared between samples that have different compositions. The Baltic Sea anomaly of the relative specific enthalpy is shown in Fig. 14 as the difference of relative enthalpies between the values with and without the freshwater solute, i.e., of SSW and BSW with equal chloride molalities (roughly, equal Chlorinities), Eq. (5.20).

Fig. 14. Difference δh, Eq. (5.20), between the relative specific enthalpies (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 5 and 25 °C, in comparison to estimates from Millero's Rule, δh D, based on Density Salinity (dashed lines, only the 15-25 °C results are labelled), Eq. (5.21), and δh A, based on Absolute Salinity (dotted lines, temperatures not labelled), Eq. (5.22). For the latter two, the responsible difference between S BSW A and S D is shown in Fig. 8. The experimental uncertainty of the relative enthalpies is 0.5 J kg−1 × t/°C.
For the computation of the freezing temperature of Baltic seawater we need a formula for the chemical potential, µ W, of water in Baltic seawater similar to µ 0 in Eq. (3.1), but on a mass rather than on a particle number basis, Eq. (5.23). Here, µ W is defined by the corresponding derivative of the Gibbs function; we apply the chain rule, Eq. (5.26), to obtain the result, Eq. (5.27). This general formula is simplified in our case using the linear expression Eq. (3.19), to give Eq. (5.28). At the freezing point, T f (S SSW A, S BSW FW, P), the chemical potential µ W equals that of ice, µ Ih (IAPWS, 2009b), Eq. (5.29). The Baltic Sea anomaly of the freezing temperature is shown in Fig. 15 as the difference of freezing points between the values with and without the freshwater solute, i.e., of SSW and BSW with equal chloride molalities (roughly, equal Chlorinities), as a function of Density Salinity, computed from Eqs. (5.4) and (5.5). For comparison, the anomaly is estimated by Millero's Rule using Density Salinity S D, Eq. (5.1). The experimental uncertainty of the freezing temperature of seawater is 2 mK. The anomaly is of the same order of magnitude and can normally be ignored. Millero's Rule does not provide much improvement over neglecting the anomalies.

The vapour pressure of Baltic seawater, P vap (S SSW A, S BSW FW, T), is computed from the condition that the chemical potential of water in seawater, µ W, Eq. (5.28), equals that of vapour, g V (IAPWS, 2009a; Feistel et al., 2010b): µ W (S SSW A, S BSW FW, T, P vap) = g V (T, P vap). (5.33) The Baltic Sea anomaly of the vapour pressure is shown in Fig. 16 as the difference of pressures between the values with and without the freshwater solute, i.e., of SSW and BSW with equal chloride molalities (roughly, equal Chlorinities), Eq. (5.36). The anomalies shown in Fig. 16 are a factor of 10 smaller than the uncertainty of the most accurate experimental data (Robinson, 1954; Feistel, 2008).
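The freezing-point condition (5.29) above is an implicit equation for T f; a root-finding sketch is shown below (Python, scipy). The routines `mu_w` (chemical potential of water in BSW, Eq. 5.28) and `mu_ice` (ice Ih, IAPWS, 2009b) are assumed inputs and are not defined here.

```python
from scipy.optimize import brentq

def freezing_temperature(mu_w, mu_ice, S_ssw, S_fw, P, T_lo=250.0, T_hi=280.0):
    """Solve mu_w(S_ssw, S_fw, T, P) = mu_ice(T, P) for T_f, cf. Eq. (5.29).
    Assumes the residual changes sign within [T_lo, T_hi] (temperatures in K)."""
    return brentq(lambda T: mu_w(S_ssw, S_fw, T, P) - mu_ice(T, P), T_lo, T_hi)
```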
The "measured" Density Salinity S D is given by Eq. (5.1) as a function of S SSW A, S BSW FW, T and P. When a sample's temperature is changing, its molalities m Cl and δm Ca are conservative, and so are the salinities S SSW A and S BSW FW computed from Eqs. (4.4) and (4.6). On the contrary, Density Salinity, Eq. (5.1), is not strictly conservative unless the thermal expansion coefficient and compressibility of BSW happen to be exactly the same as those for SSW; Figure 17 shows the resulting temperature dependence of S D for fixed sample composition. Density Salinities are less sensitive to temperature changes than density measurements but may need to be stored together with the temperature at which they were determined. Note that the mass fraction of anomalous solute in Baltic seawater is larger than that present anywhere in the deep ocean. For a typical Baltic Sea salinity of 8 g kg−1 the mass fraction of anomalous solute is approximately 0.004 × (35 − 8) g kg−1 = 0.108 g kg−1, about 7 times as large as the maximum mass fraction of anomalous solute in the deep North Pacific, where composition anomalies are largest in the open ocean.

The deviation between the density of Baltic seawater and the density computed from the conservative Density Salinity, S dens A, is not necessarily zero for temperatures different from 25 °C; typical results are shown in Fig. 18. These density errors are relatively small in comparison to the typical Baltic density anomalies of 50-100 g m−3 that are associated with the freshwater solute (Fig. 9). The anomalies discussed in this section describe the differences between thermodynamic properties of BSW and of SSW if both have the same Absolute Salinity of the SSW part, S SSW A. For a given sample of BSW, S SSW A can for instance be determined from a Chlorinity measurement. This is expensive and time-consuming, cannot be carried out in situ and usually requires skilled personnel, in contrast to routine CTD casts that automatically produce in-situ readings of Practical Salinity, S P. Due to the electrolytic conductivity of the freshwater solute, the relation between S SSW A and S P of BSW is influenced by a significant anomaly that cannot be estimated from the Gibbs function g BSW. This problem is addressed in the following section.
6 Anomalies of Conductivity, Practical Salinity and Reference Salinity
Conductivity is a non-equilibrium, transport property of seawater and is not available either from the TEOS-10 Gibbs function or from the FREZCHEM model, which provides only equilibrium thermodynamic properties. Since Practical Salinity, the currently most important solute concentration measure in oceanography, is determined from conductivity measurements, it is important to estimate the effects of the Baltic composition anomaly on measured conductivities. This conductivity effect could reduce or increase the difference between the actual thermodynamic properties of Baltic water and those determined for Standard Seawater diluted to the same conductivity, relative to the differences between the actual thermodynamic properties of Baltic water and those determined for Standard Seawater diluted to the same chloride molality, which were discussed previously in Sect. 5. These property differences for waters of the same conductivity will be discussed in Sect. 7, once we have determined how conductivity is affected by the composition changes present in the Baltic. In addition, predictions of conductivity also allow us to validate at least some of the model calculations against actual observations. At present, theoretical models of aqueous solution conductivity, based on arbitrary chemical composition, are not accurate enough to study the Baltic (or any other) anomalous seawater directly. However, the composition/conductivity theory of Pawlowicz (2008), which is valid for conductivities in limnological low-salinity situations, has been adapted (Pawlowicz, 2009; Pawlowicz et al., 2010) using a linearization about the known characteristics of Standard Seawater to study changes in composition/conductivity/density relationships in seawater, arising from small composition perturbations that originate from biogeochemical processes. This linearization approach, implemented in the numerical model LSEA_DELS, is now used to investigate changes in the relationship between Chlorinity and conductivity-based Reference Salinity, using our idealized model of the Baltic composition anomaly, Eq. (1.1). All considerations in this section refer to conditions at an arbitrary temperature, set to 25 °C unless otherwise specified, and atmospheric pressure, P = 101325 Pa. However, these parameters are omitted from the formulas for notational simplicity.

Fig. 18. Deviation, Eq. (5.37), between the density of Baltic seawater and the density computed from the conservative Density Salinity, S dens A, Eq. (5.38). The experimental uncertainty of density measurements is 2 ppm (Feistel et al., 2010a), indicated by the solid lines.
6.1 Definitions
The starting point of simulations is a composition vector C SSW , specifying the molar composition of all constituents in a base seawater.In contrast to the development in Sect.2, but more straightforwardly linked to the structure of the Gibbs function (3.19), this base seawater is not an "ocean end member" with S P = 35.Instead, it is SSW diluted by the addition of pure water so that chloride molality will remain unchanged as the calcium carbonate solute is "added" to create Baltic water.The conductivity κ SSW = κ C SSW and density ρ SSW = ρ C SSW of this water depend on the composition, and the true mass fraction of dissolved material (Solution Salinity) will be S SSW A .Since this water is just a dilution of SSW, the Reference Salinity: based on using the observed conductivity in the algorithm S P (.) specified by the Practical Salinity Scale 1978, is scaled by an appropriate choice of the constant u P to give the Solution Salinity S SSW A .The factor u P is not exactly the same as u PS when anomalies are being calculated because LSEA DELS calculations are based on a SSW composition model that slightly differs from the RC (Wright et al., 2010a).
The composition of Baltic seawater is described by the composition vector C BSW. Exact details of the way in which C BSW is related to C SSW are discussed in Sect. 6.2, but both compositions have the same chloride molality. The composition C BSW has a Solution Salinity S BSW A, a conductivity κ BSW = κ(C BSW) and a density ρ BSW = ρ(C BSW) that will differ from those of the base seawater. All of these parameters can be estimated using LSEA_DELS once the composition anomaly is specified. The density anomaly as computed from the model results is then directly comparable to that calculated using Eq. (5.6). This parameter can therefore be used to validate the densities calculated by LSEA_DELS against the Gibbs function (itself based on FREZCHEM model calculations). In addition, the change in Solution Salinity between the original base seawater and the Baltic water is given by Eq. (3.21); the approximation is valid when the amount of solute added is small, as it is in this case.
Typically, conductivity measurements in the ocean are used with SSW parameterizations for different properties under the assumption that the properties of the measured water are well-modelled by the properties of SSW diluted to the same conductivity.Thus we infer a third "reference" water type, described by a composition vector C BSW R , with Solution Salinity S BSW R , whose composition is that of SSW diluted by pure water, but whose conductivity matches that of BSW: The Solution Salinity of the reference water is then the Reference Salinity of the Baltic Sea water.The ultimate purpose of the modelling in this section is then to compare the change in the Reference Salinity between Baltic Sea water and diluted Standard Seawater of the same conductivity with the actual Solution Salinity change S BSW FW from Eq. (6.3).If the added solute has the same conductivity as that of sea salt, then S R = S BSW FW .If the added solute is not conductive, then S R = 0, irrespective of the value of S BSW FW .In addition, the density of this reference water, denoted as the reference density ρ BSW R = ρ C BSW R , will differ from the true density of Baltic water ρ BSW , and the change between the true and reference densities can then be directly compared with measurements of the density anomaly in the Baltic.Previous investigations have suggested that LSEA DELS calculations for S R have an error of between 1 and 10%, depending on the details of the composition anomaly.This uncertainty ultimately arises from uncertainties in the basic chemical data for binary electrolytes from which model parameters for the conductivity algorithm were extracted, as well as inadequacies in the theoretical basis of the model at higher salinities.Errors in the LSEA DELS density algorithms are themselves much smaller than those for conductivities, but since the Reference Salinity calculation implicitly involves conductivity changes, errors in conductivity will carry over into the density anomaly calculation.
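The construction of the reference water can be sketched as a one-dimensional root-finding problem (Python, scipy): find the salinity of pure-water-diluted SSW whose conductivity matches that of the anomalous sample. The conductivity model `kappa_ssw` (e.g. an LSEA_DELS-like routine) is an assumed input, not a call to an existing library.

```python
from scipy.optimize import brentq

def reference_salinity(kappa_ssw, kappa_bsw, S_lo=1e-6, S_hi=0.042):
    """Solution Salinity (mass fraction) of diluted SSW whose conductivity
    kappa_ssw(S) equals the BSW conductivity kappa_bsw; this is the Reference
    Salinity of the BSW sample in the sense used above."""
    return brentq(lambda S: kappa_ssw(S) - kappa_bsw, S_lo, S_hi)
```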
The calculations described above can be carried out at any desired temperature.However, the temperature-dependence of the conductivity and density of seawaters may also vary with the composition anomaly.This implies that the value of S BSW R as calculated above may have a slight temperature dependence.For Baltic seawater, this non-conservative effect was shown experimentally to remain within the measurement uncertainty (Feistel and Weinreben, 2008), and neglect of this effect is also supported by numerical experimentation with LSEA DELS, which suggest the maximum error is less than 0.001 g kg −1 .
6.2 Composition anomalies
Although the Baltic Sea composition anomaly is idealized in this paper as arising from the addition of calcium carbonate, calcium itself is not directly measured in the Baltic. However, anomalies in the Total Alkalinity (TA), defined in LSEA_DELS by Eq. (6.6), and Dissolved Inorganic Carbon (DIC), defined by Eq. (6.7), are known to be approximately equal. In this section, the usual chemical notation of total stoichiometric molalities by brackets [..] is preferred for convenience. Thus we assume for the anomalies δTA = δDIC. (6.8) The addition of Ca 2+ is then inferred from mass and charge balance considerations: δm Ca ≡ δ[Ca 2+] = δTA/2. (6.9) Using Eqs. (6.6)-(6.9), the complete composition at any particular chloride molality can be determined as a function of the molality of the calcium anomaly. This will provide a direct comparison with the Gibbs function described in Sect. 4. In order to apply these calculations specifically to the Baltic (i.e. as in Sect. 5), we relate some parameter to a function of the chloride salinity S Cl (or, alternatively, any other salinity measure) in the Baltic. The value of δTA at a chloride salinity of zero, which is taken as an endpoint of linear correlations in mixing diagrams, is estimated from observations to be 1470 µmol kg−1 (Feistel et al., 2010a). The TA anomaly in Baltic waters is then δTA = 1470 µmol kg−1 × (1 − S Cl/S SO). (6.10) Eqs. (6.8)-(6.10), hereafter denoted as "model-1", then specify the composition C BSW of Baltic water at all chloride
molalities.However, the composition is only specified in terms of aggregate variables TA and DIC.A carbonate chemistry model within LSEA DELS, based on equations for the equilibrium chemistry, is used to calculate the complete ionic chemical composition in a new chemical equilibrium.This involves changes to CO 2 , HCO − 3 , CO 2− 3 , B(OH) 3 and B(OH) − 4 , as well as to pH and pCO 2 .Although the actual compositional perturbation is now somewhat more complex than indicated by Eq. (1.1) almost all of the change that occurs at the pH of seawater is described by an increase in HCO − 3 , similar in LSEA DEL and in FREZCHEM.From Eqn. (1.1), the change in Solution Salinity due to the added mass of dissolved solute is S BSW FW ≈ 162.1 g mol −1 × δm Ca (i.e., the molar mass of Ca(HCO 3 ) 2 times the change in calcium molality, neglecting the change in the mass of solution).The change in Solution Salinity calculated directly from the full chemical compositions used by LSEA DELS is less than 3% larger than this value, which is insignificant here in comparison with other uncertainties.This procedure allows us to determine the conductivity and density anomalies at a particular S Cl within the Baltic.
Later we will discuss whether disagreements between the model predictions and observations of density anomalies arise from inadequacies in LSEA DELS, or whether they are inherent to the idealized composition anomaly used to model Baltic seawater.For this purpose we introduce a second model for composition anomalies in the Baltic that is slightly more complex.Sulfate is the next largest component of the actual Baltic composition anomaly after calcium carbonate.The sulfate anomaly is estimated (Feistel et al., 2010a) to have a zero-Chlorinity limit of about 166 µmol kg −1 (with a considerable uncertainty), With anomalies in both Ca 2+ and SO 2− 4 , charge balance considerations now require a modification to Eq. (6.9) to balance the charge associated with the sulfate anomaly, δ Ca 2+ − δ SO 2− 4 = δTA/2, (6.12) which will increase the size of the calcium anomaly.
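The two anomaly parameterizations can be evaluated as below (Python). The linear scaling of the sulfate anomaly with the freshwater fraction is assumed here by analogy with Eq. (6.10), since Eq. (6.11) is not reproduced in this excerpt; the solute-mass estimate uses the Ca(HCO3)2 proxy of Eq. (1.1). All numbers are illustrative.

```python
S_SO = 35.16504  # g kg^-1, standard-ocean salinity

def baltic_anomaly(S_Cl, model=1):
    """Total-alkalinity, sulfate and calcium anomalies (mol kg^-1) for the
    model-1/model-2 parameterizations, plus a rough anomalous-solute mass
    fraction (g kg^-1) from the Ca(HCO3)2 proxy, S_FW ~ 162.1 g/mol * dTA/2."""
    f_fw = 1.0 - S_Cl / S_SO                   # freshwater fraction proxy
    dTA = 1470e-6 * f_fw                       # Eq. (6.10)
    dSO4 = 166e-6 * f_fw if model == 2 else 0  # assumed scaling, cf. Eq. (6.11)
    dCa = dTA / 2.0 + dSO4                     # charge balance, Eqs. (6.9)/(6.12)
    return dTA, dSO4, dCa, 162.1 * dTA / 2.0

print(baltic_anomaly(8.0))            # model-1 at a typical Baltic chloride salinity
print(baltic_anomaly(8.0, model=2))   # model-2 including the sulfate anomaly
```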
6.3 Model validation
Although the model/data predictions will be shown to be in rough agreement, it is useful at this stage to enumerate possible sources of disagreement.The first potential source of disagreement is the error in density anomaly predictions from the conductivity model, which can themselves be in error by as much as 10% for a given composition anomaly.The second potential source is the idealization of the composition anomaly, which is only a simplified version of the true Baltic composition anomaly.This error can be investigated by comparing model-1 and model-2 predictions.A third potential source of disagreement is inhomogeneities in the chemical composition of the Baltic, which will tend to scatter results at a particular Chlorinity over a wider range than predicted by measurement uncertainty alone.A final potential source of disagreement is measurement uncertainty in the data itself.Feistel et al. (2010a) report 437 observations of the density anomaly δρ R in the Baltic Sea over the years 2006-2008, mostly at salinities of 10-20 g kg −1 .66 of these replicate measurements on water were obtained from 11 stations.The observations (Fig. 19a) show a large scatter.Part of this scatter arises from observational error in the density measurements, which can be estimated at about ±9 g m −3 (coverage factor 2) from replicate values about the means.However, scatter in excess of this value is present.The additional scatter likely derives from spatial variations in the magnitude and composition of the anomaly.The concentrations of TA in different rivers inflowing into the Baltic can vary by an order of magnitude, and these effects are not always well-mixed within the Baltic.In addition, the solute is subject to various complex chemical processes and interaction with the sediment over the residence time of 20-30 years.
In general, model calculations of δρ R using either model-1 or model-2 are quite consistent with the observations (Fig. 19a), within the limits of observational uncertainty and presumed spatial inhomogeneity.LSEA DELS predicts an anomaly of zero at S R = 35.16504g kg −1 , rising to 48 and 58 g m −3 for model-1 and model-2 anomalies respectively, www.ocean-sci.net/6/949/2010/Ocean Sci., 6, 949-981, 2010 at S R = 5 g kg −1 .The scatter in the observations is large enough that it is not clear which of the two models better describes the data.The model-2 results fall somewhat closer to the raw data at salinities of 15-20 g kg −1 .On the other hand, although both models predict much larger density differences than are observed at salinities <5 g kg −1 , the comparison is better for model-1.It should be noted that the small number of observations in this low-salinity range are from the Gulfs of Bothnia and Finland (Feistel et al., 2010a), Fig. 1, which are not representative of the freshwater inflows as a whole.Hence, complete agreement is not expected.We conclude that spatial inhomogeneities in the composition anomalies are likely the limiting factor in the present model/data comparison, rather than the accuracy of LSEA DEL itself.
The LSEA DELS calculations for both model-1 and model-2 anomalies suggest that δρ R is not a linear function of the salinity, but rather one with a pronounced downward curvature, especially at low salinities.The curvature is large enough that there is little change in predicted anomalies at salinities less than 5 g kg −1 .This downward curvature is somewhat consistent with the low density anomalies observed for S R < 5 g kg −1 , although as just discussed the lack of data makes it unlikely that the observed values are completely representative of mean Baltic values.The curvature in the model results arises because conductivity changes will account for an increasingly large proportion of the total salinity change at low salinities, although this will not become clear until Sect.6.4.
The δρ R observations are derived from measurements of density and conductivity.A small number of measurements were also made of density and Chlorinity in 2008 (Feistel et al., 2010a).Comparison of differences between Density Salinity and Chlorinity Salinity from these observations (Fig. 19b) against predictions using model-1 and model-2 anomalies again shows reasonably good agreement, with predictions using model-1 anomalies closer to the approximate empirical parameterization, Eq. (5.3).In this case, conductivity effects are not involved and the model curves are nearly straight lines, deriving from the straight lines in Eqs.(6.10) and (6.11).Although the expanded uncertainty (coverage factor 2) of the Chlorinity measurements is about 0.5% (Feistel et al., 2010a), the relationships, Eqs.(6.10), (6.11) are themselves fits to scattered data (again probably reflecting inhomogeneities in the Baltic's chemical composition), so better agreement is not expected.
The LSEA_DELS model calculations for δρ R, Eq. (6.2), using model-1 anomalies can also be compared directly (Fig. 20) against calculations from the Gibbs function, Eq. (5.6), with the Baltic anomaly being modelled using Eq. (5.4). This is a complete intercomparison of not only the density algorithms but also different approaches for specifying the composition anomalies. The two independent calculations agree quite well, with values being within 6 g m−3 of each other at all temperatures.

Fig. 19. (a) Comparison of observed Baltic Sea density anomalies (Feistel et al., 2010a) with LSEA_DELS model predictions. (b) Comparison of 3 observational estimates of the anomalies between Density Salinity S D and Chlorinity Salinity S Cl (Feistel et al., 2010a) with Eq. (5.3) and model predictions.
6.4 Corrections to Practical Salinity required for Gibbs function calculations
The Gibbs function determined in Sect. 4 is a function of chloride molality and the calcium anomaly, or equivalently S SSW A and S BSW FW .In this section we determine a correction factor for conductivity effects as a function of the same parameters using LSEA DELS with the model-1 parameterization.
First, calculating the change ΔS_R in conductivity-based Reference Salinity, Eq. (6.13), for a grid of points in the range 0 < S_Cl < 35 g kg⁻¹ and 0 < δm_Ca < 800 µmol kg⁻¹, we find that ΔS_R for a fixed δm_Ca decreases significantly as the salinity increases (Fig. 21). This reflects a commonly observed phenomenon that the conductivity per mole of charges (the equivalent conductivity) decreases as concentrations increase in solutions where the amount of solute is much less than the amount of solvent (Pawlowicz, 2008). The physical effects which reduce electrolytic conductivity are the relaxation force, electrophoresis and ion association; each of them tends to strengthen with increasing ion concentration (Ebeling et al., 1977, 1979). This change is largest at the lowest concentrations, with the decrease from its infinite-dilution endpoint being proportional to √S_Cl in this limit, in accordance with limiting laws. At lower temperatures, ΔS_R for a given addition δm_Ca is slightly larger than at higher temperatures. However, at all temperatures the changes ΔS_R are almost perfectly proportional to the magnitude of the composition anomaly. As expected, the ratio of ΔS_R to S_FW^BSW still depends significantly on S_A^SSW = S_Cl/(1 − S_FW^BSW) ≈ S_Cl, Eq. (5.2), and also shows a slight temperature dependence. ΔS_R can therefore be accurately expressed as the product of a function f, which depends only on the salinity associated with the base seawater and temperature, and the change in solute mass fraction S_FW^BSW, Eq. (6.13); the dependence of f on both T and S_Cl is shown in Fig. 21, but curves corresponding to different values of δm_Ca at a fixed temperature are visually indistinguishable at this scale. The results can be fit to an equation of the form (6.14), where the reduced variables are τ = (T − 298.15 K)/(1 K) and ξ = S_A^SSW/(1 g kg⁻¹), and the coefficients a_ij are given in Table 3. Numerical check values are available from Table A2.
The root-mean-square error of this fit is 5.3 × 10⁻⁴, but note that the model results themselves may be biased by as much as 0.05 (i.e., 10%). In Sect. 7, Eqs. (6.13) and (6.14) will be used in conjunction with Eq. (3.19) to determine thermodynamic anomalies for waters of a measured conductivity.
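As a concrete illustration of how Eqs. (6.13) and (6.14) are combined, the sketch below evaluates the conductivity-corrected Reference Salinity for a Baltic sample. Because the functional form of Eq. (6.14) is not reproduced above, the fitted correlation is represented by a placeholder `f_corr` returning a constant stand-in value; the function names and the example numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the conductivity correction, Eqs. (6.13)/(6.14).
# f_corr is a placeholder for the fitted correlation f(S_A^SSW, T) whose
# coefficients a_ij are listed in Table 3; its exact functional form is not
# reproduced here, so a constant stand-in value is used instead (assumption).

def f_corr(S_A_SSW, T):
    """Stand-in for f(S_A^SSW, T), Eq. (6.14); replace with the Table 3 fit."""
    return 0.45  # roughly consistent with the 30-50 % share quoted in the text

def reference_salinity(S_A_SSW, S_FW_BSW, T, f=f_corr):
    """S_R = S_A^SSW + f(S_A^SSW, T) * S_FW^BSW  (cf. Eq. 6.13)."""
    dS_R = f(S_A_SSW, T) * S_FW_BSW      # conductivity contribution of the anomaly
    return S_A_SSW + dS_R

# Hypothetical example: S_A^SSW = 7 g/kg, anomalous solute 0.06 g/kg, T = 288.15 K
print(reference_salinity(7.0, 0.06, 288.15))   # ~7.027 g/kg with the stand-in f
```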
Overall, conductivity changes will account for about 30-50% of the total change in salinity resulting from the presence of the anomaly, with the lower percentages occurring at highest salinities.
It had been shown experimentally that estimates of the Practical Salinity of Baltic seawater are independent of the sample temperature, within reasonable uncertainty (Feistel and Weinreben, 2008). From Eq. (6.13) and Fig. 21 we infer a weak temperature dependence of the Reference Salinity S_R at constant S_A^SSW and S_FW^BSW if S_R = u_PS × S_P is computed from Practical Salinity S_P of Baltic seawater. Figure 23 shows the deviation from Practical Salinity conservation relative to 15 °C, ΔS_P(T) = [f(S_A^SSW, T) − f(S_A^SSW, T_15)] S_FW^BSW/u_PS, (6.15) as a function of salinity S_A^SSW and temperature T, where S_FW^BSW is estimated from the empirical relations (5.4), (5.5), and the abscissa value from Eq. (6.13), S_R = S_A^SSW + f(S_A^SSW, T) S_FW^BSW. The model results suggest that the measured salinity will vary by no more than 0.001 over a 15-degree temperature change at Practical Salinities of 5 to 10. Experimental evidence (Feistel and Weinreben, 2008) finds that any changes are smaller than this value, i.e., the violation of conservation does not exceed the measurement uncertainty of salinity.
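Since S_R = u_PS × S_P, Eq. (6.15) follows directly from Eq. (6.13): the temperature dependence of the inferred Practical Salinity is carried entirely by the correlation f. A minimal sketch of this conservation check is given below; it reuses the placeholder `f_corr` from the previous sketch and the TEOS-10 conversion factor u_PS = (35.16504/35) g kg⁻¹.

```python
# Sketch of the Practical Salinity conservation check, Eq. (6.15):
# Delta S_P(T) = [f(S_A^SSW, T) - f(S_A^SSW, 15 degC)] * S_FW^BSW / u_PS.
# Reuses the placeholder f_corr defined in the previous sketch.

U_PS = 35.16504 / 35.0   # g/kg per unit of Practical Salinity (TEOS-10)

def delta_SP(S_A_SSW, S_FW_BSW, T, T_ref=288.15, f=f_corr):
    """Deviation of inferred Practical Salinity from its value at 15 degC."""
    return (f(S_A_SSW, T) - f(S_A_SSW, T_ref)) * S_FW_BSW / U_PS

# With the constant stand-in f this is identically zero; with the real fit the
# text above indicates |Delta S_P| stays below about 0.001 over a 15 K change
# at Practical Salinities of 5 to 10.
```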
Computation of properties from Practical Salinity readings
Regular oceanographic practice in Baltic Sea observation (Feistel et al., 2008b) ignores composition anomalies; readings of Practical Salinity are commonly inserted directly into SSW formulas to compute seawater properties. For conductive anomalies such as in the Baltic Sea, using Practical Salinity (or Reference Salinity S_R) rather than Chlorinity Salinity S_Cl as the input of the Gibbs function can be expected to result in a better approximation of the anomalous property (Lewis, 1981). Nevertheless, the related error in density is known from direct density measurements (Millero and Kremling, 1976; Feistel et al., 2010a). The corresponding errors of other computed properties such as sound speed, freezing point or enthalpy are simply unknown even though they may be relevant for, say, echo sounding or submarine navigation. In this section we first estimate typical errors related to this practice and eventually provide algorithms for their reduction, based on the results of the previous sections. In Sect. 5, the deviations from SSW properties are discussed for given Density Salinities S_D which are not available from regular CTD measurements. However, our models directly estimate S_FW^BSW and S_R as functions of S_A^SSW, so we can easily compute and display pairs (δq_R, S_R) using S_A^SSW as a running dummy variable, where δq_R is the error of a property computed from the Gibbs function g_BSW between the salinity pairs (S_A^SSW, S_FW^BSW), the "true salinity", and (S_R, 0), the "conductivity salinity". At the end of this section we shall invert the relations used in this procedure in order to estimate S_A^SSW and S_FW^BSW from practically measured values of S_R and eventually compute more accurate property estimates from the Gibbs function g_BSW, but first we consider a more theoretical approach in which S_A^SSW is treated as if it were measured.
For a given point (S_A^SSW, T, P), we compute the empirical Baltic Density Salinity anomaly from Eq. (5.4) and the corresponding anomalous salinity S_FW^BSW from Eq. (5.5).
Table 3. Coefficients a_ij of the correlation function f, Eq. (6.14).

i j a_ij
0 0 +0.578390505245625
1 0 -0.089779871747927
2 0 -0.001654733793251
3 0 +0.012951706126954
0 1 -0.000180931852871
1 1 -0.000294811756809
2 1 -0.000012798749635
3 1 +0.000079702941453

Reference Salinity is then available from Eqs. (6.4), (6.13) and (6.14) as a function of S_A^SSW and S_FW^BSW, S_R = S_A^SSW + f(S_A^SSW, T) S_FW^BSW. (7.3)

The anomaly-related error of any considered property q available from the Gibbs function g_BSW(S_A^SSW, S_FW^BSW, T, P), Eq. (3.19), is calculated as the difference between the best model estimate, q_BSW, and the result q_SW obtained using Reference Salinity, S_R = u_PS × S_P, in the TEOS-10 Gibbs function: δq_R = q_BSW(S_A^SSW, S_FW^BSW, T, P) − q_SW(S_R, T, P). (7.4)

The density deviation of the form (7.4), δρ_R = ρ_BSW(S_A^SSW, S_FW^BSW, T, P_SO) − ρ_SW(S_R, T, P_SO), (7.5) is displayed in Fig. 24. Comparison with experimental data (Feistel et al., 2010a) and with LSEA_DELS results shows reasonable agreement with each, with slightly better agreement with the experimental data. Compared to Figs. 9 or 20, the density anomaly is reduced by almost 50% as a result of the conductivity of the anomalous salt influencing S_R and representing part of the associated density changes through the second term on the right side of Eq. (7.5). Similarly, the conductivity effect changes the sign of the curvature and significantly reduces the temperature dependence of the density anomaly.

The sound speed deviation of the form (7.4), δc_R = c_BSW(S_A^SSW, S_FW^BSW, T, P_SO) − c_SW(S_R, T, P_SO), (7.6) is displayed in Fig. 25. The sound speed formula is given by Eq. (5.16). This figure is very similar to Fig. 13, i.e., the conductivity effect on the sound speed anomaly is only minor.
Consequently, CTD sound speed sensors with a resolution of 1 mm s −1 (Valeport, 2010) that are carefully calibrated with respect to SSW can be expected to be capable of measuring Baltic anomalies in situ and to observationally confirm the numerical model results shown here.
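The error definition (7.4) translates directly into code once property routines for Baltic seawater (the two-salinity Gibbs function) and for SSW (TEOS-10) are available. In the sketch below both are passed in as placeholder callables rather than tied to any specific library; only the structure of Eqs. (7.3)-(7.5) is shown.

```python
# Sketch of the anomaly-related property error, Eq. (7.4), specialised to
# density, Eq. (7.5). rho_BSW and rho_SW are placeholders for density routines
# based on g_BSW(S_A^SSW, S_FW^BSW, T, P) and on the TEOS-10 SSW Gibbs function.

def delta_rho_R(S_A_SSW, S_FW_BSW, T, P, rho_BSW, rho_SW, f=f_corr):
    """delta_rho_R = rho_BSW(S_A^SSW, S_FW^BSW, T, P) - rho_SW(S_R, T, P)."""
    S_R = S_A_SSW + f(S_A_SSW, T) * S_FW_BSW    # Eq. (7.3)
    return rho_BSW(S_A_SSW, S_FW_BSW, T, P) - rho_SW(S_R, T, P)
```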
The relative enthalpy deviation of the form (7.4), δh_R = h_BSW(S_A^SSW, S_FW^BSW, T, P_SO) − h_BSW(S_A^SSW, S_FW^BSW, T_SO, P_SO) − h_SW(S_R, T, P_SO) + h_SW(S_R, T_SO, P_SO), (7.7) is displayed in Fig. 26. Enthalpy is computed from the Gibbs function by h = g − T g_T. Since h depends on an arbitrary constant, only differences of enthalpies belonging to the same salinities are reasonable to be considered here. Compared to Fig. 14, the enthalpy changes are almost completely captured by the conductivity effect, and the enthalpy anomalies are therefore negligible.

The freezing point deviation of the form (7.4), δT_R = T_BSW(S_A^SSW, S_FW^BSW, T, P_SO) − T_SW(S_R, T, P_SO), (7.8) is displayed in Fig. 27. Freezing temperature is computed from Eq. (5.29). Compared to Fig. 15, the error is reduced by about 80% due to the conductivity effect and is well below the experimental uncertainty of freezing point measurements.
The above examples show that in some cases it may be desirable to correct for the anomaly or at least to check its significance in the particular case of interest. Even though this may be unnecessary in some situations, we note that there is now a general method for the calculation of the Baltic property anomaly based on the empirical Gibbs and Practical Salinity functions developed in this paper. Two practical situations are considered: (i) only Practical Salinity (plus T and P) is known for a given sample, and (ii) a direct density measurement is also available for the sample.
(i) Practical Salinity S_P is known: Since no direct information is available on the magnitude of the anomaly, an empirical relation is used for its estimate. Eqs. (6.4), (6.13), (5.4) and (5.5) are combined, with u_PS × S_P ≡ S_R, and solved in linear approximation of the anomaly, δS_R = S_R − S_A^SSW, to give estimates of S_A^SSW and S_FW^BSW, Eqs. (7.12) and (7.13). Here, the functions g and f are evaluated at salinity S_R = u_PS × S_P. The constant u_PS is given in Table A1. The Gibbs function (3.19) with the arguments S_A^SSW and S_FW^BSW can now be used to compute the corrected property.
(ii) Both Practical Salinity S_P and density ρ are known: Since density ρ is known, the estimate, Eq. (7.10), is not required here and is replaced by a more reliable value. The remaining equations are solved in the same way, leading to Eqs. (7.16) and (7.17); the functions g and f are again evaluated at salinity S_R = u_PS × S_P.
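Procedure (i) amounts to inverting S_R = S_A^SSW + f(S_A^SSW, T) S_FW^BSW with S_FW^BSW tied to S_A^SSW through the empirical relations (5.4) and (5.5). The closed-form linearised solution, Eqs. (7.12) and (7.13), is not reproduced above, so the sketch below instead solves the system by simple fixed-point iteration; `anomaly_estimate` and `f` are placeholder callables for the empirical anomaly relation and the conductivity correlation.

```python
# Sketch of procedure (i): estimate S_A^SSW and S_FW^BSW from Practical Salinity.
# anomaly_estimate(S_A_SSW) stands in for the empirical relations (5.4)/(5.5);
# f(S_A_SSW, T) is the conductivity correlation of Eq. (6.14). The fixed-point
# iteration used here is an implementation choice, not the paper's Eqs. (7.12)/(7.13).

U_PS = 35.16504 / 35.0   # g/kg per unit of Practical Salinity (TEOS-10)

def invert_practical_salinity(S_P, T, f, anomaly_estimate, n_iter=20):
    S_R = U_PS * S_P          # conductivity-based Reference Salinity, S_R = u_PS * S_P
    S_A_SSW = S_R             # first guess: no composition anomaly
    for _ in range(n_iter):
        S_FW_BSW = anomaly_estimate(S_A_SSW)        # empirical anomaly estimate
        S_A_SSW = S_R - f(S_A_SSW, T) * S_FW_BSW    # invert Eq. (6.13)/(7.3)
    return S_A_SSW, S_FW_BSW
```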
Summary

For Baltic seawater with a simplified composition anomaly representing only inputs of calcium carbonate, Eq. (1.1), a Gibbs function is determined based on theoretical considerations and results from FREZCHEM model simulations. The new Gibbs function, Eq. (3.19), combines the TEOS-10 Gibbs function of Standard Seawater (SSW), g_SW(S_A^SSW, T, P), with an anomalous part, g_FW, proportional to the Absolute Salinity of the anomalous (freshwater) salt, S_FW^BSW, resulting in the form g_BSW(S_A^SSW, S_FW^BSW, T, P). The Absolute Salinity of the "preformed" SSW part, the parent solution, is denoted by S_A^SSW, Eq. (2.26). From the mass balance, the Absolute Salinity of Baltic seawater is given by Eq. (3.21). Once S_FW^BSW is determined, the Gibbs function g_BSW of Baltic seawater can be computed from Eq. (3.15); the regression coefficients are reported in Table 1 and the results of the fit in Table 2. The scatter of the FREZCHEM data points with respect to the resulting partial Gibbs function g_FW is shown in Figs. 5, 6 and 7 for δv, δc_P and δψ, respectively. Numerical check values are available from Table A2. Various salinity measures such as Reference Salinity, S_R, Absolute Salinity, S_A, Density Salinity, S_D, or Chlorinity Salinity, S_Cl, have the same values for SSW but differ from each other for BSW. The estimate of Density Salinity based on inversion of the expression for density in terms of the Gibbs function for SSW at arbitrary values of temperature and pressure is represented by S_D and referred to as "measured" Density Salinity, since it is based on whatever the conditions of the direct density measurement are. It is the Absolute Salinity of SSW (here assumed to have Reference Composition) that has the same density as BSW at given temperature and pressure, i.e., g_P^BSW(S_A^SSW, S_FW^BSW, T, P) = g_P^SW(S_D, T, P).

Note that a single salinity variable such as Eq. (8.2) is insufficient for the description of Baltic seawater properties. Rather, the Gibbs function (8.1) takes two separate salinity variables, one for the SSW part and one for the additional anomalous (freshwater-related) part. The anomalous part of the Gibbs function, g_FW, is available from the correlation expression (4.8) with regression coefficients reported in Table 1 and numerical check values in Table A2.

Computed from the Baltic Gibbs function, g_BSW, various property anomalies are quantitatively displayed in Figs. 8-18 and discussed in relation to Millero's Rule, which provides generally reasonable, and sometimes very good, estimates although it cannot be assumed a priori to be valid in general. Density Salinity is a good proxy for the actual Absolute Salinity of the Baltic Sea when the composition anomaly is represented by Ca²⁺ and 2HCO₃⁻, although these results are somewhat sensitive to the particular composition of the anomaly. The influence of dissolved calcium that is in charge balance and in chemical equilibrium with the marine carbonate system is estimated from LSEA_DELS simulation results and is effectively represented by the conductivity factor f(S_A^SSW, T), which correlates the anomalous mass-fraction salinity, S_FW^BSW, with Practical Salinity, S_P, in the form of Eq. (6.13), S_R = u_PS × S_P = S_A^SSW + f(S_A^SSW, T) S_FW^BSW. The salinity conversion factor u_PS is given in Table A1. The correlation function f(S_A^SSW, T) has the mathematical form (6.14) with coefficients given in Table 3 and numerical check values in Table A2. The pressure dependence of f is unknown but is assumed to be of minor relevance for the relatively shallow Baltic Sea compared to the general uncertainties of the models and the scatter of the data employed here.
The above discussion regards the influence of anomalous solute as an addition to the preformed SSW part of the Absolute Salinity. When dealing with field measurements, it is often more convenient to consider anomalies from the Reference-Composition Salinity S_R = u_PS × S_P. In this case, the conductivity effect of the anomalous solute influences the value of S_R and reduces the anomalies in comparison to those computed with respect to estimates based on the preformed Absolute Salinity, S_A^SSW, as shown in Figs. 24-27. This conclusion is similar to earlier studies on regional ocean waters (Cox et al., 1967; Lewis, 1981).
For some properties the use of S_R = u_PS × S_P as the salinity argument of the TEOS-10 Gibbs function (IOC et al., 2010) proves sufficiently accurate for Baltic seawater, but it may be insufficient in cases such as density or sound speed, depending on the actual application purposes. In these cases, estimates of S_A^SSW and S_FW^BSW are required for use in the Gibbs function, Eq. (8.1). Two alternative methods, Eqs. (7.12), (7.13) or (7.16), (7.17), are suggested to obtain these estimates, depending on whether only Practical Salinity or both Practical Salinity and density are measured.

Fig. 1. The Baltic Sea is a semi-enclosed estuary with a volume of about 20 000 km³ and an annual freshwater surplus of about 500 km³ a⁻¹; direct precipitation excess accounts for only 10% of the latter value (Feistel et al., 2008b). Baltic seawater (BSW) is a mixture of ocean water (OW) from the North Atlantic with river water (RW) discharged from the large surrounding drainage area. Regionally and temporally, mixing ratio and RW solute are highly variable. Collected BSW samples consist of Standard Seawater (SSW) with Reference Composition (RC) plus a small amount of anomalous freshwater solute (FW), which we approximate here to be calcium bicarbonate, Ca(HCO₃)₂. In dissolved form, depending on ambient temperature and pH, Ca(HCO₃)₂ is decomposed into the various compounds of the aqueous carbonate system with mutual equilibrium ratios (Cockell, 2008).
Fig. 2. Specific volume anomaly of Baltic seawater at the standard ocean surface pressure and a typical salinity of S_A^SSW = 10.306 g kg⁻¹ for six different temperatures 0-25 °C as indicated by the curves, computed by the FREZCHEM model and by Millero's Rule (dashed lines, without temperatures indicated). The latter curves are the differences between the specific volumes computed from the TEOS-10 Gibbs function at salinities S_A^BSW, Eq. (3.21), and S_A^SSW, Eq. (3.19). Experimental uncertainties are considered in the following section.
Fig. 3. Heat capacity anomaly of Baltic seawater at the standard ocean surface pressure and a typical salinity of S_A^SSW = 10.306 g kg⁻¹ for six different temperatures 0-25 °C as indicated by the curves, computed by the FREZCHEM model and by Millero's Rule (dashed lines). The latter curves are the differences between the heat capacities computed from the TEOS-10 Gibbs function at salinities S_A^BSW, Eq. (3.21), and S_A^SSW, Eq. (3.14). Experimental uncertainties are considered in the following section.
Fig. 4. Activity potential anomaly of Baltic seawater at the standard ocean surface pressure and a typical salinity of S_A^SSW = 10.306 g kg⁻¹ for six different temperatures 0-25 °C as indicated by the curves, computed by the FREZCHEM model and by Millero's Rule (dashed lines, different temperatures graphically indistinguishable). The latter curves are the differences between the activity potentials computed from the TEOS-10 Gibbs function at salinities S_A^BSW, Eq. (3.21), and S_A^SSW, Eq. (3.14).
Fig. 5. Scatter of specific volume anomalies computed from FREZCHEM, δv_i, relative to the specific volume anomalies computed from the Gibbs function, δv^(c), Eq. (4.14), at 1260 given data points. The rms deviation of the fit is 1.5 mm³/kg. Symbols 0-5 indicate the pressures of 0.1 MPa, 1 MPa, 2 MPa, 3 MPa, 4 MPa and 5 MPa, respectively. These residual anomalies should be compared with the total anomalies δv_i shown in Fig. 2.
Fig. 6. Scatter of heat capacity anomalies computed from FREZCHEM, δc_P,i, relative to the heat capacity anomalies computed from the Gibbs function, δc_P^(c), Eq. (4.15), at 210 given data points at atmospheric pressure. The rms deviation of the fit is 3.4 mJ/(kg K). Symbols 0-5 indicate the temperatures of 0-25 °C, respectively. These residual anomalies should be compared with the total anomalies δc_P,i shown in Fig. 3.
Fig. 7. Scatter of the activity potential anomalies computed from FREZCHEM, δψ_i, relative to the activity potential anomalies computed from the Gibbs function, δψ^(c), Eq. (4.33), at 1260 given data points. The rms deviation of the fit is 3.1 × 10⁻⁵. Symbols 0-5 indicate the pressures of 0.1 MPa, 1 MPa, 2 MPa, 3 MPa, 4 MPa and 5 MPa, respectively. These residual anomalies should be compared with the total anomalies δψ_i shown in Fig. 4.
For typical Baltic seawater conditions, S_D is approximately given as a function of S_A^SSW by the empirical relation S_D = S_A^SSW + 130 mg kg⁻¹ × (1 − S_A^SSW/S_SO), Eq. (5.4), which is based on density measurements made at 20 °C and Chlorinity determinations at 3 different stations, using Eq. (5.2) in the form S_Cl ≈ S_A^SSW. This empirical relation is used here to conveniently present the comparisons for typical Baltic conditions.

Fig. 8. Difference between S_A^BSW and S_D as a function of Density Salinity for typical Baltic seawater conditions.
Fig. 9. Difference δρ, Eq. (5.6), between the densities with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C. The uncertainty of density measurements is 2 g m⁻³ (Feistel et al., 2010a), indicated by the solid horizontal line.
Fig. 10. Difference δα, Eq. (5.7), between the thermal expansion coefficients (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C, in comparison to estimates from Millero's Rule based on Density Salinity, which exceed the range shown in the figure.

Fig. 11. Difference δβ, Eq. (5.12), between the haline contraction coefficients (solid lines) of the parent solution with respect to the addition of FW solute and of SSW solute for Baltic seawater. Values are determined at the standard ocean surface pressure and temperatures between 0 and 25 °C. The standard-ocean value of the haline contraction coefficient is 0.781 = 781 ppm g⁻¹ kg. The haline contraction coefficient associated with the addition of calcium carbonate is within 20% of the haline contraction coefficient for Standard Seawater.
Fig. 12. Difference δc_P, Eq. (5.13), between the specific isobaric heat capacities (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 0 and 25 °C, in comparison to estimates from Millero's Rule, δc_P^D, based on Density Salinity (dashed lines), Eq. (5.14).

Fig. 13. Difference δc, Eq. (5.17), between the sound speeds (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure.

Fig. 14. Difference δh, Eq. (5.20), between the relative specific enthalpies (solid lines) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure and temperatures between 5 and 25 °C, in comparison to estimates from Millero's rule, δh_D, based on Density Salinity (dashed lines, only the 15-25 °C results are labelled), Eq. (5.21), and δh_A, based on Absolute Salinity (dotted lines, temperatures not labelled), Eq. (5.22). For the latter two, the responsible difference between S_A^BSW and S_D is shown in Fig. 8. The experimental uncertainty of the relative enthalpies is 0.5 J kg⁻¹ × t/°C.
Fig. 15. Difference δT, Eq. (5.30), between the freezing temperature (solid line) with and without the freshwater solute for Baltic seawater at the standard ocean surface pressure, in comparison to estimates from Millero's Rule, δT_D, based on Density Salinity (dashed line), Eq. (5.31), and δT_A, based on Absolute Salinity (dotted line), Eq. (5.32). For the latter two, the responsible difference between S_A^BSW and S_D is shown in Fig. 8. The experimental uncertainty of the freezing temperature of seawater is 2 mK, indicated by the solid horizontal line.
Fig. 16. Difference δP, Eq. (5.34), between the vapour pressures (solid line) with and without the freshwater solute for Baltic seawater at 20 °C, in comparison to estimates from Millero's Rule, δP_D, based on Density Salinity (dashed line), Eq. (5.35), and δP_A, based on Absolute Salinity (dotted line), Eq. (5.36). For the latter two, the responsible difference between S_A^BSW and S_D is shown in Fig. 8. The related experimental uncertainty is 0.02% or 0.4 Pa, well beyond the range of this graph.

Comparison of the Density Salinities computed at different temperatures and using Absolute Salinity, S_A^BSW, Eq. (3.21), shows the salinity difference ΔS_D(t) = S_D(S_A^SSW, S_FW^BSW, T_SO + t, P_SO) − S_D(S_A^SSW, S_FW^BSW, T_SO + 25 °C, P_SO), Eq. (5.37), as a function of the Density Salinity at 25 °C for typical Baltic anomaly pairs of S_A^SSW and S_FW^BSW computed from Eqs. (5.4) and (5.5). Figure 17 is similar to Fig. 8.

Fig. 17. Difference ΔS_D(t), Eq. (5.37), between the Density Salinities computed at different temperatures from Eq. (5.1) at the same mass-fraction salinities S_A^SSW and S_FW^BSW.

Fig. 18. Deviation between the density of Baltic seawater and the density computed from conservative Density Salinity, S_A^dens, Eq. (5.38). The experimental uncertainty of density measurements is 2 ppm (Feistel et al., 2010a), indicated by the solid lines.
Fig. 19. (a) Comparison between 437 measured density anomalies (Feistel et al., 2010a) and LSEA_DELS model predictions. (b) Comparison of model results with 3 observational estimates of the anomalies between Density Salinity S_D and the Chlorinity Salinity S_Cl (Feistel et al., 2010a) as well as Eq. (5.3), and model predictions.
Fig. 20. Comparison of the density anomalies between SSW and Baltic seawater of the same chloride molality, computed by the Gibbs function and by LSEA_DELS. Curves are drawn for temperatures of 0, 5, 10, 15, and 20 °C, with the highest curves corresponding to the lowest temperatures.
Fig. 21. Anomaly of the Reference Salinity, Eq. (6.13), as a function of S_Cl at different temperatures and anomalies δm_Ca, estimated using LSEA_DELS.
Fig. 23. Temperature dependence, Eq. (6.15), of Practical Salinity relative to 15 °C of a given sample of Baltic seawater at atmospheric pressure.
Fig. 24. Error in density, Eq. (7.5), if computed from measured Reference Salinity, using the Gibbs function for SSW. Results are shown for temperatures between 0 and 25 °C and at atmospheric pressure.
Fig. 25. Error in sound speed, Eq. (7.6), if computed from measured Reference Salinity using the Gibbs function for SSW. Results are shown for temperatures between 0 and 25 °C and at atmospheric pressure.

Fig. 26. Error in relative enthalpy, Eq. (7.7), if computed from measured Reference Salinity using the Gibbs function for SSW. Results are shown for temperatures between 1 and 25 °C and at atmospheric pressure.
Fig. 27. Error in freezing temperature, Eq. (7.8), if computed from measured Reference Salinity using the Gibbs function for SSW. Results shown correspond to atmospheric pressure.
Table 2. Results of the regression, Eq. (4.8), with respect to properties of Baltic seawater simulated with FREZCHEM.
Table A2. Numerical check values of the Gibbs function anomaly g_FW, Eq. (4.8), and of the conductivity function, f, Eq. (6.14).
Table B1. Glossary of formula symbols.
|
v3-fos-license
|
2018-12-12T06:44:28.208Z
|
2017-10-30T00:00:00.000
|
55738331
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.assaf.org.za/index.php/per/article/download/3267/4201",
"pdf_hash": "40b7c39658eb1ff841a3a93d359a18f5571f0a53",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46095",
"s2fieldsofstudy": [
"Law"
],
"sha1": "40b7c39658eb1ff841a3a93d359a18f5571f0a53",
"year": 2017
}
|
pes2o/s2orc
|
Decolonisation and Teaching Law in Africa with Special Reference to Living Customary Law
The student protests in South African Universities, which started in 2015, demanded the decolonisation of certain aspects of higher education. While the primary demand is free education, issues of the curriculum and transformation connected with the country's history of colonialism and apartheid have also surfaced. In the field of law, demands for curriculum change are accompanied by the broad issue of the decolonisation of law, translating into questions of legal history, the concept of law, the role of law in African societies, the status of indigenous systems of law in the post-independent/apartheid legal system, and how law is taught in law schools. This paper examines the idea of the decolonisation of law in relation to the teaching of law in African states previously under the influence of English or Roman-Dutch colonial/apartheid legal history. The teaching of law is with special reference to the system of law that governs the majority of people in Africa in private law and aspects of governance – living customary law. The paper examines the design of legal education with respect to three elements that are essential to the decolonisation of law and legal education. The elements under review are the inclusion of living customary law in legal education, a shift in the legal theoretical paradigm within which law is taught, and the interdisciplinary study of law. Thus, the paper links the decolonisation of law to how law is taught, with special reference to living customary law. In discussing these elements, the paper draws examples from the South African legal system, because it has the most advanced jurisprudential conceptualisation of customary law on the African Continent.
Introduction
In 2015 South African universities witnessed a spate of protests under the banners of "Rhodes Must Fall" and "Fees Must Fall" and the broad issues of social justice and the transformation of universities from their colonial and Eurocentric heritages. These protests are likely to change the nature of tertiary institutions and teaching in these institutions in South Africa (although we refer to 2015 as the year in which the protests started, the protests continued in 2016). With regard to legal studies, the protests included demands for changes in the curricula in law schools. In my view, implicit in this demand is a call for a rethinking of broader legal issues, such as the decolonisation of law, which in turn raises questions about the legal history of African countries; the concept of law; the role of law in African societies; the status of indigenous systems of law in post-independence or post-apartheid legal systems; and how law is taught in African law schools. This proposition is partially confirmed by the views of some law students. Alex Hotz, for example, wrote: "As a law student, I believe decolonising the law faculty goes beyond the faculty and the institution. It speaks to what the law is and how it is used within society." These demands have therefore kindled the need to reflect on these issues from different perspectives.
The aim of this paper is to explore the idea of decolonisation and the teaching of law in African institutions of higher learning. It focuses on the teaching of law in a decolonised African context, with special reference to living customary law. The paper argues that the redesigning of the teaching of law, involving three elements, is critical to the decolonisation of law in Africa. The three elements are the inclusion of living customary law in legal education, a paradigm shift in legal theory, and the interdisciplinary study of law. Underpinning this argument are three important premises. The first premise is that decolonisation cannot be achieved without the development
of indigenous systems of law through legal education.In essence, the development and survival of living customary law cannot be divorced from the decolonisation of law in Africa.
The second premise is that living customary law as a concept of law represents a move away from the colonial (and apartheid) heritage of the distortion of customary law.This makes this system of law the appropriate basis for, and object of, decolonisation of law in comparison with its counterpart, official customary law.The third premise is that living customary law has a distinctive character of its own, which requires special consideration for the purposes of teaching law in a decolonised context.
In advancing the argument of this paper, we draw examples from the South African legal system because, on the African continent, it has the most advanced jurisprudential conceptualisation of customary law as both living customary law and official customary law.
The argument of the paper is advanced in five sections.Following this introduction, the second section is a brief general background to the legality of protest action in tertiary institutions within the framework of the Constitution of South Africa.This section is intended to show that for the most part students were within their right to protest.The third section sets out the conceptual frameworks of decolonisation and customary law.The fourth section delves into the three elements of decolonisationthe teaching of living customary law, the theoretical paradigm within which law is taught, and interdisciplinary studies.The fifth section concludes the paper.
The legal status of protest action in tertiary institutions
The Supreme Court of Appeal pronounced itself on the legal status of protest action within the constitutional framework of South Africa in Hotz v UCT. 3 This was an appeal from the High Court decision granting an interdict to the University of Cape Town barring five participants in one of the protest actions on the campus from entering the premises of the University because they had allegedly committed unlawful acts in the course of their protest.
The Court held, inter alia, 4 that protest action is not itself unlawful and that the right to protest against injustice is protected by the Constitution, "not only specifically in section 17, by way of the right to assemble, demonstrate and present petitions, but also by other constitutionally protected rights, such as the right of freedom of opinion (s15(1)); the right of freedom of expression (s16(1)); the right of freedom of association (s18) and the right to make political choices and campaign for a political cause (s19(1))". 5However, the Court qualified these rights by holding that the mode of exercise of those rights is also the subject of constitutional regulation, that is: (a) the right to freedom of speech does not extend to the advocacy of hatred that is based on race or ethnicity and that constitutes incitement to cause harm (section 16(2)(c)); (b) the right to demonstrate is to be exercised peacefully and unarmed (section 17); and (c) all rights are to be exercised in a manner that respects and protects the foundational value of the human dignity of other people (section 10) and the rights other people enjoy under the Constitution.Citing its own decisions in other cases, the Court stated: 6 Our Constitution saw South Africa making a clean break with the past.The Constitution is focused on ensuring human dignity, the achievement of equality and the advancement of human rights and freedoms.It is calculated to ensure accountability, responsiveness and openness.Public demonstrations and marches are a regular feature of present day South Africa.I accept that assemblies, pickets, marches and demonstrations are an essential feature of a democratic society and that they are essential instruments of dialogue in society.The [Regulation of Gatherings] Act was designed to ensure that public protests and demonstrations are confined within legally recognised limits with due regard for the rights of others.
From this decision it may be inferred that protests in tertiary institutions are not to be treated any differently from other types of protest action in the country. Thus, although the actions of the appellants were restricted by an interdict, the right to lawful protest itself was unquestionable.

The conceptual framework: decolonisation and customary law

Central to this paper is the concept of decolonisation, including the decolonisation of the teaching of law. This concept is discussed in this section along with the concepts of living customary law and official customary law.
Decolonisation
In this section we attempt to provide a definition of decolonisation that is relevant to the teaching of law in an African context.
In their joint reflection on decolonising the University of Cape Town, Max Price and Russel Ally stated that "decolonisation … should certainly not be reduced to some naïve … desire to return to a pristine, unblemished Africa before the arrival of the settlers". 8We agree with this statement because it seems to allude to a non-romanticised and non-rhetorical concept of decolonisation, on the one hand, and a dynamic meaning of decolonisation, on the other hand.Elsewhere we define decolonisation in a legal context as follows: [W]hile indigenous approaches should, in our view, be central to the decolonisation of law, this is not a call for the unconditional indigenisation of law in which an anti-colonial discourse, which is frequently trapped within the same colonial epistemology, is advanced uncritically.Instead, we suggest that a more meaningful point of departure in the decolonisation of law is the defining of law from a "non-colonial" position and from alternative legal epistemologies.In this respect, decolonisation draws from different sources of law and normative agencies to promote the transformative potential of law in achieving more social and economic justice. 9 Decolonisation is, furthermore, a move from a hegemonic or Eurocentric conception of law connected to legal cultures historically rooted in colonialism (and apartheid) in Africa to more inclusive legal cultures. 10 extend this notion of decolonisation to this paper.Additionally, decolonisation refers to locating the paradigmatic and theoretical shifts that are required for the teaching of law.
Official customary law and living customary law
Official customary law refers to a variety of sources of state law. In some countries, such as South Africa, official customary law may be divided into two categories for the purposes of this paper. The first category, the old order official customary law, consists of sources rooted in the colonial and apartheid eras, such as the Black Administration Act 38 of 1927, which regulated several aspects of the lives of Black South Africans during the apartheid era but has now been largely repealed, with the exception of a few provisions regulating the chiefs' and headmen's courts. The second category of official customary law, the new order official customary law, consists of legislation arising from the provisions of the Constitution that recognise customary law. For example, section 15 of the South African Constitution states that legislation may be enacted to recognise traditional forms of marriage or marriages concluded according to custom. Section 211(3) of the same Constitution provides that the courts must apply customary law subject to, among other things, legislation dealing specifically with customary law. In 1998, the Recognition of Customary Marriages Act was enacted to reform the customary law of marriage in line with the South African Constitution, including the constitutional provisions on gender equality. This Act makes provision for the application of both customary law and the common law (for the purposes of this paper, the term "common law" is used in a broad sense to represent the body of law in Africa which arose from Eurocentric sources due to colonialism; no distinction is drawn between the families of legal systems, such as common law, civil law and mixed legal systems). It is therefore a hybrid form of official customary law linked to new efforts aimed at transforming indigenous institutions within African constitutional frameworks.

Although both the old order and new order categories of official customary law bear the appellation of customary law, they often bear little resemblance to the living customary law regulating the day-to-day lives of people on the ground. Most importantly, the old order category of official customary law bears the marks of colonialism (and apartheid). This is because it was designed to advance colonial or apartheid state interests, in the process of which it was distorted. For these reasons, the inclusion of old order official customary law in legal education would perpetuate the colonial legal legacy, which is contrary to the idea of decolonisation. Therefore, this category of official customary law should not form a core part of the legal curriculum. In contrast, the new order official customary law should be included in legal education, because it forms part of the constitutionalisation of customary law, along with living customary law, which is the focus of this paper as elaborated in the next section, where the concept of living customary law receives further consideration.
Designing legal education for decolonisation
Arguably, three elements are essential for decolonising law and legal education.These are the inclusion of living customary law in legal education, a shift in theoretical paradigm within which law is taught and the interdisciplinary study of law.A discussion of each of these elements follows.
The inclusion of living customary law in legal education
As intimated in the discussion of the conceptual framework above, living customary law is the law that governs the legal relations of people who are subject to a given system of customary law in their day-to-day life. An equally fitting definition is that adopted by South African legislation: "the customs and usages traditionally observed among the indigenous African peoples of South Africa which form part of the culture of those people". The use of the term "culture" in this definition is significant, as it seems to allude to the dynamic nature of living customary law: as culture is dynamic, so is living customary law. Living customary law represents the practices or customs observed and invested with binding authority by the people whose customary law is under consideration. Thus, living customary law is the law observed by, or rooted in, each ethnic group of Africa regardless of whether it is recognised by the state. As an unwritten store of legal ideas and knowledge, living customary law is passed down from one generation to the next orally. This store of knowledge is uniquely African in the sense that though not insulated from global conditions, its evolution is shaped within changing African social, economic and political contexts. Moreover, because of its oral nature and flexibility, living customary law can readily and easily be adjusted to meet the varied needs of justice in a decolonised context. Its character also distinguishes it from the written law that shaped colonial and apartheid jurisprudence in South Africa. In this respect, Bennett has observed:

[R]ules of an oral regime are porous and malleable. Because they have no clear definition, it is difficult to differentiate one rule from another, and, in consequence, to classify rules according to type. If rules cannot be classified, they cannot be arranged into a system, and without the discipline of a system, rules may overlap and contradict one another. In fact strictly speaking, the oral versions of customary law should not be called systems at all. They are probably better described as repertoires, from which the discerning judge may select whichever rule best suits the needs of the case. (See Bennett Customary Law fn 6, p 3.)

What emerges from this statement is a distinctive legal tradition whose logic and methodology does not place primary value on organisation or systemisation, and does not aspire to be a rigid framework of regulation, like other systems such as official customary law or the common law.
Similarly, the Constitutional Court of South Africa implicitly describes living customary law as a distinctive and original source of law; in several cases, the court has recognised living customary law as a legitimate form of customary law in the post-apartheid legal system. Referring to the recognition of customary law by sections 211 and 39(2) of the Constitution of the Republic of South Africa, 1996, the Court stated: "The Constitution thus 'acknowledges the originality and distinctiveness of indigenous law as an independent source of norms within the legal system…'." Section 211 states that: "(1) The institution, status and role of traditional leadership, according to customary law, are recognised subject to the Constitution. (2) A traditional authority that observes a system of customary law may function subject to any applicable legislation and customs, which includes amendments to, or repeal of, that legislation or those customs. (3) The courts must apply customary law when that law is applicable, subject to the Constitution and any legislation that specifically deals with customary law"; section 39(2) provides that "When interpreting any legislation, and when developing the common law or customary law, every court, tribunal or forum must promote the spirit, purport and objects of the Bill of Rights".

Arguably, the source of living customary law (ie the people subject to customary law), the value of its flexibility and adaptability as an evolving oral system, and its recognition as a distinctive and original source of indigenous law are all positive elements in the decolonisation of law. These attributes also qualify this system of law for inclusion as a core subject of study in a decolonised system of legal education. Moreover, these qualities of living customary law justify its development and retention in a decolonised legal system, also bearing in mind the fact that this system of law regulates the lives of the majority of the population in African legal systems.
Put differently, living customary law must be taught in all law faculties or law schools and at appropriate levels of the law degree that enable students to comprehend the significance and complexity of the subject within the constitutional frameworks of African countries. Future lawyers and judges need to have an understanding of important aspects of this customary law, including its methodology in a broad sense (for example, case-by-case approaches and reconciliation as the goal of dispute resolution) and its development as a system of law within African constitutional frameworks. If future lawyers and judges are not given appropriate legal training about living customary law, they will not have the right lens through which to view customary law: in its own right and not from the perspective of other legal systems. As the Constitutional Court put it in Alexkor Ltd v Richtersveld Community 2004 5 SA 460 (CC) para 51: "While in the past indigenous law was seen through the common law lens, it must now be seen as an integral part of our law. Like all law it depends for its ultimate force and validity on the Constitution. Its validity must now be determined by reference not to common law, but to the Constitution." Without such training, the relevant pronouncements of the South African Constitutional Court will be devoid of any practical significance.
The link between legal education and the development of a legal system is evident from the pronouncements of South African scholars about the development of Roman-Dutch law (RDL). In the words of Cowen, whom we believe to be one of the founders of legal education in South Africa, "taught law is tough law". With reference to the survival of RDL against the encroaching influence of English common law he stated:

No legal system can survive unless it is taught scientifically … taught law is tough law by which [is meant] durable law. In short, the tide could not really turn in favour of the Roman-Dutch law in South Africa [in the nineteenth century] until a sound local tradition of tuition in its basic principles was built up … if I were asked to single out the cause which, more than any others, set back the prospects of the Roman-Dutch law in South Africa during much of the nineteenth century, I would point to the lack of scientific training in Roman-Dutch law.

According to this statement, legal education was seen as the antidote to the imminent death of RDL due to the influence of other legal traditions or cultures. In our view, this statement is no less true of the survival of living customary law or any other indigenous system of law against the influence of imported, yet dominant, colonial and apartheid legal systems. Unless customary law is taught in law faculties it will die.
Furthermore, a host of issues concerning living customary law demand the attention of scientific thought in institutions of higher learning if this system is to develop into a modern African legal system. These issues include: (a) the long-standing challenge of how to ascertain living customary law, with the attendant question of how to ensure a measure of certainty about the rules of this system in the context of judicial decision-making; (b) the manipulation and distortion of living customary law, especially in the context of power relations among different sections of the community living under customary law and because of its evolving and oral nature; (c) appropriate methods of aligning this system of law with constitutional principles and international and regional human rights; (d) the endurance and social legitimacy of living customary law; (e) issues of the universal application of human rights vis-à-vis cultural rights; and (f) whether and how the fundamentally different world views represented by the living customary law and common law can be merged and reconciled in one body of law, for example, that body of law which has to regulate commerce. We submit that there is no better place for addressing these issues, or for the development of customary law in relation to these issues, than in the academy, in legal education.
In sum, we argue that the teaching of living customary law as part of the core curriculum of legal education is essential to the process of the decolonisation of law, as well as the decolonisation of legal education itself.
In the next section we consider the importance of the legal theoretical paradigm within which law is taught to this process.
The legal theoretical framework
The predominant legal theoretical framework within which law is taught, at least in law schools under the historical influence of English and Roman-Dutch common law, is legal centralism and positivism. This theory prepares future lawyers and judges to engage with western-type legal systems and legal cultures and not with non-western African legal systems, let alone oral legal traditions. For example, an important aspect of legal positivism is formalism. This strand of legal theory separates legal rules from "nonlegal normative considerations of morality or political philosophy" and requires judges to apply the rules to the facts of the case before them deductively, with the value of legal certainty as a goal, among other things. However, the rules of living customary law cannot be abstracted from their social contexts. They are embedded in the social realities within which people live their lives. In addition, the values of certainty, stability and predictability, which are core to western legal cultures, are not necessarily the primary goals of dispute resolution in living customary law. It is therefore arguable that the legal education of judges and lawyers in Africa exclusively within the theoretical frameworks of legal positivism and centralism does not adequately prepare them to deal with the application of non-western legal orders, such as living customary law, in which law and its values are viewed differently. The result is that lawyers and judges view living customary law as non-existent, or regard living customary law as informal law that is irrelevant to state institutions.
South African judges, for example, have shown a remarkable willingness to step beyond the influence of the dominant mode of their legal education to embrace and recognise concepts of law, such as living customary law, that are located in non-western legal pluralistic theoretical frameworks. However, these judges sometimes seem to retreat into their predominantly western law and legal theoretical training and orientation when applying customary law. The result is that they bring ideas of legal centralism and positivism into the domain of customary law as well.
A classic example of this retreat is the decision of the majority in Bhe v Magistrate, Khayelitsha. 31 In that case the Constitutional Court recognised the concept of living customary law, including its flexibility. This flexibility means that the system of law is relatively "processual", and hence less rule-bound than the "positivist/centralist" system of law, in the sense that the application of the rules to disputes follows the repertoire of norms approach Bennett alludes to above. 32 Inherently, this attribute of living customary law entails a case-by-case approach to the application of customary law in decision-making. It also entails some uncertainty in the outcomes of cases.
In other words, ideally there is no precedent value in cases decided under customary law, as each case is decided entirely on its own merits. 33 Interestingly, however, the majority of judges in Bhe focused on the values of certainty and uniformity associated with legal centralism and positivism in deciding whether to develop customary law in accordance with constitutional provisions. The response of the Court to the argument on this issue is quoted at length in order to underscore this point.
It was argued by one of the parties that if the Court was not in a position to develop the rules of customary law in this case, it should allow for flexibility in order to facilitate the development of the law. The majority, rejecting this argument, reasoned as follows:
The import of this [argument] was that since customary law is inherently flexible with the ability to permit compromise settlements, courts should introduce into the system those principles that the official system of succession violates. It was suggested that this could be done by using the exceptions in the implementation of the primogeniture rule which do occur in the actual administration of intestate succession as the applicable rule for customary law succession in order to avoid unfair discrimination and the violation of the dignity of the individuals affected by it. These exceptions would, according to this view, constitute the "living" customary law which should be implemented instead of official customary law. … There is much to be said for the above approach. I consider, however, that it would be inappropriate to adopt it as the remedy in this case. What it amounts to is advocacy for a case by case development as the best option. … The problem with development by the courts on a case by case basis is that changes will be very slow; uncertainties regarding the real rules of customary law will be prolonged and there will be different solutions for similar problems … 34
Arguably, underpinning this reasoning is the Court's support for the values of certainty and uniformity associated with the concept of law within the legal theoretical framework of centralism and positivism, as well as its affinity to the doctrine of precedent. Thus, the ghost of the training of judges in legal centralism and positivism sometimes seems to follow them when they apply customary law in decision-making. 35 The training of lawyers and future judges should therefore equip them to deal not only with the dominant common-law systems of African countries but with living customary law as well.
This shift could be made by teaching law within legal theoretical frameworks that are closely associated with the concept of living customary law, the most appropriate of which is the theoretical perspective of legal pluralism.
Legal pluralism is the coexistence of distinctive legal systems in a specific social field where "laws and institutions are not subsumed within one system but have their sources in the self-regulatory activities of all the multifarious social fields present, activities which may support, complement, ignore or frustrate one another". 36 Within this theoretical framework, the existence of one legal order does not depend on its recognition by other legal orders, including the legal order of the state. Living customary law fits perfectly into this theoretical framework. 37 Additionally, the sociological theoretical framework that deals with the concept of living law 38 could also be explored for its relevance to the teaching of law in a decolonised context. 39 However, both the teaching of living customary law within the legal pluralistic (and sociological) theoretical framework and the decolonisation of law would benefit from an interdisciplinary approach to the teaching of law, to which we now turn.
The interdisciplinary teaching of law
An interdisciplinary approach to the study of a subject is defined as: [A]n approach that integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines or bodies of specialised knowledge to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. 40 The National Science Foundation (NSF) in the USA has observed that the meaning of interdisciplinary studies is the subject of scholarly debates. It states that interdisciplinary studies are "continually emerging, melding and transforming". 41 However, we submit that the dynamism of the interdisciplinary approach alluded to in this observation offers an interesting idea for the decolonisation of legal education. This is because dynamism represents the recognition of change and transformation which is needed to meld law with other disciplines continually, in order to enhance law's potential as an instrument of social justice.
Interdisciplinary studies of law therefore offer a platform for a more profound understanding of the relationship between law in all its manifestations and regulatory practice in society. Law, as a social practice or legal science, has often closed itself off in epistemological monism. Yet, the apparent assumption underlying this monism that law can explain itself, both as theoretical corpus and social practice, cannot be established. Legal science therefore needs to open itself up to other disciplines through interdisciplinary studies. This is also essential to enhance the power of law to elucidate and transform social reality. Moreover, the problems that law is supposed to address in society often lie beyond a single discipline. Understanding the epistemological problems and sharpening the instrumentality of law in solving these problems therefore requires an integration of knowledge from different disciplines.
The teaching and study of law in a decolonised context should aim to bring together contributions from various disciplines to focus on regulatory practices embodied by law in all its manifestations. A departure from pure legal studies is envisaged, in order to focus on conceptualising, developing, problematising and proposing hypotheses common to the various disciplines and interdisciplines involved, and to question and challenge legal approaches from these perspectives. For instance, as enabling disciplines, sociology, anthropology and history can enhance the understanding, teaching and research of legal phenomena in their various contextual manifestations in Africa. This in turn can be useful for exploring alternative epistemologies, hypotheses and methods that could lead to the rediscovery of legal studies via conceptual and methodological innovations. … legal traditions and social, economic and political systems. In this respect, we argue that the teaching of law should take a view of decolonisation of law that goes further than mere vernacularisation. This argument is informed by law's capacity to facilitate interdisciplinary and innovative thinking, as well as to create a platform to engage, in a critical and constructive way, with the decolonisation of law as an epistemological question beyond its political and social implications.
More specifically, interdisciplinary studies in the fields of law and anthropology, law and sociology, and law and history would introduce students and the legal profession to a way of understanding social realities that is gleaned from the lived experiences of people. This would be achieved through the multiple layers of observing, interviewing, translating, writing and interpreting (asking questions such as how to capture and understand the norms of a community, how to understand a community or ethnic group, etc.), as well as through understanding of how legitimacy and authority are multi-vocal and often contested. 42 Furthermore, studies in these fields would help law students and the legal profession generally to better understand issues concerning inequality, modes of oppression, and social justice. 43 Studies in law and history in particular would assist students to understand the neglect of the study of living customary law in African colonial (and apartheid) history, as well as the need for a paradigm shift in the thinking about customary law as a source of law in post-colonial contexts exhibiting new constitutional mandates regarding the recognition of customary law in the legal system.
Finally, interdisciplinary studies may be useful where students are required to understand a given subject in terms of multiple traditional disciplines. 44 As stated above, the connection of legal science to other social sciences is important in legal settings characterised by pluralism, and because the problems that law is supposed to address in society often lie beyond a single discipline.
In sum, the teaching of law, particularly living customary law, should incorporate an interdisciplinary approach, in order to expand the depth and quality of legal studies, and to build an academic community knowledgeable about the relevance of other disciplines for law and vice versa. The current teaching of law as a discipline does not generally equip graduates with these broad-based skills. This deficiency reduces their ability to study and research living customary law and to contribute more effectively to the development of this law as a discipline. Similar deficiencies reduce the ability of graduates to deal with customary law within the constitutional mandates of decision-making in the adjudication of disputes.
Conclusion
In conclusion, the paper has attempted to link the decolonisation of law in Africa to the teaching and survival of living customary law as a distinct legal system which regulates the lives of millions of people in Africa. This not only reflects African legal realities, but also contributes to alternative epistemologies that reveal the transformative potential of law in dealing with the social realities of Africa. In this paper we have argued that the teaching of living customary law and law generally is critical to both of these contributions. The paper has also argued that unless law teaching is redesigned to shift the legal theoretical paradigm within which law is taught and to adopt an interdisciplinary approach to the teaching of law, the project of decolonising law in African legal systems will falter if not fail to materialise.
We have also attempted to show that the overall shift in the paradigm of teaching law will increase the potential of law to transform African societies and enhance social justice in a manner that is consistent with decolonisation.
Bibliography
25Cowen "Early Years of Aspiration to the 1920s" 8. See alsoHimonga 2010 Penn St Int'l L Rev 41-59 where Himonga first made this point.
* Professor of Law in the Department of Private Law, University of Cape Town Law Faculty; holder of the DST/NRF SARChI Chair in Customary Law, Indigenous Values and Human Rights, University of Cape Town. Email: chuma.himonga@uct.ac.za. We acknowledge Professor Han Van Dijk for his contribution on ideas about decolonisation and interdisciplinary studies which we developed together in the proposal on the concept and structure of an Institute for Interdisciplinary Studies of Law in Africa (hereafter referred to as the IISLA proposal) submitted to the University of Cape Town by the DST/NRF SARChI Chair in Customary Law, Indigenous Values and Human Rights at the University of Cape Town in 2015.
** Dr Fatimata Diallo. Bachelor in Public Law, Professional Masters, M.Phil (University Gaston Berger and Francophone University Association), PhD (Leiden University). Formerly Senior Research Fellow at the DST/NRF SARChI Chair in Customary Law, Indigenous Values and Human Rights at the University of Cape Town. Email: diallofatimaster@gmail.com.
42 Dr Elena Moore (oral exchange, 24 May 2016). 43 Dr Elena Moore (oral exchange, 24 May 2016). 44 Ponnusamy and Pandurangan Hand Book on University System 13.
Law, Custom and Social Order: The Colonial Experience in Malawi and Zambia (Cambridge University Press Cambridge 1985)
Cowen D "Early Years of Aspiration to the 1920s" in Cowen D and Visser D (eds) The University of Cape Town Law Faculty: A History 1859-2004 (Siber Ink Cape Town 2004) 1-23
Ehrlich E Fundamental Principles of the Sociology of Law (Harvard University Press Cambridge Mass 1936)
"What is Legal Pluralism?" 1986 J Legal Plur 1-55
Himonga C "The Constitutional Court of Justice Moseneke and the Decolonisation of Law in South Africa: Revisiting the Relationship between Indigenous Law and Common Law" (forthcoming 2017) Acta Juridica
Himonga C and Bosch C "The Application of Customary Law under the Constitution of South Africa: Problems Solved or just Beginning?" 2000 SALJ 306-341
Himonga C and Moore E Reform of Customary Marriage, Divorce and Succession: Living Customary Law and Social Realities (Juta Cape Town 2015)
Himonga C "Goals and Objectives of Law Schools in their Primary Role of Educating Students: South Africa - The University of Cape Town School of Law Experience" 2010 Penn St Int'l L Rev 41-59
Hund J "Customary Law is What People Say it is - HLA Hart's Contribution to Legal Anthropology" 1998 ARSP 420-429
Kameri-Mbote P, Odote C and Nyamu-Musembi C Ours by Right: Law, Politics and Realities of Community Property in Kenya (Strathmore University Press Nairobi 2013)
Odgaard R and Weis Benton A "The Interplay Between Collective Rights and Obligations and Individual Rights" 1998 (10)2 EJDR 105-116
Ponnusamy R and Pandurangan J A Hand Book on University System (Allied New Delhi 2014)
Posner R "Legal Formalism, Legal Realism, and the Interpretation of Statutes and the Constitution" 1986-7 Case W Res L Rev 179-217
University of Cape Town "Decolonising UCT" 2015 A Year in Review 22-23
Winerib E "Legal Formalism on the Immanent Rationality of Law" 1988 Yale LJ 949-958
Bhe v Magistrate, Khayelitsha (Commission for Gender Equality as Amicus Curiae); Shibi v Sithole; South African Human Rights Commission v President of the Republic of South Africa 2005 1 SA 580 (CC)
Ex parte Chairperson of the Constitutional Assembly: In re Certification of the Constitution of the Republic of South Africa, 1996 1996 4 SA 744 (CC)
Hotz v University of Cape Town 2017 2 SA 485 (SCA)
/www.groundup.org.za/article/academics-and-fallist-movement/ accessed 17 November 2016
Kane M, Oloka-Onyango J and Tejan-Cole A 2005 Reassessing Customary Law Systems as a Vehicle for Providing Equitable Access to Justice for the Poor http://siteresources.worldbank.org/INTRANETSOCIALDEVELOPMENT/Resources/reassessingcustomary.pdf accessed 28 July 2017
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
1976-08-01T00:00:00.000
|
10438682
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "pd",
"oa_status": "GOLD",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1475217/pdf/envhper00491-0112.pdf",
"pdf_hash": "e5a6f5ca0fdcc6cbbd56f202bddc732107dbe948",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46096",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e5a6f5ca0fdcc6cbbd56f202bddc732107dbe948",
"year": 1976
}
|
pes2o/s2orc
|
Pulmonary changes induced by amphophilic drugs.
Administration of amphophilic drugs to experimental animals causes formation of myeloid bodies in many cell types, accumulation of foamy macrophages in pulmonary alveoli, and pulmonary alveolar proteinosis. These changes are the result of an interaction between the drugs and phospholipids which leads to an alteration in physicochemical properties of the phospholipids. Impairment of the digestion of altered pulmonary secretions in phagosomes of macrophages results in accumulation of foam cells in pulmonary alveoli. Impairment of the metabolism of altered phospholipids removed by autophagy induces an accumulation of myeloid bodies. In summary, administration of amphophilic compounds causes a drug-induced lysosomal disease or generalized phospholipidosis.
In recent years it is becoming increasingly apparent that certain drugs administered systemically to man have serious side effects because of their affinity for the lungs. The pulmonary pathology produced by these drugs has received relatively little clinical attention because of the insidious and chronic nature of its development. With increasing awareness that a large variety of seemingly harmless drugs can induce such lung changes, it is imperative that more rigorous studies be carried out before such drugs are put to clinical use. The purpose of this report is to review a group of drugs which lead to intraalveolar histiocytosis.
These drugs induce essentially similar histological and ultrastructural changes in the lung, although the time required for the appearance of the foam cells and their quantity and size may be quite different in various animal species (1,7,8,12). The lungs of the treated animals increase in weight (1,11) and contain whitish plaques and nodules on gross examination (1,2). Microscopically these plaques consist of intraalveolar accumulations of foam cells (1,2).
The foam cells are between 20 and 80 µm in diameter (3,4,6,8) and have a centrally placed nucleus. The cytoplasm is abundant, pale, and finely reticulated in hematoxylin-eosin stained sections (Fig. 1). Staining of the foam cells with Baker's acid hematein for phospholipids is positive, while staining with Sudan III and by the periodic acid-Schiff method are negative (2,6,11). Other staining reactions identify the material in the foam cells as choline-containing phosphoglycerides, including lecithin (11). Histochemical studies reveal high activities of acid phosphatase and β-glucuronidase, implicating the foam cells as macrophages (6,8,13,16). The time sequence of pulmonary changes associated with foam cell accumulation has been studied at light and electron microscopic levels. The first change observed in iprindole-treated rats is interstitial pulmonary edema associated with degenerative changes in capillary endothelia (16). Subsequently endothelia and alveolar epithelia become swollen and the septa are infiltrated by interstitial macrophages (16). The lungs of chlorphentermine-treated rats show hyperemia, aggregation of leucocytes in venules and perivascular infiltration by monocytes during the first week of treatment (8). In general a few intra-alveolar macrophages appear early and become progressively larger and more numerous (2,6,11). These macrophages are derived from interstitial macrophages (6,14), which in turn originate from blood monocytes (8,10). In histological sections the alveolar lumina contain abundant amorphous material (Fig. 2), which somewhat resembles the material seen in alveolar proteinosis (4). On ultrastructural examination the material is identified as secretions derived from secretory vacuoles of granular pneumocytes (Figs. 4 and 5). It has been shown (2,9,20) that the intra-alveolar macrophages phagocytize the secretions, which are rich in dipalmitoyl lecithin (3), and become foam cells (Figs. 3 and 6). After 3-6 weeks of treatment, the foam cells are numerous (2,4,8,13).
A marked decrease in the secretory activity of type 2 pneumocytes is observed after 9 months of treatment (15). After 12 months of iprindole feeding, the accumulations of foam cells are replaced by pale eosinophilic granular material in the intra-alveolar spaces as a result of cellular breakdown, and the histological picture is that of alveolar proteinosis (15). Degeneration of macrophages is also seen in rats treated with AY-9944 for prolonged periods (1). If the administration of the drug is withdrawn after several weeks of treatment, the size and the number of foam cells will decrease in 2-3 weeks (2,6). During this process the secretions in phagosomes of macrophages are replaced by electron dense heterogeneous material (2) (Fig. 7).
Other pulmonary cells also show striking changes. Type 2 pneumocytes become hypertrophic and hyperplastic after triparanol and chlorcyclizine treatment (2,4) but do not show changes after chlorphentermine treatment (6). The secretory vacuoles of type 2 pneumocytes are large (2,15). Myeloid bodies (21,22) and heterogeneous dense bodies are found in the cytoplasm of type 1 pneumocytes (Fig. 8), ciliated bronchiolar epithelia and Clara cells, smooth muscle cells, fibroblasts and capillary endothelia (2-4,15,16,19).
FIGURE 7. Portion of a pulmonary macrophage from a rat treated with chlorcyclizine for 14 weeks and allowed to recover for 2 weeks. x5600.
Other investigators refer to the myeloid bodies as concentric lamellar inclusion bodies (3,6) and membrane-bound lamellated inclusion bodies (23). Heterogeneous (Fig. 9), although in some cells they may contain reticular or crystalloid structures (21,22). Such reticular or crystalloid myeloid bodies were found in various pulmonary cells, but not in pulmonary macrophages (3). The drugs which induce accumulations of pulmonary foam cells are also known to induce formation of myeloid bodies in many other cell types of various tissues (7,22,24). Because the inclusions in foam cells and the myeloid bodies are modified lysosomes, knowledge of lysosome formation by heterophagy and by autophagy is necessary to understand their nature (22). In autophagy, sequestering cisternae surround a portion of the cytoplasm with organelles and form an autophagic vacuole (Fig. 9).
The thin membranes of the sequestering cisterna are transformed into a thick limiting membrane of the lysosome. Hydrolytic enzymes formed in endoplasmic reticulum are brought to the autophagic vacuole within primary lysosomes. The sequestered organelles are broken down by the hydrolytic enzymes and the autophagic vacuole changes into a heterogeneous dense body (secondary lysosome). Further digestion leads to the appearance of smaller homogeneous dense bodies (residual bodies). In heterophagy, the extracellular material enters the cell within pinocytotic vesicles and phagocytic vacuoles. Primary lysosomes empty their enzymes into this vacuole, and the digestion proceeds as in autophagy.
If the degradation of substrates sequestered within lysosomes is impaired, the substrates will accumulate, and the lysosomes will become storage bodies. The impairment of lysosomal digestion may have a variety of causes. Leprosy bacilli are known to inhibit lysosomal digestion in phagosomes. Human storage diseases are caused by the absence of specific lysosomal enzymes. Myeloid bodies are storage bodies containing membranes whose digestion is impaired by drugs (20). The inclusions in foam cells are phagosomes storing phagocytized pulmonary secretions whose digestion is impaired by drugs.
The inclusions in foam cells and myeloid bodies are sometimes grouped together as "concentric lamellar inclusion bodies" (3,6,9). They are, however, different structurally and etiologically (2,15,16). The material within pulmonary macrophages is less densely packed and arranged in a less orderly fashion than are the myeloid membranes within myeloid bodies (6,16). A myeloid body is an autophagosome while the inclusion in the macrophage is a heterophagosome. Some investigators believe that in chlorphentermine-treated animals autophagy may not precede or be increased during the formation of myeloid bodies (3,23,25). Increased autophagy has been, however, observed with other drugs (2,21,26). The fact that myeloid bodies show acid phosphatase activity (6,21,27,28) and are lysosomes (6,16,26,28) indicates that autophagy or heterophagy must be involved in their formation. Myeloid bodies are surrounded by a lysosomal membrane and should be distinguished from myeloid figures lying free in the cytoplasm or on the surface of mitochondria (22,29). Presence of myeloid figures protruding into mitochondria in granular pneumocytes (30) and in cells of murine pulmonary tumors originating from type 2 pneumocytes (31) does not indicate that myeloid figures are precursors of membrane bound secretions of granular pneumocytes. The structure of pulmonary secretions and of myeloid figures is quite different (19) (compare Figs. 5 and 9).
Biochemical studies show that the accumulation of pulmonary foam cells is associated with an increase of lipids. Chlorphentermine treatment leads to a twofold increase of total pulmonary lipids in the rat (32) and to a tenfold increase of the lipid content of pulmonary macrophages (33).
The sphingomyelin, cholesterol, and cholesterol ester fractions are markedly increased while phosphatidylcholine is increased fivefold (32). Cloforex increases the cholesterol content of rat lung by 50% and the phospholipid content five times (1). A marked increase of the levels of total phospholipids, total sterols, lyso-bisphosphatidic acid, and desmosterol was demonstrated in the lamellar body fraction isolated from lungs of rats treated with diethylaminoethoxyhexestrol (19).
Desmosterol was also found in fractions containing myeloid bodies induced by inhibitors of cholesterol synthesis (27,34). Finally, it has been demonstrated that the drugs inducing foam cell accumulation have a very high affinity for the lung (4,33,35-37).
The factors which should be considered in the pathogenesis of intra-alveolar foam cell accumulation are: increased production of surfactant and/or decreased clearance of pulmonary macrophages (2,4,15); lipolytic action of the drugs with subsequent excretion of lipids by the lung (11); inhibition of fusion of primary lysosomes with phagocytic vacuoles (38); accumulation of cholesterol precursors in macrophages (1,2,4,27); formation of abnormal "foreign" phospholipids which can not be eliminated by normal pathways (39); interaction of drugs with lysosomal lipid-degrading enzymes leading to enzyme inactivation (6,19,32,39); interaction between the drugs and lipids leading to an alteration in physicochemical properties of the lipid (2,3,6,8,32,39). The last possibility has gained the most support. The molecules of chlorphentermine, triparanol and of the other drugs mentioned earlier have amphophilic character (3,7,8). One part of the molecule contains protonated nitrogen and has hydrophilic properties. The aromatic portion of the molecule, particularly with certain substitutions on the ring, is hydrophobic. The amphophilia of the drugs facilitates complex formation with amphophilic phospholipids (3). An interaction between chlorphentermine and phospholipids has been demonstrated by nuclear magnetic resonance studies (40).
It is believed that the formation of a complex between the drug and the phospholipid leads to an alteration in physicochemical properties of the phospholipid and impairs its metabolism in phagosomes and in lysosomes (2,3,40). In the lung the drugs bind to phospholipids such as dipalmitoyl lecithin of pulmonary secretions in granular pneumocytes. These secretions are released into alveolar lumina, where they are taken up by pulmonary macrophages. Because of impaired digestion, the secretions persist and accumulate, and the macrophages become foam cells. Similar drug binding occurs in various cells in the body. The drugs react with phospholipids of cellular membranes and the altered membranes are removed by autophagy. The sequestered proteins and carbohydrates are broken down by lysosomal enzymes while the drug-lipid complexes resist digestion and are transformed into myeloid membranes. Thus a myeloid body is formed. For these reasons, the formation of foam cells and of myeloid bodies can be considered a drug-induced lysosomal disease (28) or a drug-induced generalized phospholipidosis (6-8).
The drug-induced lipidoses are side effects of drugs and do not depend on the pharmacological actions of the drugs (8). The anorectic drugs have been associated with pulmonary hypertension in man. These vascular effects are related to an altered metabolism of serotonin rather than to foam cell accumulation (41). The accumulation of pulmonary foam cells and of myeloid bodies may, however, be associated with serious clinical problems. Functional impairment of overloaded macrophages may lead to a decreased resistance to bacterial and fungal infections (21). Massive accumulation of myeloid bodies may cause cellular damage and death. Administration of chloroquine has been associated with retinopathy in man (42). Hyperlipidemia, hepatosplenomegaly, liver cell necrosis and cirrhosis have been reported in patients treated with diethylaminoethoxyhexestrol (19,26,43).
Side effects of some of the discussed drugs were discovered only after they were used in men. Most probably other drugs will have similar side effects (44). It is therefore imperative to search for myeloid body formation when new drugs are introduced. In animal experiments, massive accumulation of pulmonary foam cells is an excellent indicator of a drug-induced lipidosis. It should be remembered, however, that foam cells occur in other pathological entities (1) and in old normal rats (45). In clinical trials the peripheral blood is easily accessible for ultrastructural studies. Lymphocytes and plasma cells respond to amphophilic drugs by formation of myeloid bodies in animals and in man (12,23,28).
Administration of busulfan, hexamethonium, apresoline and antituberculous drugs may be associated with hypertrophy of granular pneumocytes. The essential feature of the "busulfan lung" is chronic pulmonary fibrosis and does not resemble the lesions induced by amphophilic drugs (52-54).
In summary, the drug-induced pulmonary histiocytosis is a manifestation of a generalized drug-induced lysosomal storage disease (or lipidosis). The use of the electron microscope greatly facilitates the search for drug-induced side effects at the cellular level.
|
v3-fos-license
|
2019-02-17T14:20:38.234Z
|
2018-06-25T00:00:00.000
|
67437834
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://pubs.sciepub.com/jbms/6/3/7/jbms-6-3-7.pdf",
"pdf_hash": "510f7e95f08e840c4afa6edbad34453b99e5cc5a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46097",
"s2fieldsofstudy": [
"Computer Science",
"Medicine",
"Political Science"
],
"sha1": "38afd22149845eeb80b522c5dae788698cda1bc0",
"year": 2018
}
|
pes2o/s2orc
|
Big Data and Data-Driven Healthcare Systems
Data analytics has been used in healthcare. Healthcare systems generate big data. Traditional data management techniques are often unable to manage the voluminous amounts of data produced in healthcare systems. Big Data analytics, which overcomes the limitations of traditional data analytics, will bring revolutions to healthcare systems. Big data and Big Data analytics in healthcare systems are presented in this paper. Information security, privacy, and challenges of Big Data analytics in healthcare are also discussed.
Introduction
Medical science has long relied on clinical trials to demonstrate the efficacy of interventions, whether pharmaceutical, surgical, or device based. Medical interventions should frequently be tailored to the specific characteristics of each individual patient. There has been an increased focus on personalized medicine in recent years, which relies on tailoring to individuals to provide "the right drug for the right patient at the right dose and time" [1]. Electronic health records and the data automatically collected from devices such as wearable devices are often sources of big data. It is not easy to perform perfect data curation and quality control. While approaches driven by Big Data accelerate the discovery of new therapies and diagnostics, all computational predictions must still be thoroughly validated in experimental and clinical settings before widespread use. People are moving toward big data-based healthcare, including data-driven methodologies to accelerate the discovery of new diagnostics and drugs [2].
Some data related to healthcare is characterized by a need for timeliness, such as data from implantable or wearable biometric sensors, or heart rate and SpO2 readings, which are commonly gathered and analyzed in real time. Suitable large-scale analysis typically requires the gathering of data from numerous, heterogeneous sources; for example, obtaining a patient's (or a population's) comprehensive health status requires the integration and analysis of patient health records beyond Internet-available environmental data or assorted meter readings (e.g., accelerometers, remote, wearable, or local cardiac monitors, or glucometers) [3]. Big Data approaches are being used to build models of healthy aging. Age-related conditions are the leading causes of death and healthcare costs. Reducing the rate of aging would have enormous medical and financial benefits. Myriad genes and pathways are known to regulate aging in model organisms. Challenges and pitfalls of commercialization include reliance on findings from short-lived model organisms, poor biological understanding of aging, and hurdles in performing clinical trials for aging [4].
Analytics in healthcare is driven by the gradual shift from disease-centered to patient-centered care (PCC). From a general practitioner's desktop computer to cardiac monitors in an emergency room, a multitude of clinical information systems capture patient information. This information exists at different levels of granularity, in diverse formats and recorded at varying frequency. A patient can record blood glucose levels at different times during the day when at home whereas a clinic may capture a single measurement but derive a different measure (glycated haemoglobin) to determine the three months average. This difference in granularity can be an extra dimension of information for Big Data analytics (BDA) when paired with medication, demographic or behavioral information. BDA can be used to identify changes in medical images and relate these to changes in medications or dosage. The inherent limitations of most data collections, such as missing data, null values, incorrect values and unmatched records were observed and accounted for in the BDA process. The fusion of structured and unstructured data is aptly demonstrated in the outcomes and is of significant value in a clinical context. BDA architecture for healthcare applications should overcome the complexities of granular data accumulation, temporal abstraction, multimodality, unstructured data and integration of multisource data to provide a robust platform for effective workflows and improved engagement [5].
The variety of big data is not solved only by parallelizing and distributing problems. Variety is mitigated by capturing, structuring, and understanding unstructured data using artificial intelligence (AI) and other analytics [6]. Clinical data is expressed within the narrative portion of the EMRs, requiring natural language processing techniques to unlock the medical knowledge referred to by physicians [7]. Research on big data has mostly focused on addressing technical issues. However, organizations will not acquire the full benefits of leveraging big data analytics unless they can address managerial challenges effectively, orchestrate strategic choices and resource configurations, and understand the managerial, economic, and strategic impact of big data analytics. Developing a deeper understanding of the ways and means to create business value from big data analytics will reduce resistance to adopting big data analytics and the ineffective use of analytics. Thus, exploring the path to big data analytics success for healthcare transformation is currently one of the most discussed topics in the fields of computer science, information systems (IS), and healthcare informatics [8].
Data Sources in Healthcare and Big Data Advantages
Healthcare datasets collected in both clinical and nonclinical segments are in various forms, and their sources are described in Figure 1. Some keywords related to big data in the biomedical area are listed in Table 1. Big-Omic Data are data containing a comprehensive catalog of molecular profiles (e.g., genomic, transcriptomic, epigenomic, proteomic, and metabolomic profiles) in biological samples that provide a basis for precision medicine. Big EHR Data can be unstructured (e.g., clinical notes) or structured (e.g., ICD-9 diagnosis codes, administrative data, chart, and medication). Omic and EHR big data analytics is a challenge due to data frequency, quality, dimensionality, and heterogeneity [9]. Developing a detailed model of a human being by combining physiological data and high-throughput -omics techniques has the potential to enhance the knowledge of disease states and help develop blood-based diagnostic tools. Medical image analysis, signal processing of physiological data, and integration of physiological and -omics data face challenges and opportunities in dealing with disparate structured and unstructured big data sources [11]. Big data technologies are increasingly used for processing next-generation sequencing (NGS) data, motivated by the volume and velocity at which sequencing data is produced. Existing implementations of cloud-enabled NGS tools often use the MapReduce (MR) paradigm. MR is included in frameworks such as Hadoop that enable distributed processing of large-scale NGS datasets on a cloud [12].
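To make the MapReduce paradigm mentioned above concrete, the following minimal sketch counts k-mers in sequencing reads with an explicit map step and reduce step. It runs in a single Python process purely to illustrate the programming model, not a Hadoop deployment; the reads and the k value are hypothetical.

```python
from collections import Counter
from functools import reduce

def map_kmers(read, k=4):
    """Map step: emit (k-mer, 1) pairs for one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reduce_counts(acc, pairs):
    """Reduce step: sum the counts emitted for each k-mer."""
    for kmer, count in pairs:
        acc[kmer] += count
    return acc

reads = ["ACGTACGTGG", "TTACGTACGA"]                     # hypothetical NGS reads
mapped = [map_kmers(r) for r in reads]                    # map phase (parallelizable)
kmer_counts = reduce(reduce_counts, mapped, Counter())    # reduce phase
print(kmer_counts.most_common(3))
```

In a real Hadoop or Spark job, the map calls would run in parallel across cluster nodes and the framework would shuffle the emitted pairs to the reducers.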
Infectious disease surveillance is one of the most exciting opportunities created by big data because these novel data streams can improve timeliness and spatial and temporal resolution. These streams can also go beyond disease surveillance and provide information on behaviors and outcomes related to vaccine or drug use [13]. Big Data can be used in health care to get innovative outcomes in the following areas [14]:
• Public and population health: BDA solutions can mine web-based data and social media data to predict the trend of diseases (e.g. flu).
• Evidence-based medicine: it involves the use of statistical studies and quantified research by doctors to form a diagnosis.
• Clinical decision support: BDA technologies can be used to predict outcomes or recommend alternative treatments to clinicians and patients at the point of care.
• Personalized care: predictive data mining or analytic solutions may offer early detection and diagnosis before a patient has disease symptoms. Pattern detection can be fulfilled through real-time wearable sensors for elderly or disabled patients to alert the physicians if there is any change in their vital parameters, or through post-market monitoring of drug effectiveness.
• Fraud detection: fraud in medical claims can increase the burden on society. Predictive models like decision trees, neural networks, regression, etc. can be used to predict and prevent fraud at the point of transactions (a minimal sketch of such a model is shown after this list).
• Secondary usage of health data: dealing with aggregation of clinical data from finance, patient care, and administrative records to discover valuable insights like identification of patients with rare diseases, therapy choices, clinical performance measurement, etc.
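As a hedged illustration of the predictive-modeling idea behind the fraud detection item above, the sketch below fits a logistic regression on a small synthetic claims dataset. The features, thresholds, and data are invented for illustration; a production system would use far richer features, careful validation, and human review of flagged claims.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic claim features: billed amount, procedures per claim, claims per provider per month
X = np.column_stack([
    rng.gamma(2.0, 500.0, n),
    rng.poisson(3, n),
    rng.uniform(0, 50, n),
])
# Synthetic label: higher amounts and claim rates make "fraud" more likely
logit = 0.002 * X[:, 0] + 0.05 * X[:, 2] - 4.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# Flag claims whose predicted fraud probability exceeds a review threshold
flags = model.predict_proba(X_te)[:, 1] > 0.8
print("claims flagged for review:", int(flags.sum()))
```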
Case Studies of Big Data in Diseases
In diabetes, a multidimensional approach to data analysis is needed to better understand the disease conditions, trajectories and the associated comorbidities. Elucidation of multidimensionality comes from the analysis of factors such as disease phenotypes, marker types, and biological motifs while seeking to make use of multiple levels of information including genetics, omics, clinical data, and environmental and lifestyle factors. A significant role is played by both environmental and genetic factors in Type-2 diabetes (T2D) [15]. A predictive analysis algorithm was used in a Hadoop/MapReduce environment to predict the prevalent diabetes types, the complications associated with them, and the type of treatment to be provided. The healthcare industry is moving from reporting facts to discovery of insights, toward becoming data-driven healthcare organizations. Big Data holds great potential to change the whole healthcare value chain from drug analysis to the quality of patient care [16].
Medical images help in early detection, diagnosis and prognosis of neurological disorders. Diagnosis of these diseases by radiologists is achieved through neuroimaging techniques. The major constituents of the human brain are Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). Cranial volume is a significant metric by which abnormality in the size and shape of the brain is detected. Hence quantitative analysis of brain tissues plays a key role in the diagnosis of these illnesses. Performing this measurable analysis on MRI brain images from a medical imaging perspective requires image processing and/or machine learning techniques. On the other hand, understanding why there is loss of neurons is viewed from a bioinformatics perspective. Studies have confirmed that one of the reasons owes to protein misfolding, where proteins fail to fold appropriately. This leads to severe concerns resulting in neuronal death [17]. Big Data has a great potential in the study of brain science. Figure 2 shows Big Data-driven discovery in gastroenterology and hepatology: 1) Big Data-driven discovery can provide new approaches to long-standing or emerging unmet needs in gastrointestinal and liver diseases; 2) systematically and/or automatically collected heterogeneous data from patients and publicly or privately available databases are integrated into highly rich datasets and analyzed; 3) mining the assembled big data by specialized methodologies (translational bioinformatics) efficiently yields diagnostic devices, tools, and/or therapeutics [2].
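As a minimal sketch of the quantitative tissue analysis described above, the code below clusters voxel intensities of a hypothetical, already skull-stripped MRI volume into three classes as a rough proxy for CSF, gray matter, and white matter, and reports their volume fractions. The random volume and the crude mask are placeholders; real pipelines load actual scans (e.g., with nibabel) and use dedicated segmentation tools and atlases rather than plain k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical skull-stripped MRI volume as a 3-D intensity array
volume = np.random.default_rng(1).normal(loc=100, scale=30, size=(64, 64, 32))
brain_mask = volume > 40                      # crude foreground mask for illustration

intensities = volume[brain_mask].reshape(-1, 1)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(intensities)

# Order clusters by mean intensity: darkest ~ CSF, middle ~ gray matter, brightest ~ white matter
order = np.argsort(kmeans.cluster_centers_.ravel())
names = dict(zip(order, ["CSF", "gray matter", "white matter"]))

counts = np.bincount(kmeans.labels_, minlength=3)
for cluster, name in names.items():
    fraction = counts[cluster] / counts.sum()
    print(f"{name}: {fraction:.1%} of brain voxels")
```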
Advances of Big Data in Healthcare
Big data analytics comprises an integrated array of aggregation techniques, analytics techniques, and interpretation techniques that allow users to transform data into evidence-based decisions and informed actions. Data aggregation aims to collect heterogeneous data from multiple sources and transform them into data formats that can be read and analyzed. Data will be aggregated through three key functionalities of data aggregation tools: acquisition, transformation, and storage. Data analysis aims to process all kinds of data and perform appropriate analyses for harvesting insights. Data interpretation generates outputs such as various visualization reports, real-time information monitoring, and meaningful business insights derived from the analytics components to users [18]. An online healthcare monitoring system was developed that is shown in Figure 3. Figure 4 shows an advanced process of data collection. Various healthcare data are collected by data nodes and are transmitted to the cloud through configurable adapters that provide the functionality to preprocess and encrypt the data [20]. Figure 5 [22] shows the working flow of a healthcare monitor system based on the healthcare cloud, in which the webpage interface provides four basic query options, including real-time dynamics, status overview, device distribution, and patient-healthcare record. Fog computing (shown in Figure 6) is an emerging paradigm that provides storage, processing, and communication services closer to the end user. Fog computing does not replace cloud computing. Rather, it extends the cloud to the edge of the network [21].
Figure 6. Fog computing architecture [21].
Big data computing is a new trend for future computing with large amounts of data sets and can be divided into three paradigms: batch-oriented computing, real-time oriented computing (or stream computing), and hybrid computing. Apache Hadoop (Hadoop) is an example of batch-oriented computing. However, the output time will vary depending upon the amount of data that is given as input. In contrast, real-time oriented computing involves continuous input and output of data. A big data input stream has three main characteristics, namely high speed, real time, and large volume [10]. New technologies, such as platforms and infrastructures, are required for handling big data. A historical perspective of the frameworks of these technologies in data processing is shown in Figure 7 [23]. Researchers have presented a novel cloud platform for fast statistics and analysis based on big data processing technology. In this platform, medical service information is transformed to a new data structure in a column-oriented Database Management System (DBMS); a Spark cluster is used to satisfy the real-time computing requirements. Hadoop is one of the most important open-source big data platforms, and it simplifies the processing and management of big data by means of the MapReduce model and a sophisticated ecosystem. The fast statistical analysis platform is composed of the following basic components: Data ETL Servers, Distributed storage, Spark cluster and Application Web Server. Fast computing is the critical step in the statistics and analysis of big medical service data. Therefore, it is imperative to use a new big data processing platform to accelerate the computing and utilization of those medical service data.
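To illustrate the stream-computing paradigm described above, the following sketch maintains a sliding-window average and a simple alert over a simulated heart-rate stream. A production system would use a stream-processing framework (e.g., Spark Streaming) across many nodes rather than a single-process loop; the readings and thresholds are hypothetical.

```python
from collections import deque

def monitor_heart_rate(stream, window=10, high=120):
    """Maintain a sliding-window average over a stream and yield alerts."""
    recent = deque(maxlen=window)
    for t, bpm in enumerate(stream):
        recent.append(bpm)
        avg = sum(recent) / len(recent)
        if avg > high:
            yield (t, round(avg, 1))   # timestamp and the average that triggered the alert

simulated = [78, 82, 90, 130, 135, 140, 138, 90, 85, 80]   # hypothetical readings
for t, avg in monitor_heart_rate(simulated, window=3, high=120):
    print(f"alert at t={t}: 3-sample average {avg} bpm")
```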
A fast statistical analysis platform for medical service big data should provide the following functions: the design of a new data structure in the distributed database (HBase) that is capable of processing hundreds of millions of records, transforming source data from an Oracle database to the HBase database, executing statistics and analysis using mathematical methods, and friendly output of computing results [24]. Table 2 describes the main components of the Spark framework.
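The sketch below, assuming a running Spark session and a hypothetical medical-service extract already produced by the ETL servers as CSV, shows the kind of fast aggregate statistics such a platform computes; the file path and column names are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("MedicalServiceStats").getOrCreate()

# Hypothetical extract produced by the ETL servers; columns are illustrative
visits = spark.read.csv("hdfs:///medical/visits.csv", header=True, inferSchema=True)

# Fast aggregate statistics per department: visit volume and average cost
stats = (visits
         .groupBy("department")
         .agg(F.count("*").alias("visit_count"),
              F.avg("cost").alias("avg_cost"))
         .orderBy(F.desc("visit_count")))

stats.show(10)   # results would be returned to the application web server
```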
Big data analytics includes various analytical techniques, such as descriptive analytics and mining/predictive analytics, that are well suited for analyzing a sizeable quantity of text-based health documents and other unstructured clinical data (e.g., physicians' written notes and prescriptions and medical imaging). Novel database management systems such as MongoDB, MarkLogic and Apache Cassandra for data integration and retrieval allow data to be transferred between traditional and new operating systems. To store the enormous volume and numerous formats of data, there are Apache HBase and
NoSQL systems, which are tools with sophisticated functionalities that facilitate clinical information integration and provide innovative business visions [26]. New features of big data processing, such as insufficient samples, uncertain data relationships and unbalanced (or even uncertain) distributions of value density, should be fully considered. Scalability and timeliness are two issues with high priority regarding big data. The challenges of big data visualization come from the large sizes and high dimensions of data. Current visualization techniques mostly suffer from poor performance in functionality, scalability and response time. Moreover, the effectiveness of visualization may be challenged by uncertainties in data sources [23]. As a successor of IPython, Jupyter is a successful interactive development tool for data science and scientific computing. The HBDA platform was developed and showed high performance when tested for healthcare applications. With moderate resources, users are able to run realistic SQL queries on one billion records and perform interactive analytics and data visualization using Drill, or Spark with Zeppelin or Jupyter. The performance times proved to improve over time with repeated sessions of the same query via the Zeppelin and Jupyter interfaces. Ingesting and using CSV files on Hadoop also had advantages but was expensive when running Spark. Drill offers a better low-latency SQL engine, but its application tooling and visualization allowed very limited customization and therefore had lower usability for healthcare purposes [27]. A medical prototype was implemented on CentOS 64-bit operating systems. The distributed storage and Spark cluster are composed of 4 virtual machine nodes. The specific software configuration is shown in Table 3. (Table 2, Spark Core: contains basic Spark functionality; Spark's fundamental programming abstraction, the RDD, represents a collection of items spread across parallel computing nodes, and Spark provides an API for creating and managing RDDs that also takes care of their parallel processing and management.)
Security, Privacy and Challenges of Big Data in Healthcare
Medical data is highly sensitive, and the federal Health Insurance Portability and Accountability Act (HIPAA) requires protecting the confidentiality and security of healthcare data. Various approaches have been developed based on privacy preserving data mining (PPDM) for protecting the privacy of individuals or groups within a dataset while maintaining the integrity of the knowledge contained within the data for knowledge discovery purposes. Sensitive data spanning multiple organizations results not only in data syntax and semantic heterogeneity but also in diverse privacy requirements, which creates additional challenges to data sharing and integration. Data sharing for purposes such as billing and joint ventures is permissible under HIPAA regulations. Healthcare data such as an electronic medical record (EMR) are valuable. Besides implications for patient privacy, a security breach has repercussions for healthcare providers such as diminished reputation, litigation, or imposed penalties [28].
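As one small illustration of a privacy-preserving step before data sharing, the sketch below replaces direct patient identifiers with salted hashes. This is only pseudonymization, not the full de-identification that HIPAA contemplates, and the record fields are hypothetical.

```python
import hashlib
import secrets

SALT = secrets.token_bytes(16)   # kept secret by the data custodian

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a salted, irreversible hash."""
    return hashlib.sha256(SALT + patient_id.encode("utf-8")).hexdigest()[:16]

records = [                      # hypothetical EMR extract
    {"patient_id": "MRN-001", "icd9": "250.00", "cost": 420.0},
    {"patient_id": "MRN-002", "icd9": "401.9",  "cost": 150.0},
]

shared = [
    {"pid": pseudonymize(r["patient_id"]), "icd9": r["icd9"], "cost": r["cost"]}
    for r in records
]
print(shared)   # identifiers are no longer directly linkable without the salt
```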
The rise of big data generated by mobile sources has brought unprecedented opportunities for researchers to explore new possibilities. Opportunities presented by mobile big data (MBD) have been introduced. Mobility can amplify the effects of big data on both operational efficiency and customer intelligence by making everything instantly actionable, which can change business processes. However, solving MBD problems while respecting the privacy of customers is one of the biggest concerns of enterprises. MBD comprises personal location-based data, which users do not wish to reveal. Therefore, some research is required to identify new methods and technologies that can allow customers to dynamically verify their data privacy according to the rules and regulations of their service level agreements. The development of such methods can ensure customer privacy. Without the proper assurance of privacy, enterprises may not be able to obtain complete data from customers, thereby possibly being misled in their decision making [29].
The following challenges arise in the whole big data process [30]:
• Scalability. When we consider the integration of streams coming from all healthcare sport services with other IoT applications, such as GPS sensors inside cars or air pollution sensors, the data flow can easily reach up to millions of tuples per second. Centralized servers cannot process flows of this magnitude in real time. Thus, the main challenge is to build a distributed system where every node has a local view of the data flow. These local views must then be aggregated to build a global view of the data with an off-line analysis.
• Heterogeneity and incompleteness. The IoT ecosystem generates heterogeneous data flows coming from different types of applications and devices. Therefore, the main challenge here is to integrate and structure massive and heterogeneous data flows coming from the IoT to prepare their analysis in real time.
• Timeliness. Speed in big data is important in both input and output. The input is represented by a huge dataset coming from multiple sources that must be processed and structured for analysis. The output is represented by results of analysis or queries over the dataset. The main challenge here is how to implement a distributed architecture that is able to aggregate local views of data inside every node into a single global view of results with minimal communication latency between nodes.
• Privacy. People generate and share personal data that are not always protected. Data generated from healthcare sport services contains sensitive personal information. A key challenge here is to propose techniques that protect this kind of data before its analysis.
The application of Software as a Service (SaaS) in the healthcare domain is clearly a possible solution to handle large sets of data on the cloud. The available security measures help handle the data on the cloud in a secured manner. Having a service on the cloud that helps users to analyze the data from a remote location will be helpful for both patients and the healthcare industry. This can reduce the overhead of people traveling to hospitals for every medical checkup [31]. Health Information Exchanges (HIEs), which support electronic sharing of data and information between health care organizations, are recognized as a source of big data in healthcare and have the potential to provide public health with a single stream of data collated across disparate systems and sources. However, given that these data are not collected specifically to meet public health objectives, it is unknown whether a public health agency's (PHA's) secondary use of the data is supportive of or presents additional barriers to meeting disease reporting and surveillance needs. The following challenges have been uncovered for effective utilization of big data by public health [32]:
• While PHAs almost exclusively rely on secondary use data for surveillance, big data that has been collected for clinical purposes omits data fields of high value for public health.
• Big data is not always smart data, especially when the context within which the data is collected is absent.
• Data collected by disparate, varying systems and sources can introduce uncertainties and limit trustworthiness in the data, which may diminish its value for public health purposes.
• The process by which data is obtained needs to be evident in order for big data to be useful to public health.
• Big data for public health purposes needs to answer both 'what' and 'why' questions.
Conclusion
Data-driven management in healthcare systems has become a strategic choice for achieving sustainable growth, meeting the challenges of global competition, and exploring potential innovation for the future. Novel data analytics such as Big Data analytics are key to advancing healthcare systems. Big Data can be used in health care to achieve innovative outcomes in public and population health, evidence-based medicine, clinical decision support, personalized care, fraud detection, etc. Artificial intelligence (AI) and Big Data analytics could reshape healthcare systems with greater productivity, efficiency, and quality of care. Challenges in the big data process lie in scalability, heterogeneity and incompleteness, timeliness, privacy, etc.
|
v3-fos-license
|
2019-08-22T23:28:13.888Z
|
2019-06-28T00:00:00.000
|
201390514
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "http://ejournal.umm.ac.id/index.php/celtic/article/download/8751/6656",
"pdf_hash": "c88d227d3d936c4fb5500817b5af4a3963f65aa5",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46100",
"s2fieldsofstudy": [
"Education"
],
"sha1": "c88d227d3d936c4fb5500817b5af4a3963f65aa5",
"year": 2019
}
|
pes2o/s2orc
|
VIRTUAL REALITY SYSTEM FOR JOB INTERVIEW APPLICATION: A DEVELOPMENT RESEARCH
Technology has become more sophisticated and is widely used in the pedagogical process. One sophisticated technology that has recently attracted public attention is Virtual Reality, a system that applies simulated objects and an artificial environment resembling the real world. A needs analysis conducted by Jailani (2017) on the use of VR with Business English students indicated that the students were interested in and enthusiastic about learning job interviewing. The objective of the current research is to develop a Virtual Reality System for Job Interview in the Business English class at the University of Muhammadiyah Malang. To do so, the present research applied the seven procedures of a Research and Development design, aiming to develop the features and contents of the Job Interview application. The subjects of this research were students of the Business English (BE) class, English Language Education Department of UMM, academic year 2016/2017. Observation, questionnaires, and documentation were employed to collect the data, which were then transcribed and analyzed to reach the research purpose. The results revealed that the application falls into the very valid category, reaching 93.33% for product and design validation and 93.75% for content validation. Moreover, additional features were developed for props, the scoring system, and buttons to make the application more engaging for users. In addition, it was found that the Business English students, as users, were motivated and attracted to conducting job interview simulations through Virtual Reality.
INTRODUCTION
To a large degree, technology has become significantly important in helping people with their daily activities, and it is updated rapidly in every area of human need. Motteram (2013) stated that technology continues to be used for all sorts of specific language learning activities, such as the development of speaking, writing, and reading. In line with this, the use of the smartphone as a medium of technological development has emerged in education for learning. Learning activities with smartphones facilitate and attract students to learn, so that their perspective is extended (Rambitan, 2015). In line with that idea, Clayton & Murphy (2016) reported that 88% of students overall now use a smartphone or mobile phone.
One of the sophisticated technologies that uses a smartphone to run its system is the Virtual Reality System. The term came to the public's attention in the late 1980s and 1990s to describe a computer-generated virtual environment that may be manipulated and moved through by a user in real time (Mandal, 2013). It relates to simulated objects in an environment or situation resembling the real-world view. This technology has become well known and fashionable in the current decade because of the lifelike view users experience when they interact with a synthetic environment. Synthetic environment is a term used to describe computer-mediated human interaction with a simulated environment that also includes physical reactions (Draper et al., 1999, as cited in Ma & Kaber, 2006).
Since Palmer Luckey created the Oculus Rift in March 2014, making the Virtual Reality System work through software connected to the glasses, VR has become popular in the expanding field of new technology (Clark, 2014). Recently, Virtual Reality (VR) has been known for game simulation: with products like Palmer Luckey's, users can feel as if they are in a real-life situation. The development of Virtual Reality Systems in educational environments covers a wide range of applications, such as educational games and Virtual Reality as disruptive technologies (Psotka, 2013), Virtual Reality in teaching environmental engineering (Burnley, 2017), Virtual Reality for learning styles and the teaching-learning process (Gutiérrez, Mora, Diaz, & Marrero, 2017), Virtual Reality in medicine (Gutiérrez et al., 2017), Virtual Reality in engineering education (Abulrub, Attridge, & Williams, 2011), and Virtual Reality in biology education (Shim et al., 2010).
Recent studies have pointed to Virtual Reality as a promising future technology beyond game simulation. Wilson, Soranzo, & Sheffield (2015) concluded that the use of VR in psychological studies has increased because of the benefits it affords over traditional experimental apparatus: the possibility of creating more ecologically valid stimulus presentation and response protocols, and stricter control of the environment. In the healthcare field, students can directly examine and interact with a virtual patient and learn skills as in the real world, performing surgery and other procedures on a virtual patient in a safe and controlled environment; the system has wide applications ranging from diagnosis, counselling, treatment, and rehabilitation to the design of hospitals (Chaudhury, 2014). Another way of developing Virtual Reality Systems in the educational field is to involve Business English students in practicing job interviews before they apply for jobs.
The Business English subject is part of English for Specific Purposes, a field distinct from General English (Bereczky & Gabor, 2009). Kučírková, Vogeltanzová, & Jarkovská (2011) stated that a Business English course concerns the use of knowledge in the business and management sphere, in negotiations with foreign partners, in the sphere of research, and so on. One of the topics in Business English that deals with preparation for the job market is the job interview. The job interview is an issue the author takes up from the needs of students who plan to find a job after graduation. The Job Interviews Guide (2011) advises candidates to identify what employers are looking for, know what they can offer, prepare themselves well, and promote themselves as the best match for employers' needs. This is the author's effort to realize the educational function of VR, making learning more enjoyable and increasing speaking skill through a media-supported device.
Regarding that field of study, the researcher develops Virtual Reality in an educational environment by focusing on the implementation of a Virtual Reality System for Business English students. Business English (BE) at the University of Muhammadiyah Malang (UMM) is an elective course in the English Language Education Department (ELED), Faculty of Teacher Training and Education. The course deals with matters found in the business workplace. At the early level, it covers how to write a systematic and good business letter as a tool to interact or reach agreements with others.
A needs analysis of media for job interviews showed that a Virtual Reality System on VR Glasses needs to be developed for Business English students at UMM so they can practice job interviews before facing a real interview environment (Jailani, 2017). Students felt that the existing Virtual Reality technology could make learning interesting and fun, because so far they only know the concept of an interview without getting a chance to train themselves directly in an interview simulation. Hence, the VR Glasses, together with the software running on a smartphone, should be developed in terms of features and additional questions so that students get more practice, especially in speaking.
In this research, the author develops a Virtual Reality System as a medium to increase students' speaking ability and prepare them for practicing job interviews, implemented as a Job Interview Simulation software application used with VR Glasses. The author conducted a Research and Development program entitled "Developing Software of Job Interview Application on Virtual Reality System Using VR Glasses for Business English Students". The research problem is therefore focused on how a Virtual Reality System can be used to develop the software of the Job Interview Application using VR Glasses.
Research Design
This research uses a Research and Development design, which can be interpreted as the use of research methods to investigate, produce, and examine a new product so that it can be developed according to needs (Sugiyono, 2011). Additionally, Sukmadinata (2013, as cited in Novitasari, 2016) stated that this kind of research method is the step or process of developing a new product or completing the design of an existing product in an accountable way. Moreover, Borg and Gall (2003, as cited in Walisongo, 1983) described the R & D process as studying research findings relevant to the product to be developed, developing the product, field testing it, and revising it.
The Research and Development design is used by the researcher to develop the software of the job interview application using a Virtual Reality System. This research focuses on the development of the design, content, and features of the job interview software. The aim of the Virtual Reality application development is to develop materials to be applied in the VR application in the context of job interviews.
Research Subject
The subjects of this research are 10 students, out of 65, who already knew about virtual reality technology. The students are from the Business English (BE) class of the English Language Education Department at one of the universities in Malang, East Java, academic year 2016/2017. The selected subjects have studied Business English as one of the elective courses in the English Language Education Department; this course also gives students a chance to join the apprenticeship program. An apprenticeship program allows students entering the workforce to combine on-the-job training with the academic instruction they received before, helping them put their academic skills to practical use in various work fields. The subject selection aims to identify the potential and problems in conducting a good interview and to gather information for defining the purpose of the study. The researcher analyses several aspects, namely curriculum, teaching materials, and teaching media.
Data Collection
Once the potentials and problems have been demonstrated factually and up to date, further information needs to be gathered as material for planning a specific product that is expected to address the problem. Collecting the data also involves designing how the Virtual Reality System will be used to run the job interview application.
Product Design
The product produced in this Research and Development study is a learning medium that uses Virtual Reality Glasses. The application running on this medium was designed on a Virtual Reality System that recreates the real situation of a job interview session. The product is developed in terms of its features and the job interview questions needed by the interviewee.
Design Validation
Design validation is the process of involving experts to assess and evaluate the product design; this validation aims to optimize the working of the application before the trial. Two experts validated this product. One of them is from the English Language Education Department and is also the Director of Kursus Bahasa Asing (KBA); she holds a doctoral degree and is qualified in recruiting staff and conducting job interviews in her office.
Design Revision
After the experts validated the design of the product, the researcher develops the product by revising the design. These revisions address the comments, suggestions, and evaluated items in the validation rubric.
Trial of user
After the product was tested successfully, the version revised by the experts had new additional aspects applied to the application, and the users tried this product in line with the earlier needs analysis. The researcher then carried out data collection, the most strategic step in the research because the main purpose of the research is to obtain data. Data collection in this study was conducted as follows:
-Observation In this study, the researcher used observation to collect information and obtain a descriptive picture of how Business English students learned about interviews during the learning process. The observed activity relates to how the teacher teaches, how students learn, which media are used, and so on. Additionally, this observation aimed not only at observing the subjects but also at understanding how the existing Virtual Reality System can be used as a medium for students to learn interviewing while facing the job-market process.
In line with this, the study uses participant observation, with the researcher involved in the learning process of the Business English class together with the students. This kind of observation allows the researcher to easily obtain a description of, and the problems related to, the interview session.
-Questionnaire The author gives respondents a list of questions (questionnaires) that must be filled in and submitted. The type of questionnaire used is closed: a set of questions with the possible answers provided, where respondents only choose one of the available answers.
To complete the answers, this kind of questionnaire uses multiple-choice items that have to be filled in by the respondents.
In this study, questionnaires are used to obtain the experts' validation and the users' validation. The questionnaires given to the experts aim to assess and test the validity of the job interview application. The experts give scores, comments, and suggestions according to three validation criteria, namely product validation, design validation, and content validation. Product and design validation are evaluated with the expert on the appropriateness of the technology applied in the Virtual Reality System. Meanwhile, for content validation, the expert focuses on the job interview material implemented as questions for the students' practice.
The questionnaires for the users aim to gather the users' feedback on the medium after they have tried it. The users, namely the students of the Business English class in the English Language Education Department, give scores, comments, and suggestions on three criteria: product, content, and design. The statements in the users' validation questionnaire are intended to judge the applicability of the product. These questionnaires were therefore used to find out the responses to the job interview application developed with the Virtual Reality System.
-Documentation The documentation process was used to obtain data for developing the design of the job interview application. The document analyzed by the researcher is the lecturer's material, in the form of slide presentations about job interviews, that had been delivered to the Business English students. This method provides evidence for developing the medium for the Business English students, through analyzing documents collected while the author observed the learning process (Sukmadinata, 2016). The documentation focuses on related job interview questions, which are used as a basis for developing the content material of the job interview application.
Final Product
The final product is completed once the product has been tested effectively and is ready to use, taking into account the results of the expert validation and the users' trial. The final product can hopefully be used for regular practice whenever students need to train themselves for an interview session, and it can also be used in other contexts by university students who need to practice for job interviews.
Data Analysis
The researcher analyzed the data from the observation and questionnaire sessions according to the following procedures in order to answer the research problems:
1. Analyzing the lecturer's material, in the form of slide presentations about job interviews, to review the questions to be used in the job interview application.
2. Classifying the data from observation to develop the job interview application design so that it fits the needs of Business English students practicing with interview questions.
3. Classifying the data from the experts' questionnaires regarding the validation of technology and content.
4. Analyzing the data from the first expert regarding technology validation, in the form of product and design validation.
5. Analyzing the data from the second expert regarding content validation, in the form of appropriate material for the job interview questions.
6. Analyzing the experts' validation scores with the formula below (a short computational sketch of this step is given after this list):

Score (%) = (number of validation scores obtained / maximum number of validation scores) × 100%

The resulting percentage was converted into a validity category based on the criteria in Table 1.
7. Analyzing the data from the users' questionnaire to find out the students' responses after trying the medium.
8. Drawing the conclusion based on the data analysis.
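As an illustration of step 6, the sketch below computes the validation percentage and maps it to a validity category. This is an assumed example: the raw scores and the category cut-offs are illustrative placeholders, since Table 1 (Akbar, 2013) is not reproduced here.

```python
# Hypothetical sketch of the expert-validation scoring step.
# The category thresholds are illustrative assumptions, not the exact
# cut-offs of Akbar (2013) Table 1.

def validation_percentage(obtained: float, maximum: float) -> float:
    """Score (%) = obtained / maximum * 100."""
    return obtained / maximum * 100.0


def validity_category(percentage: float) -> str:
    if percentage > 85.0:   # assumed cut-off for "very valid"
        return "very valid"
    if percentage > 70.0:   # assumed cut-off for "valid"
        return "valid"
    if percentage > 50.0:   # assumed cut-off for "less valid"
        return "less valid"
    return "invalid"


if __name__ == "__main__":
    # Illustrative raw scores; 56 out of 60 reproduces the 93.33% reported for
    # product and design validation.
    pct = validation_percentage(obtained=56, maximum=60)
    print(f"{pct:.2f}% -> {validity_category(pct)}")
```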
FINDINGS AND DISCUSSIONS
Findings
The researcher presents the findings of the study which include application validity, trial of users, and product design.
-Application Validity
To ensure that the Virtual Reality application is ready for the students to use for practicing job interviews, validation was conducted by the researcher with two experts. The first expert focused on the validation of the product and design of the job interview implementation in Virtual Reality. The second expert focused on three types of rubric, namely the rubrics for the interviewee, the interviewer, and the interview content. Validation by the experts was completed before the application was tried out with the students. The first validation, concerning the product and the design of the application, was done by the expert from the Informatics Engineering Department, with the following result: the Job Interview application was validated in terms of its system on two aspects, namely (a) product validation and (b) design validation. According to the expert's validation, the product and design validation score reached 93.33%, obtained from the calculation of the statement scores in the rubric.
The next validation concerned the content, focusing on the interviewee, the interviewer, and the interview questions. It was done by the expert from the English Language Education Department who teaches the elective Business English course and is familiar with employee recruitment and interview sessions, so the rubric is more detailed on the content of the application. The result was as follows: the content validation for the Job Interview was assessed on three aspects, namely (a) content for the interviewee, (b) content for the interviewer, and (c) content for the interview questions. According to the expert's validation, the content validation score reached 93.75%, obtained from the calculation of the statement scores in the rubric.
After the experts validated the application according to their fields, the researcher compiled the scores from their assessments of every statement in the rubric. The totals showed that the product and design validation score reached 93.33% and the content validation score reached 93.75%, meaning that all validation scores can be categorized as very valid (based on Akbar, 2013, Table 1). The application could therefore be tried out directly with the users/students in order to learn how it affected them.
-Trial of User
As the scores fell into the very valid category, the job interview application on Virtual Reality could be implemented with the students directly. The responses from the students, as users of the Virtual Reality application, show that the interview questions provided are easy to understand, so the students had no difficulty answering them. The students stated that the interview questions provided are in accordance with what they have learned, which is shown by their ability to answer the questions appropriately and directly. They felt comfortable doing interviews through the Virtual Reality application, and the display in the job interview application looked right to them while interviewing.
The students as users felt motivated and enthusiastic in conducting the interview, which made them more confident to learn interviewing independently. They could measure their ability to answer questions through the resulting score and the time provided, and the score after answering the questions gave an indication of their interview skills. After conducting the interview through the application, they knew more about the interview questions and could learn to give the best answers. They also recommended that their answers to the interview questions be recorded or saved in the application so that they can further evaluate their answers.
-Product Design
The design of the job interview application applied in the Virtual Reality System was created based on the real view of an interview room and situation. Features including the interviewer, a table, chairs, a cupboard, and other supporting props were designed by the researcher to make the atmosphere of the interview session look realistic. An additional menu and scoring were provided in the application to give the users information about the instructions and the score that appears after they answer the questions. A time of about 30 seconds is given to answer each question, so that the users are aware of what they are talking about and are driven to give their best and most specific answers. The displays of the initial product design, before it was developed based on the expert validation and the users' responses, are shown in the figures. The design of the product still needs improvement to perform better for the users, especially to educate them. Hence, the researcher also attends to the suggestions and comments from the experts and users about developing the design, content, and system for a better display. The development consists of features, the scoring system, and an additional button, as follows: the features focus on additional supporting props such as books, paper, lamps, and a flower vase to create a more engaging situation; the scoring system was made to display the result score in detail, with instructions on how the users obtain the score so that they can evaluate themselves through the result; and the additional button was designed to move to the next question, so that if the users answer a question before the time limit, they can direct the cursor to the "skip" button and get another question.
The list of questions applied in the job interview application was based on the suggestions of the Business English lecturer, drawing on her experience as a job interviewer, and on what had been taught to the students. The questions were divided into three parts, namely personality, company knowledge, and working agreement, which affects the score given for each part/session in the application. The questions used for the Job Interview application are listed below.
Discussions
The results of implementing the job interview application through the Virtual Reality System are examined further in this discussion, which covers the meaning of the findings, including the validation results from the experts, the students' responses on the rubric, and the development of the job interview application.
According to the rubric, the product validation has an overall assessment score of 4, which means that the job interview application exceeds expectations and no modification is needed. On the other hand, several statements show that the product meets expectations but could be improved with minor changes, with a score of 3. The expert hopes that the ability of virtual reality to create a real job interview context will be made more relevant for the users. This expectation relates to the argument of Held (1993, as cited in Mandal, 2013) that telepresence occurs when the manipulators have the capability to allow operators to perform normal human functions.
The result of the assessment of the design validation differs from the product validation. Although the overall score is 4, there are three statements that need minor changes or could be improved: according to the expert, the arrangement and location of the text and images, and the environment presented as a real context, have to be more attractive to and appropriate for the users. This is in line with Desai et al. (2014), who stated that Virtual Reality (VR) gives the effect that users are concretely present in a 3-dimensional computer-simulated environment.
The expert for the product and design validation gave comments and suggestions for improving the job interview application. He said that the interviewee session in several questions is too long and that it would be better if there were a button for moving to the next question; moreover, it would be more useful if the sound were recorded along with the score. The needs analysis study on using Virtual Reality for job interview simulation by Jailani (2017) likewise showed that users expect the application to be improved and to run well.
The overall scores show that the content of the interview is easy to understand and attractive, as can be seen from the score of 4 that the expert gave. On the other hand, there are statements with a score of 3 regarding the estimated time given and the interview questions, which implies that the researcher has to develop these further. This is supported by Jailani (2017), who stated that the users felt comfortable answering the questions because they feel confident, and that this can increase their speaking ability.
The expert assessed the content for the interviewer as very good, as can be seen from the score of 4, which means that the pronunciation, language, and intonation produced by the researcher are clear and understandable. As Harmer (2001) stated, interaction happens when one or more persons speak; hence, speaking as an effective skill involves a good deal of listening, understanding of feeling, and knowledge of linguistic terms.
According to the validation rubric, the scores for the content of the interview questions are generally classified as very good, with a score of 4. This means the interview questions are easy for users to understand in terms of grammar and are relevant to the interview context. This is supported by Miller et al. (2014), who suggest that interviewees should develop a clear intellectual understanding of how to interview effectively and learn to apply these ideas in practice interviews; that is why the questions need to be well comprehended in a job interview session.
The content validation expert gave comments and suggestions for improving the job interview application. She said that this kind of product for interview practice can be developed in further research so that users can keep their knowledge of interviewing up to date.
The Job Interview application, having been judged feasible and validated by the experts, was then tested with the users: 10 students from the Business English class. The students tried the Virtual Reality Glasses with the application running inside them, followed the instructions, and chose the menu that pointed them to the Job Interview simulation. Afterward, the students were asked to fill in the rubric to express their feelings and responses after using the application. To make the students feel more comfortable using the application, a new variety of features was recommended to be added (Jailani, 2017); the results of the responses were therefore used to develop the application.
Students gave most attention to the interview questions provided and to the application's ability to create a comfortable situation. The application is very interesting and sophisticated because it can help the students explore being an interviewee through the interview questions provided (Jailani, 2017). The interview questions are easy for the students to understand and suit the level of interviewing they have learned in the Business English class. Miller et al. (2014) observed that interviewees should prepare themselves with self-analysis questions in order to have the chance to explore other related interview questions; therefore, the researcher arranged the questions in that way to help the students at least know and enjoy the basic interview questions. Judging from the lowest score, which reached 10%, some students held a different view on the statements concerning the relevance of the questions to the material and found the room less attractive. According to Gutiérrez et al. (2017), virtual technologies can increase students' engagement and motivation in their teaching and learning activities; therefore, the features receiving the lowest percentage of positive responses from the students will be developed further so that the application performs well.
A Research and Development approach adapting Sugiyono's model is used. Sugiyono (2011) describes the ten steps of the research and development implementation strategy as follows:
Figure 1. Steps of Research and Development. However, the current research modifies the steps into 7 procedures; the Research and Development procedure for developing the job interview application on a Virtual Reality System for Business English students is explained in the preceding sections.
Figure 3. The display of the Job Interview Menu
Figure 4. The instruction to use the application
Figure 8. Initial and Developed Products
The first expert focuses on validating the use of technology in the application. This expert currently works at the Department of Informatics Engineering and holds a Master of Science degree. He has done research in information science, algorithms, and artificial intelligence; his current projects are 'Indonesian Twitter NLP' and 'Indexing for multi-feature data'. His skills and expertise are in feature extraction, image segmentation, information retrieval, image retrieval, and abstracting and indexing. He has also served as Head of the Informatics Engineering Department. The second expert focuses on validating the content of the application related to the job interview. This expert is a senior lecturer of Business English in the English Language Education Department.
Table 2. The Result of Product and Design Validation on the Implementation of Job Interview on Virtual Reality
Table 3. The Result of Content Validation of Job Interview on Virtual Reality
|
v3-fos-license
|
2024-05-09T06:16:34.215Z
|
2024-05-07T00:00:00.000
|
269624129
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcgeriatr.biomedcentral.com/counter/pdf/10.1186/s12877-024-05019-9",
"pdf_hash": "cc4cde2b7b562f1234d92683d0f7ab7aef4dff17",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46102",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "25631fdc7c76db7ccd3dcf5a2de7dae9541e63d4",
"year": 2024
}
|
pes2o/s2orc
|
Development and validation of the osteoporosis scale among the system of quality of life instruments for chronic diseases QLICD-OS (V2.0)
Background Quality of life in osteoporosis patients has caused widespread concern because of the disease's high incidence and the difficulty of curing it. A scale specific to osteoporosis and suited to the Chinese cultural background was lacking. This study aimed to develop an osteoporosis scale within the Quality of Life Instruments for Chronic Diseases system, namely QLICD-OS (V2.0). Methods A procedural decision-making approach with nominal and focus groups and a modular approach were adopted. Our scale was developed based on experience in establishing scales at home and abroad. In this study, quality of life measurements were performed on 127 osteoporosis patients before and after treatment to evaluate the psychometric properties. Validity was evaluated by qualitative analysis, item-domain correlation analysis, multi-trait scaling analysis and factor analysis; the SF-36 scale was used as the criterion in correlation analysis for criterion-related validity. Reliability was evaluated by the internal consistency coefficient Cronbach's α and the test-retest reliability Pearson correlation r. Paired t-tests were performed on the scale data before and after treatment, with the Standardized Response Mean (SRM) calculated to evaluate responsiveness. Results The QLICD-OS, composed of a general module (28 items) and an osteoporosis-specific module (14 items), had good content validity. Correlation analysis and factor analysis confirmed the construct, with each item having a strong correlation (mostly > 0.40) with its own domain/principal component and a weak correlation (< 0.40) with the other domains/principal components. Correlation coefficients between similar domains of QLICD-OS and SF-36 showed reasonable criterion-related validity, with all coefficients r greater than 0.40 except that between the physical function of SF-36 and the physical domain of QLICD-OS (0.24). The internal consistency reliability of QLICD-OS was greater than 0.7 in all domains except the specific module. The test–retest reliability coefficients (Pearson r) for all domains and the overall score were higher than 0.80. Score changes after treatment were statistically significant, with SRM ranging from 0.35 to 0.79, indicating that QLICD-OS could be rated as having medium responsiveness. Conclusion As the first osteoporosis-specific quality of life scale developed by the modular approach in China, the QLICD-OS showed good reliability and validity and medium responsiveness, and can be used to measure quality of life in osteoporosis patients.
Introduction
Osteoporosis is a chronic metabolic bone disease [1]. At present, about 200 million people worldwide suffer from osteoporosis, and its incidence has risen to 7th place among common and frequently occurring diseases [2]. China has the largest elderly population in the world, and it is estimated that by 2050 the number of osteoporosis patients in China will reach 212 million [3]. A new study conducted by the Osteoporosis Foundation shows that the total prevalence of osteoporosis in China is 6.6-19.3%, with an average of 13% [4]. One-third of osteoporosis patients are disabled, with 19% of them requiring long-term care. Compared with the general population, patients with osteoporosis face more challenges in physical and mental health. While suffering from the disease, they also have to bear financial pressure, adverse reactions brought about by anti-osteoporosis drug treatment, the psychological burden caused by family neglect, and a decline in social function. Therefore, the loss of working capacity, disability, mental pain and the corresponding psychological burden caused by osteoporosis severely affect patients' quality of life (QOL) [5].
The premise and key of quality of life research was an appropriate measurement scale, which mainly included generic scales and specific scales. A generic scale could be used for the general population and multiple disease groups to assess general health status. Although the prevalence of different diseases could be directly compared with this type of scale [6,7], it ignored the main functions affected by the disease and led to the loss of clinically important influencing factors; thus, its responsiveness was poor when used for specific diseases. Disease-specific scales had the advantage of assessing domains related to specific diseases and capturing small changes sensitively [6,7]. As far as we knew, the major foreign specific scales currently include the Osteoporosis Quality of Life Questionnaire (OQLQ) [8], the Japanese Osteoporosis Quality of Life Questionnaire (JOQLQ) [9,10], the Osteoporosis Assessment Questionnaire (OPAQ) [11], the Osteoporosis Functional Disability Questionnaire (OFDQ) [12], the Quality of Life Questionnaire of the European Foundation for Osteoporosis (QUALEFFO) [13] and the Assessment of Health-Related Quality of Life in Osteoporosis (ECOS-16) [14,15]. OPAQ was the first specific scale for osteoporosis, compiled in 1993. It contained 79 items in four aspects, i.e. symptoms and physical, psychological, and social conditions, and was mainly used in patients with non-vertebral fractures. QUALEFFO was developed by the European Foundation for Osteoporosis and included 48 items in five aspects covering pain, physical function, social function, general health concepts, and psychological factors; it was mainly used to evaluate vertebral fracture patients with severe osteoporosis. JOQLQ was developed in Japan and contained 38 items in six aspects covering pain, activities of daily living, entertainment and social activities, general health, posture and body shape, falls and psychological factors; it was used to assess the quality of life of Japanese osteoporosis patients. ECOS-16 contained 16 items in four aspects and was mainly used to evaluate postmenopausal women with osteoporotic vertebral fractures. There was also a specific scale for the quality of life of primary osteoporosis compiled by Jian Liu in China [16,17]. According to Liu, the OQOLS was mainly used to assess patients with primary osteoporosis, including 75 items in five aspects, i.e. symptoms, physiology, psychology, society, and satisfaction. This scale did not involve the evaluation of adverse drug reactions or the special psychological problems of the disease. The scales mentioned above were developed independently and lacked systematic coherence. In addition, they may not reflect Chinese culture well. Therefore, it was necessary to develop a scientific, reasonable, reliable and suitable quality of life measurement scale for Chinese osteoporosis patients.
To this end, our QOL team developed a system entitled Quality of Life Instruments for Chronic Diseases (QLICD), which includes a general module (QLICD-GM) and specific modules for different diseases [18,19]. The latest version of the system, QLICD (V2.0), contains 34 chronic disease-specific scales [19], including QLICD-CG for chronic gastritis [20], QLICD-PT for pulmonary tuberculosis [21], QLICD-RA for rheumatoid arthritis [22] and QLICD-SLE for systemic lupus erythematosus [23], etc. Among them, QLICD-OS (Quality of Life Instruments for Chronic Diseases-Osteoporosis) was developed by combining the general module for chronic diseases with the specific module for osteoporosis, with the purpose of suiting osteoporosis patients in the Chinese cultural background. It is both specific and comparable (the common parts can be compared across various diseases).
This article aims to report the development and validation process and results of QLICD-OS (V2.0).
Development of QLICD-OS
QLICD-OS was compiled by combining the general module of chronic diseases QLICD-GM [18,19], and the newly developed osteoporosis disease-specific module.
Development of QLICD-GM
The development of the QLICD-GM (V2.0) strictly followed the internationally recognized method of programmatic decision-making, including the following steps: (1) Established a scale research team; (2) Defined and decomposed the concept of quality of life measurement to form a theoretical framework; (3) Proposed a pool of alternative items; (4) Screened items to form a preliminary scale; (5) Conducted pre-survey item screening to form a test scale; (6) Test survey and item rescreening; (7) Scale evaluation; (8) Formed a formal scale.
Development of osteoporosis specific module
Similar to QLICD-GM [18,19] and other specific modules for hypertension, coronary heart disease and peptic ulcers [24][25][26], the osteoporosis-specific module was completed through the efforts of two independent groups. The nominal group consisted of 14 people, including 5 doctors, 2 nurses, 2 medical educators, and 5 teachers/researchers (1 quality of life researcher, 1 statistician, 1 sociologist, and 2 psychologists), and proposed the item pool using the programmatic decision-making method. The focus group was composed of 10 experts, including 4 doctors, 1 medical educator, and 5 teachers/researchers (2 quality of life research scholars, 1 statistician, 1 sociologist, and 1 psychologist), and proposed the conceptual framework using the programmatic decision-making method and selected the items proposed by the nominal group. In general, the nominal group was responsible for item generation, while the focus group was responsible for item selection and organization. In the item selection process, both qualitative methods, such as group discussions and in-depth interviews, and quantitative statistical methods applied to the pre-test data, such as variation analysis, correlation analysis, and factor analysis, were used.
The scale was developed based on the literature review, nominal group/focus group discussions, and experience of scale development at home and abroad. The 22-item pool of the osteoporosis-specific module was initially screened, evaluated and modified through a combination of qualitative interviews and quantitative investigation and analysis to form a preliminary scale. Questionnaire surveys and interviews were conducted with osteoporosis patients and medical experts, including 25 patients and 25 doctors/nurses. The data were analyzed using the variability method, the correlation coefficient method, factor analysis, patient importance scoring and doctor importance scoring.
In the end, the final specific module was formed, including 3 facets, i.e. clinical symptoms (CLS), drug side effects (DSE), and special effects on mentality and life (EML) of osteoporosis, with a total of 14 items (coded as OP1-OP14) [27] (see Fig. 1 for details).
The entire development and evaluation process was summarized in Fig. 1.
Validation of QLICD-OS
Based on the measured data, the measurement properties of QLICD-OS were evaluated from the perspectives of validity (construct validity and content validity), reliability (internal consistency reliability and test-retest reliability), and responsiveness [28].
Data collection
Similar to other instruments under the QLICD system [18][19][20][21][22][23][24][25][26], the QLICD-OS scale was designed to be particularly suitable for the Chinese population and was used for on-site investigation and evaluation of patients with osteoporosis. The survey was conducted at Pingle Orthopedics Hospital in Shenzhen, Guangdong Province, China. The participants were osteoporosis patients with sufficient reading comprehension and the ability to fill out the questionnaire independently. The investigators included doctors/nurses and medical graduate students; they explained the purpose and significance of the study to the patients and obtained informed consent from those who agreed to participate. The research protocol and informed consent form were approved by the Ethics Committee of the survey institution.
In the first round of assessment, each subject (n = 127) completed a questionnaire on admission to the hospital for treatment. On the 2nd day, some respondents (n = 117) were selected to participate in the second round of assessment for test-retest reliability. After one week of treatment, all 127 subjects participated in the third round of assessment, used to evaluate responsiveness.
Due to the lack of a recognized gold standard for assessing the quality of life of patients with osteoporosis, we used the Chinese version of the 36-item Health Measurement Scale (SF-36) [29] in the first round to evaluate the criterion-related validity as well as the convergent and discriminant validity of QLICD-OS. The SF-36 is one of the most commonly used generic QOL scales and includes 8 dimensions: physical function (PF), role physical (RP), bodily pain (BP), general health (GH), vitality (VT), social function (SF), role emotional (RE), and mental health (MH).
Scale scoring method
Similar to other instruments under the QLICD system [18][19][20][21][22][23][24][25][26], each item of QLICD-OS was scored on a five-level Likert scale (namely, not at all, a little bit, somewhat, quite a bit, and very much). Positively stated items were scored directly from 1 to 5, while reverse-stated items were scored from 5 to 1. The higher the score of a positive item, the higher the quality of life, and the opposite is true for a reverse item. Specifically, GPH1, GPH2, GPH4, GPH6, GPH7, GPH8; GPS1, GPS3, GPS10; and GSO1, GSO2, GSO3, GSO4, GSO5, GSO8 are positively stated items, and the others are negatively stated items. The content of the items can be found in the brief item descriptions in the relevant tables.
By adding up the item scores of each domain/facet, we obtained the raw scores of the facets and domains. The total score of the scale was the sum of the scores of all domains. For comparison, the following equation was used to convert all domain scores linearly into standardized scores (SS) between 0 and 100: SS = (RS − Min) × 100/R, where RS, Min, and R represent the raw score, the lowest possible score, and the score range, respectively.
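As a small illustration of this scoring rule, the sketch below uses made-up item scores (not data from the study) and assumes reverse-stated items have already been recoded:

```python
# Hypothetical sketch of the QLICD scoring step: sum item scores to a raw
# domain score, then standardize to 0-100 with SS = (RS - Min) * 100 / R.
# Reverse-stated items are assumed to be recoded (5 -> 1, ..., 1 -> 5) first.
from typing import List


def raw_score(item_scores: List[int]) -> int:
    """Raw domain score = sum of its item scores (each item scored 1-5)."""
    return sum(item_scores)


def standardized_score(rs: int, n_items: int) -> float:
    """For n items scored 1-5: Min = n, Max = 5n, so R = Max - Min = 4n."""
    minimum = n_items * 1
    score_range = n_items * 4
    return (rs - minimum) * 100.0 / score_range


if __name__ == "__main__":
    # Made-up example: a 9-item physical-function domain.
    items = [4, 5, 3, 4, 4, 5, 2, 4, 3]
    rs = raw_score(items)                       # 34
    print(standardized_score(rs, len(items)))   # (34 - 9) * 100 / 36 ≈ 69.4
```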
Validity evaluation
Several types of validity can be distinguished. Content validity was evaluated qualitatively. Due to the lack of a gold standard, the SF-36 scale was used as the criterion, and Pearson correlation coefficients between similar domains of QLICD-OS and SF-36 were calculated to evaluate criterion-related validity; Gerry considered the ideal correlation coefficient to lie between 0.4 and 0.8 [30]. Multi-trait scaling analysis [31] was applied to test the convergent and discriminant validity of QLICD-OS, an aspect of construct validity, using two standards: (1) an item-domain correlation of 0.40 or higher supports convergent validity; (2) an item-domain correlation higher than the item's correlations with the other domains supports discriminant validity.
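The multi-trait scaling criteria above can be sketched as follows. This is an assumed illustration only (the study's analysis software is not stated): the DataFrame layout and the domain mapping are invented for the example, and the item-overlap correction sometimes applied in practice is omitted.

```python
# Hypothetical multi-trait scaling check (convergent/discriminant validity).
import pandas as pd


def multitrait_check(df: pd.DataFrame, domains: dict) -> pd.DataFrame:
    """df: one column per item; domains: e.g. {"PHD": ["GPH1", ...], ...}."""
    # Domain scores = sum of the items assigned to each domain.
    domain_scores = {d: df[items].sum(axis=1) for d, items in domains.items()}
    rows = []
    for own_domain, items in domains.items():
        for item in items:
            r_own = df[item].corr(domain_scores[own_domain])
            r_other = max(
                df[item].corr(domain_scores[d]) for d in domains if d != own_domain
            )
            rows.append({
                "item": item,
                "own_domain_r": round(r_own, 2),
                "convergent_ok": r_own >= 0.40,      # criterion (1)
                "discriminant_ok": r_own > r_other,  # criterion (2)
            })
    return pd.DataFrame(rows)
```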
Factor analysis with varimax rotation was also performed to test the consistency between the components extracted from the data and the theoretical structure of the scale, further confirming construct validity.
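A sketch of this step is given below, assuming the 14 specific-module items are columns of a pandas DataFrame. It uses the third-party factor_analyzer package as an assumed tool; the authors' actual software is not stated.

```python
# Hypothetical sketch of the KMO/Bartlett checks and varimax-rotated principal
# component extraction described above (illustrative, not the authors' code).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo


def specific_module_factors(df: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    """df: columns OS1..OS14 (item responses). Returns rotated loadings."""
    chi_square, p_value = calculate_bartlett_sphericity(df)
    _kmo_per_item, kmo_total = calculate_kmo(df)
    print(f"Bartlett chi2={chi_square:.1f}, p={p_value:.4f}, KMO={kmo_total:.3f}")

    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(df)
    loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                            columns=[f"F{i + 1}" for i in range(n_factors)])
    return loadings.round(2)
```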
Reliability evaluation
Reliability refers to the degree to which the instrument is not affected by random errors and is evaluated by internal consistency and repeatability. Cronbach's α is a common method for assessing internal consistency reliability in scale development: a coefficient between 0.6 and 0.7 is the minimum acceptable value, between 0.7 and 0.8 is quite good, and between 0.8 and 0.9 is very good [32]. To evaluate internal consistency, Cronbach's α was calculated separately for each domain. Test-retest reliability of the QLICD-OS was assessed using the correlation coefficient r, with 0.80 recognized as the threshold.
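A minimal sketch of the two reliability statistics, assuming respondents' item scores sit in a pandas DataFrame; this is illustrative only, not the authors' code.

```python
# Hypothetical sketch of the reliability computations: Cronbach's alpha for a
# domain, and test-retest reliability as the Pearson r between two occasions.
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: rows = respondents, columns = the items of one domain.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def test_retest_r(score_t1: pd.Series, score_t2: pd.Series) -> float:
    """Pearson correlation between domain scores on two occasions."""
    return score_t1.corr(score_t2)
```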
Responsiveness evaluation
Responsiveness refers to the ability of the scale to detect small but clinically important changes over time [28,33,34]. Responsiveness was measured by comparing the average difference between pre-treatment and post-treatment assessments. Meanwhile, the standardized response mean (SRM) was calculated to represent the degree of responsiveness, with 0.20, 0.50 and 0.80 representing small, medium, and large responsiveness respectively [28,33,34].
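The responsiveness computation can be sketched as below, with made-up paired scores: SRM = mean(change) / SD(change), and the paired t-test follows the description above.

```python
# Hypothetical sketch of responsiveness: paired t-test on pre/post domain
# scores and the standardized response mean (illustrative values only).
import numpy as np
from scipy import stats


def srm(pre: np.ndarray, post: np.ndarray) -> float:
    """SRM = mean of the score change / standard deviation of the change."""
    change = post - pre
    return change.mean() / change.std(ddof=1)


if __name__ == "__main__":
    pre = np.array([55.0, 60.0, 48.0, 70.0, 65.0])   # made-up domain scores
    post = np.array([62.0, 66.0, 55.0, 74.0, 70.0])
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"paired t={t_stat:.2f}, p={p_value:.3f}, SRM={srm(pre, post):.2f}")
```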
Content validity
Content validity refers to whether the designed items represent the content or topic to be measured. QLICD-OS was compiled according to a strict procedural method, with the items of the scale covering all the dimensions required by the WHO QOL group. QLICD-OS was also developed after repeated discussions by the nominal group and the focus group, and it includes physical, psychological and social aspects as well as clinical symptoms, drug side effects, and the special psychological characteristics of patients with osteoporosis. These aspects fully reflect the connotation of patients' quality of life.
Construct validity
The correlation analysis shows sufficiently strong associations between items and the domains to which they belong, but weak associations between items and the other domains (Table 1). For example, most correlation coefficients between items GPH1-GPH9 and physical function (in bold) are greater than 0.4 and greater than the corresponding cross-domain coefficients.
The specific-module item data of the QLICD-OS passed Bartlett's test of sphericity, showing that the variables were significantly correlated, and the KMO statistic was 0.643, indicating that factor analysis could be performed. With eigenvalues > 1, 5 principal components were extracted for the specific module, with a cumulative explained variation of 62.896%. After varimax rotation, the first principal component included items OS3, OS4, OS5 and OS14, with a variance contribution rate of 16.62%; the second principal component included OS6, OS10 and OS11, with a variance contribution rate of 15.47%; the third principal component included items OS7, OS8 and OS9, with a variance contribution rate of 12.11%; the fourth principal component included items OS2 and OS12, with a variance contribution rate of 9.35%; and the fifth principal component included items OS1 and OS13, with a variance contribution rate of 9.34%. These 5 principal components basically reflect the clinical symptoms of the bone and digestive systems, drug side effects, and the special psychological problems of the disease in patients with osteoporosis. The structure of the scale is roughly consistent with the theoretical conception, indicating good construct validity (Table 2).
Criterion-related validity
Table 3 lists the correlation coefficients between the domain scores of QLICD-OS and SF-36, indicating that the correlation between the same and similar domains was generally higher than the correlation between different and dissimilar domains.For example, except for the low correlation coefficients of physical function, physical role, physical pain, and emotional role with general modules, the correlation coefficients between the general module of QLICD-OS and the 8 domains of SF-36 were between 0.62 and 0.65.The correlation coefficients between the specific module of QLICD-OS and the 8 domains of SF-36 were relatively low in physical roles, physical pain, emotional role, and mental health, confirming that the criterion-related validity was reasonable.
Specifically, the correlation coefficient between the physical function of QLICD-OS and the general health of SF-36 was 0.43; the correlation coefficient between QLICD-OS's mental function and SF-36's mental health was 0.62; the correlation coefficient between the social function of QLICD-OS and that of SF-36 was 0.58.The correlation coefficient between the specific module of QLICD-OS and the 8 domains of SF-36 was between 0.12 and 0.34.The correlation coefficient between the general module of QLICD-OS and the 8 domains of SF-36 was between 0.16 and 0.65.The correlation coefficient between the overall QLICD-OS and the 8 domains of SF-36 was between 0.19 and 0.64.
Reliability
The internal consistency and split-half reliability of the general module and specific module of the QLICD-OS were analyzed. Except for the specific module, the internal consistency reliability of each domain was above 0.7, and the overall internal consistency reliability was 0.88. The split-half reliability ranged from 0.37 to 0.86, and the split-half reliability of the entire scale was 0.72. The test-retest reliability for all domains was higher than 0.80. See Table 4 for details.
Note: Correlations between each item and its designated scale are in bold type. ** Significant at the 0.01 level. * Significant at the 0.05 level.
Responsiveness
The results in Table 5 showed that the changes in physical function, psychological function, social function, the general module, the specific module and the total scale before and after treatment were statistically significant (P < 0.05), with SRM values of 0.35-0.79. It can be seen that the specific module domain was less responsive, as its SRM was lower than 0.20.
Discussions
Based on the modular approach, a quality of life scale for osteoporosis patients (QLICD-OS) was developed by combining the general module (QLICD-GM) of the well-developed system of quality of life instruments for chronic diseases with a newly developed osteoporosis-specific module. The general module QLICD-GM, comprising 3 domains of physical function (9 items), mental function (11 items) and social function (8 items), can be used for all kinds of chronic diseases, while the specific module is only for osteoporosis. To date, the updated QLICD system covers 34 common chronic diseases such as hypertension, coronary heart disease, COPD, etc. [19]. As far as we know, although a number of instruments have been developed for QOL in patients with osteoporosis [8][9][10][11][12][13][14][15], none of them was developed by the modular approach. In contrast, the QLICD-OS has three significant advantages over existing instruments: (1) it can compare QOL across various diseases through the generic module and capture symptoms and side effects through the specific module, showing both general and specific attributes; (2) it has a clear hierarchy (items → facets → domains → overall), so that mean scores can be computed at different levels; it can be analyzed not only at the domain (four domains) and overall levels but also at the different facet levels (12 facets) to detect changes in detail; (3) it can be used for all types of osteoporosis (with or without fragility fractures) at any stage, because the specific module includes 3 facets and 14 different and diverse items.
The general module is of core and highlighted significance for the instrument system by modular approach.There are currently two general modules for quality of life reported.One is the general module QLQ-C30 [35] of the European QLQ series.It consists of 5 functional subscales (physical, role, cognitive, emotional and social function), 3 symptom subscales (fatigue, pain, nausea, and vomiting), 1 general health status subscale and 6 single items (dyspnea, insomnia, loss of appetite, constipation, diarrhea, and financial difficulties).The other one is the general module of the FACT (Functional Assessment of Cancer Therapy) series (FACT-G), which consisted of 27 items in 5 domains including physical status (7 items), social/family status (7 items), emotional status (6 items), and functional status (7 items).These two modules were only used to determine the QOL of cancer patients, not for various chronic diseases patients.Although FACT was renamed FACIT (Functional Assessment of Chronic Illness Therapy) later [36], the general module applied FACT-G was also for cancer patients.In terms of chronic diseases, only our QLICD-GM was directly developed for patients with chronic diseases.The QOL measurement scale for specific chronic diseases could be developed on the basis of the general module, and disease-specific items could be added to fully reflect QOL of patients with specific diseases.This facilitated the comparison of the QOL among patients with complex and diverse chronic diseases.
Usually, a practical QOL instrument should be validated on at least three psychometric properties: validity, reliability and responsiveness [33,34]. In this study, the qualitative analysis confirmed content validity. Correlation analysis showed that each item had a strong correlation with its own domain and weak correlations with the other domains, and factor analysis showed that the components extracted from the data were basically consistent with the theoretical structure of the scale; these results confirmed good construct validity. The correlation coefficients between similar domains of QLICD-OS and SF-36 showed reasonable criterion-related validity, with all coefficients r greater than 0.40 except that between the physical function of SF-36 and the physical domain of QLICD-OS (0.24).
Our results indicated that the instrument has good reliability, given Cronbach's α coefficients above 0.70 (except for the specific module, 0.55) and test-retest correlation coefficients above 0.80. Possible reasons for the relatively weak Cronbach's α of the specific module (0.55) are: (1) the small sample size; (2) the module covers three heterogeneous facets (clinical symptoms, drug side effects, and special effects on mentality and life), and its items are relatively numerous and heterogeneous.
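Cronbach's α, used above to judge internal consistency, is computed from the item variances and the variance of the summed scale; heterogeneity among a module's items is exactly what drives this value down. A minimal sketch, with invented item responses, is given below.

```python
import numpy as np

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for a respondents x items matrix of scores."""
    X = np.asarray(item_matrix, dtype=float)
    k = X.shape[1]                             # number of items
    item_vars = X.var(axis=0, ddof=1)          # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented data: 6 respondents answering a 4-item facet on a 1-5 scale
data = [[4, 4, 3, 4], [2, 3, 2, 2], [5, 4, 5, 5],
        [3, 3, 3, 2], [4, 5, 4, 4], [1, 2, 1, 2]]
print(round(cronbach_alpha(data), 2))
```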
Responsiveness analysis (Table 5) showed that improvement or deterioration (if any) of quality of life over time could be detected at the domain level. The changes in physical function, psychological function, social function, the general module, the specific module and the total scale before and after 1 week of treatment were statistically significant (P < 0.05), with standardized response means (SRM) of 0.35-0.79. The specific module was less responsive, perhaps because osteoporosis is a chronic metabolic bone disease that requires long-term treatment; since the patients' hospital stays were short, the specific module was not expected to change significantly over such a short period. In other words, the instrument revealed changes in the domain scores that were expected to change. Therefore, the QLICD-OS can be rated as having moderate responsiveness.
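The standardized response mean (SRM) quoted above is the mean of the before-after change divided by the standard deviation of that change; values around 0.2, 0.5 and 0.8 are conventionally read as small, moderate and large responsiveness. A minimal sketch with invented scores:

```python
import numpy as np

def standardized_response_mean(before, after):
    """SRM = mean(change) / SD(change)."""
    change = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    return change.mean() / change.std(ddof=1)

# Hypothetical domain scores at admission and after 1 week of treatment
before = [55, 60, 48, 62, 58]
after = [62, 66, 55, 65, 64]
print(round(standardized_response_mean(before, after), 2))
```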
Limitations of the research
QLICD-OS is also subject to several limitations. First, the osteoporosis patients participating in the research were limited to individuals who could read and understand the questionnaire. Second, QLICD-OS was developed with participants from a Chinese cultural background; when translating QLICD-OS into languages other than Chinese, cultural equivalence should be carefully evaluated. In addition, the sample size of the study was not very large, which may also affect the results of the factor analysis and the responsiveness analysis.
Conclusion
The QLICD-OS was developed by combining the general module for chronic diseases with an osteoporosis-specific module. Given its grounding in the Chinese cultural background and its good psychometric properties (validity, reliability and responsiveness), we recommend it for measuring the quality of life of Chinese patients with osteoporosis. Further large-scale studies are needed to confirm its psychometric properties in different settings (e.g., the community).
Fig. 1 Steps in the development and validation of the QLICD-OS
Table 1 Correlations between items and domains of QLICD-OS for osteoporosis patients
Table 2 Factor loadings of the factor analysis on the specific module after maximum variance (varimax) rotation
Table 3 Correlation coefficients between domains of QLICD-OS and SF-36 (n = 127). Note: PHD physical domain; PSD psychological domain; SOD social domain; SPD specific domain; CGD core/general domain (general module); TOT total scale. **Significant at the 0.01 level; *significant at the 0.05 level
Table 4 Internal consistency and split-half reliability of the QLICD-OS for osteoporosis patients
Table 5 Responsiveness results of the QLICD-OS for osteoporosis patients
Health-related quality of life in women with polycystic ovary syndrome attending to a tertiary hospital in Southeastern Spain: a case-control study
Background Polycystic ovary syndrome (PCOS) is a chronic condition whose symptoms affect many women of reproductive age, and evaluating their health-related quality of life (HRQoL) is an important issue. Moreover, differences in HRQoL between women with different PCOS phenotypes have never been analyzed. Therefore, the aim of our study was to compare HRQoL between women with PCOS (and its phenotypes) and controls attending a tertiary hospital. Methods A group of 117 women with PCOS and 153 controls were studied between 2014 and 2016. Controls were women without PCOS attending the gynecological outpatient clinic for routine examinations. Cases were women attending the same setting and diagnosed with PCOS. PCOS diagnosis was made following the Rotterdam criteria, and women were further classified into anovulatory or ovulatory phenotypic subtypes. Women underwent physical and gynecological exams and completed health questionnaires including the Short Form-12v2. Eight scales and two component summary scores [Physical (PCS) and Mental (MCS), respectively] were calculated. Bivariate and multivariate analyses were performed to assess differences in HRQoL between women with PCOS and controls. Results All women with PCOS and those with anovulatory PCOS presented lower scores in the PCS compared to controls [mean (95%CI): 53.7 (52.5–54.9) and 52.9 (51.5–54.4) vs. 55.8 (54.8–56.8); p-values < 0.01], as well as lower scores for five of the eight scales (p-values < 0.05) after adjusting for age, body mass index, infertility, educational level and current occupation. No significant differences were observed for the MCS between women with or without PCOS or its phenotypic subtypes. Conclusions HRQoL was significantly decreased in adult women with PCOS and its anovulatory phenotype compared to controls attending the outpatient clinic of a tertiary hospital. These results may have implications for clinical practice and suggest the need for specific interventions in women with PCOS.
Background
Polycystic ovary syndrome (PCOS) is one of the most common chronic endocrinopathies, affecting between 5 and 10% of women of reproductive age [27]. Clinical manifestations of this syndrome, such as obesity, infertility, hirsutism, and biochemical and hormonal disturbances, have been widely described [4]. Yet, these symptoms are often related to a deterioration in women's self-esteem and self-image and may affect their health-related quality of life (HRQoL), particularly in relation to psychosocial domains [1,6,32].
HRQoL is a multidimensional concept widely used in medical research, and its use in routine medical practice is increasing. It is defined as the "individual's perception of their own life in the context of their culture and beliefs, and their personal goals and concerns" [3,36]. Important areas such as physical health, psychological health, level of independence and social relationships are included in the evaluation of HRQoL. Over the past years, there has been a growing tendency to incorporate the assessment of HRQoL into clinical studies and the routine clinical management of PCOS.
Consequently, several investigations conducted around the world have shown associations between HRQoL and the presence of PCOS [4,5,12,21,25,28,32]. Women with PCOS may be at higher risk of low HRQoL [7,8,16,18,37,38]. However, several of the previous studies focused on case series of women with PCOS or evaluated the effect of an intervention (lifestyle or medical treatments) on the HRQoL of women with PCOS [17,34] without adequate controls. Therefore, interpretation and generalization from these studies is challenging, owing to relatively small sample sizes, heterogeneity between study populations and between the tools used to evaluate HRQoL, and inadequate control of confounding. The impact of potential confounders such as age, body mass index (BMI), educational level or even professional activity upon HRQoL in women with PCOS is uncertain, as they may not have been properly evaluated [2,32]. Moreover, PCOS symptoms differ across geographical locations and between racial or ethnic groups [11,41].
Moreover, the Rotterdam ESHRE/ASRM definition recognizes four different phenotypes of this syndrome [27,31], but whether there are differences in the HRQoL between the different phenotypes has never been analyzed. It is also important to know more about HRQoL in women suffering from this common problem in order to develop strategies and interventions to enhance their HRQoL.
Therefore, the goal of this work was to compare the HRQoL of adult women with PCOS (and its phenotypes) and controls. We hypothesized that women with PCOS, especially those with the anovulatory phenotype, would show worse HRQoL compared to women without PCOS.
Study population
This was a case-control study conducted from September 2014 to May 2016 at the Department of Obstetrics and Gynecology of the University Clinical Hospital "Virgen de la Arrixaca" in the Murcia Region (southeastern Spain). The study conception and design have been described elsewhere [33]. Women were excluded if they: were < 18 or > 40 years old; had endocrine disorders (e.g. Cushing's syndrome, congenital adrenal hyperplasia, androgen-secreting tumors, hyperprolactinemia and hyper- or hypothyroidism) or were taking any hormonal medication (including contraception) during the 3 months prior to the study; were pregnant or lactating; had been exposed to oncological treatment; or had genitourinary prolapse. For both groups, women with PCOS and controls, gynecologists recruited consecutive women attending the clinic (total n = 307), and more than 95% of the approached women fulfilling the study criteria agreed to participate (n = 14 declined and n = 23 were excluded). Those who declined to participate did so because of a lack of time for filling out questionnaires. Women with PCOS (n = 117) were women attending the gynecology unit of the hospital and included newly diagnosed as well as prevalent cases. Diagnosis of PCOS was established following the Rotterdam criteria [31], which included a complete medical history with a modified Ferriman-Gallwey (mF-G) score [19], transvaginal ultrasound (TVUS) and serum sexual hormones. Diagnosis of PCOS required fulfillment of at least two of the following three criteria: (i) hyperandrogenism, either biochemical (total testosterone level ≥ 2.6 nmol/l) or clinical (mF-G score ≥ 12) [1], with or without acne or androgenic alopecia; (ii) oligo- and/or anovulation (menstrual cycles > 35 days or amenorrhea > 3 months); (iii) polycystic ovarian morphology (POM) on TVUS (≥ 12 follicles measuring 2-9 mm in diameter, mean of both ovaries) [15]. Possible phenotypic subtypes were: phenotype A (oligo-anovulation + hyperandrogenism + polycystic ovary morphology); phenotype B (oligo-anovulation + hyperandrogenism); phenotype C (hyperandrogenism + polycystic ovary morphology); and phenotype D (oligo-anovulation + polycystic ovary morphology) [22]. Finally, phenotypes A, B and D were reclassified as "anovulatory phenotypes" (n = 84) and phenotype C as the "ovulatory phenotype" (n = 33), and these were evaluated separately in the current study.
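The phenotype assignment just described reduces to a simple rule over the three Rotterdam criteria. The function below is only an illustration of that logic (it is not part of the study's analysis code); it returns the phenotype letter and the ovulatory/anovulatory grouping used in this paper.

```python
def pcos_phenotype(oligo_anovulation, hyperandrogenism, polycystic_morphology):
    """Return (phenotype, subtype) per the Rotterdam-based rule described above,
    or None if fewer than two criteria are met."""
    if sum([oligo_anovulation, hyperandrogenism, polycystic_morphology]) < 2:
        return None  # does not meet the PCOS definition
    if oligo_anovulation and hyperandrogenism and polycystic_morphology:
        phenotype = "A"
    elif oligo_anovulation and hyperandrogenism:
        phenotype = "B"
    elif hyperandrogenism and polycystic_morphology:
        phenotype = "C"
    else:
        phenotype = "D"
    subtype = "ovulatory" if phenotype == "C" else "anovulatory"
    return phenotype, subtype

print(pcos_phenotype(True, False, True))   # -> ('D', 'anovulatory')
```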
On the other hand, controls (n = 153) were women without PCOS (or other major gynecological conditions, e.g. endometriosis) attending the gynecological outpatient clinic for routine gynecological exams. The same procedures were performed in both women with PCOS and controls: anamnesis and questionnaires; physical examination, including weight and height measured using a digital scale (Tanita SC 330-S, Amsterdam, The Netherlands); evaluation of uterine and ovarian morphology by TVUS with a Voluson E8® and a 4-9 MHz transducer (General Electric Healthcare, USA); and a blood draw between days 2-5 of the menstrual cycle. Written informed consent was obtained from all subjects. This study was approved by the Ethics Research Committee of the University of Murcia and the University Clinical Hospital (no. 770/2013, approved 3 October 2013).
Health-related quality of life measurement
The Short Form (SF)-12v2 Health Survey is a validated shorter version of the generic SF-36 questionnaire. It encompasses 12 items evaluating physical and mental health from the participant's point of view (4-week recall period) [20,30,39,40]. The questionnaire generates eight scales: physical functioning, role physical, bodily pain, general health, role emotional, vitality, social functioning and mental health. All raw scale scores were converted to a 0-100 scale, with higher scores representing higher levels of HRQoL. Additionally, the subscales were transformed to norm-based scores according to the SF-12v2 recommendations, with a mean of 50 and a standard deviation of 10 in a representative sample of the 1998 US general population [20,30,40]. This transformation yields two summary measures, the Physical and Mental Component Summaries (PCS and MCS, respectively), which can be compared directly with other scales and scores. As the mean score is set to 50, scores ≥ 50 or < 50 indicate better or worse physical or mental health than the 1998 US general population. Score bounds are set at 48 (0.2 SD) for a small effect on HRQoL, 45 (0.5 SD) for a moderate effect and ≤ 42 (0.8 SD) for a large effect on HRQoL [13,14].
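The actual SF-12v2 scoring algorithm uses proprietary item weights and US normative data, so the sketch below is only a schematic of the two transformations described: rescaling a raw scale score to 0-100 and converting it to the norm-based metric (mean 50, SD 10). The raw range and normative mean/SD used here are invented for illustration.

```python
def to_0_100(raw, raw_min, raw_max):
    """Rescale a raw scale score to 0-100 (higher = better HRQoL)."""
    return 100.0 * (raw - raw_min) / (raw_max - raw_min)

def norm_based(score_0_100, norm_mean, norm_sd):
    """Norm-based score: mean 50, SD 10 in the reference population."""
    return 50.0 + 10.0 * (score_0_100 - norm_mean) / norm_sd

# Invented example: raw score 14 on a scale ranging 4-20, against a hypothetical
# reference-population mean of 70 and SD of 20 on the 0-100 metric
score = to_0_100(raw=14, raw_min=4, raw_max=20)       # 62.5
print(norm_based(score, norm_mean=70, norm_sd=20))    # < 50, i.e. worse than the norm
```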
Statistical analyses
Descriptive statistics are presented using raw data. Continuous variables were compared using unpaired Student t tests, and categorical variables with chi-squared tests. Analysis of covariance was employed to calculate adjusted differences in the crude (0-100) and norm-based scales and component summaries between women with PCOS and controls. Multiple logistic regression was used to explore associations between PCOS status and the norm-based scales and summary measure scores dichotomized at a cut-off of 50, using odds ratios (OR) and 95% confidence intervals (95%CI). In both cases, several relevant covariates (e.g. age, BMI, infertility problems, educational level, current employment) were considered as potential confounders. When inclusion of a potential covariate resulted in a change in the β-coefficient of more than 10%, the variable was retained in the final models. These variables included factors previously related to PCOS in this or other studies, regardless of whether they had been previously described as predictors of PCOS. Based on previous publications [28,29], we aimed to detect a difference of at least 3 points (with a standard deviation of about 7 points) in the global scores (PCS or MCS) between women with PCOS and controls. For an alpha error of 0.05 and 80% statistical power to detect differences, a minimum of 90 women would be required in each group. All tests were two-tailed at the 0.05 significance level. Analyses were conducted with IBM SPSS 25.0 (IBM Corporation, Armonk, New York, USA).
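The sample size statement corresponds to the standard two-sample comparison of means; the sketch below reproduces that arithmetic (difference of 3 points, SD 7, two-sided alpha 0.05, power 80%) and gives roughly 86 per group, in line with the minimum of 90 the authors state, which presumably includes some allowance.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means with equal groups."""
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96
    z_b = norm.ppf(power)           # ~0.84
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

print(n_per_group(delta=3, sd=7))   # ~86 women per group
```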
Results
Overall, women with PCOS were younger, had a higher BMI, more infertility problems, and lower educational and occupational levels than controls. Regarding marital status and other lifestyle factors, both groups were comparable (Table 1). Table 2 shows the subscales (0-100) of the SF-12v2 questionnaire in women with PCOS and controls. In unadjusted analyses (data not shown), women with PCOS (vs. controls) scored significantly lower in all the scales except physical functioning (p = 0.06), social functioning (p = 0.07) and mental health (p = 0.08). After adjustment, differences remained in four scales: role physical (p < 0.001), general health (p = 0.01), vitality (p = 0.04) and role emotional (p = 0.02). When women with ovulatory or anovulatory PCOS were compared to controls in adjusted models, women with anovulatory PCOS scored lower in three scales [role physical (p < 0.001), vitality (p = 0.03) and role emotional (p = 0.02)], while women with ovulatory PCOS scored lower in two scales [general health (p = 0.02) and mental health (p = 0.04)].
The assessment of the norm-based scales and summary measure scores of the SF-12v2 between all women with PCOS (and their phenotypic subtypes) and controls is shown in Table 3. Crude data are available in the supplementary material. Lastly, crude and adjusted OR and 95%CI for the norm-based scales and summary measure scores of the SF-12v2 between phenotypic subtypes (women with ovulatory or anovulatory PCOS) and controls are shown in Table 5. Final models showed that women with anovulatory PCOS were 2.65 (95%CI: 1.14-6.20) times more likely to present a worse PCS (< 50) and worse scores on all of its subscales except physical functioning. Moreover, women with anovulatory PCOS were 2.35 (95%CI: 1.23-4.48) times more likely to have role emotional scores below 50. On the other hand, for women with ovulatory PCOS, only the subscales general health and mental health reached a significant association, showing that these women were 2.42 (95%CI: 1.03-5.78) and 2.98 (95%CI: 1.20-7.37) times, respectively, more likely to have scores below 50 compared to controls.
Discussion
HRQoL in women with PCOS, and especially anovulatory PCOS, was significantly decreased compared to controls. Overall, these results suggest that PCOS may have an important effect on HRQoL in these Mediterranean women. To the best of our knowledge, this is the first study evaluating phenotypic subtypes of PCOS in relation to HRQoL.
It is known that PCOS has a significant negative impact on women's HRQoL. Several authors have reported that women with PCOS show worse HRQoL compared to women without the disorder [5,12,21,25]. Moreover, problems with sexual satisfaction and increased psychological disturbances have been reported as well [21]. A recent meta-analysis concluded that having PCOS significantly reduced HRQoL in adolescent girls [24].
In our study, patients with PCOS had significantly lower scores in several subscales and in the PCS, which is broadly consistent with the previously published literature on the matter [10,28,29]. Benson et al. [10] carried out a nation-wide survey in Germany using the SF-12 scale in a cohort of women with PCOS and observed that women with PCOS were at higher risk of common psychiatric disorders such as anxiety, depression or both, and that these disorders were related to lower HRQoL. Other authors reported significantly lower scores on the Short Form 36 (SF-36) questionnaire in women with PCOS compared to controls [28], in both the PCS and the MCS [35]. Lastly, Panico et al. [29], using the SF-36 questionnaire, reported worse HRQoL in women with PCOS compared to controls in the vitality and role emotional subscales, although no differences were found for bodily pain. However, they also reported significant differences regarding mental health and social and physical functioning, which were not found in our study population. On the other hand, the changes in role physical and general health found in our study population were not observed by Panico et al. [29]. The discrepancies between those findings and our study might be attributed to differences in how the results were reported, since those studies show only crude results, with no adjustment for potential confounders (e.g. age, BMI, etc.). An alternative, though unlikely, explanation that would require further study is that there are true specific differences in the HRQoL of women with PCOS in Southern Spain.

Fig. 2 Adjusted means and 95%CI of the four scales and Mental Component Summary (MCS) of the SF-12v2 questionnaire for all women with PCOS, its phenotypic subtypes (anovulatory and ovulatory) and controls. Model adjusted by age, BMI, infertility problems, educational level and current occupation. (@) significant differences between all women with PCOS and controls; ($) significant differences between anovulatory PCOS and controls.
In the meta-analysis by Li et al. [25], five studies using the SF-36 were included to evaluate the impact of PCOS on specific HRQoL domains. The authors concluded that women with PCOS obtained lower scores in all the analysed domains compared to controls and that the most affected domain was role emotional. These findings are in agreement with ours, since the role emotional domain was one of the most affected in both women with PCOS and controls.
It is important to bear in mind that our participants were enrolled at a tertiary care center, so results may differ in other kinds of populations (secondary care, patient associations, etc.). Nonetheless, controls in our sample also presented relatively low MCS scores (mean = 46.2), which is lower than in previous studies [ ]. This might be the reason why no difference was observed between women with PCOS and controls for the MCS in our study population. Moreover, women with anovulatory PCOS are mainly characterized by oligo-anovulation and hyperandrogenism. Both features are closely related to infertility and to self-esteem or self-concept issues; we therefore hypothesize that this might be one of the main reasons we observed more significant differences in HRQoL for women with anovulatory rather than ovulatory PCOS.
There are studies suggesting that interventions focusing on lifestyle changes or medical treatments [17,34] might help to improve HRQoL in women with PCOS. According to our results, such interventions might also be appropriate for specific phenotypic subtypes, mainly anovulatory women, but the current evidence is, to our knowledge, very limited, and further interventional research on improving HRQoL in PCOS phenotypes is warranted. The Polycystic Ovary Syndrome Questionnaire (PCOSQ) [23] and the SF-36 are the instruments most frequently used for the assessment of HRQoL in women with PCOS [9]. However, the PCOSQ has not been validated in Spanish and is a PCOS-specific questionnaire, which was not considered appropriate for a case-control study. We therefore chose to use the SF-12v2 in the present study. It is shorter than the SF-36, offers a multidimensional measurement of health, is easy to administer and is used worldwide. Moreover, it has been validated and is extensively used in Spanish studies [26].
This research is not without limitations. Selection and measurement bias always have to be considered in case-control designs. Nonetheless, controls were women attending the same public hospital in the same period, and they stem from the same population from which the women with PCOS emerged. Misclassification of disease status or of the exposure (HRQoL) may have occurred but, if present, it would tend to underestimate the true magnitude of the associations. Lastly, from the four phenotypes in the Rotterdam criteria we chose to dichotomize into two phenotypes (ovulatory and anovulatory) because of the small numbers in the groups, and this might have affected the results. However, this dichotomization has been used before, supporting our current approach [42].
Conclusions
Our results support the hypothesis that HRQoL is significantly decreased in adult women with PCOS and its anovulatory phenotype compared to controls. PCOS is a chronic and highly prevalent disorder in women of reproductive age, so it may be important to assess HRQoL as a way of measuring their progression alongside treatment during follow-up. If confirmed, these results may have important implications for prevention, clinical practice and intervention in women with this condition, especially those with the anovulatory phenotype, who seem to be the most affected in terms of HRQoL. These women could benefit from the implementation of medical and psychological actions to improve their quality of life.

Table note: Data presented as number (N) and percentage (%). Norm-based scores in the US general population have a mean of 50 and a standard deviation of 10. The mean score is set to 50 (cut-off); therefore scores ≥ 50 or < 50 indicate better or worse physical or mental health than the mean US population, respectively. PCS Physical Component Summary, MCS Mental Component Summary. Bold values are statistically significant (p < 0.05). a Multiple logistic regression model adjusted by age, BMI, infertility problems, educational level and current occupation.
Leishmania-infected macrophages release extracellular vesicles that can promote lesion development
Macrophages infected with Leishmania donovani release extracellular vesicles that are composed of parasite and host-derived molecules that have the potential to induce vascular changes in tissues.
As you will see, the reviewers point out that your conclusions are not supported by the data provided. They provide constructive input on how to address the issues they note, and we would thus like to invite you to submit a revised version of your manuscript to us. Importantly, all three reviewers point to missing controls that need to get included. For example, a control showing that EVs do not derive directly from the parasites needs to get included as well as loading controls and controls on EV purity. Further, reviewer #2 points out that your findings do not support a role for macrophage-derived EVs in chronic Leishmania infection, and the manuscript text therefore needs to get re-written and conclusions toned-down. Finally, this reviewer points out that the figures are of insufficient quality. I don't know whether there were some conversion issues or whether you downsized the figures prior to upload, but we agree with this concern and it needs to get addressed, too.
In our view these revisions should typically be achievable in around 3 months. However, we are aware that many laboratories cannot function fully during the current COVID-19/SARS-CoV-2 pandemic and therefore encourage you to take the time necessary to revise the manuscript to the extent requested above. We will extend our 'scooping protection policy' to the full revision period required. If you do see another paper with related content published elsewhere, nonetheless contact me immediately so that we can discuss the best way to proceed.
To upload the revised version of your manuscript, please log in to your account: https://lsa.msubmit.net/cgi-bin/main.plex You will be guided to complete the submission of your revised manuscript and to fill in all necessary information. Please get in touch in case you do not know or remember your login name.
We would be happy to discuss the individual revision points further with you should this be helpful.
While you are revising your manuscript, please also attend to the below editorial points to help expedite the publication of your manuscript. Please direct any editorial questions to the journal office.
Please note that papers are generally considered through only one revision cycle, so strong support from the referees on the revised version is needed for acceptance.
When submitting the revision, please include a letter addressing the reviewers' comments point by point.
We hope that the comments below will prove constructive as your work progresses.
Thank you for this interesting contribution to Life Science Alliance. We are looking forward to receiving your revised manuscript.
--A letter addressing the reviewers' comments point by point.
--An editable version of the final text (.DOC or .DOCX) is needed for copyediting (no PDFs).
--High-resolution figure, supplementary figure and video files uploaded as individual files: See our detailed guidelines for preparing your production-ready images, http://www.life-sciencealliance.org/authors
--Summary blurb (enter in submission system): A short text summarizing in a single sentence the study (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence should be informative and complementary to the title and running title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned.
B. MANUSCRIPT ORGANIZATION AND FORMATTING:
Full guidelines are available on our Instructions for Authors page, http://www.life-sciencealliance.org/authors

We encourage our authors to provide original source data, particularly uncropped/unprocessed electrophoretic blots and spreadsheets for the main figures of the manuscript. If you would like to add source data, we would welcome one PDF/Excel file per figure for this information. These files will be linked online as supplementary "Source Data" files.

***IMPORTANT: It is Life Science Alliance policy that if requested, original data images must be made available. Failure to provide original images upon request will result in unavoidable delays in publication. Please ensure that you have access to all original microscopy and blot data images before submitting your revision.***

---------------------------------------------------------------------------

Reviewer #1 (Comments to the Authors (Required)):

In the manuscript by Gioseffi et al., they isolated exosome-enriched extracellular vesicles (EVs) from uninfected and Leishmania donovani-infected macrophages. Unlike previous studies evaluating EVs released from Leishmania-infected macrophages, the authors extended the infection time and isolated EVs 48 hours post-infection of Raw264.7 cells. They evaluated the EVs for host and parasite proteins and performed some functional studies. In their analysis they identified a number of host proteins that were unique to EVs isolated from infected macrophages, including proteins involved in angiogenesis. The potential role for EVs isolated from infected macrophages to induce angiogenesis was confirmed using the scratch assay and the tube formation assay.
The one control that is missing is that they do not show that the macrophage culture media used for EV isolation does not contain extracellular parasites. If the culture media from infected cells have live Leishmania these could also be releasing EVs that would co-purify with host-derived EVs. Although this is unlikely, it is important to show an absence of Leishmania in the culture media used for EV isolation.
Reviewer #2 (Comments to the Authors (Required)): Comments to the author: In this study, Gioseffi et al. investigate the composition of extracellular vesicles (EVs) released from macrophages infected with Leishmania. The proteomic analyses demonstrated that infected macrophages release EVs with different compositions of proteins implicated in promoting vascular changes, such as Vash. They suggest that EVs from infected macrophages induce the release of angiogenesis molecules by endothelial cells and promote epithelial cell migration and tube formation by endothelial cells in vitro. From these results, the authors propose that EVs from infected macrophages promote vascularization in chronic Leishmania infections. Although the topic of this study is interesting and relevant to the disease, the conclusions go beyond what is shown by the data. The authors have used appropriate methodology to isolate and physically characterize the EVs (NTA and electron microscopy). However, the analyses evaluating the role of EVs in angiogenesis were only done in vitro, and whether they have any role in chronic Leishmania infection is unknown.
Specific comments:

1) Structural concerns of the manuscript: The results section needs to be modified, and the combination of the results and discussion is not helpful in this manuscript. Even more importantly, the figures are of extremely low quality, some of them are unreadable, and the figure legends do not contain sufficient information. For example, the abbreviations used are not described, and the statistical analysis and the number of replicates are not indicated.
2) The purity of EV should be assessed.
3) EVs have different membrane compositions, depending on the cell from which they have originated. The authors cannot rule out the possibility of EVs being released directly from Leishmania, instead of being of macrophage origin.

4) Figure 3: The western blot for the GP63 band is not clear in the Ldcen-/-. A loading control should be included.

5) Figure 4D: When closely examining the western blot, the LdVash 24h EV band is stronger than LdVash 72h and 96h. This result contrasts with the images of Figure 4A, which show an increase in LdVash expression with time. Consider quantifying the western blot bands.

Minor comments:

2) Annexin expression levels are adjusted to the functional state of the cell. The discussion might benefit from some reflection on how Annexin A3 can be involved in macrophage infection and EV release.

3) The Annexin A3 and GP63 data appear to have no connection with the main point of the paper.

4) "Protein concentrations determined by BCA agree with the NTA analyses that there is greater total protein content in the ieEV samples; however, ceEVs and ieEVs appear to contain equal protein content per particle." Is this data not shown? The authors should discuss why ceEVs and ieEVs appear to contain equal protein content per particle.

5) Overstated sentence: "Together, these studies demonstrated that there are impressive changes in vascularization of both visceral and cutaneous lesions that appear to be promoted by molecules that are released from infected macrophages in infected tissues." The cited papers (Horst et al, 2009; Weinkopff et al, 2016; Yurdakul et al, 2011; and Dalton et al, 2015) demonstrated that the molecules were released by macrophages present in infected mice. None of these papers has shown that infected macrophages release the molecules.

6) Figure 4: The results section is different from the figure order. The description of Figure 4B in the text is referring to 4C in the figure. The authors state that after infection with the recombinant parasites there was an increase in the total number of EVs over time, but there is no statistical test information in the figure and the legend.
Reviewer #3 (Comments to the Authors (Required)):

Leishmania donovani is the causative agent of a potentially fatal visceral infection. In this manuscript, Gioseffi et al. explored the composition and role of extracellular vesicles (EVs) released from infected macrophages. Leishmania-infected macrophage EVs (LiEVs) contain several host proteins that can cause endothelial cells to migrate, engage in tube formation and release angiogenesis-promoting factors such as IL-8, GCSF/CSF-3 and VEGF-A. Additionally, more than 50 L. donovani-derived proteins were identified in LiEVs, including a homolog of mammalian Vasohibins (LdVash), an angiogenesis-promoting mediator. Taken all together, these results show that L. donovani infection can alter the composition of LiEVs and plays a role in vascularization during chronic disease. This is one of the first reports on leishmanial proteins found in host exosomes, and the first on L. donovani. The manuscript has a good logical experimental flow and discussion of the results and implications of the paper. Minor issues are highlighted below:

Minor:
1. The present studies were performed using RAW264.7 macrophages. The authors should include a brief explanation about why they chose immortalized cells over primary cells such as bone marrow derived macrophages, which are more physiologically relevant.
2. The authors mention Hassani and Oliver 2013 (PMID: 23658846). It would be interesting to discuss and draw a parallel between the proteins found in L. donovani and those previously found by Hassani and Oliver in L. mexicana-derived exosomes.
3. Figures and Tables:
a. Table 1 cannot be found in the manuscript
b. Include a loading control in the western blots of figure 2B, 3B and 4D
c. Part of the content of Figures 2 and 3 might be better represented in tables; consider separating them up
d. In the legend of Figure 5 there should be a closed parenthesis after "E" instead of a period
4. Minor grammatical errors:
a. In the results section Differential host protein composition in LiEVs it says: "low abundant proteins". The word "abundant" should be substituted with "abundance"
b. In the results section Extracellular vesicles derived from Leishmania-infected cells activate angiogenesis there should be a period between "cell tube formation" and "Cell migration"
c. All the "et al." have to be italicized throughout the paper
5. Please consider moving part of Isolation and characterization of extracellular vesicles released from Leishmania donovani infected RAW264.7 macrophages to the methods section.

1st Authors' Response to Reviewers (September 13, 2020)

Reviewer #1 (Comments to the Authors (Required)):

In the manuscript by Gioseffi et al. they isolated exosome-enriched extracellular vesicles (EVs) from uninfected and Leishmania donovani-infected macrophages. Unlike previous studies evaluating EVs released from Leishmania-infected macrophages, the authors extended the infection time and isolated EVs 48 hours post-infection of Raw264.7 cells. They evaluated the EVs for host and parasite proteins and performed some functional studies. In their analysis they identified a number of host proteins that were unique to EVs isolated from infected macrophages, including proteins involved in angiogenesis. The potential role for EVs isolated from infected macrophages to induce angiogenesis was confirmed using the scratch assay and the tube formation assay.

This is a well-designed study that provides additional information regarding the protein composition of EVs released from Leishmania-infected macrophages. It also defines a potential link between the increased angiogenesis observed at the site of a mouse L. major and L. donovani infection and the ability of ieEVs to induce secretion of angiogenic molecules by endothelial cells. They used complementary assays to demonstrate that ieEVs can induce an angiogenesis-like process in vitro. However, they did not show whether this was mimicked in vivo. Nevertheless, the studies are supportive of ieEVs as indices of vascularization. I also appreciate that they used two different isolation techniques to obtain material for MS, as the use of both methods increases protein coverage.
The one control that is missing is that they do not show that the macrophage culture media used for EV isolation does not contain extracellular parasites. If the culture media from infected cells contain live Leishmania, these could also be releasing EVs that would co-purify with host-derived EVs. Although this is unlikely, it is important to show an absence of Leishmania in the culture media used for EV isolation.
We thank reviewer 1 for the complimentary statements about our studies. The reviewer remarked on an aspect of our study design that we think distinguishes this study. When Leishmania infections are initiated with promastigotes, internalized parasites transform into amastigote forms after 16-18 hrs. This transformation is accompanied by some changes in expressed proteins and their levels of abundance. gp63, for example, is highly expressed in promastigotes. Upon transformation to the intracellular form, it becomes less abundant and even changes its localization in the parasite from the cell surface to the cell cytosol. Our objective was to evaluate the composition of proteins in EVs from older infections, which should be more representative of infected cells in chronic infections. We reasoned that by thoroughly washing off the media after 24 hours of infection and replacing it with media supplemented with exosome-depleted serum, we would minimize the contribution of uninternalized parasites to the overall EVs recovered after an additional 48 hours of infection. Unlike bacterial infections, where extracellular bacteria can be eliminated by treatment with gentamicin, there is no comparable compound that can selectively kill external parasites. We interpreted the absence of gp63 in EVs recovered following our protocol to mean that external promastigote-stage parasites, if still present, may contribute only a negligible number of particles to our EV preparation. We have included a bright-field microscope image of infected cells at the time of culture medium recovery that shows that external parasites are absent.
Reviewer #2 (Comments to the Authors (Required)): Comments to the author: We thank the reviewer for raising concerns about the structural presentation of our manuscript. In this submission, the manuscript layout has been revised. The results section and the discussion section are now separate. We apologize for the poor quality of the figures in the previous submission. The figures have been updated and should be of higher quality. The Figure legends have been edited by adding more information about abbreviations, statistical analysis, and number of replicates.
2) The purity of EV should be assessed.
This is an important issue. First, we made several new preparations of EVs, following the protocols described in the paper. It is important to highlight the fact that cultures are washed 3X with PBS to remove uninternalized parasites. Unfortunately, we cannot rule out the possibility that some parasites that are not internalized remain stuck to macrophages. As we stated above in response to Reviewer 1, we interpreted the absence of gp63 in EVs recovered following our protocol to mean that external promastigote-stage parasites, if still present, may contribute only a negligible number of particles to our EV preparation. We have included a bright-field microscope image of infected cells at the time of culture medium recovery that shows that external parasites are absent. To confirm the purity of our LieEV and ceEV preparations, they were monitored for the presence of calnexin. Several studies have shown that calnexin is not a component of exosomes, and our analysis confirms this. As a control for protein loading, western blots were stained with Ponceau S. Samples from 3 separate isolations were analyzed and the results are presented in the edited manuscript.
3) EVs have different membrane compositions, depending on the cell from which they have originated. The authors cannot rule out the possibility of EVs being released directly from Leishmania, instead of being of macrophage origin.
We agree with this statement from the reviewer. In response to Reviewer 1, we addressed the unlikely possibility that external parasites may contribute to EVs that are recovered following our protocol. If it indeed occurs, their contributions to the EV proteome are expected to be negligible.
Although it is known that Leishmania secrete exosomes, at this time we do not have markers that would differentiate parasite-derived exosomes from host exosomes that are loaded with parasite molecules. We also cannot rule out the possibility that exosomes that are fully formed in internalized parasites traffic through the macrophage and are released to the extracellular milieu. Future studies will characterize the trafficking of parasite-derived molecules within infected cells in greater detail.
4) Figure 3: The western blot for the GP63 band is not clear in the Ldcen-/-. A loading control should be included.
In response to a suggestion by this reviewer, discussed below, we decided to remove this experiment from this resubmission.

5) Figure 4D: When closely examining the western blot, the LdVash 24h EV band is stronger than LdVash 72h and 96h. This result contrasts with the images of Figure 4A, which show an increase in LdVash expression with time. Consider quantifying the western blot bands.
The reviewer's observations are spot on. We are not certain why the mNG tagged proteins are less abundant in the Western blot analysis as compared to NTA. Your suggestion prompted us to quantify the bands in our Western blots.
6) Figure 5: The authors should use skin epithelial cells or spleen endothelial cells to test the effect of EV since these are the tissues related to Leishmania infection.
We understand the usefulness of this suggestion. We plan to include other cell lines from different tissues in future studies.
We should note that the Editor agrees that this issue is best handled at a different time (see the Editor's comment above).

7) Figure 5: The cell lineage used in angiogenesis assays should preferably be murine, since the assay involves EVs from the mouse.
We agree with the suggestion of the reviewer. However, the human cell lines are more widely used in these assays, so they were more readily available and easier to troubleshoot. Mouse cell lines will be used in future experiments.
We should note that the Editor agrees that this issue is best handled at a different time (see the Editor's comment above).

8) Figure 5: The ceEV Disrupted image in (A) is not a good representative figure for the graph in (C).
We thank the reviewer for pointing that out. The figure has been replaced.
9) Figure 5: The status of angiogenesis molecules in the supernatant fluid from HUVEC cells incubated with disrupted EVs should be provided as a control.
We understand the value of this suggestion. Unfortunately, at the time that these experiments were performed, the supernatants from the incubations with disrupted EVs were not saved.

10) Figure 5E: It appears the positive control (VEGF) did not work.
Indeed, these cells are maintained in a medium that contains VEGF; they apparently become unresponsive to the concentrations of VEGF that are suggested for these studies. It is also likely that the potency of our VEGF preparation had diminished. The response of the HUVEC cells to the other samples in these assays suggested that the poor response to VEGF may have been limited to that particular VEGF preparation.
11) The authors do not formally investigate whether LiEV promotes vascularization in chronic Leishmania lesions since the analyses were done in vitro. Thus, their conclusions are not justified.
The reviewer correctly points out that LiEVs were not evaluated on chronic infections. They were instead evaluated in surrogate assays that are widely used in studies of angiogenesis. We were careful to state in our conclusions that our findings suggest that LieEVs have the potential to promote these responses in Leishmania infections that are chronic. We have toned down our conclusions, as suggested by the Editor.
Minor comments:
1) On what basis were 20 parasites per macrophage chosen?
In initial experiments to develop the EV isolation protocol, infections for varying lengths of time were evaluated. To ensure that sufficient parasites were internalized when short term infections were evaluated, we settled on a 20:1 parasite to macrophage infection ratio. This then became a part of our standard protocol even though we presently evaluate older infections.
2) Annexins expression levels are adjusted to the functional state of the cell. The discussion might benefit from some reflection on how Annexin A3 can be involved in macrophages infection and the EV release.
We agree with the reviewer that monitoring of Annexin A3 levels may provide greater insight into characteristics of the infection. With the change in the format of the manuscript we have included more of our rationale for monitoring Annexin A3 levels. Annexin A3 was rarely detected by mass spectrometry in the ceEV samples. The Western blot results, in which up to 1 × 10^10 particles per lane were analyzed, confirm that Annexin A3 is preferentially expressed in LieEV samples as compared to ceEVs.
3) The Annexin A3 and GP63 data appears to have no connection with the main point of the paper.
As mentioned above, monitoring Annexin A3 can help confirm the mass spectrometry results. We agree with the concern that gp63 may not be connected to the main point of the paper. In this re-submission, in response to this reviewer's suggestion, we elected to not include the gp63 results in the figures.
4) "
Protein concentrations determined by BCA agree with the NTA analyses that there is greater total protein content in the ieEV samples; however, ceEVs and ieEVs appear to contain equal protein content per particle." It is data not shown? The authors should discuss why ceEVs and ieEVs appear to contain equal protein content per particle.
After many experiments, we are still a bit unsure about the true link between the protein content as determined by BCA and the particle number per cell calculated from the NTA. We have elected to remove this statement as it distracts from the main points of the paper.
5) Overstated sentence: "Together, these studies demonstrated that there are impressive changes in vascularization of both visceral and cutaneous lesions that appear to be promoted by molecules that are released from infected macrophages in infected tissues." The mentioned papers (Horst et al, 2009; Weinkopff et al, 2016; Yurdakul et al, 2011; and Dalton et al, 2015) demonstrated that the molecules were released by macrophages present in infected mice. None of these papers has shown that infected macrophages release the molecules.
We thank the reviewer for this insightful and critical analysis of the studies in the literature that have begun dissecting the mechanisms that underlie changes in vascularization in Leishmania infections. We have changed that statement in the manuscript and have included a more critical statement that is consistent with this suggestion.

6) Figure 4: The results section is different from the figure order. The description of Figure 4B in the text is referring to 4C in the figure. The authors state that after infection with the recombinant parasites there was an increase in the total number of EVs over time, but there is no statistical test information in the figure and the legend.
We thank the reviewer for picking up this error in the manuscript. The figure legend and the text that describes this figure have been changed accordingly.
Reviewer #3 (Comments to the Authors (Required)): We thank the reviewer for this truly clear understanding of the novelty of our findings.
Minor:
1. The present studies were performed using RAW264.7 macrophages. The authors should include a brief explanation about why they chose immortalized cells over primary cells such as bone marrow derived macrophages, which are more physiologically relevant.

We thank the reviewer for raising this issue. In light of the novelty of these experiments, it was necessary that we use a cell line that offered uniformity and that we could scale up without sacrificing a lot of animals. The composition of EVs is cell-type specific, and the use of a cell line helps to ensure the rigor of our experiments. With our new understanding of EV composition and the dynamics of EV release from infected cells, we are presently better equipped to perform similar experiments on primary cells, which, we agree, are more relevant to the study of a chronic infection. We have included the rationale for using RAW264.7 cells in the Materials and Methods section (Mammalian cell culture).
P3MC: A double blind parallel group randomised placebo controlled trial of Propranolol and Pizotifen in preventing migraine in children
Background A recent Cochrane Review demonstrated the remarkable lack of reliable clinical trials of migraine treatments for children, especially for the two most prescribed preventative treatments in the UK, Propranolol and Pizotifen. Migraine trials in both children and adults have high placebo responder rates, e.g. 23%, but for a trial's results to be generalisable "placebo responders" should not be excluded, and for a drug to be worthwhile it should be clearly superior, both clinically and statistically, to placebo. Methods/Design Two multicentre, two-arm, double-blind, parallel-group randomised controlled trials, with an allocation ratio of 2:1 for each comparison: Propranolol versus placebo and Pizotifen versus placebo. The trial is designed to test whether Propranolol is superior to placebo and whether Pizotifen is superior to placebo for the prevention of migraine attacks in children aged 5-16 years referred to secondary care out-patient settings with frequent migraine (2-6 attacks/4 weeks). The primary outcome measure is the number of migraine attacks during trial weeks 11 to 14. Discussion A strength of this trial is the participation of clinically well defined migraine patients, who will also be approached to help with future longer-term follow-up studies. Trial Registration ISRCTN97360154
Background
In the last 20 years the International Headache Society (IHS) has fostered the development of high-quality research in headache, including migraine. The 2004 (2nd) edition of the International Classification of Headache Disorders [1] provides a framework for headache research, including clinical trials. It is vital that trials use this classification, which has become rather more child-relevant in this latest edition.
Inclusion of patients with Probable Migraine (PM), as defined in the classification and previously called "migraine-like headache" or "mixed headache", is vital for the trial to be of general use to most children presenting to paediatricians with severe headaches, because many general paediatricians will be unfamiliar with the precise operational criteria for migraine subtypes.
A recent Cochrane Review [2] has demonstrated the remarkable lack of reliable clinical trials of migraine treatments for children, especially for the two most prescribed preventative treatments used in the UK, Propranolol and Pizotifen.
For a trial's results to be generalisable it should reflect the high placebo responder rates (around 23% [3]) typically found and not exclude "placebo responders". In addition for a drug to be worthwhile it should be clearly superior, both clinically and statistically, to placebo.
The clinical course of migraine is especially difficult to predict in children. Migraine will come for weeks, months or a few years then remit for months or years, sometimes returning unpredictably later on. Long term follow-up is difficult and studies have demonstrated this variability [4].
The aim of this trial is to confirm or refute superiority of Propranolol to placebo and of Pizotifen to placebo for the prevention of migraine attacks in children aged 5 -16 years referred to secondary care out-patient settings with frequent migraine (2-6/4 weeks).
Both of the active trial treatments, or "Investigational Medicinal Products" (IMPs), Propranolol and Pizotifen, have been in common clinical use for this indication in children for over 20 years. The trial therefore does not expose this group of patients to a new therapeutic risk, but will systematically evaluate efficacy and adverse events. The results will ascertain whether one or both are superior to placebo in the prevention of migraine in children, and will quantify other useful clinical outcomes such as quality of life, school attendance, any prolonged benefit after drug withdrawal, and adverse effects.
Study design
Two simultaneous multicentre, parallel-group, double-blind, randomised, placebo-controlled trials of Propranolol and Pizotifen.
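The abstract specifies a 2:1 allocation ratio of active drug to placebo for each comparison. The exact randomisation procedure is not described in this excerpt, so the permuted-block sketch below is only an illustration of how such a ratio might be generated, not the trial's actual method.

```python
import random

def blocked_allocation_2_to_1(n_blocks, seed=None):
    """Illustrative 2:1 active:placebo allocation using permuted blocks of 3."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["active", "active", "placebo"]
        rng.shuffle(block)        # randomise order within each block of 3
        sequence.extend(block)
    return sequence

print(blocked_allocation_2_to_1(n_blocks=4, seed=1))
```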
Setting
Secondary care paediatric headache or neurology clinics.
Inclusion criteria
1. Age 5 years 0 months to 16 years 11 months
2. Migraine without Aura (MO), Migraine with Aura (MA) or Probable Migraine (PM) as defined by the IHS [1] (see Appendix E)
3. 2 to 6 migraine or probable migraine attacks/4 weeks by history during the previous 3 months
4. 2 to 6 migraine or probable migraine attacks/4 weeks during the 4 week run-in
5. The treating paediatrician, the parent/guardian and the child or young person believe the attacks are currently frequent and severe enough to merit trying twice-daily preventative medication
6. Satisfactory completion of the headache diary during the run-in period, at the discretion of the investigator
Exclusion criteria
1. Asthma, bronchospasm or nocturnal or exercise induced cough or wheeze within the last 12 months, or currently on daily asthma preventative treatment
2. Children under paediatric cardiology review, at the discretion of their paediatric cardiologist, e.g. if Propranolol or Pizotifen were contraindicated
3. Children with any of the following: uncontrolled heart disease, the presence of second or third degree heart block, in cardiogenic shock, bradycardia, severe peripheral arterial disease, metabolic acidosis, sick sinus syndrome, untreated phaeochromocytoma, prone to hypoglycaemia (e.g. after prolonged fasting) or Prinzmetal's angina
4. Previous severe adverse event probably related to Propranolol or Pizotifen
5. On Propranolol, another beta-blocker, Pizotifen or Cyproheptadine in the last 3 months
6. Currently in, or have been in, another prospective drug trial in the last 3 months
7. Fewer than 2 or more than 6 eligible attacks during the 4 week run-in; such patients stay excluded for at least 3 months
8. Child or family unable to identify their migraine or probable migraine headaches confidently (as may happen with some patients with both mild headaches and migraine on different days, e.g. with chronic daily headache [15 or more headache days/month])
9. Females of child bearing potential who are not using a reliable contraceptive strategy such as abstinence, barrier methods, oral contraceptive pills and contraceptive injections. See Pregnancy section below.
10. Informed consent not given by parents/guardian, or assent/consent not given by patient
Interventions
Propranolol
A non-selective beta-blocker which crosses the blood-brain barrier, exerting central as well as peripheral effects, and which has been used in migraine prevention since the 1960s [5]. It is generally well tolerated but in high dose can be associated with fatigue or sleep disturbance. It can also cause bronchospasm and exacerbate asthma.
Pizotifen
An antihistamine with histamine-1 antagonist and serotonin (5-HT2) antagonist properties that is structurally related to the tricyclic antidepressants. It has been used for over 20 years in the United Kingdom for migraine prophylaxis in children, young people and adults. It is generally well tolerated but can cause drowsiness, so it is commonly given as a once a day evening dose. Other adverse effects include increased appetite and weight gain.
Placebo
Both liquid and tablet formulations of the placebo will be manufactured using the same excipients used in the active formulation of the drugs minus the active ingredients. The placebo, Propranolol and Pizotifen tablets are matched in appearance; the liquid placebo matches the liquid Propranolol in appearance and taste but the liquid Pizotifen has a slightly different flavour.
All participants will be offered a choice of liquid or tablet preparations of the trial treatments. Because Pizotifen only has an evening dose, while Propranolol is given twice daily, to maintain blinding participants in the Propranolol and placebo arms will receive morning and evening doses from separate bottles; those in the Pizotifen arm will take a morning placebo dose and an active evening dose (also from separate bottles).
For both active arms in this trial, the starting, titration and age-specific maintenance doses of the tablet preparations are consistent with the recommendations in "Medicines for Children" [6] and the new "British National Formulary (BNF) for children" [7] for 80% of participants by standard growth charts. The maximum dose of the liquid preparation is lower, to comply with World Health Organisation (WHO) recommendations on the maximum intake of propylene glycol (used as a preservative in the liquid preparation of Propranolol).
Concomitant therapy
Permitted medication
Any other regular medication is permitted (apart from Propranolol or another beta-blocker, Pizotifen or Cyproheptadine in the 3 months before recruitment). Other migraine preventative medication should normally be withdrawn first, but it may be continued as long as the dose does not change during the 12 week assessment.
Rescue medication and additional treatment(s)
All participants will be given an individual rescue treatment plan for migraine headaches, depending on their and their paediatrician's experience and preference. All rescue treatments used and their doses and effects will be recorded in the diary during the 4 weeks baseline assessment block 1 and the assessment blocks 2, 3 and 4 (weeks 11-14, 25-28, 37-40).
Restrictions
Rizatriptan should be avoided by the trial participants while taking the trial treatments and for 5 days after stopping the trial treatments, because of a drug interaction with Propranolol.
Different non-steroidal anti-inflammatory drugs (NSAIDs) should not be used together. Aspirin should be avoided in children under 16 years. Use of any over-the-counter remedies will be checked by the research nurse.
Compliance
This will be assessed in two ways:
1) By verbally questioning the participant and parent/guardian at visits as to roughly how often a dose is missed each week. The % compliance is defined as 100 − % missed doses.
2) By examination of returned medication bottles, and measurement of the observed residual tablet numbers or residual liquid volumes. Missed doses by tablet number or volume will be expressed as % missed doses = 100 × [P − (S − R)] / P, where S = amount supplied, R = amount returned and P = amount planned to be taken. The level of acceptable compliance with study medication will be set at >50% on both measures for the main outcome assessment period (weeks 11-14).
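As an illustration of the two compliance measures just described, the following minimal Python sketch computes the percentage compliance from self-reported missed doses and from returned-medication counts or volumes; the function names, the twice-daily default and the worked numbers are illustrative assumptions rather than part of the protocol.

```python
def compliance_by_recall(missed_per_week: float, planned_per_week: float = 14.0) -> float:
    """Measure 1: % compliance = 100 - % missed doses, from verbal recall.

    planned_per_week defaults to 14 (twice-daily dosing); adjust as needed.
    """
    pct_missed = 100.0 * missed_per_week / planned_per_week
    return 100.0 - pct_missed


def compliance_by_returns(supplied: float, returned: float, planned: float) -> float:
    """Measure 2: compliance from returned medication counts or volumes.

    supplied (S), returned (R) and planned (P) are tablet counts or liquid volumes.
    Amount taken = S - R, so % missed doses = 100 * (P - (S - R)) / P.
    """
    taken = supplied - returned
    pct_missed = 100.0 * (planned - taken) / planned
    return 100.0 - pct_missed


if __name__ == "__main__":
    # Example: 112 doses planned over 4 weeks, 120 tablets supplied, 20 returned.
    c1 = compliance_by_recall(missed_per_week=2)
    c2 = compliance_by_returns(supplied=120, returned=20, planned=112)
    acceptable = c1 > 50 and c2 > 50  # protocol threshold: >50% on both measures
    print(f"recall: {c1:.0f}%, returns: {c2:.0f}%, acceptable: {acceptable}")
```

Both functions return a percentage, so the acceptability rule reduces to checking that each result exceeds 50 for the weeks 11-14 assessment period.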
Criteria for terminating trial
The study may be stopped as a whole because of a regulatory authority decision, a change in the opinion of the Research Ethics Committee (REC), or overwhelming evidence of efficacy/inefficacy, safety concerns or issues with trial conduct, at the discretion of the Sponsor.
Recruitment at a centre may be stopped particularly for reasons of low recruitment, protocol violation or inadequate data recording.
Primary hypothesis
• To test whether Propranolol or Pizotifen is superior to placebo for the prevention of migraine attacks in children aged 5-16 years with frequent migraine (2-6 attacks/4 weeks) who are referred to secondary care out-patient settings.
Secondary hypotheses
• To test whether any therapeutic effect outlasts the period of drug administration.
• To test whether a dose (in mg/kg/day)-response relationship exists at the doses used.
• To test whether active treatment improves participation as measured by school attendance, parent/guardian time off work, health related and non-health related quality of life, and health status.
• To estimate cost-effectiveness if either active treatment proves superior to placebo.
Sample size
The number of attacks per month is assumed to follow an over-dispersed Poisson distribution. A mean attack rate in the last month of treatment of 3 episodes with variance of 4 was assumed for the placebo arms based on the review of Victor & Ryan (2003) [2].
The sample size for the primary endpoint was estimated using formula 9.13 on page 176 of Machin et al [8] assuming a 33% reduction in the attack rate in the active arms. This formula gives the total number of attacks that must be observed based on a Poisson distribution, and this was divided by 2.5 (average number of attacks per person) to give the total number of participants in each study.
The over-dispersion in comparison with a Poisson distribution was allowed for by multiplying the required number of participants based on the standard formula by a factor of 1.33.
The sample size estimates for each trial assumed a power of 80% and 5% two-sided significance, with a 2:1 allocation of active: placebo treatment within each trial.
On these assumptions the required sample size is 226 evaluable participants for each trial, i.e. 452 in total, to detect a reduction in mean attack rate in both arms from 3 to 2 per month.
The target of 600 for recruitment also leaves a margin for drop out of up to 25% (=1-452/600) for the primary outcome but only 2% (=1-588/600) for the proportion of responders outcome.
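The arithmetic above can be laid out in a short, purely illustrative Python sketch. It does not reproduce formula 9.13 of Machin et al. [8], which supplies the required total number of attacks; it simply shows the over-dispersion inflation factor, an assumed rounding of the 2:1 within-trial split of the 226 evaluable participants, and the drop-out margins implied by the recruitment target of 600.

```python
# Illustrative arithmetic only; the total-events calculation itself
# (formula 9.13 of Machin et al.) is not reproduced here.
mean_attacks = 3.0         # assumed placebo attack rate per 4 weeks
variance_attacks = 4.0     # assumed variance (over-dispersed Poisson)
overdispersion = variance_attacks / mean_attacks   # = 1.33 inflation factor

n_evaluable_per_trial = 226                   # figure stated in the protocol
n_evaluable_total = 2 * n_evaluable_per_trial # 452 across both trials

# 2:1 active:placebo allocation within each trial (rounding is an assumption)
n_active = round(n_evaluable_per_trial * 2 / 3)
n_placebo = n_evaluable_per_trial - n_active

recruitment_target = 600
dropout_margin_primary = 1 - n_evaluable_total / recruitment_target  # ~25%
dropout_margin_responders = 1 - 588 / recruitment_target             # 2%

print(f"over-dispersion factor: {overdispersion:.2f}")
print(f"per trial: {n_active} active vs {n_placebo} placebo")
print(f"drop-out margins: {dropout_margin_primary:.0%} (primary), "
      f"{dropout_margin_responders:.0%} (responders)")
```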
Randomisation and blinding
The randomisation will be based on a computer generated pseudo-random code using random permuted blocks of randomly varying size, created by the Nottingham Clinical Trials Unit (CTU) in accordance with their standard operating procedure (SOP) and held on a secure server. The randomisation proceeds in two stages: firstly a randomisation to one trial or the other; secondly a randomisation within each trial to active or placebo arm. The randomisation within each trial will be stratified by age (5-11 years vs 12-16 years), type of migraine (two categories) and recruiting centre (10 centres).
Investigators will access the treatment allocation for each participant by means of a remote, internet-based randomisation system developed and maintained by the Nottingham CTU. The sequence of treatment allocations will be concealed until interventions have all been assigned and recruitment, data collection, and all other trial-related assessments are complete.
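A toy sketch of such a two-stage, stratified list with random permuted blocks of randomly varying size and 2:1 allocation is given below. It is illustrative only: it does not reflect the Nottingham CTU's actual SOP, block sizes or software, and the stratum labels (age band, an assumed two-category migraine type, centre number) and the 30 allocations per stratum are assumptions for the example.

```python
import random

def permuted_blocks(n, unit=("active", "active", "placebo"),
                    block_multiples=(1, 2), rng=None):
    """2:1 active:placebo allocations using random permuted blocks of
    randomly varying size (1x or 2x the 3-slot allocation unit)."""
    rng = rng or random.Random()
    seq = []
    while len(seq) < n:
        block = list(unit) * rng.choice(block_multiples)  # block of 3 or 6
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

def stratum_list(n, rng):
    """Two-stage allocation for one stratum: first to a trial, then to
    active or placebo within that trial from its own blocked sequence."""
    arms = {"Propranolol": iter(permuted_blocks(n, rng=rng)),
            "Pizotifen": iter(permuted_blocks(n, rng=rng))}
    out = []
    for _ in range(n):
        trial = rng.choice(["Propranolol", "Pizotifen"])   # stage 1
        out.append((trial, next(arms[trial])))             # stage 2
    return out

rng = random.Random(2024)
strata = [(age, mig, centre)
          for age in ("5-11", "12-16")          # age bands from the protocol
          for mig in ("MO/MA", "PM")            # assumed two migraine-type categories
          for centre in range(1, 11)]           # 10 recruiting centres
allocation = {s: stratum_list(30, rng) for s in strata}
print(allocation[("5-11", "MO/MA", 1)][:6])
```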
Procedures and observations
Consent
The process for obtaining participant informed consent or assent and parent/guardian informed consent will be in accordance with the REC guidance, and Good Clinical Practice (GCP) and any other regulatory requirements that might be introduced.
Potential participants will be identified in clinic based on their clinical diagnosis and history of migraine or probable migraine attack frequency for the previous 3 months. If the potential participant and their parent/ guardian are willing to participate but cannot estimate the migraine attack frequency in the previous 3 months they will be given a standard headache diary to complete over the next 3 months, and appropriate advice and treatment will be given as is normal practice.
In the event of their withdrawal, data collected so far will not be erased and will be used in the final analyses where appropriate (this will be explained in the Participant and Parent/Guardian Information Sheets).
Baseline measurements
1. A standard headache diary will be completed by the participant and their parent/guardian. This is adapted for the trial from a migraine clinic diary developed by the British Paediatric Neurology Association's (BPNA's) Governance & Audit group [9].
2. Headache intensity scale [10] - a four point self-rated scale (assisted if needs be by the parent/guardian). This is the functionally based scale recommended by the IHS to assist in diagnosis, monitoring treatment clinically and in trials.
3. Pediatric Migraine Disability Assessment Scale (PedMIDAS), a standardised validated health-related quality of life scale for children with migraine [11].
4. Generic Child Quality of Life Measure (GCQ10), a standard validated non-health related quality of life scale for children [12].
5. EQ-5D [13], a standardised validated health outcome measure providing a simple descriptive profile and a single index value for health status. For parents/guardians and participants aged 12-16 years old.
6. Child-friendly EQ-5D [14], for younger participants; a child-friendly version of EQ-5D has been developed and will be used during the trial for participants aged 7-11 years.
7. UK Proxy EQ-5D [15]
8. Age, sex, height, weight, blood pressure and heart rate at baseline and at all clinical visits
9. Parent's/guardian's stage at leaving full-time education
10. Full post code (to derive a deprivation score for area of residence).
Outcome measurements
Primary outcome
The number of migraine attacks during weeks 11 to 14 from randomisation, as recorded in the participant diary, with an attack being defined as in the IHS International Classification of Headache Disorders [10].
6.2 the non-health-related Generic Child Quality of Life measure GCQ.
Health Economics
Sufficient data to allow cost effectiveness comparisons from NHS and family perspectives will be collected but not analysed at this stage. If one or other active treatment proves effective in comparison to placebo, then separate funding for a formal cost comparison and cost effectiveness study will be sought.
1. Parent's/guardian's time off work mainly related to child's migraine, for those in full time paid employment, or pro-rata for those in part-time paid employment, during weeks 11 to 14, as recorded in the participant diary.
2. Costs of Propranolol, Pizotifen, and placebo (study medications)
3. Cost of rescue medications
4. Number and length of emergency hospital admissions and Emergency Department attendances, and non-trial hospital and GP surgery appointments, related to migraine, with dates and place, will be recorded in the participant diaries so the cost of investigations can be determined later
5. Cost of "child half days off school" (4 above)
6. EQ-5D for parents/guardian and participants aged 12-16 years; UK Proxy EQ-5D for younger participants.
Adverse events
Safety and tolerability variables
4. Time until withdrawal from allocated trial medication, from randomisation to the end of week 14. Reasons for withdrawal will be recorded in the case report form (CRF).
Follow up procedures
Participants in all three groups will follow the same schedule of study visits up to the end of the trial, regardless of their compliance with the trial medication. Screening and baseline assessments will be performed during visits 1-3, with a 4 week "run-in" ending in randomisation at visit 2 and start of trial treatment at visit 3. Visits 4-6 will take place during the 2 week dose escalation period, 12 week maintenance and 2 week down-titration phases. Visits 7 and 8 will take place during a 3 month blinded off-treatment phase, and visit 9 during a 3 month unblinded follow-up, with visit 10 marking the end of the trial, at which point there will be an option to consent for a possible future follow-up study (Figure 1).
Data Monitoring Committee (DMC)
An independent Data Monitoring Committee (DMC) will evaluate the outcome and safety data in the context of the overall trial and the currently existing information about the study drugs. No formal interim analyses for efficacy are planned. For the DMC's "administrative" analyses, informal Haybittle-Peto type boundaries [17,18] will be adopted for efficacy to permit the DMC to break the blind if it wishes, with negligible effect on the properties of the final analysis.
Types of Analyses
The primary efficacy parameter will be the relative attack rate between each active treatment arm and its placebo arm, estimated by a Poisson regression model. The model will include terms to account for treatment arm, stratification variables and other covariates (baseline frequency of attacks, prior Triptan use and whether treatment naïve). The anticipated over-dispersion will be accounted for by estimation of robust standard errors.
The analysis will be performed on the full analysis set (following the intention-to-treat principle). For the primary efficacy analysis the statistical test will be two-sided at a nominal 5% significance level (see the sample size justification).
Secondary outcomes
All secondary endpoints will be analysed using analysis of covariance, logistic regression or Poisson regression as appropriate. Secondary analyses will include a repeated measures analysis of the attack frequency at different time points (11-14 weeks; 25-28 weeks; 37-40 weeks from randomisation). Headache frequency will also be analysed with respect to the mean mg/kg/day dose of the active treatments for each participant during this period.
Again, terms to account for treatment arm, stratification variables and covariates will be included in the model with allowance for over-dispersion of binary/count data. Time to event data will be handled by survival regression. All secondary endpoints will be analysed only in the full analysis set.
For the secondary analyses (excluding cost analyses), tests and confidence intervals will also be two-sided and performed at the 5% significance level. No adjustment for multiple testing will be performed.
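As a concrete illustration of the primary analysis described above (a Poisson regression with robust standard errors to absorb over-dispersion), a minimal Python sketch using statsmodels is given below; the file name and column names (attacks_wk11_14, arm, age_band, migraine_type, centre, baseline_attacks, prior_triptan, treatment_naive) are assumptions for the example, and the protocol does not prescribe this particular software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per participant; file and column names are illustrative assumptions.
df = pd.read_csv("propranolol_vs_placebo.csv")

model = smf.glm(
    "attacks_wk11_14 ~ arm + age_band + migraine_type + C(centre)"
    " + baseline_attacks + prior_triptan + treatment_naive",
    data=df,
    family=sm.families.Poisson(),
)
result = model.fit(cov_type="HC1")   # robust (sandwich) SEs absorb over-dispersion
print(result.summary())

# Treatment effects expressed as rate ratios with 95% confidence intervals
print(np.exp(result.params))
print(np.exp(result.conf_int()))
```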
Adverse events
All adverse events will be listed. Treatment-emergent AEs (defined as AEs which first develop or which worsen after the start of trial treatment) will be summarised by treatment, severity and relationship to treatment.
In addition the frequency (number of AEs and number of patients experiencing an AE) of treatment-emergent AEs will be summarised using the Medical Dictionary for Regulatory Activities [19] (MedDRA v 9.1 or later) by primary body system and preferred term.
Physical and neurological findings at baseline and any changes occurring during treatment will be listed.
Vital signs will be listed and summarised, together with changes from baseline.
Treatment compliance
Treatment compliance will be summarised by treatment group and time interval since randomisation.
Success of blinding
This will be assessed in participants, parents/guardian and investigators by a two-part question at the week 29-30 visit, just prior to un-blinding. The first part will ask whether the participant was believed to have received an active treatment or placebo. The second question will be asked only if the answer to the first question was "active", and will ask which active treatment was thought to have been given. Responses to both questions will include a "don't know" category, and the analysis will correct for guessing.
Procedures for missing data
A major goal of this study is to obtain virtually complete follow-up. The research nurses will ensure this as far as possible by home visits and close telephone/texting/email contact. No missing values are expected for the key baseline covariates because these data must be submitted prior to randomisation.
Missing covariate and response values will be handled by multiple imputation using chained equations, by means of the Stata add-in module ice [20]. In particular the imputation for missing response data during the final 4 weeks will incorporate information on earlier baseline response data and other variables thought likely to account for the missing data. A sensitivity analysis in which missing outcome data are assumed to be missing not at random will also be performed for the primary outcome and for response.
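The protocol specifies Stata's ice module for the chained-equations imputation. Purely as an illustration of the same idea in another environment, the sketch below uses scikit-learn's IterativeImputer to produce several completed datasets; the file name, the restriction to numeric columns, the choice of 20 imputations and the omission of the subsequent pooling of estimates by Rubin's rules are all simplifying assumptions, not part of the protocol.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("trial_data.csv")               # hypothetical file
numeric = df.select_dtypes(include=[np.number])  # impute numeric columns only

imputed_sets = []
for m in range(20):  # 20 completed datasets; analyses are run on each and pooled
    imputer = IterativeImputer(sample_posterior=True, max_iter=10, random_state=m)
    completed = pd.DataFrame(imputer.fit_transform(numeric),
                             columns=numeric.columns, index=numeric.index)
    imputed_sets.append(completed)
```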
Ethical approval
The protocol has been given full ethical approval by the Trent Research Ethics Committee (Reference: 09/H0405/19). It is fully compliant with the Helsinki Declaration.
Discussion
The protocol design posed some particular challenges. The funding brief was for a 3 arm trial of Propranolol, Pizotifen and placebo. For statistical and operational reasons we are technically undertaking two parallel 2-arm trials, but blinding will make it indistinguishable to participants and local investigators from a 3 arm trial. This way, if there is a problem with one trial the other can continue unhindered.
We undertook several consumer involvement exercises, with help from the Medicines for Children Research Network (MCRN), and it was clear that keeping participants and their parents/guardians blinded for up to 2 years after their participation was not acceptable and could present a significant barrier to recruitment and retention. Participants and their families were content to be blinded during the trial treatment phase, but wanted to know what the treatment had been as soon as possible afterwards, to inform future treatment decisions. We were keen to maintain the blinding for as long as possible to avoid investigators being biased in their approach to potential participants, e.g. by being biased by preliminary results against one of the active treatments. We also felt that the long-term assessments would be compromised if they were undertaken unblinded. As a compromise, and taking note of consumers' concerns, we decided to unblind at visit 8 (week 29/30); see Figure 1. Consumers at the focus group were supportive of this delay in unblinding.
We were keen to develop a protocol particularly suited to children and young people. In contrast to adult participants, they often express fear and avoidance of medical settings and procedures, although they are usually curious and supportive of clinical research in general and are often remarkably altruistic. With this in mind we minimised the invasive procedures commonly undertaken in trials: there are no blood tests and only routine outpatient procedures. Most contacts with the participants will be by home visits by research nurses, or by phone call, with hospital visits approximately every 3 months, so that approximately 1 additional visit over the year is anticipated. We needed a liquid formulation, as younger children would be unable to swallow tablets. However, we decided to offer participants the choice of liquid or tablets as our consumer involvement work suggested that many young people of secondary school age (teenagers) did not want to take liquid preparations, and that would prove a barrier to recruitment. Tablet and liquid formulation concentrations were chosen such that the numbers of tablets taken were the same for Propranolol, Pizotifen and placebo during the titration, maintenance and withdrawal phases, as were the volumes of the liquid formulations.
Being able to offer participants tablets also mitigates the problem posed by the maximum ceiling dose caused by the propylene glycol (see the "Interventions" section above) in the Propranolol liquid. We anticipate that most participants will take tablets because migraine is more common in secondary school aged young people than in younger children.
We considered blinding essential in this definitive study, and ideally wanted full blinding of participants, their parents/guardians, their paediatrician/local investigator and research nurse, and the trial statistician, as to which trial the participants were in (Propranolol or Pizotifen), as well as within each trial (active vs placebo). To avoid the sleepiness commonly seen with Pizotifen we decided that Pizotifen would only be given at bedtime (which is frequent routine clinical practice), and to keep the blind a placebo dose is given in the mornings. Families are therefore given a supply of morning doses as well as a supply of evening doses, and the importance of using the correct supply at the correct time will be emphasised by the research nurses.
Also, rather than over-encapsulation, we opted to manufacture identical-looking tablets of Propranolol, Pizotifen and placebo, as we feared children or their parents would deliberately unblind themselves out of curiosity, or do so by accident. If this were to happen in a significant proportion of participants the trial could be undermined. However, manufacturing tablets created other problems and increased costs and delays.
Propranolol and Pizotifen have been widely used for migraine in children for many years and so the exposure in this trial is relatively low risk.
We are also making a special effort to allow for effects of missing values of the primary response variable and covariates through use of multiple imputation.
Computed tomography in the evaluation of vascular rings and slings
Vascular rings are congenital abnormalities of the aortic arch-derived vascular and ligamentous structures, which encircle the trachea and oesophagus to varying degrees, resulting in respiratory or feeding difficulties in children. A sling is an abnormality of the pulmonary arterial system resulting in airway compression. Although several imaging examinations are available for the evaluation of these anomalies, computed tomography (CT) has become the preferred test because of rapid acquisitions, making it feasible to perform the study without sedation or general anaesthesia. Furthermore, CT provides excellent spatial and temporal resolution, a wide field of view, multiplanar reconstruction capabilities and simultaneous evaluation of the airway. In this review, the current role and technique of CT in the evaluation of vascular rings are discussed. A brief discussion of the embryology of the aorta and branch vessels is followed by discussion and illustration of common and some uncommon vascular rings along with critical information required by surgeons.
Teaching Points
• Computed tomography is valuable in the evaluation of vascular rings.
• Due to variable clinical and imaging presentations, diagnosis of vascular rings is often challenging.
• Laterality of the arch is critical in surgical management.
Introduction
Vascular rings are rare (<1 %) congenital abnormalities of the aortic arch-derived vascular or ligamentous structures that encircle the trachea and oesophagus to varying degrees [1]. A sling is an abnormality of pulmonary arterial system resulting in airway compression. Vascular rings and slings may be challenging to diagnose since the clinical presentation is variable and often nonspecific. They can be asymptomatic or present with respiratory symptoms and signs such as respiratory distress, stridor, seal-bark cough, apnoea, cyanosis or recurrent infections, typically in the first year of life. Feeding difficulties such as dysphagia, slow feeding and hyperextension of head while eating may present later in life since liquid diets are tolerated earlier and symptoms typically manifest when solid foods are initiated. Weight loss and failure to thrive may be seen. Some anomalies present later in life with feeding difficulties, during periods of haemodynamic stresses such as pregnancy or when there is ectasia of the vessels. On examination, cough, wheezing, stridor, tachypnoea, noisy breathing and subcostal retractions may be apparent [1]. Vascular rings may be associated with congenital abnormalities, particularly conotruncal anomalies (tetralogy, transposition, truncus) and syndromes including 22q11 deletion [2]. There are no gender, ethnic or geographic predilections [3]. Symptomatic vascular rings are surgically repaired in the first year of life to avoid complications such as hypoxic spells, sudden death, aneurysm, dissection and erosion of the aorta into the trachea or oesophagus.
Imaging plays an important role in the evaluation and management of vascular rings. In this review, the current role and technique of computed tomography (CT) in the evaluation of vascular rings are discussed. A review of the embryology of the aorta and branch vessels is followed by discussion and illustration of common and some uncommon vascular rings along with critical information required by surgeons.
Imaging modalities in the evaluation of vascular rings
Vascular rings can be identified by several imaging examinations and occasionally multiple imaging tests may be required to make a diagnosis [4]. A radiograph is often the initial imaging test and some abnormalities are found in almost all patients with vascular rings [5]. Arch laterality may be inferred from the anteroposterior (AP) radiograph by the pattern of indentation of the tracheal air column, which is from the right in a right arch, left in a left arch and bilateral in a double arch. On the lateral view, tracheal narrowing may be apparent. Pulmonary hyperinflation may occur with a pulmonary sling. Presence of a right arch with tracheal compression is highly suggestive of a ring. The location of the descending thoracic aorta can be inferred from the paraspinal line and azygooesophageal recess.
Barium oesophagography is often performed in children with feeding difficulties. The specific type of vascular ring can often be diagnosed based on the pattern of oesophageal indentation on the oesophagram in combination with the pattern of tracheal indentation on the radiograph [6]. Indentation of the posterior oesophagus occurs with a double arch, right arch with an aberrant left subclavian artery or left arch with an aberrant right subclavian artery. Anterior indentation of the oesophagus and posterior indentation of the trachea are caused by pulmonary slings. Bilateral indentation in the AP view is due to a double arch. Right indentation is caused by a right arch or a double arch with left atresia. Left indentation is caused by a double arch with a right arch atresia or circumflex aortic arch with right ductus. Oesophagography does not provide direct visualisation of the ring and hence has been replaced by cross-sectional imaging methods. However, a negative oesophagram excludes a ring and can also evaluate other causes of feeding difficulties such as tracheoesophageal fistula, oesophageal atresia, reflux and aspiration.
Tracheography with radioopaque contrast was used in the past to determine the tracheal anatomy. Anterior tracheal indentation is caused by a double aortic arch or aberrant brachiocephalic artery. Posterior tracheal indentation results from a pulmonary sling [7].
Bronchoscopy is performed in patients without a clear diagnosis and to exclude other causes of respiratory distress in children such as foreign bodies and subglottic stenosis. It is also useful in patients with a pulmonary sling for evaluation of complete tracheal rings and other congenital airway anomalies. Invasive angiography is no longer routinely performed because of the advent of CT and cardiovascular magnetic resonance imaging (MRI) [8].
Echocardiography has a limited role in the evaluation of vascular rings because of the small field of view, which is even more limited in patients with a poor thymic window or hyperinflated lungs, and the inability to detect rings without colour flow Doppler or associated airway abnormalities. However, it is useful in the detection of associated congenital abnormalities, which occur in up to 12-30 % of cases [9].
MRI is one of the two commonly used imaging modalities for diagnosing and characterising vascular rings. Advantages of MRI include a wide field of view, multiplanar imaging capabilities and adequate spatial resolution to detect vascular rings and associated airway anomalies, without the use of ionising radiation or iodinated contrast material.
Disadvantages of MRI include limited availability, the long acquisition times and the need for deep sedation or general anaesthesia in young children, which may involve high risks in patients with airway compromise. In addition, intubation limits tracheal evaluation [10]. Other disadvantages include the need for different imaging sequences for analysis of airway anomalies, lower spatial resolution than CT, the need for gadolinium-based contrast agents and higher cost.
CT has emerged as the preferred imaging examination for the diagnosis and characterisation of vascular rings. It is performed in symptomatic patients with a suspected vascular ring in other imaging modalities to delineate the anatomy and help surgical planning. Advantages of CT include the rapid acquisition time without the need for sedation or general anaesthesia; high spatial and temporal resolution; large field of view; isotropic voxels with multiplanar reconstruction capabilities; and simultaneous evaluation of the vasculature, airways and, to a lesser degree, the oesophagus. The 3D volume-rendered and shaded surface display images can be helpful for surgical planning and depicting the anomalous anatomy. Ionising radiation and the use of potentially nephrotoxic iodinated contrast material are the primary disadvantages.
CT Technique
CT angiography (CTA) for the evaluation of vascular rings can be performed without any sedation and with quiet breathing, since the latest CT scanners have fast gantry rotation times and high z-axis coverage, as a result of which artefacts are minimal. One example of the CTA protocol for vascular rings, using a 256-slice MDCT scanner (Brilliance ICT, Philips, Cleveland, OH, USA), is listed in Table 1. Prospective ECG triggering eliminates motion artefacts, making it useful in the evaluation of associated cardiac anomalies, although this technique is associated with a slightly higher radiation dose compared to non-ECG-gated scans. We use 1-2 ml/kg of iodinated contrast agent at 350 or 370 mg/mL concentrations. A power injector is preferred over hand injection to obtain homogeneous vascular opacification, at a rate of 1-3 ml/s.
Hand injection is used when venous access is challenging and only smaller catheters can be placed. Bolus tracking can be used to initiate image acquisition when contrast attenuation in the aortic arch reaches 100 Hounsfield units (HU) above the baseline, but this is associated with a higher radiation dose due to multiple tracker scans. An empirical delay of 12-15 s after contrast initiation for children less than 10 kg or 20-25 s for larger children can be used [11]. The radiation dose can be minimised using low tube voltage, automatic tube current modulation, non-ECG gating or prospective ECG triggering, higher pitch and iterative reconstruction algorithms.
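The weight-based choices quoted above can be collected into a small, purely illustrative helper; the split of the 1-3 ml/s injection-rate range by weight and the default 1.5 ml/kg dose are assumptions for the example, and actual parameters are set by the local protocol and the supervising radiologist.

```python
def contrast_plan(weight_kg: float, dose_ml_per_kg: float = 1.5,
                  concentration_mgI_per_ml: int = 350) -> dict:
    """Illustrative contrast and scan-delay plan following the values quoted
    in the text: 1-2 ml/kg of 350 or 370 mgI/ml contrast injected at 1-3 ml/s,
    with an empirical delay of 12-15 s (<10 kg) or 20-25 s (larger children)."""
    volume_ml = dose_ml_per_kg * weight_kg
    delay_s = (12, 15) if weight_kg < 10 else (20, 25)
    rate_ml_per_s = 1 if weight_kg < 10 else 2   # assumed split of the 1-3 ml/s range
    return {
        "volume_ml": round(volume_ml, 1),
        "concentration_mgI_per_ml": concentration_mgI_per_ml,
        "injection_rate_ml_per_s": rate_ml_per_s,
        "empirical_delay_s": delay_s,
        "bolus_tracking_threshold_HU": 100,  # above baseline, if tracking is used
    }

print(contrast_plan(8.0))
print(contrast_plan(25.0))
```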
Embryology
Although the development of vascular rings can be explained by Edwards' double ring hypothetical model [12], the development of the arch and branch vessels is more complex, and a thorough understanding of this process is imperative for recognising and accurately characterising the variations of vascular rings, which is critical for surgical planning [13]. Development of the great vessels begins at 20-22 days by vasculogenesis, in which networks of endothelial channels are formed by aggregation of angioblasts. These networks fuse to form the dorsal aortae and aortic arches. The lumen is established within these vessels when the small endothelial channels merge into larger channels [14]. Smooth muscle cells of the media are formed from neural crest cells in the arch and mesenchymal cells in the dorsal aorta [15][16][17].
Six aortic arches are formed in the fourth and fifth weeks of development and run in the centre of the pharyngeal arches connecting the paired ventral and dorsal aortae (Fig. 1a) [18]. The paired ventral aortae fuse to form a single ventral aorta, the aortic sac. Fusion of the dorsal aortae into a single dorsal aorta begins distally and progresses retrograde to the seventh somite. Proximally the aortic sac connects to the heart through the truncus arteriosus. Eventually, the truncus is divided into the ventral aorta and pulmonary trunk by the spiral septum.
The six arches develop in a craniocaudal fashion and also regress in the same fashion; hence all six arches are not seen at the same time (Fig. 1a, b). Also the fifth arch is rarely seen in humans, due to either nonexistence or very early regression [13]. The normal pattern of modelling and regression depends on neural crest cells, although the exact mechanism remains unknown [3,10,13,19].
- The first and second arch vessels form initially and then regress completely by 29 days. The first arch contributes to formation of the maxillary and external carotid arteries, while the second arch contributes to the formation of the hyoid and stapedial branches.
- The ventral portion of the right and left sixth arches develops into the proximal right and left pulmonary arteries respectively. The dorsal part of the right sixth arch involutes, but the dorsal part of the left sixth arch persists to form the ductus arteriosus, connecting the distal main pulmonary artery or the left pulmonary artery to the junction of the left fourth arch and the dorsal aorta. The main pulmonary artery develops from the truncus after separation of the conotruncal septum as described above.
- The seventh intersegmental branches of the dorsal aorta contribute to formation of the subclavian arteries. On the left, the dorsal aorta persists for its entire length, but it remodels itself such that the left seventh intersegmental branch forms the left subclavian artery. On the right, the dorsal aorta involutes between its junction with the left dorsal aorta and the origin of the right seventh intersegmental artery. The remaining dorsal aorta between the right fourth arch and seventh intersegmental artery remodels such that the right seventh intersegmental artery forms the distal segment of the right subclavian artery. The right dorsal aorta contributes to the mid segment of the right subclavian artery and as described above the right fourth arch forms the proximal segment of the right subclavian artery.
- The ventral aorta forms right and left horns, with the right horn forming the right brachiocephalic artery and the left horn forming the proximal ascending portion of the arch. The brachiocephalic and common carotid arteries elongate because of folding of the embryo.
- Vertebral arteries are formed by anastomosis between seven cervical intersegmental arteries and these lose connection with the dorsal aorta, except at the seventh level, where they form the subclavian arteries.
Fig. 1 caption (colour coding): The ventral aorta (purple) forms the ascending aorta and the right brachiocephalic trunk while the dorsal aorta (green) forms the descending thoracic aorta. The left subclavian artery is derived from the left seventh intersegmental artery (brown), while the right subclavian artery is derived proximally from the right fourth arch (red), in the mid portion from the right dorsal aorta (green) and the distal portion from the right seventh intersegmental artery (brown). The aortic root and main pulmonary artery are derived from division of the truncus arteriosus (pink). The ductus arteriosus (blue) is formed from the dorsal portion of the left sixth arch, while the right and left pulmonary arteries are derived from ventral portions of the right and left sixth arches (yellow) respectively. Common carotid arteries are derived from the third arches (orange), while internal carotid arteries are formed from the third arches (orange) and the dorsal aorta (green). External carotid arteries are derived from the third arch branches (black).
Abnormalities in this sequential pattern of development and regression result in vascular rings, most of which are caused by incomplete regression of the distal left fourth arch, an aberrant retroesophageal subclavian artery, or a ductus arteriosus originating from the descending thoracic aorta contralateral to the aortic arch.
Approach to arch anomalies
The normal pattern is a left aortic arch, with a left descending thoracic aorta and a left ductus or ligamentum extending from the proximal descending thoracic aorta to the left pulmonary artery. The arch branches are the right brachiocephalic artery (dividing into the right subclavian and right common carotid), left common carotid artery and left subclavian artery. The different arch types are listed in Table 2, based on Edwards' hypothetical model [20]. The several subtypes of vascular rings/slings are listed in Table 3. Double aortic arch and right arch with aberrant left subclavian artery account for almost 85-95 % of all symptomatic vascular rings [3].
Evaluation of vascular rings begins with a clear understanding of the definitions of the structures. The aortic arch is the vessel that connects the ascending and descending thoracic aorta and gives rise to arteries supplying the upper extremities and head and neck. Laterality of the arch is defined as the side of the trachea on which the arch crosses either of the main bronchi. Thus a right arch crosses the right main bronchus, a left arch crosses the left main bronchus and a double aortic arch crosses both main bronchi. Sometimes this relationship is not clear, and other clues can be used to establish arch laterality. One rule is that the first arch branch vessel that contains a common carotid artery is contralateral to the aortic arch. For example, if the first branch is the right brachiocephalic artery, which gives rise to the right common carotid artery, the arch is then on the left. However, sometimes the carotid arteries arise close to each other and determining which vessel is the first is a challenge. An exception to the rule is a retroesophageal brachiocephalic artery, which may be the last vessel from the arch, as a result of which the first arch vessel, the right carotid artery, is ipsilateral to the arch. Another rule is that the retroesophageal or aberrant subclavian artery is always contralateral to the arch. However, caution should be exercised when the entire aorta courses posterior to the oesophagus [21]. A vascular ring is defined by encirclement of the trachea and oesophagus by the aorta, arch branch vessels, pulmonary artery, ductus arteriosus or ligamentum arteriosum. A ring is complete or true when there is encirclement on all sides, while it is incomplete or partial when at least one side is not involved. The ring may be caused by patent vascular structures, in which case the diagnosis is usually straightforward. However, identifying the ring can be challenging when atretic vessels or the ligamentum arteriosum are involved. In such cases, clues suggesting the presence of a vascular ring are the "three Ds": (1) a dimple opposite the arch (ductus); (2) a diverticulum opposite to the arch, which is a fusiform dilation of the ventromedial portion of proximal descending aorta resulting from a remnant to distal segment of embryonic arch [21]; (3) a descending (proximal) aorta contralateral to the arch. Focal narrowing, asymmetry and distortion of the trachea are also indicators. Tracheal compression may occasionally occur without complete encirclement, such as with brachiocephalic artery compression, pulmonary sling, the aorta wrapping around the trachea or orientation of the arch with compression of the main bronchus and right pulmonary artery.
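The two rules of thumb for arch laterality described above can be encoded in a short, deliberately simplistic sketch; it is not a diagnostic tool, the function names are invented for the example, and the exceptions noted in the text (a retroesophageal brachiocephalic artery, a fully retroesophageal aorta) are not handled.

```python
def arch_side_from_branching(first_branch: str) -> str:
    """Rule 1: the first arch branch that contains a common carotid artery is
    contralateral to the arch (exceptions such as a retroesophageal
    brachiocephalic artery are not handled here)."""
    first_branch = first_branch.lower()
    if "brachiocephalic" in first_branch or "common carotid" in first_branch:
        return "left arch" if "right" in first_branch else "right arch"
    raise ValueError("first branch does not contain a common carotid; rule not applicable")


def arch_side_from_aberrant_subclavian(aberrant_side: str) -> str:
    """Rule 2: a retroesophageal (aberrant) subclavian artery is always
    contralateral to the arch (caution if the entire aorta is retroesophageal)."""
    return "left arch" if aberrant_side.lower() == "right" else "right arch"


print(arch_side_from_branching("right brachiocephalic artery"))   # -> left arch
print(arch_side_from_aberrant_subclavian("left"))                 # -> right arch
```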
Double aortic arch
Double aortic arch is the most common symptomatic vascular ring, accounting for 50-60 % of vascular rings [22]. Developmentally it is caused by persistence of both the right and left fourth aortic arches and the right and left dorsal aortae [23]. Double aortic arch usually presents between birth and 3 months, earlier than other symptomatic rings and with more severe symptoms. Occasionally, it can be an incidental finding in older children and adults. There is no association with any major cardiac anomaly. Very rarely it may be associated with tetralogy of Fallot and transposition, but the incidence is not higher than in the general population [23].
Two arches originate from the ascending aorta, cross on either side of the trachea and oesophagus and join the descending thoracic aorta. The right arch gives rise to the right subclavian and right common carotid arteries, while the left arch gives rise to the left subclavian and left common carotid arteries. The proximal bifurcation of the arches is superior to the level of the distal confluence of arches (Fig. 2a). Usually one arch is dominant and the other arch is smaller or may be atretic (in up to 25-34 % of cases) [3]. The atretic segment is more common in the posterior and distal end of the nondominant arch. The right arch is dominant in 55-70 % of double arches [22,24], located more posterior and cephalad than the left arch [25]. Less commonly, the left arch is dominant (20-35 %). In 5-10 % of patients, the arches are equal in size. The descending thoracic aorta is located on the left, but may occasionally be seen on the right or midline. The location of the descending aorta determines the anteroposterior relationship of the arches. If the descending aorta is on the left, as seen in 80 % of patients, the right arch is more posterior than the left arch and crosses to the left to reach descending aorta, but if the descending aorta is on the right, the right arch is located more anterior than the left and the left arch crosses posteriorly to reach the right descending aorta [26]. A ductus arteriosus is usually present when there is no associated intracardiac abnormality. The ductus is usually located on the left, but may be located on the right or very rarely be bilateral [21].
On radiographs, double arches appear as bilateral aortic knobs with midline compression of the trachea. Pulmonary overinflation can result from air trapping. Lateral radiographs show focal tracheal narrowing and anterior bowing. Oesophagography shows bilateral oesophageal indentation on the AP projection and posterior indentation on the lateral projection. On axial CT images, the characteristic appearance of the double arch is the "four-artery sign" (Fig. 2b), referring to a symmetrical trapezoidal or square appearance of the four arch branch vessels at the level of the thoracic inlet. Two arches are seen originating from the ascending aorta and extending on either side of the trachea and oesophagus to form a complete ring. The descending thoracic aorta is located on the left.
Determination of the arch dominance has surgical implications since thoracotomy is performed on the non-dominant side [7]. A single axial image is not always useful in determining which arch is dominant since the ring may not be apparent within a single transverse plane (Fig. 2c). Coronal reformations at the level of the trachea are more useful as they simultaneously depict the arches (Fig. 2d, e). However, the narrowest portion of the arch may not be located at this level. The 3D volume-rendered or surface shaded displays, especially with a left or right posterior oblique view with cranial angulation, are valuable in depicting the ring and sizes of the arches (Fig. 2f, g). In patients with coarctation or hypoplasia of the arch, curved reconstruction along the long axis of the aorta is a good technique. Even in apparently equal-sized aortic arches, one of the arches gets smaller in the posterior aspect near the connection with the descending aorta. If both of the arches are of similar size, the arch with higher flow is the dominant arch. CT can also show the presence and location of the ductus arteriosus, which has to be ligated and divided along with the nondominant arch. Associated airway and oesophageal compression can also be shown (Fig. 2h).
Double aortic arch with atretic left arch can occur in two settings. First, an atretic segment between the left common carotid and the left subclavian arteries can be confused with a right arch with aberrant left subclavian artery and Kommerell diverticulum. Second, an atretic segment distal to the left subclavian artery can be confused with a right arch with mirror image branching and intact retroesophageal left ligamentum arteriosum (Fig. 3a, b and c). Both forms may be indistinguishable from right aortic arches with a left descending aorta. A clue to the presence of an atretic arch and for distinguishing it from a ligamentum is the tethering and distortion of the left carotid or subclavian artery posteriorly from the aortic arch caused by traction from the atretic arch. Other clues to the presence of an atretic segment include the arterial branching pattern, particularly the posterior course of the proximal head and neck vessels, aortic arch laterality, presence and orientation of a ductal diverticulum and dimple, and focal narrowing of the airway (Fig. 3d) [12,21]. Even in the absence of these clues, an atretic segment should be suspected with a right aortic arch and retroesophageal diverticulum or right arch with left descending aorta and the surgeon should be alerted. Double aortic arch with an atretic right arch is very rare and has been shown between the right common carotid and right subclavian artery [24].
Fig. 2 caption (continued): f 3D volume-rendered image exquisitely demonstrates the double aortic arch, with a dominant right arch (arrow) and a smaller left arch (arrowhead), both of which join the descending thoracic aorta (*), which is located on the left. g 3D volume-rendered image in another patient (same patient as e) demonstrates the double aortic arch, with similar sizes of the right arch (arrow) and the left arch (arrowhead), both of which join the descending thoracic aorta (*), which is located on the left. h Coronal volume-rendered image of the airway shows severe narrowing of the airway (arrow) by the double aortic arch.
A dominant right aortic arch is repaired using a left thoracotomy, while a dominant left arch is repaired using a right thoracotomy with a muscle sparing approach in the fourth intercostal space. The smaller arch is clamped and then divided near its posterior insertion to the descending thoracic aorta and then the stumps are oversewn. The ligamentum arteriosum is also ligated and divided and any adhesions are released [7] (Fig. 4).
Right arch-based vascular rings
A right aortic arch occurs in 0.1 % of the population [27]. It is caused by persistence of the right fourth arch and right dorsal aorta and involution of the left fourth arch and dorsal aorta. A right arch begins to the right of the midline and begins descending on the right. At the level of the diaphragm, the descending aorta is on the left, regardless of the laterality of the arch. This transition from right to left is gradual, except for a circumflex aorta (see below). The right main bronchus may be compressed by a sagittally oriented ascending and descending aorta [21]. The typical pattern consists of a right ductus or ligamentum between the proximal descending aorta and the right pulmonary artery, and it is not associated with major intracardiac anomalies. In contrast, the presence of a left ductus or absence of the ductus is associated with major intracardiac anomalies. Tetralogy of Fallot is seen in 30 % of patients [28]. The most common branching patterns of the right arch are the mirror image branching pattern (84 % of cases) and the aberrant left subclavian artery (14 % of cases) [29]. A mirror image branching pattern is associated with cardiac anomalies in 90 % of cases, with tetralogy the commonest abnormality [28].
Right aortic arch with retroesophageal left subclavian artery, Kommerell's diverticulum and left ductus arteriosus
A right arch with an aberrant left subclavian artery is caused by a persistent right fourth arch and regression of the left fourth arch in between the left common carotid and left subclavian arteries. The branching pattern is the left common carotid, right common carotid and right subclavian arteries followed by an aberrant left subclavian artery, which originates as the last branch from the proximal descending aorta and has a retroesophageal course to reach the left subclavian region (Fig. 5a). In 10 % of cases with a right arch and aberrant left subclavian artery, there is a right ductus, in which case, there is no ring or associated intracardiac defect [30]. On CTA, the calibre of the aberrant left subclavian artery is the same from the origin to termination. Sometimes there may be minimal tapering at the base, but it never extends to the level of the trachea. Indentation on the posterior aspect of the oesophagus may be seen.
However, in 90 % of right arches with aberrant left subclavian arteries, there is a left ductus [30]. A right aortic arch with an aberrant left subclavian artery and Kommerell diverticulum is the second most common cause of a symptomatic vascular ring, accounting for 30 % of these cases [31]. The aberrant left subclavian artery originates in the proximal descending aorta from a bulbous diverticulum and courses behind the oesophagus to reach the left. The diverticulum is an embryological remnant of the left fourth arch that persists because of foetal ductal flow to the descending thoracic aorta through the proximal subclavian artery. The ring becomes complete with an intact left ligamentum arteriosum or ductus, which connects the diverticulum to the left pulmonary artery (Fig. 5a). Patients usually present at between 3 and 6 months of age, but most patients are asymptomatic because the ring is relatively loose.
CTA shows the presence of the right aortic arch, the apex of which is located to the right of the trachea. The first branch from the arch is the left common carotid, followed by right common carotid and then right subclavian arteries. The aberrant left subclavian artery originates from the proximal descending thoracic aorta from a Kommerell diverticulum and then has a retroesophageal course to the left (Fig. 5b). There is an abrupt calibre change of the diverticulum, which is always on the side of the aberrant subclavian artery and opposite the side of the arch (i.e. on the left side beyond the level of the trachea), best seen in the coronal plane (Fig. 5c, d). The ductus arteriosus arises from the base of the diverticulum, extending from the descending thoracic aorta to the left pulmonary artery. Although a ligamentum is usually not visualised, the presence of a diverticulum indicates the presence of a vascular ring. An atretic left arch should be raised as a differential diagnosis in this situation. Tracheal compression is caused by the left ductus (Fig. 5e). Occasionally the Kommerell diverticulum enlarges and can independently compress the trachea and oesophagus. This aneurysm may rupture [12].
A right aortic arch with a retroesophageal left subclavian artery, Kommerell diverticulum and left ductus arteriosus is repaired using a muscle-sparing left thoracotomy. The ligamentum arteriosum is ligated or clamped, then divided, and the stumps are oversewn. If the patient also has a large Kommerell diverticulum (1.5 to 2.0 times the normal size), it is resected and the left subclavian artery is then anastomosed to the left common carotid artery so that it does not cause any compression [7].
Right aortic arch with mirror image branching and an intact retroesophageal left ligamentum arteriosum
A right aortic arch with a mirror image branching pattern is caused by a persistent right fourth arch and partial regression of the fourth left arch between the left subclavian artery and the dorsal aorta. The arch branching pattern is: left brachiocephalic artery (dividing into the left common carotid and left subclavian arteries), right common carotid artery and then right subclavian artery (Fig. 6a). The descending aorta is on the right. The ductus is located on the right in 25 % of these cases, which is not associated with a ring or intracardiac defect [12]. Occasionally the ductus is bilateral, which may or may not be associated with a cardiac defect. An absent ductus is associated with a major intracardiac anomaly. The ductus is located on the left in 75 % of cases, which is almost always associated with a major intracardiac anomaly. The left ductus more commonly extends from the left brachiocephalic artery to the left pulmonary artery, and in such a situation, a ring is not formed [12,28]. However, a complete ring is formed in a right arch with a mirror imaging pattern when there is a left ductus extending from the proximal descending thoracic aorta and it has a retroesophageal course to the left and then inferiorly to connect to the left pulmonary artery. There is a 90 % association with intracardiac defects, of which tetralogy is the most common [28].
On CTA, there is a right arch with mirror image branching pattern. If there is a patent ductus arteriosus, it can be seen originating from the proximal right descending thoracic aorta or left brachiocephalic artery, crossing behind the oesophagus to connect to the left pulmonary artery (Fig. 6b, c). If there is only a ligamentum arteriosum, the only clue to the presence of this anomaly is a small ductus dimple in the right descending aorta that points to the left, reflecting the residual aortic ductal ampulla. The differential diagnosis for this abnormality is a double aortic arch with an atretic left arch, although in such a scenario it is more common to see a left descending aorta in a double aortic arch.
Fig. 6 caption: Volume-rendered 3D image in a patient with a right aortic arch (A) and mirror image branching pattern show a patent ductus arteriosus (white arrow) extending from the left brachiocephalic (L BC) artery to the left pulmonary artery (L PA) forming a ring.
A right arch with retroesophageal left brachiocephalic artery is a rare anomaly, which is not a ring. In this, the branching pattern is a right carotid artery, followed by a right subclavian artery and then an anomalous left brachiocephalic artery arising from an aortic diverticulum, which gives rise to the left common carotid and left subclavian arteries [32]. This is an exception to the rule that the first branch vessel containing the carotid artery is opposite to the side of the arch. This may create an indentation on the posterior aspect of the oesophagus, but is not a complete ring, since there is no structure on the left to complete the ring [21].
Circumflex retroesophageal right aortic arch
A circumflex retroesophageal aortic arch is the third most common type of vascular ring and occurs when a portion of the aortic arch (either right or left) extends behind the oesophagus while the ascending and descending thoracic aortic segments are located on either side of the spine [33]. In patients with a right arch, the arch runs to the right of the trachea, after which it abruptly courses behind the oesophagus (above the level of the carina) to reach the left where it continues as the descending thoracic aorta (Fig. 7). The branching pattern can be that of an anomalous left subclavian artery or a mirror image branching pattern. The ductus or ligamentum extends from the left descending aorta to the left pulmonary artery, completing the ring. The differential diagnosis is a double aortic arch with an atretic left arch. A common error is to confuse this with a right aortic arch with a left descending thoracic aorta. However, in this variant the crossing is very gradual and happens close to the level of the diaphragm, unlike the abrupt, supracarinal crossing of the circumflex arch.
A circumflex aorta is treated with a median sternotomy and cardiopulmonary bypass. The retroesophageal arch is mobilised, divided and brought anterior to the airway and anastomosed end to side with the lateral portion of the ascending aorta. Most of these patients have division of the left ligamentum through a left thoracotomy [7].
Left arch variants
Left aortic arch with aberrant right subclavian artery and right ligamentum arteriosum and right descending aorta
A left aortic arch with an aberrant right subclavian artery is the most common vascular abnormality of the aortic arch, occurring in 0.5 % of the population [20]. This anomaly is caused by regression of the right arch between the right subclavian and right common carotid arteries. The right subclavian artery then inserts into the proximal descending thoracic aorta. The arch vessel branching pattern is the right common carotid, left common carotid, left subclavian and right subclavian arteries (Fig. 8a). This is usually associated with a left ductus and hence a ring is not produced since there is vasculature on only three sides of the trachea and oesophagus. Although this does not produce a vascular ring, sometimes there may be minimal posterior compression of the oesophagus, causing dysphagia lusoria, which is more common in adults than children, presenting in the 4th or 5th decade. Symptoms may however result from dilation, calcification and hardening of the aberrant subclavian artery. Oesophageal manometry may be required to decide whether compression is the cause of symptoms in these patients.
However, very rarely a ring may be produced in a left arch with an aberrant right subclavian artery in the presence of a right ductus and a circumflex right descending thoracic aorta [12,24]. The right ligamentum arteriosum or ductus extends from the aberrant right subclavian artery to the right pulmonary artery, forming a vascular ring [12]. On CTA there is a left aortic arch. The first branch is the right common carotid artery, followed by the left common carotid and left subclavian arteries. The right subclavian artery originates as the last branch from the proximal descending thoracic aorta and then courses behind the oesophagus to reach the right (Fig. 8b, c and d). The origin of the aberrant vessel may be dilated, forming a diverticulum of Kommerell. A ring is present if there is an associated right ductus/ligamentum. Aneurysms of the aberrant vessel may also be seen (Fig. 9a, b).
Circumflex left aortic arch
In this anomaly, the arch courses to the left of the trachea, then extends behind the oesophagus to reach the right and continue as the right descending aorta. A ring is formed when a right ductus or ligamentum connects the descending aorta to the right pulmonary artery. The branching pattern is that of a left arch with an aberrant right subclavian artery. Occasionally a three-vessel branching pattern is seen in which there is no ring.
Brachiocephalic artery compression
In this anomaly, the brachiocephalic artery has an anomalous course, originating more posterior and to the left from the aortic arch, resulting in anterior tracheal compression as the artery extends to the right superiorly and posteriorly to reach the right subclavian region (Fig. 10a) [34]. On CTA, the origin of the anomalous brachiocephalic artery (Fig. 10b) as well as anterior indentation of the trachea (Fig. 10c) is demonstrated. Bronchoscopy shows a pulsatile mass compressing the anterior trachea from the left to right at the level of the vocal cords, which is much higher than the other vascular rings. Compression becomes significant when 70-80 % of the tracheal lumen is compromised [7]. During bronchoscopy, a diminished right pulse can be demonstrated if the anterior trachea is compressed by the scope. This anomaly is surgically treated through a right anterolateral thoracotomy (third interspace), lifting the brachiocephalic artery away from the anterior tracheal wall and suspending it to the posterior aspect of the sternum. An alternative technique is to perform a median sternotomy, divide the brachiocephalic artery and reimplant it at a more anterior and right position in the ascending aorta [7].
Fig. 8 a Illustration showing the appearances of a left aortic arch (arrow) with an aberrant right subclavian (RSC) artery. The first branch from the arch is the right common carotid (RCC) artery, followed by the left common carotid (LCC) artery and then the left subclavian (LSC) artery. The right subclavian (RSC) artery has an aberrant origin as the last branch originating from the proximal descending thoracic aorta and has a retroesophageal (E) course to reach the right side. b Axial CT scan shows an aberrant right subclavian artery (arrow) originating from the proximal descending thoracic aorta and coursing behind the oesophagus to reach the right; posterior indentation and compression of the oesophagus are seen. c Coronal MIP image demonstrates the aberrant right subclavian artery (arrow) originating from the proximal descending aorta and reaching the right; the normal left subclavian artery (arrowhead) is also seen. d Coronal reconstructed 3D volume-rendered image shows the aberrant right subclavian artery (arrow) originating from the proximal descending thoracic aorta (*) and coursing to the right behind the oesophagus, which is compressed.
Pulmonary artery sling
Pulmonary artery sling is characterised by the left pulmonary artery arising from the right pulmonary artery and then passing over the right main bronchus and between the trachea and oesophagus to reach the left hilum (Fig. 11a). In this process, it forms a sling that compresses the trachea and oesophagus. This anomaly is caused by failure of the development or obliteration of the left sixth aortic arch when the developing left lung bud captures its vascular supply from the right sixth arch, caudal to the developing tracheobronchial tree [35]. A pulmonary sling may also be associated with several airway anomalies including compression of the trachea and right main bronchus by the anomalous artery, complete tracheal rings (ring-sling complex) [36] and tracheobronchomalacia. A pulmonary sling is classified according to the associated airway anomalies [37].
Type I demonstrates compression of the trachea and right main bronchus by the anomalous pulmonary artery, but the airway branching is normal. Type I is associated with lower morbidity and mortality [37]. Type IA has no associated airway abnormality. Type IB is associated with the tracheal bronchus, tracheobronchomalacia and unilateral pulmonary hyperinflation.
In type II the anomalous pulmonary artery is more caudal and associated with long segment tracheobronchial stenosis. Associated anomalies include tracheobronchial branching abnormalities, left intermediate and right bridging bronchi, low inverted T-shaped carina, complete tracheal rings and bilateral pulmonary hyperinflation [37].
CTA shows the anomalous origin of the left pulmonary artery from the right pulmonary artery, after which it courses between the trachea and oesophagus to reach the left (Fig. 11b). The differential diagnosis for a pulmonary sling is right pulmonary agenesis, where the right pulmonary artery is absent and the left pulmonary artery originates from the main pulmonary artery. Airway anomalies are also demonstrated on CT. With complete tracheal rings, the posterior membrane is absent and there are circumferential tracheal cartilages. The airway appears round and narrow, as small as 2-3 mm, with a lack of change of caliber between inspiration and expiration. With a bridging bronchus anomaly, the right middle and lower lobes are supplied by a bronchus that originates from the left main bronchus and bridges the mediastinum to reach the right. The right upper lobe is supplied by a right main bronchus originating from the trachea at the level of the carina (Fig. 11c) [38]. Other variants of this branching exist [39]. With tracheobronchomalacia, there is severe (>50 %) narrowing of the airway in expiration.
Fig. 9 Aneurysm of an aberrant right subclavian artery. a Axial CT scan shows a large aneurysm with partial thrombosis (arrow) at the origin of an aberrant right subclavian artery in a patient with a left aortic arch. b Coronal CT reformatted image in the same patient shows the partially thrombosed aneurysm (arrow) at the origin of an aberrant right subclavian artery.
Fig. 10 Anomalous innominate artery. a Schematic illustration shows an anomalous course of an innominate artery originating more posterior and to the left than normal, coursing to the right anterior to the trachea. b Axial CT image shows a dilated innominate artery (arrow) compressing the anterior trachea. c Sagittal reformatted MinIP (minimum intensity projection) image shows indentation of the anterior aspect of the trachea (arrowhead) by a dilated anomalous innominate artery (black arrow).
A pulmonary sling is managed by a median sternotomy and cardiopulmonary bypass. The anomalous left pulmonary artery is ligated, divided and then reimplanted into the main pulmonary artery anterior to the trachea. If there are associated complete tracheal rings, these can be repaired using tracheal resection or end-to-end anastomosis, tracheal autograft, pericardial patch tracheoplasty or slide tracheoplasty [7].
Uncommon rings
There are several uncommon types of vascular rings. A right cervical aortic arch with an aberrant left subclavian artery, left descending aorta and left ductus/ligamentum is a rare anomaly with the aortic arch developing from the third instead of the fourth arch [40]. Similarly, the left cervical aortic arch with an aberrant right subclavian artery, right descending thoracic aorta and right ductus or ligamentum can also produce a ring [12,40]. A left aortic arch, aberrant right subclavian artery and common origin of the carotid arteries anterior to the trachea can cause tracheal and oesophageal compression between the two large vessels [12]. The presence of both the ascending and descending thoracic aorta in the same anteroposterior plane is an unusual cause of tracheal compression [21]. A large left cervical arch may cause anterior tracheal compression, even without a right ductus, which is treated with a right thoracotomy [21,41]. A ductus arteriosus sling is a rare anomaly in which the ductus arteriosus extends from the right pulmonary artery to the proximal descending thoracic aorta between the trachea and oesophagus, with an associated aberrant right subclavian artery that compresses the trachea and right bronchus (similar to a pulmonary sling) [42]. Other rare vascular rings include a right aortic arch with a right ligamentum and absent left pulmonary artery [43], situs inversus with a left arch, aberrant right subclavian artery and right ligamentum [7] and compression by the pulmonary artery following arterial switch repair [41]. A persistent fifth aortic arch is an extremely rare anomaly, which however does not cause a vascular ring. Arches derived from the fourth and fifth foetal arches are present to varying degrees [44].
Conclusion
Vascular rings are complex and diagnosis is often challenging because of variable and non-specific clinical presentations. CT angiography plays an important role in the identification and definition of the anatomy of these complex anomalies, thus providing a roadmap to surgeons. Careful analysis of the arch laterality, branching pattern and position of the ductus or ligamentum is essential for accurate characterisation. Associated airway anomalies are also assessed using CT.
Conflict of interest
The authors do not have any conflict of interest or financial disclosure.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Fig. 11 Pulmonary sling. a Illustration showing an anomalous left pulmonary artery (L PA), which originates from the right pulmonary artery (R PA) and then crosses between the trachea (T) and oesophagus (E) to reach the left. b Axial CT scan shows an anomalous left pulmonary artery (L PA), which originates from the right pulmonary artery (R PA) and then crosses between the trachea and oesophagus (white arrow) to reach the left. c Coronal MinIP image of the airway shows a bridging bronchus (BB) that originates from the left main bronchus (LMB) and crosses the mediastinum to reach the right side, where it supplies the right middle and lower lobes. The right upper lobe (RUL) bronchus arises from the trachea.
|
v3-fos-license
|
2018-04-03T04:18:50.151Z
|
2011-07-28T00:00:00.000
|
38534960
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2011/623657.pdf",
"pdf_hash": "da7636b4d15853c90629a685353f42e165095d43",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46115",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "fc01cbebeedf322d5d298c61ac4d6282c5727391",
"year": 2011
}
|
pes2o/s2orc
|
Abnormalities of Penile Curvature: Chordee and Penile Torsion
Congenital chordee and penile torsion are commonly observed in the presence of hypospadias, but can also be seen in boys with the meatus in its orthotopic position. Varying degrees of penile curvature are observed in 4–10% of males in the absence of hypospadias. Penile torsion can be observed at birth or in older boys who were circumcised at birth. Surgical management of congenital curvature without hypospadias can present a challenge to the pediatric urologist. The most widely used surgical techniques include penile degloving and dorsal plication. This paper will review the current theories for the etiology of penile curvature, discuss the spectrum of severity of congenital chordee and penile torsion, and present varying surgical techniques for the correction of penile curvature in the absence of hypospadias.
INTRODUCTION
Ideally, a penis should be straight; i.e., the corpora straight, the skin sufficiently lax to avert traction, and the glans with no element of torsion. Penile curvature, including chordee and penile torsion, can be found in boys with and without hypospadias. While the causes of chordee are evident in boys with hypospadias, its precise etiology, as well as that of torsion, in the absence of hypospadias, remain incompletely understood. Recent studies have furthered our understanding of the possible etiology and previously proposed explanations have been revised, which largely resulted in changes in surgical techniques. The current surgical strategies are largely successful in correcting the penis with abnormal curvature.
EPIDEMIOLOGY
Penile curvature is a spectrum of disease most commonly associated with hypospadias, but is not uncommon in boys with an orthotopic meatus. The prevalence of hypospadias in the general population is approximately 1 in 300 [1] and as many as one-fourth will have chordee [1]. In the U.S., the nationwide Birth Defects Monitoring Program (BDMP) reported a doubling in the rates of hypospadias since the 1970s to about 4 per 1000 in 1993 [2]. Given that chordee occurs in the absence of hypospadias and that some boys are not diagnosed until later in life when the foreskin is retracted, the true incidence of chordee is substantially higher, 4-10% of male births [3,4]. Penile torsion is another curvature malformation that can be congenital and associated with hypospadias, or can be acquired after circumcision [5,6]. It results in a rotational defect of the penile shaft, most commonly in the counterclockwise direction. Isolated penile torsion has also been described [7,8], but reports are sparse, with the largest series reporting 46 cases over a 6-year period [5]. The true incidence of the deformity is unknown.
EMBRYOLOGY OF PENILE DEVELOPMENT
The development of the penis and the urethra takes place early in fetal development. The bilayered cloacal membrane (ectoderm and endoderm) becomes flanked by cloacal folds early in the 5th week that meet anteriorly to form the genital tubercle. The cloaca then divides into an anterior urogenital sinus and a posterior anorectal canal. The mesenchymal folds flanking the urogenital sinus become urogenital folds. The corporal bodies, connective tissue, and dermis of the penis are derived from mesodermal cells. The elongating phallus is covered with skin derived from ectoderm. The molecular mechanisms that regulate this mesenchymal differentiation likely depend on epithelial-mesenchymal interaction. Human fetal studies reveal that ventral curvature is a normal state of penile development at the 16th week of gestation that resolves during the 20th-25th week [9].
Recent studies have elucidated our understanding of the penile neurovascular anatomy. The neural supply originates under the pubic rami superior and lateral to the urethra as two well-defined bundles that travel towards the glans spreading around the corpora cavernosa to the junction with the spongiosum. This leaves the dorsal midline along the entire shaft devoid of neural tissue; this is also where the thickness/strength of the tunica albuginea is also the greatest [10].
Congenital Chordee
The earliest documentation describing penile curvature dates back to Galen (130-199 AD) [11]. Mettauer first defined its etiology in 1842 as "skin tethering implicating subcutaneous tissue for cause of penile curvature" [12]. Since chordee was first described in boys with hypospadias, the leading theories included: (1) abnormal development of the urethral plate, (2) presence of abnormal fibrotic mesenchymal tissue at the urethral meatus, and (3) ventral-dorsal corporal disproportion [9]. Recent studies show that ventral curvature is a normal stage of embryogenesis and, therefore, chordee without hypospadias may represent arrested penile development.
Young first described chordee in the absence of hypospadias and proposed that a congenitally short urethra was responsible [13]. In opposing this theory, Devine and Horton believed that various deficiencies of penile fascial layers contributed to penile curvature without hypospadias and proposed a classification system [14]. Kramer et al. added a fourth category of corporal disproportion in the absence of hypospadias [15]. In rare cases, a congenitally short urethra can be the etiology of ventral curvature [16,17]. Recently, a large series of congenital chordee without hypospadias was evaluated and revealed that the etiology can be evenly divided among skin tethering, fibrotic dartos and Buck's fascia, and corporal disproportion (Table 1). The series included 87 patients with ventral (84%), dorsal (11%), and lateral (5%) curvature. Patients with a thin hypoplastic distal urethra (Type I chordee based on the Devine and Horton classification system) were excluded from the study, since they were considered hypospadiac variants. A congenitally short urethra occurred in only 7% of patients [17]. Snodgrass et al. found no histological evidence of fibrous bands or dysplastic tissue in the urethral plate of boys with varying degrees of hypospadias with and without chordee; all samples demonstrated well-vascularized connective tissue comprised of smooth muscle and collagen [18], which was consistent with previous case reports [10,19]. Dorsal and lateral curvature of the penis occurs in cases with as well as without hypospadias [20,21]; all reported cases referred to children with associated hypospadias. The incidence of dorsal curvature in the absence of hypospadias is low (5%) and is primarily associated with epispadias [22]. In boys, true congenital curvature has been associated with a long phallus and its correction is recommended in the case of functional impairment [22]. Dorsal chordee has been described after circumcision, most likely secondary to scarring; spontaneous resolution has been reported. Repair of dorsal chordee is recommended when curvature is more than 30 degrees and/or associated with hypospadias [20].
Penile Torsion
Congenital penile torsion is a malformation of unknown cause in which there is a three-dimensional malrotation of the corporal bodies or sometimes just the glans (Fig. 1). The abnormal penile rotation is usually counterclockwise, more common on the left side [23], and many times associated with other penile or urethral malformations, such as chordee or hypospadias. The incidence of isolated penile torsion is 1.7-27%, with torsion of more than 90 degrees reported in 0.7% of cases [24,25]. Torsion of the penis can vary in severity ranging from 30 degrees in mild cases to 180 degrees. It is hard to know how much of a functional problem this malformation causes in adults. Most children who present are asymptomatic; however, the parents usually wish to correct the cosmetic defect. In a survey of adult men evaluated at a sexual dysfunction/infertility clinic, 12% of patients had penile torsion. Of those, 80% had a mild form of the abnormal curvature (<30 degrees), 5% had torsion of more than 60 degrees, and, overall, 2% of these patients actually requested corrective cosmetic surgery. No patient complained of sexual dysfunction related to penile torsion [26].
EVALUATING BOYS WITH ABNORMAL PENILE CURVATURE
Children presenting with reported abnormal penile curvature should be assessed for the degree of curvature and/or torsion, and its direction, as well as any other genital anomalies, such as hypospadias, urethral hypoplasia, and cryptorchidism. In the clinic setting, the ability to make an assessment depends on the cooperation of the child as well as his anatomic limitations. Compression of the suprapubic fat should be performed to best expose the penis, and to assess the presence or absence of chordee or torsion. It is also important to assess the prepuce, and whether the complete or incomplete prepuce might be responsible for any of the curvature. Along these lines, a tight frenulum ventrally or an epithelial skin bridge dorsally or laterally may be the cause of the curvature. If the glans penis is covered by an irreducible phimosis, then the presence of chordee may be inferred by inspecting the penis from the side, and the presence of torsion may be inferred from deviation of the median raphe. The degree of chordee can be best evaluated when boys have an erection at the time of examination; unfortunately, this is not very common. Penile torsion may be assessed by using the orthotopic meatus as a guide to determine the degree of rotation.
In the operating room, penile curvature is assessed by inducing either artificial or pharmacologic erection after degloving the penile shaft skin. First, it is important to reassess the curvature noted in the clinic by reducing the prepuce and identifying torsion and/or chordee, and to inspect for tethering of the glans by either the frenulum or skin bridges. The incision is made where a circumcision would be performed, or along the existing line of a previous circumcision. The penile shaft must be degloved evenly at Buck's fascia to the same level proximally, while identifying and protecting the urethra with a small catheter (5F or 7F). A fine needle (25 Gauge) is used to inject saline into the lateral aspect of one of the corpora or through the glans, with a tourniquet at the base of the penis. Artificial erection can be performed multiple times during a case, with release of the tourniquet to release the erection after gleaning the desired information; however, it is important to recognize that each injection can cause a hematoma. Pharmacologic erection can be induced by intracorporal injection of a vasodilator (prostaglandin, papaverine, phentolamine) instead of saline. This technique may better evaluate chordee originating at the penile base or in cases where a large suprapubic fat pad is encountered that makes tourniquet placement difficult. The disadvantages of the pharmacologically induced erection are the lack of accurate dosing regimens in children, lack of response or prolonged erections lasting more than 6 h (priapism), additional cost, and the need for a reversal agent (phenylephrine) [27].
Similarly, assessing the extent of penile torsion requires proper degloving of the penile skin. In cases where the glans is torsed due to misdirected healing of a circumcision, the glans will spring back into its normal position. In other cases, the glans will have an accentuated torsion, as the circumcision line of healing maintains the glans in a position that under-represents the true extent of torsion. Finally, the glans may remain in its position despite degloving. Once the degree and location of curvature and/or torsion have been assessed, the site for orthoplasty and the specific technique to be employed are dictated by the direction and severity of the curvature.
Skin Bridge and Frenular Release
The lateral or dorsal tilting of the glans can be corrected by releasing the skin bridge, or by releasing the frenulum responsible for ventral deflection. Sometimes this is a very simple maneuver, as the skin bridge is thin and narrow, and can simply be excised and hemostasis obtained. Other times, the skin bridge is broad and thick. In these cases, it is important to secure the plane between the skin bridge and the coronal sulcus before excising the skin bridge, and then to secure hemostasis and skin closure. The surgeon must remember that the skin bridge may be only partially responsible for the curvature.
Skin Release and Transfer
Penile skin tethering may be the sole source for mild penile curvature or low degrees of penile torsion. This is identified after proper degloving and artificial erection, if necessary. If curvature or torsion have resolved, the procedure is complete after skin closure. In some cases, any resulting ventral skin deficiency can be corrected with rotation of a pedicle preputial patch [28].
Plication Techniques (Fig. 2)
The principle of plication is the next simplest technique for correcting curvature. Following degloving, the plication is applied opposite to the point of maximum curvature determined during the artificial erection. Heineke (1886) [29] and Mikulicz (1887) [30] described a new method of pyloroplasty closure in which a longitudinal incision was closed transversely; this was applied by Nesbit (1965), who excised diamond-shaped wedges at the point of maximum curvature [31]. It has become clearer from recent anatomical studies that any dissection of the neurovascular bundle may damage nerve fibers that fan out from the 11 and 1 o'clock positions towards the ventral surface [10,32]. The only area truly devoid of nerve fibers is the 12 o'clock position, which also appears to be the area of greatest tunica albuginea thickness and strength [10,33]. Dorsal midline plication with one or multiple parallel sutures was described by Baskin et al. [10] to ensure maximum preservation of nerves and has since been applied by others with good short-term results [34,35]. Potential disadvantages of this technique include limited applicability to mild-moderate penile curvature and poor efficacy when used in older boys, as the midline plication sutures will probably not hold up to rigid erections [36].
Dermal Graft (Fig. 3)
In 1975, Devine and Horton described their experience adding dermal tissue to the tunica albuginea to correct chordee associated with hypospadias and epispadias [37]. Since then, several other authors have reported similar success [38,39,40]. When significant curvature persists after penile degloving, the surgeon has to decide whether plication can correct the chordee without significant shortening of the penis or whether a dermal graft may be used. The urethra can be separated from the corporal bodies, or transected if there is a hypospadias. An artificial erection is created and the tunica albuginea is incised at the point of maximum curvature, including the septum between the corporal bodies. Care must be taken not to incise cavernosal tissue, and nerve tissue must be preserved on the lateral aspect of the phallus [42]. Dermal grafts are prepared by excision of an elliptical segment of non-hair-bearing skin from the inguinal region. Once the epidermis is removed by shaving it from the dermis using a sharp scalpel, the dermis is defatted. The remaining dermis is trimmed to the size of the defect and attached to the edges of the tunica albuginea with absorbable suture. Repeat artificial erection should be obtained to ensure complete orthoplasty and to confirm the absence of leakage [39,41]. The use of dermal grafts has been shown to yield superior cosmesis and can prevent the penile shortening seen with extensive plication in patients with severe penile curvature [39]. Badawy and Morsi [41] reported their 10-year follow-up data for 16 patients with penile dermal grafts, showing that erectile function was well preserved in 88% of patients. Two patients had mild residual curvature and one of them needed phosphodiesterase inhibitors to achieve rigid erections. Other materials, including tunica vaginalis [40,42], dura, and pericardium [42], have also been used, but with inferior results and without long-term follow-up. Early experience with small intestine submucosa indicates that this material is safe, but its long-term durability remains unknown [43,44].
Corporal Rotation and Penile Disassembly
For more complex curvature, corporal rotation, as described by Koff and Eakins[45], or penile disassembly, as popularized by Perovic and Djordjevic [46], offer satisfactory surgical results. The principle of corporal rotation involves separation of the corpora cavernosa from the urethral plate and distal corpus spongiosum, starting at the glans and dissecting distally. The midline septum is incised longitudinally. The neurovascular bundles are also elevated off the corporal bodies to avoid possible crush injury. The technique was further developed by adding a series of transverse dorsal plication sutures. No incision is made into the corporal bodies preventing penile shortening [47,48]. Reported series consist only of a small number of patients and long-term follow-up data are lacking.
An extension of the corporal rotation is the penile disassembly technique, as used in boys with complete primary epispadias. The penis was straightened in 68% of cases and additional minor corporoplasty was needed in one-third of patients. A major disadvantage is the extensive dissection needed to separate the dorsal nerves off the corpora, although theoretically this can be achieved with potential damage to only small side branches [32].
Penile Torsion
In mild forms of penile torsion (<90 degrees), the glans is directed away from the midline, but the orientation of the corporal bodies at the base of the penis is usually normal. The defect is often correctable by penile degloving and realignment of the median raphe. Bar-Yosef et al. [5] reported satisfactory results in 95% of patients with isolated penile torsion with rotation <90 degrees, and residual torsion of <30 degrees in 5% of patients, using a simple technique of penile degloving and realignment. In children with higher degrees of torsion (>90 degrees) or torsion associated with hypospadias, the use of a dorsal rotational dartos flap may help to correct the defect. The technique first described by Fisher and Park [8] showed that at short-term follow-up, cosmetic outcomes were satisfactory in all eight patients. No complications or evidence of residual torsion were reported. In this technique, a dorsal dartos flap is rotated around the right side of the penile shaft to correct for counterclockwise (rotation to the left) torsion. The technique was successfully replicated by Bauer and Kogan [49] and none of the 25 patients needed further repair. Torsion was completely corrected in 16 patients, with the remainder of patients having an insignificant (<30 degrees) amount of residual torsion. However, long-term results for this approach are still lacking. More extensive repair may be needed for children with associated hypospadias or chordee. Bhat et al. [50] described a series of 27 cases with congenital penile torsion ranging from 45 to 180 degrees (mean 69). Only 3.7% of cases were corrected with simple skin rearrangement. The authors describe the use of extended mobilization of the urethra or even the urethral plate to correct the degree of torsion, and their overall success rate was 87.5%. In this series, more extensive repair was most likely required secondary to the presence of chordee as well as hypospadias. Isolated penile torsion should be approached conservatively, and if cosmetic correction is requested, the least-invasive approach should be used to correct the defect.
SUMMARY
Penile curvature can present a challenging problem to the pediatric urologist. Better understanding of penile neurovascular anatomy has led to improvement of surgical techniques and outcomes for the treatment of penile curvature. Several surgical techniques have evolved; however, none are without complications, and long-term follow-up studies are lacking. The majority of cases of congenital penile curvature without hypospadias can be corrected with simple degloving or plication techniques. Residual curvature can be addressed with additional plication or more extensive surgical correction, such as dermal graft placement and, in rare cases, complete penile disassembly. The most important aspects of any technique are preservation of the neurovascular structures and the urethral plate whenever possible. Correction of curvature should preferably be done during the 1st year of life with a stepwise, minimally invasive approach. Long-term studies to assess efficacy and complications are needed in order to verify current surgical techniques.
|
v3-fos-license
|
2020-10-17T13:06:30.588Z
|
2020-10-15T00:00:00.000
|
222842013
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pds.5124",
"pdf_hash": "01230ab0d9ba1d0d724ae8c19ab0caedfee8b368",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46120",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7a4e500d1d0735385d29c359a4a9d76644a10b3a",
"year": 2020
}
|
pes2o/s2orc
|
Use of non‐vitamin K antagonist oral anticoagulants in Colombia: A descriptive study using a national administrative healthcare database
Abstract Purpose We aimed to describe time-trends in the use of NOACs among a group of ambulatory patients with nonvalvular atrial fibrillation (NVAF) in Colombia and to describe treatment patterns and user characteristics. Methods Using the Audifarma S.A administrative healthcare database in Colombia, we identified 10 528 patients with NVAF aged at least 18 years between July 2009 and June 2017 with a first prescription (index date) for apixaban, dabigatran or rivaroxaban (index NOAC) and followed them for at least a year (max, 8.0 years; mean, 2.2 years). We described patient characteristics, NOAC use over time, and the dose of the first NOAC prescription. Results A total of 2153 (20.5%) patients started on apixaban, 3089 (29.3%) on dabigatran and 5286 (50.2%) on rivaroxaban. The incidence of new users of apixaban and rivaroxaban increased over the study years, while for dabigatran it decreased. Mean age at the index date was 78.5 years (apixaban), 76.5 years (dabigatran) and 76.0 years (rivaroxaban). The percentage of patients who started NOAC therapy on the standard dose was: apixaban 38.0%, dabigatran 30.9%, rivaroxaban 56.9%. The percentage still prescribed their index NOAC at 6 months was: apixaban 44.6%, dabigatran 51.4%, rivaroxaban 52.7%. Hypertension was the most common comorbidity (>80% in each NOAC cohort). Conclusion During the last decade, the incidence of NOAC use in patients with NVAF affiliated with a private healthcare regime in Colombia has markedly increased. Future studies should evaluate whether the large number of patients with NVAF who start NOAC treatment on a reduced dose do so appropriately.
| INTRODUCTION
Atrial fibrillation (AF) is a common cardiac arrhythmia with a prevalence that increases with age. 1 It is estimated that one in four middle-aged adults in Europe and the United States will develop AF in their lifetime. 2 The arrhythmia is associated with a 4- to 5-fold increase in the risk of ischaemic stroke 3 and a 1.5- to 2-fold increased risk of all-cause mortality. 2 Most epidemiological data on AF have come from Western populations; however, analyses of national healthcare databases show that AF also represents a substantial public health burden in Latin America, with estimated country-specific prevalences of around 13% among individuals aged 70 years or more. 4 Moreover, evidence suggests that the prevalence of AF, stroke, and associated mortality has increased dramatically in Latin America, likely due to the combined effect of the aging population and poor control of major risk factors such as hypertension. 5 However, little is known about the management of patients with AF in this area of the world since the introduction of non-vitamin K antagonist oral anticoagulants (NOACs) as an alternative option for stroke prophylaxis in this patient population in the last decade. Non-vitamin K antagonist oral anticoagulants have been shown to be noninferior to warfarin in reducing the risk of stroke and systemic embolism in patients with AF, and to have a superior safety profile. This has been shown in the overall pivotal clinical trial populations on which their approval was based, [6][7][8] as well as in subanalyses of these trials restricted to participants in Latin America. 9 Unlike warfarin, NOACs have fixed dose regimens and predictable pharmacokinetics. 10
The Audifarma database has been validated in multiple studies that show how medications are used in the Colombian population. [11][12][13] The source population included patients in the contributory regime aged at least 18 years between July 2009 and June 2017 with at least 1 year of enrollment with their insurance provider and at least 1 year of available data following their first recorded outpatient health contact, to guarantee a certain level of continuity with health services. No patient identifying information was used in this study. The study protocol was approved by the bioethics committee of the Universidad Tecnológica de Pereira, Colombia.
| NOAC study cohorts
From within the source population, three mutually exclusive cohorts of first-time users of NOACs (apixaban, dabigatran or rivaroxaban) were identified, with the date of the first NOAC prescription (index NOAC) set as the index date. If a patient had a prescription for another anticoagulant (eg, warfarin) in the year before their index date they were classified as non-naïve, while patients with no prescription for another anticoagulant in the year before their index date were classed as naïve. We excluded patients who were prescribed two different NOACs on the same day. Patients who qualified as a first-time user of more than one NOAC at different times during the study period (ie, switchers) were assigned to the cohort of the first prescribed NOAC. For each NOAC cohort we subsequently only retained patients with a record of AF (ICD-10 code I48) before the index date or in the 2 weeks after the index date. Patients with a record of mitral stenosis (ICD-10 codes: I050, I05X, I052), valvular replacement (ICD-10 codes: Z952-Z954) or other stenoses (ICD-10 codes: I058, I059, I080, I081, I083, I088) during this time interval were excluded in order to identify only those patients with NVAF, because there are no specific ICD-10 codes for NVAF. All patients were followed up for at least 1 year from the index date, until leaving the health plan, death or end of study data collection (December 2017).

KEY POINTS

• The marked increase in the use of rivaroxaban and apixaban in patients with NVAF affiliated with a private healthcare regime in Colombia over the last decade indicates growing confidence in the prescribing of these two NOACs among physicians in the country.

• As substantial numbers of patients with NVAF affiliated with a private healthcare regime in Colombia appear to be prescribed a reduced dose NOAC, studies are now warranted to evaluate the extent to which this is done appropriately, in accordance with the drug labels.
| Characteristics of first-time NOAC users with NVAF
We extracted data on patient demographics (age and sex), comorbidities in the year before the index date including cardiovascular comorbidities (myocardial infarction, heart failure, ischaemic stroke, haemorrhagic stroke, venous thromboembolism (VTE) and hypertension) and other comorbidities (diabetes mellitus, chronic obstructive pulmonary disease, gastrointestinal bleeding, severe renal failure, and cancer). We also extracted data on the following medications prescribed in the year before the index date: other anticoagulants including warfarin and low-molecular-weight heparin (LMWH), antiplatelets (low-dose aspirin and clopidogrel), antiarrhythmic drugs, antihypertensive drugs, statins, antidiabetic drugs, nonsteroidal antiinflammatory drugs, acid-suppressive drugs, antidepressants and oral steroids (see Table S1). Polypharmacy was assessed as the number of different medications prescribed in the 2 months before the index date. We also identified patients with a prescription for another anticoagulant drug (including warfarin and low-molecular weight heparin) at any time before the index date, and classed these patients as anticoagulant non-naïve; all other patients were classed as anticoagulant naïve.
| Characteristics of the index NOAC prescription
For the index NOAC prescription and for subsequent NOAC prescriptions, we extracted information on the number of pills prescribed, the dose and the posology, and estimated the duration of each prescription from these instructions. If there was missing information on the daily dose, we made an assumption about the most likely dose based on the timing of subsequent prescriptions to that patient. We assessed the dose of the index NOAC prescription as well as the dose prescribed 3 months later. For all patients, we calculated the duration of the first episode of continuous NOAC treatment. Treatment was considered continuous as long as there was no gap of >30 days between the end of the supply of one prescription and the start of the next prescription for the same NOAC; an episode ended either when such a gap occurred or when there was no further prescription after the end of the previous one.
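The 30-day-gap rule lends itself to a simple computation. The Python sketch below is one possible implementation under the assumption that each prescription has already been reduced to a dispensing date and an estimated supply duration for a single patient's index NOAC; the function name and data layout are illustrative and are not taken from the study.

```python
from datetime import date, timedelta

def first_continuous_episode(prescriptions, max_gap_days=30):
    """Return (episode_start, episode_end) of the first continuous treatment
    episode. A gap of more than `max_gap_days` between the end of one
    prescription's supply and the start of the next ends the episode.

    `prescriptions` is a non-empty list of (start_date, duration_days) tuples
    for one patient's index NOAC.
    """
    rx = sorted(prescriptions, key=lambda p: p[0])  # chronological order
    start, duration = rx[0]
    episode_start = start
    episode_end = start + timedelta(days=duration)

    for next_start, next_duration in rx[1:]:
        gap = (next_start - episode_end).days
        if gap > max_gap_days:
            break  # treatment discontinued: episode ends at the last supply date
        # Extend the episode; overlapping supply is simply carried forward.
        episode_end = max(episode_end, next_start + timedelta(days=next_duration))
    return episode_start, episode_end

# Example: two 30-day supplies with a short gap, then a long gap.
rx = [(date(2016, 3, 1), 30), (date(2016, 4, 5), 30), (date(2016, 7, 1), 30)]
start, end = first_continuous_episode(rx)
print((end - start).days)  # 65: duration of the first continuous episode
```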
| Statistical analysis
For each NOAC cohort, the characteristics of patients and of the index NOAC prescription (at the start of follow-up and at 3 and 6 months) were described using frequency counts and percentages for categorical data, and means with SD for age. To evaluate trends in NOAC prescribing for stroke prevention in AF over time, the number of patients with NVAF newly prescribed a NOAC was described for each study year. In the calculation of incidence rates, we used for the numerator only patients enrolled with two of the five healthcare providers contributing to the database (Salud Total and Compensar, corresponding to 3.6 of the 4.8 million patients in this study). This was because the denominator for this calculation, the total number of patients in the database (the exact number of individuals affiliated with the insurance regime in each year), was only available for these two providers. Incidence rates of new users of NOACs with NVAF were calculated for each study year and were expressed per 10 000 individuals. We also calculated the percentage use of each OAC dispensed out of all OACs for each study year. In an analysis comparing the proportions of use in the two insurance companies vs the other three, for which the exact total membership was not known, the proportions of NOAC use were found to be the same. All data analysis was conducted using SPSS Statistics Version 25 (IBM) for Windows.
FIGURE 1 Flowchart depicting the identification of the three NOAC study cohorts. *Patients dispensed two different NOACs on the same date. †Patients were excluded if they had a code for mitral stenosis/valvular replacement before the index date or in the 2 weeks after the index date, or if they had a NOAC dispensation with no associated diagnosis. NOAC, non-vitamin K antagonist oral anticoagulant; NVAF, nonvalvular atrial fibrillation.
(Table footnotes: a In the year before the index date. b On the index date or in the year before the index date.)
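Returning to the incidence calculation described above, it amounts to dividing the yearly count of new users by the yearly number of enrolled individuals and scaling to 10 000. The helper below is a minimal illustration; the numbers in the example are invented and the data layout is an assumption, not the study's actual code.

```python
def incidence_per_10000(new_users_by_year, enrolled_by_year):
    """Incidence of new NOAC users with NVAF per 10 000 enrolled individuals,
    computed separately for each study year. Both arguments are dicts keyed by
    study year; the denominator should only include insurers for which the
    total membership is known."""
    return {
        year: 10_000 * new_users_by_year.get(year, 0) / enrolled_by_year[year]
        for year in enrolled_by_year
    }

# Illustrative numbers only (not taken from the study).
print(incidence_per_10000({2016: 900}, {2016: 3_600_000}))  # {2016: 2.5}
```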
| Characteristics of first-time users of a NOAC with NVAF
Characteristics of the three study cohorts are shown in Tables 1 and 2. There were slightly more males than females in each cohort, and the mean age was similar between cohorts, albeit slightly higher among apixaban users (apixaban 78.5 years, dabigatran 76.5 years, rivaroxaban 76.0 years).
| Characteristics of NOAC use
Details about the index NOAC prescription are shown in Table 3.
Rivaroxaban was mostly prescribed once daily (97.2%), which is the correct posology for stroke prevention in AF. 29,33 Heart failure was present in about one-third of patients with NVAF, which is both higher than some previous findings, 29,33-35 but lower than others, 23
| CONCLUSION
We conclude that over the last decade, rivaroxaban has been the most commonly prescribed NOAC, followed by dabigatran, among patients with NVAF affiliated to a private health insurer in Colombia.
Approximately half of patients continue to receive NOACs 6 months after the start of treatment, which suggests a certain level of adherence and tolerability, and a substantial percentage of patients, especially those starting therapy on apixaban, are prescribed a reduced dose. Studies are now needed focusing on the real-world effectiveness and safety of NOACs in Colombia, as well as an evaluation of the appropriateness of reduced dosing.
|
v3-fos-license
|
2017-06-26T21:19:05.813Z
|
2014-05-16T00:00:00.000
|
9392646
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcevolbiol.biomedcentral.com/track/pdf/10.1186/1471-2148-14-107",
"pdf_hash": "a46d1f523e557c7a5ad63d0d08b71de066bd6abc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46121",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "a46d1f523e557c7a5ad63d0d08b71de066bd6abc",
"year": 2014
}
|
pes2o/s2orc
|
The role of deleterious mutations in the stability of hybridogenetic water frog complexes
Background Some species of water frogs originated from hybridization between different species. Such hybrid populations have a particular reproduction system called hybridogenesis. In this paper we consider the two species Pelophylax ridibundus and Pelophylax lessonae, and their hybrids Pelophylax esculentus. P. lessonae and P. esculentus form stable complexes (L-E complexes) in which P. esculentus are hemiclonal. In L-E complexes all the transmitted genomes by P. esculentus carry deleterious mutations which are lethal in homozygosity. Results We analyze, by means of an individual based computational model, L-E complexes. The results of simulations based on the model show that, by eliminating deleterious mutations, L-E complexes collapse. In addition, simulations show that particular female preferences can contribute to the diffusion of deleterious mutations among all P. esculentus frogs. Finally, simulations show how L-E complexes react to the introduction of translocated P. ridibundus. Conclusions The conclusions are the following: (i) deleterious mutations (combined with sexual preferences) strongly contribute to the stability of L-E complexes; (ii) female sexual choice can contribute to the diffusion of deleterious mutations; and (iii) the introduction of P. ridibundus can destabilize L-E complexes.
Background
Lake frog (Pelophylax ridibundus Pallas, 1771) and pool frog (Pelophylax lessonae Camerano, 1882) can mate producing the hybrid edible frog (Pelophylax esculentus Linneus, 1758). P. esculentus can coexist with one or both of the parental species giving rise to mixed populations. Usually, the genotypes of P. ridibundus, P. lessonae and P. esculentus are indicated by RR, LL, and LR, respectively. In Europe there are mixed populations containing P. ridibundus and P. esculentus individuals, called R-E systems, populations with P. lessonae and P. esculentus individuals, called L-E systems, and populations with all three species. Due to the eastern origin of P. ridibundus, R-E complexes are frequently found in Eastern Europe, while L-E systems are widespread throughout the rest of Europe [1][2][3][4][5]. Hybrids in these populations reproduce in a particular way, called hybridogenesis [1,[6][7][8][9][10][11][12]. Hybridogenesis consists of a gametogenetic process in which the hybrids exclude one of their parental genomes premeiotically, and transmit the other genome, clonally, to eggs and sperm. For example, in L-E complexes, P. esculentus hybrids have both the genomes of the parental species, L and R, but they produce only R gametes.
This mode of reproduction requires hybrids to live sympatrically with the parental species whose genome has been eliminated. In this way hybrids in an L-E system eliminate the L genome, thus producing P. esculentus when mating with P. lessonae, and generating P. ridibundus when mating with other hybrids. Usually P. ridibundus generated in L-E complexes are inviable due to deleterious mutations accumulating in the clonally transmitted R genome [13][14][15][16][17]. Analogously, in R-E systems there is a tendency during hybrid gametogenesis to eliminate the R genome; as with L-E systems, P. lessonae, the offspring of hybrid x hybrid matings, are often inviable. In natural L-E complexes, the inviability of offspring of P. esculentus × P. esculentus matings is evidenced by the absence of adult P. ridibundus. Experimental crosses between coexisting hybrids (from localities sampled throughout the range of L-E populations) also show that such offspring are inviable [6,14,16,17]. These studies have also revealed that the same hybrid individuals, producing inviable progeny, produce viable progeny when crossed with either parental species or with hybrids from different regions. The lethality of natural hybrid × hybrid matings is thus neither the result of hybrid sterility nor the inherent consequence of the hemiclonal reproductive mode. Guex et al. present two simple hypotheses, both explaining the observed inviability of P. esculentus × P. esculentus progeny by the load of deleterious mutations on the clonally transmitted R genomes [16]. Quoting from their paper, the hypotheses are "(1) inviability is caused by homozygosity for recessive deleterious mutations at particular gene loci; or (2) inviability is caused by a general deterioration of non-recombining R genomes through Muller's ratchet, reflecting different hemiclone-specific sets of incompletely recessive mutations, which leads to lethality when two such deteriorated R genomes are combined". Their conclusion is that the hypotheses are not mutually exclusive; however, there is evidence to support the plausibility of the first hypothesis: Muller's ratchet generates deleterious mutations in relatively random places in the genome, which are then likely to be different in different geographical areas. The study in [16] suggests that, in some cases, single lethal mutations may have the same effect as the accumulation of deleterious ones. However, most studies on P. esculentus fitness suggest that, in the heterozygous state, the effect of deleterious mutations is not significant.
Due to the inviability of P. esculentus × P. esculentus offspring, P. esculentus populations cannot survive alone, but must act as a sexual parasite of one of the parental species. This dependency can be avoided only by all-hybrid populations in which the presence of triploid and tetraploid individuals leads to recombination among homologous parental chromosomes [18][19][20][21][22][23]. This recombination is able to purge, at least partially, deleterious mutations from genomes, thus producing viable offspring of hybrids. Due to their wide distribution, we will consider L-E complexes, in which P. esculentus and P. lessonae coexist. In such a complex, the reproductive pattern is shown in Table 1, where the subscript y indicates the male sexual chromosome. The Y chromosome determines the sex of frog males and can occur only in the L genome, because primary hybridization involves, due to size constraints, P. lessonae males and P. ridibundus females. Only one of the three possible matings resulting in viable offspring produces LL genotypes (Table 1). This would give an advantage to P. esculentus, which could outnumber P. lessonae and eventually eliminate them. This situation would also eventually result in an extinction of P. esculentus, which cannot survive without the parental species. In addition to their relative abundance, which is promoted by the above reproductive pattern, P. esculentus have other advantages. Although in many cases they show either no differences or intermediate characteristics compared to their parental species [24][25][26][27], P. esculentus show behavioural differences [28,29] and have, by heterosis, a greater fitness than the parental species in certain aspects [13,15,[30][31][32][33][34][35][36]. The combination of this relative abundance and heterosis should allow P. esculentus to out-compete P. lessonae in L-E complexes. The widespread distribution of L-E complexes, although with different percentages of hybrids, reveals that there are mechanisms which contribute to the stability of such complexes [37][38][39][40]. Of these mechanisms, sexual selection seems to be one of the most important. In fact, P. esculentus females prefer (either overtly or cryptically) P. lessonae males to males of their own species [41][42][43][44][45]. Many mathematical and computational models have studied the influence of sexual selection on the evolution of populations [46][47][48][49][50][51][52][53][54]. In addition, some models have focused on sexual selection in complexes in which some form of clonal reproduction exists [55][56][57]. The models in [57][58][59] show how female preference is able to stabilize L-E complexes by counterbalancing both heterosis and the reproductive advantage of P. esculentus. Other factors, such as reproductive performance, in conjunction with sexual choice can increase the stability of L-E complexes [60].
Using an individual-based computational model, in this paper we study three problems. The first is how deleterious mutations contribute to the stability of L-E complexes. The second concerns how, in L-E complexes, deleterious mutations can diffuse in the R genomes of the whole P. esculentus population. The third is the invasiveness of P. ridibundus in L-E complexes.
Regarding the first problem, the aim is to investigate whether deleterious mutations on the R genome contribute, together with female preferences, to the stability of L-E complexes.
As for the diffusion of deleterious mutations in the population, an interesting hypothesis is proposed in [61], in which deleterious mutations can influence female preferences. From the literature we know that females of P. esculentus have a strong preference for P. lessonae males. Vorburger et al. in [61] suggest that, among P. esculentus males, P. esculentus females may prefer those with mutations on the R genome. Such mutations could make the affected loci on the R genome dysfunctional, thus producing a more "lessonae-like" genotype.
Finally, regarding the third problem, Vorburger and Reyer in [9] suggest that the introduction of P. ridibundus can either provoke the collapse of L-E populations, or result in a replacement by P. ridibundus of both P. lessonae and P. esculentus, leading to a mono-specific population.
In order to gain further insight into the three problems, we attempt to answer three questions.
- Is the role of deleterious mutations necessary for the stability of L-E complexes?
- How can a stable L-E complex be obtained?
- What is the effect of introducing P. ridibundus into L-E complexes?
The model
To study the interaction between populations of P. lessonae, P. esculentus and P. ridibundus we developed an individual-based model. To answer the three questions above, we started with a simple model (for the first question) and then extended it step by step (to tackle the second and third questions). In the simplest model, we consider diploid individuals whose genotype is represented by two chromosome types: L and R. Chromosomes R can contain deleterious mutations (represented by Rd), and only chromosomes L can carry the sex-determining chromosome Y (represented by Ly). Thus the possible genotypes are: LL, LyL, LR, LyR, LRd, LyRd, RR, RRd and RdRd. The fitness of a genotype g, F(g), is computed from the chromosome fitness c(g) and from a parameter σ measuring the strength of the ecological selection (smaller values of σ correspond to a stronger selection). In the simulations we use two different values for σ: σ = 0.4, which corresponds to a hard environment, and σ = 0.6, which corresponds to a weaker selection. The chromosome fitness c(g) is defined in terms of δh and δe, which describe the fitness decrement associated with homozygous genotypes (which do not gain from heterosis) and with P. ridibundus genotypes, respectively. The use of a further decrement, δe, in the chromosome fitness of P. ridibundus derives from the fact that L-E complexes usually live in pools and marshes, where P. ridibundus are less fit. In the following sections, we use δh = 0.2, and δe will assume the values 0.0, 0.2, and 0.4. δe = 0.0 means that P. ridibundus have the same fitness as P. lessonae (i.e. the environment includes niches for both species), while δe = 0.4 represents the fact that P. ridibundus are strongly disadvantaged compared to P. lessonae (i.e. the environment consists of a typical P. lessonae habitat).
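The displayed equations for F(g) and c(g) are not reproduced in this copy, so the sketch below should be read only as one possible concrete reading of the verbal description: a subtractive penalty of δh for homozygous genotypes and δe for P. ridibundus genotypes, lethality for two deleterious R chromosomes, and a Gaussian-type mapping from c(g) to F(g) in which smaller σ means stronger selection. All of these functional forms are assumptions, not the authors' exact formulas.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Chromosome:
    kind: str                  # "L" or "R"
    has_y: bool = False        # sex-determining factor, possible only on L
    deleterious: bool = False  # deleterious mutation, possible only on R

def chromosome_fitness(g, delta_h=0.2, delta_e=0.2):
    """Assumed form of c(g): start from 1, subtract delta_h for homozygous
    genotypes (LL or RR, no heterosis) and delta_e for P. ridibundus (RR)
    genotypes; two deleterious R chromosomes are treated as lethal."""
    a, b = g
    if a.deleterious and b.deleterious:
        return 0.0                       # homozygous for deleterious mutations
    c = 1.0
    if a.kind == b.kind:                 # LL or RR: no heterosis advantage
        c -= delta_h
    if a.kind == "R" and b.kind == "R":  # P. ridibundus genotype
        c -= delta_e
    return c

def fitness(g, sigma=0.4, delta_h=0.2, delta_e=0.2):
    """Assumed Gaussian-type mapping from c(g) to F(g); smaller sigma gives a
    stronger penalty for c(g) < 1, matching the verbal description."""
    c = chromosome_fitness(g, delta_h, delta_e)
    if c == 0.0:
        return 0.0
    return math.exp(-((1.0 - c) ** 2) / (2.0 * sigma ** 2))

LR = (Chromosome("L"), Chromosome("R"))   # hybrid: benefits from heterosis
RR = (Chromosome("R"), Chromosome("R"))   # P. ridibundus
print(fitness(LR), fitness(RR))           # 1.0 and about 0.61 under these assumed forms
```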
We consider that the populations have a reproductive season each year. During this season all the females mate. Female sexual choice is implemented by a best-of-n selection procedure over males [52,62], i.e. a female mates with the most preferred of n randomly chosen males in the population. The best-of-n procedure is a usual computational method to take female preferences into account. Note that the greater n is, the greater the strength of the female preference: increasing the value of n leads to a female choosing from a greater number of males, thus mimicking the behaviour of a more discriminating female. In order to obtain stable complexes, we assume a species-specific female preference; in particular, following the studies in [59,60], we assign a stronger preference to females of the parental species than to hybrid females. P. esculentus females have the same behaviour as P. lessonae females, thus competing for the same kind of males. Likewise, P. ridibundus females prefer more "ridibundus-like" males. Hereafter we call this kind of female preference the "lessonae preference", because P. lessonae males are the most preferred. The number n of candidates from which each female chooses is set to 30, 15, and 30 for P. lessonae, P. esculentus, and P. ridibundus females, respectively. Choosing from among 30 candidates is biologically plausible: if males are distributed in the environment with a density of one male per square meter, each female must swim in a circle of 10 meters in diameter in order to check out 30 males, a distance which is reasonable for a frog. In [57][58][59] it is shown that stable populations are only found when the preference of P. lessonae females is greater than the preference of P. esculentus females. These papers also show that, under the above assumption, many different values of female preference lead to stable complexes. We have performed many simulations by leaving the value of n for P. lessonae and P. ridibundus unmodified and varying the value of n for P. esculentus. We have found that, when P. ridibundus are inviable, stable complexes are obtained if the ratio of the P. esculentus value to the parental-species value belongs to the interval [0.03, 0.7]. This result is analogous to the ones in [57][58][59]. Because different values in the interval [0.03, 0.7] affect only the percentage of hybrid frogs in the final stable population, the choice of n does not change the overall dynamics of the population. For this reason we use the non-extreme value 0.5.
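A minimal sketch of the best-of-n choice rule under the "lessonae preference" is given below. The preference-scoring function is a deliberately coarse placeholder (the text only specifies that P. lessonae males rank highest for P. lessonae and P. esculentus females, and that P. ridibundus females prefer "ridibundus-like" males); the class and helper names are choices made for this sketch.

```python
import random
from dataclasses import dataclass

@dataclass
class Frog:
    species: str   # "lessonae", "esculentus" or "ridibundus"
    sex: str       # "F" or "M"

# n values used for P. lessonae, P. esculentus and P. ridibundus females, respectively
BEST_OF_N = {"lessonae": 30, "esculentus": 15, "ridibundus": 30}

def preference_score(female, male):
    """Illustrative 'lessonae preference': P. lessonae and P. esculentus females rank
    P. lessonae males highest, while P. ridibundus females rank their own males highest."""
    preferred = "lessonae" if female.species in ("lessonae", "esculentus") else "ridibundus"
    return 1 if male.species == preferred else 0

def best_of_n_choice(female, males, rng=random):
    """A female inspects n randomly drawn males and mates with the most preferred one."""
    n = BEST_OF_N[female.species]
    candidates = rng.sample(males, min(n, len(males)))
    return max(candidates, key=lambda m: preference_score(female, m))
```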
Offspring genotypes are obtained from the combination of the parents' gametes.
The reproductive season is followed by a viability selection. During this phase the probability that an individual of genotype g survives, p surv (g), is given by a slight modification of the Beverton-Holt model [63][64][65][66][67][68], expressed in terms of b, the average number of offspring per female that reach the adult stage; φ, the percentage of females in the population; N, the number of individuals in the population competing for the resources; and K(g), the carrying capacity associated with the genotype g. K(g) is given by F(g)K 0 , where K 0 is the maximum carrying capacity of the environment. In all our simulations we assume b = 6 and K 0 = 3000. The standard Beverton-Holt model is modified because we consider overlapping generations and because we apply the viability selection, based on survival probability, not only to young tadpoles but to all the individuals in the population. The simplest model is used to answer the first question by performing both simulations in which all R genomes carry deleterious mutations and simulations in which all R genomes are free from deleterious mutations.
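For illustration, a Beverton-Holt-type survival step could be sketched as below. The exact "slight modification" used by the authors is not reproduced above, so the functional form here (survival declining as expected recruitment approaches the genotype-specific carrying capacity) is an assumption made purely for this sketch; only b, K 0 and the K(g) = F(g)K 0 relation come from the text.

```python
def survival_probability(genotype_fitness, n_population, b=6, phi=0.5, k0=3000):
    """Sketch of a Beverton-Holt-type survival probability, applied to all individuals.
    K(g) = F(g) * K0 as in the text; the density-dependent form is an assumption."""
    k_g = genotype_fitness * k0                  # genotype-specific carrying capacity K(g)
    expected_recruits = b * phi * n_population   # offspring produced by the females
    return 1.0 / (1.0 + expected_recruits / k_g)

# Example: a genotype with F(g) = 0.8 in a population of 3000 competing individuals
print(survival_probability(0.8, 3000))
```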
To answer the second question, "How can a stable L-E complex be obtained?", we simulate the diffusion of deleterious mutations in the population, starting with an L-E complex composed of P. lessonae in which there are only a few P. esculentus individuals (without mutations on the R genome). We consider a mutation rate, μ, which gives the probability of adding a new deleterious mutation to the R genome of an offspring (in the simulations μ is set to either 10 −4 or 10 −5 ). We consider different "stages" in the accumulation of mutations in the R genome. For the sake of simplicity we consider only three stages: R, R d1 , and R d . R is the genome without mutations, R d1 is the genome with a non-lethal accumulation of mutations, and, as before, R d is the final stage of accumulation (lethal in homozygous individuals). In this scenario the possible P. ridibundus female genotypes are the combinations of R, R d1 and R d chromosomes and, when they are ordered by decreasing fitness, the fitness decrease of each genotype with respect to the previous one is given by δ m . In the simulations we set δ m = 0.04. In this scenario, following [61], we assume that P. lessonae and P. ridibundus females have a strong preference for males of their own species, while, among P. esculentus males, females prefer those with a more "lessonae-like" genotype (i.e. males with mutations on the R genome). We call this kind of female preference the "lessonae-like preference". The possible P. esculentus male genotypes are L y R, L y R d1 , and L y R d , which, according to the "lessonae-like preference", are listed in order of increasing preference by the females.
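The stage-accumulation step described above can be sketched as a simple per-offspring update. The three-stage ladder and the mutation rate come from the text; the function name and the convention that one model "mutation" advances a chromosome by exactly one stage are choices made for this sketch.

```python
import random

# Ordered accumulation stages for an R chromosome:
# mutation-free, sub-lethal accumulation, lethal when homozygous
STAGES = ["R", "Rd1", "Rd"]

def mutate_r_chromosome(stage, mu=1e-4, rng=random):
    """With probability mu, advance an offspring's R chromosome to the next accumulation
    stage; a single model 'mutation' stands in for many real mutations (see the text)."""
    if stage in STAGES and stage != STAGES[-1] and rng.random() < mu:
        return STAGES[STAGES.index(stage) + 1]
    return stage
```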
Finally, to answer the third question, "What is the effect of introducing P. ridibundus into L-E complexes?", we need to simulate the effect of the introduction of P. ridibundus males and females into an L-E complex. In these simulations we consider both mutation-free R genomes (because we assume the absence of Muller's ratchet in the sexually reproducing, introduced P. ridibundus) and the R d genomes generated by the stable L-E complex. The further extension we consider is the possibility of having P. ridibundus males, i.e. the possibility of having the Y chromosome on R genomes as well: R y and R yd .
We introduced a limit on the lifespan of individuals: all individuals exceeding 10 years of age are removed from the population. Removing old frogs from the system prevents extremely fit individuals from surviving indefinitely when the viability selection is not able to remove them. This situation can easily arise in models with overlapping generations which do not consider deaths due to ageing.
The parameters used in the model, with their meaning and values, are reported in Table 2.
Results
Is the role of deleterious mutations crucial for the stability of L-E complexes?
The first step in studying the role of deleterious mutations in the stability of L-E complexes is to consider an initial population composed of P. lessonae frogs and a small percentage of P. esculentus: 5%, 10%, or 20%. Note that an initial situation with a large number of P. lessonae individuals favours the stabilization of the complex (there is no possibility of an early collapse due to the greater fitness of P. esculentus). We assume that all P. esculentus individuals carry the deleterious mutations on the R genome, that is, P. ridibundus females are not viable and they do not appear in the population. We assume the "lessonae preference" for females, based on the values 30, 15, and 30 for the best-of-n procedure for the three species, as described in the previous section. Finally we consider δ h = 0.2. We perform simulations with σ = 0.4 and σ = 0.6. For each combination of the parameters σ and δ e we performed 100 simulations, the possible outcomes of which are either a stable L-E complex or the collapse of the whole population. In all the simulations the populations evolve towards a stable L-E complex, following a typical population composition pattern (Figure 1). The result of these simulations is not surprising. Essentially the results are in accordance with those in [58,59], showing that female preference is a strong stabilizing force for L-E complexes.
In order to investigate the role of deleterious mutations in the stability of L-E complexes, we consider the same parameters as in the previous simulations, but we remove the deleterious mutations from all the R genomes of the initial population. In addition, we set δ e to 0.2 and 0.4, that is, we consider that all P. ridibundus born from hybrids are always disadvantaged compared with P. lessonae and P. esculentus. We also set the mutation rate μ equal to 0, in order to prevent deleterious mutations, considering only a "mutation-free" population. With all the parameter combinations and all the initial percentages of P. esculentus, in all the simulations the population eventually collapses. If viable P. ridibundus females are produced, the reproductive pattern becomes as depicted in Table 3. The table highlights that this reproductive pattern generates a numerical disadvantage for P. lessonae, whose population decreases. The decrease in the P. lessonae population has, as a consequence, a decrease in the produced L gametes, which, in turn, results in a higher production of P. ridibundus. Thus the population of P. ridibundus females grows and eventually out-competes the other species, despite the weaker fitness of P. ridibundus females compared both with P. esculentus (δ h = 0.2) and with P. lessonae (δ e = 0.2 and δ e = 0.4); see Figure 2. Of course a population of only P. ridibundus females cannot survive.
How can a stable L-E complex be obtained?
In this section we study the effect of both the "lessonae preference" and the "lessonae-like preference" of P. esculentus and P. lessonae females on the diffusion of deleterious mutations. We set the mutation rate, μ, to either 10 −4 or 10 −5 . We consider three possible stages of deleterious mutation accumulation in each R genome (R, R d1 , and R d ), and any mutation event determines the passage from one stage to the next. We start the simulations with two different initial populations. The size of the initial P. lessonae population is the same in all the simulations, 2700 individuals, but the number of P. esculentus individuals is set initially to either 10 or 100. We also perform simulations with two values of δ e , which lead to a decrease in the fitness of mutation-free P. ridibundus compared with P. lessonae. We set δ e to either 0.2 or 0.4, i.e. we consider that the environment is either weakly penalizing for P. ridibundus or a typical P. lessonae habitat, which is not suitable for P. ridibundus frogs. In addition, any further mutation accumulation on the R genomes of P. ridibundus decreases their fitness by δ m = 0.4.
Firstly, we consider the "lessonae preference". According to this preference pattern, P. esculentus and P. lessonae females prefer P. lessonae males and do not discriminate among P. esculentus males. The results of the simulations show that, in our model, with either 10 or 100 mutation-free P. esculentus individuals in the initial population and with any value of δ e and σ, the lethal deleterious mutation diffuses slowly. This slow diffusion is not able to prevent the production of a sufficient number of viable P. ridibundus females, which leads to the collapse of the population for all possible parameter values. Figure 3 shows typical population dynamics, while Figure 4 shows the diffusion of mutations in the R genome of P. esculentus. The results of the simulations with σ = 0.4 and μ = 10 −4 are shown in Table 4.
What is the effect of the introduction of P. ridibundus in L-E complexes?
In this section we analyze the effect of introducing P. ridibundus into an L-E complex. This scenario actually occurs in natural environments, owing to the importation of P. ridibundus into Western Europe for commercial purposes. In order to study the effect of the translocation of P. ridibundus, we performed simulations by varying the fitness of the introduced frogs (δ e = 0.0, 0.2, 0.4). δ e = 0.0 means that the environment does not put P. ridibundus at a disadvantage with respect to P. lessonae. In these simulations we consider a strong selection strength, σ = 0.4.
We study the effect of introducing into a stable L-E complex a percentage of either 5% or 10% of P. ridibundus, males and females. We consider three different situations: i) P. ridibundus are introduced into the typical environment for P. lessonae (δ e = 0.4), ii) P. ridibundus are introduced into a mixed (lake/marsh) environment (δ e = 0.2), and iii) P. ridibundus are released into an environment that is suitable for them (possibly close to resident frogs) (δ e = 0.0). We have four possible outcomes: stable L-E systems, stable L-E-R systems, stable P. ridibundus populations, and the collapse of the population (Table 6, Figures 7, 8, and 9). In the figures we show the dynamics for σ = 0.4. For σ = 0.6 we obtain analogous dynamics.
If the fitness of P. ridibundus is equal to the fitness of P. lessonae (δ e = 0.0), in most cases the population becomes a mono-specific population of P. ridibundus. Note that there are many cases in which the three species coexist at the end of the simulations, which we discuss explicitly in the following section. If the fitness of P. ridibundus decreases (δ e = 0.2), the introduced frogs do not survive for long. However, before their extinction P. ridibundus can mate with P. lessonae and P. esculentus, thus introducing mutation-free R genomes into the hybrid population. At this point, viable P. ridibundus females born from matings between hybrids lead the population to collapse. Finally, when the fitness of P. ridibundus is very low (δ e = 0.4), P. ridibundus frogs are immediately expelled from the system.
Discussion
From the results of the previous section we can deduce the overall dynamics of L-E complexes with viable P. ridibundus females. In general, the presence of viable P. ridibundus females significantly changes the reproductive outcome of L-E complexes, and the generation of offspring becomes as depicted in Table 3. For the sake of simplicity let us consider hypothetical complexes in which the number of females is the same for the three species. For such populations the production of P. lessonae offspring passes from 33.33% in stable L-E complexes (recall that in such complexes P. ridibundus are inviable and do not survive) to 16.66% in complexes where P. ridibundus survive. Thus the viability of P. ridibundus decreases the relative abundance of P. lessonae offspring. The decrease in the relative production of P. lessonae offspring leads to a decrease in P. lessonae adults in the future. This, in turn, causes a decrease in the production of L gametes, which are produced only by P. lessonae frogs. This process, if not stopped for some external reason, results in a trend towards the extinction of P. lessonae. A population composed only of P. esculentus individuals and P. ridibundus females cannot survive, because no gametes with the Y chromosome can be generated. The discussion which follows, regarding the three questions mentioned previously, is based on the trend towards extinction mentioned above, modulated, for example, by the accumulation of mutations or by the introduction of viable P. ridibundus males.
Figure: In the simulations we assume a "lessonae preference" for P. esculentus and P. lessonae females. The parameters are: σ = 0.4, μ = 10 −4 .
Deleterious mutations are necessary for the stability of L-E complexes
We showed in Section 'Is the role of deleterious mutations crucial for the stability of L-E complexes?' that by using the "lessonae preference" we essentially obtain the same results as [58,59]. If there are no viable P. ridibundus offspring, the system evolves towards stability regardless of the strength of the selection. The same complexes, with the same values of female preferences but without deleterious mutations on R genomes, will collapse irrespective of both the selection strength and the fitness of viable P. ridibundus. In the dynamics of the population towards collapse, there are roughly three phases. In the first phase, the effect of sexual selection and the abundance of P. lessonae males maintain a low number of P. ridibundus offspring. Although no mutations are present in the P. esculentus genome, only a few P. ridibundus females are generated, because mating between hybrids is rare due to female preferences. In the second phase, the number of hybrids increases because of their greater fitness resulting from heterosis, while the number of P. lessonae decreases. In this phase the increased number of hybrids facilitates matings which produce viable P. ridibundus females. In the third phase, P. ridibundus females act as sexual parasites of hybrids, and their number grows because of P. esculentus × P. esculentus and P. esculentus × P. ridibundus matings. Note that these matings produce only P. ridibundus females. Viable P. ridibundus females will mate, preferentially, with P. esculentus males (which have a more "ridibundus" phenotype), producing a larger population of P. ridibundus females in each further generation. In the model we do not consider male frogs as a limiting factor for reproduction, that is, one male is able to fertilize the eggs of an unlimited number of females. We also consider that males do not make any specific choice of females [45]. The number of P. esculentus decreases because of the reduced number of P. lessonae. This phase ends with a population of only P. ridibundus females, which quickly collapses (Figure 2). Consequently, if stability is maintained by female preferences, collapse can only be prevented if the initial population of P. esculentus is affected by deleterious mutations on the R genomes, which prevent the birth of viable P. ridibundus. Thus we can conclude that, in a system in which the parameters used for stability are essentially female preferences and the viability of P. ridibundus, deleterious mutations are necessary for the stability of the complex. These results help to clarify why natural L-E complexes generating viable P. ridibundus are extremely rare, and why in most cases the percentage of viable P. ridibundus is not significant [15]. It is difficult (perhaps impossible) for L-E complexes to persist if they generate viable P. ridibundus. Starting with an initial population in which all P. esculentus frogs carry deleterious mutations, the complex evolves towards a stable configuration (Figure 1). The possible collapse of the population occurs only if P. esculentus out-compete P. lessonae; however, this evolution seldom takes place because of the large number of P. lessonae in the initial population. The results show that, with the same selection strength, L-E complexes evolve towards the same percentages of the two frog species, whatever the initial percentages.
Figure 5 Results of typical simulations with an initial population composed of P. lessonae frogs and 10 mutation-free P. esculentus. In the simulations we assume a "lessonae-like preference" for P. esculentus and P. lessonae females. σ = 0.4, δ e = 0.4, μ = 10 −4 .
Our results regarding the stability of L-E complexes, both with deleterious mutations in all R genomes and with a "lessonae preference", are similar to those in [57][58][59]. We show that deleterious mutations strongly influence the stability of L-E complexes. Thus deleterious mutations are not only a secondary consequence of Muller's ratchet, but have an important role in the stability of complexes. In this paper we highlight that, if we consider both heterosis and female preferences as the only stabilizing forces of L-E complexes, the lack of deleterious mutations drives such populations to collapse. Thus neither female preferences nor deleterious mutations are sufficient to maintain the stability of L-E complexes; however, in this scenario, each is necessary for stability.
Female preference can contribute to obtaining stable L-E complexes
The above discussion highlights the important role of deleterious mutations in stable L-E complexes. But how can stable L-E complexes be obtained?
Our results show that, in order to reach stable L-E populations, there must be forces that drive the diffusion of deleterious mutations. Even with a fast mutation rate, 10 −4 , Muller's ratchet alone is not sufficient for diffusing the deleterious mutations in the population. Under the "lessonae preference", P. lessonae and P. esculentus females do not discriminate among P. esculentus males. Thus the diffusion of deleterious mutations among P. esculentus individuals and P. ridibundus females is not guided by female preferences, but only by the mutation rate (essentially by Muller's ratchet). In all our simulations this diffusion turns out to be very slow, so viable P. ridibundus females are generated before Muller's ratchet accumulates lethal mutations on all R genomes. This intermediate phase, with a sufficient number of viable P. ridibundus females, is responsible for the collapse of the whole system.
The selection of R genomes with a higher mutation accumulation is accelerated by the "lessonae-like preference". In this case the production of offspring with greater mutation accumulation on the R genome is favoured, and consequently the production of fit P. ridibundus females is lowered. To understand this process we need to consider that mutation accumulation on R genomes decreases the fitness of P. ridibundus females, but it does not affect the fitness of P. esculentus, in which dysfunctional R genomes are counterbalanced by "healthy" L genomes. Another important point is that a significant parameter in the diffusion of deleterious mutations is the fitness of P. ridibundus in the environment. If this fitness is too high, too many viable P. ridibundus females are produced before significant mutation accumulation occurs. We know that a high number of such viable females will lead the population to collapse.
For computational purposes we have considered only three stages of mutation accumulation on the R genomes. This is an approximation of Muller's ratchet, which, in most cases, operates through a huge number of mutations. We approximate Muller's ratchet by decreasing the mutation rate, so that a mutation in the model corresponds to many mutations in real genomes. Following the estimation in [76], we assume that, in a eukaryotic organism, the mutation rate over the whole genome during sexual reproduction is in the interval [3 × 10 −2 , 9 × 10 −1 ]. Many of these mutations are either not significant or not deleterious. Values of the mutation rate in the interval [10 −5 , 10 −4 ], used in our model, take both of the above considerations into account.
Our study differs significantly from the results of other authors with regard to the diffusion of mutations. The models in [58][59][60] provide an extensive insight into the reasons for the stability of L-E complexes, starting from the assumption that deleterious mutations are present in all the R genomes in the population. Our model builds on the previous ones by assuming sexual choices in the populations. However, it differs by considering a population in which deleterious mutations are not initially present, but are generated according to a mutation rate, and only when this accumulation reaches a given threshold does it become lethal. This leads us to conclude that sexual selection not only stabilizes the complexes, but can also drive mutation diffusion.
Note that our simulations do not enable us to prove the hypothesis suggested in [61]. Computational and mathematical models, without subsequent experimental support, can only be used to rule out incorrect hypotheses; they cannot prove correct ones. Computational and mathematical models can only state that a hypothesis is plausible. In the case of L-E complexes, the stabilization period is so long that no real experiment can support a hypothesis on its stabilization; however, our model suggests that the hypothesis in [61] regarding female preference could plausibly lead to stable L-E complexes.
Invasion of translocated P. ridibundus
Another point that we study with our model is the consequence of introducing P. ridibundus into stable L-E complexes. P. ridibundus can mate both with P. esculentus, producing P. ridibundus, and with P. lessonae (primary hybridization), producing P. esculentus. Primary hybrids can have low fertility rates [77], so their contribution to the dynamics of the population is low. In our model we take account of this low contribution by decreasing the possibility of producing primary hybrids. This is done by setting the female preferences of P. ridibundus in such a way that P. lessonae males are seldom chosen (the value of n in the best-of-n procedure is set to 30). On the other hand, the preference of P. lessonae females is mainly for males of their own species. We assign the same fitness both to the introduced P. ridibundus and to those generated by matings of P. ridibundus with P. esculentus. This is in accordance with the semi-natural experiments in [17]. However, in some simulations we use a P. ridibundus fitness which is lower than the fitness of the resident P. lessonae and P. esculentus, because we consider that a marshy environment with a low oxygen level, where P. lessonae and P. esculentus live, is less suitable for P. ridibundus.
The results show that, as predicted in [9], the introduced P. ridibundus often out-compete the other species, resulting in a mono-specific population, when their fitness is comparable to that of the resident population. Although the introduction of P. ridibundus results in new R hemiclones, which contribute to the genetic diversity of hybrids, our results do not support the hypothesis presented in [5] that this genetic diversity can stabilize the hybridogenetic system. If the fitness of P. ridibundus is competitive with the fitness of the resident population (δ e = 0.0), P. ridibundus males will survive and P. ridibundus will often replace the original population. Note that in this case we also have stable L-E-R complexes as an outcome. This is a system in which two independent populations coexist, an L-E complex and a P. ridibundus population. The L-E complex is stable due to female preferences and lethal mutations on the R genomes, while the P. ridibundus population is stable because of the absence of deleterious mutations, which are purged by selection. The two populations do not interbreed because the number of individuals in both is high enough to ensure that females of one population in most cases find a preferred male of the same population in the set randomly chosen by the best-of-n procedure.
The whole population collapses when the introduced frogs have a low fitness. In this case, P. ridibundus individuals will not survive for long, given their unfitness, but before their death P. ridibundus frogs can introduce mutation-free R genomes into the P. esculentus population, thus provoking the collapse of the complex.
Finally, by assuming that P. ridibundus are at a considerable disadvantage, the introduced unfit population is out-competed. During their short survival time, P. ridibundus females are not able to have a sufficient number of matings with P. esculentus males, thus they cannot introduce a sufficient number of R genomes without mutations into the P. esculentus population.
Conclusions
We have presented an individual-based computational model to study L-E water frog populations, i.e. complexes composed of P. lessonae and P. esculentus. The individual-based model considers not only the genotypes, but also the age of each individual and the average lifespan. In addition, female preferences (implemented by a best-of-n procedure) and ecological selection are considered.
|
v3-fos-license
|
2019-10-10T09:18:10.029Z
|
2019-10-04T00:00:00.000
|
208603494
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "BRONZE",
"oa_url": "https://www.ajtmh.org/downloadpdf/journals/tpmd/102/2/article-p403.pdf",
"pdf_hash": "476fe7879542a3ce7cd5a31ce02a1b3c14aaf6a7",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46122",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "20bb96cf85f63c495d1a5705304b94ac47f1aa9f",
"year": 2019
}
|
pes2o/s2orc
|
Heart Rate Variability as an Indicator of Autonomic Nervous System Disturbance in Tetanus
Abstract. Autonomic nervous system dysfunction (ANSD) is a significant cause of mortality in tetanus. Currently, diagnosis relies on nonspecific clinical signs. Heart rate variability (HRV) may indicate underlying autonomic nervous system activity and represents a potentially valuable noninvasive tool for ANSD diagnosis in tetanus. HRV was measured from three 5-minute electrocardiogram recordings during a 24-hour period in a cohort of patients with severe tetanus, all receiving mechanical ventilation. HRV measurements from all subjects, five with ANSD (Ablett Grade 4) and four patients without ANSD (Ablett Grade 3), were lower than reported ranges for healthy individuals. Comparing different severities of tetanus, both time and frequency domain measurements of HRV were reduced in those with ANSD compared with those without. Differences were statistically significant in all except the root mean square of successive differences, indicating HRV may be a valuable tool in ANSD diagnosis.
Tetanus is a severe disease characterized by toxin-mediated disinhibition of the autonomic and motor nervous systems. Motor neuron disinhibition causes characteristic muscle spasms, whereas autonomic nervous system disinhibition results in fluctuating blood pressure, tachycardia, and pyrexia. When mechanical ventilation is available, spasms can be controlled, but autonomic nervous system dysfunction (ANSD) remains a principal cause of mortality. 1,2 Robust methods of detecting ANSD suitable for implementation in resource-limited settings, where most tetanus occurs, would allow earlier intervention and may improve outcome. Diagnosis is currently based on nonspecific clinical signs of pyrexia, sweating, and increased or fluctuating heart rate and blood pressure. 3 Other methods include 24-hour collections of urinary catecholamines, but this has low specificity and is unsuitable for routine use. 4 In health, heart rate is carefully controlled by the autonomic nervous system. Alterations in parasympathetic and sympathetic nervous system activity result in beat-to-beat heart rate variation, and hence this variation (heart rate variability [HRV]) reflects autonomic nervous system activity. Heart rate variability is altered in pathological states, such as ischemic heart disease, and reduced variability is predictive of worse outcomes. 5 Standardized measures of HRV can be calculated from electrocardiogram (ECG) R-R intervals, and consensus guidelines on appropriate indicators are available. 5 Time domain variables are calculated directly from R-R intervals (termed normal-to-normal intervals), for example their SD. Frequency domain variables are generated from ECG spectral analysis, usually following fast Fourier transformation. 5 By observing changes in these components after administering autonomic nervous system antagonists, the relative contributions of the parasympathetic and sympathetic nervous systems have been inferred. Whereas the total power of the spectrum represents the general level of autonomic activation, low-frequency activity (< 0.15 Hz) is mainly due to baroreceptor reflex modulation and is related to both vagal and sympathetic influence, and high-frequency activity is mainly aligned with vagal activity. The low- to high-frequency ratio is accepted to indicate the balance between the two systems; however, this interpretation fails to take account of effects such as the different temporal patterns of the sympathetic and parasympathetic components and cardiac pacemaker sensitivity.
Heart rate variability changes in tetanus are largely unknown. Sykora et al. 6 analyzed baroreflex sensitivity and time domain variables in an 87-year-old woman with tetanus and reported decreased baroreceptor sensitivity compared with a control of similar age; however, the patient, but not control, received mechanical ventilation and a beta-blocker, both of which can influence sensitivity. Goto et al. 7 reported reduced frequency domain variables in an 11-year-old child; however, this recording was taken following a cardiac arrest and on the 122nd day of hospitalization, when clinical recovery from tetanus is normally expected.
Nevertheless, ANSD diagnosis and prognostication through HRV remains an attractive prospect because of its noninvasive nature. Hitherto, required monitoring equipment was rarely available in settings where most tetanus occurs, but growing availability of low-cost sensors means measurement is increasingly feasible in low-resource settings. 8 In this study, we aim to investigate the relationship of HRV and ANSD in patients with severe tetanus, providing proof-of-principle that such monitoring may be valuable.
The study was conducted in the Intensive Care Unit at the Hospital for Tropical Diseases, Ho Chi Minh City, between October 2016 and January 2017 and was approved by the Ethical Committee of the Hospital for Tropical Diseases. Written informed consent was given by all participants or representatives before enrollment.
Adults with severe tetanus (Ablett Grade 3 or 4) diagnosed according to the Hospital for Tropical Disease guidelines 9,10 and receiving mechanical ventilation were recruited to the study. Recruitment was pragmatic and depended on availability of suitable monitors. Ablett Grade 3 was defined as "severe spasms interfering with breathing" and Grade 4 as Grade 3 but with ANSD. 10 Autonomic nervous system dysfunction was diagnosed clinically by the attending physician but required the presence of at least three of the following within 12 hours: heart rate > 100 bpm, systolic blood pressure > 140 mmHg, blood pressure fluctuation with minimum mean arterial pressure < 60 mmHg, and temperature > 38°C without evidence of intercurrent infections.
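The clinical ANSD rule above can be illustrated as a simple check. This is not the authors' code; the function name, the treatment of the intercurrent-infection qualifier (attached here to the temperature sign only) and the assumption that the caller restricts observations to a single 12-hour window are interpretations made for this sketch.

```python
def meets_ansd_criteria(heart_rate, systolic_bp, min_map, temperature,
                        intercurrent_infection=False):
    """Illustrative check of the clinical ANSD definition quoted above: at least three of
    the four signs, with pyrexia counted only in the absence of intercurrent infection.
    The 12-hour observation window is assumed to be handled by the caller."""
    signs = [
        heart_rate > 100,                                   # tachycardia (bpm)
        systolic_bp > 140,                                  # systolic hypertension (mmHg)
        min_map < 60,                                       # BP fluctuation with low mean arterial pressure (mmHg)
        temperature > 38.0 and not intercurrent_infection,  # pyrexia (degrees C) without infection
    ]
    return sum(signs) >= 3

# Example: tachycardia, hypertension and fever without infection -> meets the definition
print(meets_ansd_criteria(heart_rate=120, systolic_bp=150, min_map=70, temperature=38.5))
```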
Tetanus management followed a standard protocol previously described, 11 consisting of antibiotics, and spasm control using benzodiazepines and pipecuronium. Autonomic nervous system dysfunction was managed principally with magnesium sulfate.
Electrocardiogram data were collected from bedside monitors (Datex; Datex Ohmeda Inc., GE Healthcare, Helsinki, Finland) in supine, undisturbed patients using VSCapture software. 12 Electrocardiogram, physiological, and clinical data were collected over a 24-hour period. Heart rate variability features were extracted from noise-free 5-minute recordings at 6 AM, 12 noon, and 6 PM to prevent bias from HRV diurnal variation. 13 Time domain variables measured were the square root of the mean squared differences of successive normal-to-normal intervals (RMSSD) and the SD of all normal-to-normal intervals (SDNN). Frequency domain variables were total power, high-frequency power (0.15-0.4 Hz), low-frequency power (0.05-0.15 Hz), low-frequency normalized units, high-frequency normalized units, and the low- to high-frequency ratio. Statistical analyses were performed using R statistical software version 3.5.1 (R Foundation for Statistical Computing, Vienna, Austria). Data are presented as mean (SD). Heart rate variability was compared between the two groups of tetanus severity using a linear mixed-effects model to correct for repeated measurements. A P-value < 0.05 was considered statistically significant.
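For readers unfamiliar with these indices, the sketch below shows one standard way of computing them from a series of normal-to-normal R-R intervals. The band edges follow those quoted in the text; the resampling rate, Welch settings and normalized-unit formulae are common defaults and are not taken from the authors' actual analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def hrv_measures(rr_ms, fs=4.0):
    """Basic HRV indices from normal-to-normal R-R intervals (in ms)."""
    rr = np.asarray(rr_ms, dtype=float)

    # Time domain
    sdnn = rr.std(ddof=1)                        # SD of all NN intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # root mean square of successive differences

    # Frequency domain: interpolate the unevenly spaced NN series onto a uniform grid
    t = np.cumsum(rr) / 1000.0                   # beat times in seconds
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = np.interp(t_uniform, t, rr)
    f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs,
                   nperseg=min(256, len(rr_uniform)))

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(pxx[mask], f[mask])

    lf, hf = band_power(0.05, 0.15), band_power(0.15, 0.40)
    total_power = band_power(0.0, 0.40)
    return {"SDNN": sdnn, "RMSSD": rmssd, "LF": lf, "HF": hf,
            "LF/HF": lf / hf if hf > 0 else np.nan,
            "LFnu": 100 * lf / (lf + hf), "HFnu": 100 * hf / (lf + hf),
            "total_power": total_power}
```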
Five patients with Ablett Grade 4 and five patients with Ablett Grade 3 tetanus were recruited to the study. Data from one patient with Grade 3 tetanus were too noisy for analysis and were therefore excluded. Clinical characteristics of the remaining nine patients are given in Table 1. Of these, 8/9 had three high-quality, noise-free 5-minute segments at the chosen time points. One patient with Ablett Grade 4 had only two suitable 5-minute segments, at 12 noon and 6 PM.
Heart rate variability data are presented in Figure 1 and Table 2. All HRV measurements were very low compared with reported ranges for healthy individuals, with low- to high-frequency ratios being significantly greater. 5 Comparing different severities of tetanus, both time (RMSSD and SDNN) and frequency (low frequency, high frequency, low-frequency normalized units, and total power) variables were reduced in those with ANSD (Ablett Grade 4) compared with those without. Differences were statistically significant in all except RMSSD (P = 0.09). Only high-frequency normalized units and low- to high-frequency ratios showed no difference between groups.
We present, to our knowledge, the first HRV measurements in a series of patients with tetanus. Our data show a consistent reduction in time and frequency domain variables compared with values reported in healthy subjects. These are particularly reduced in those with clinical signs of ANSD. This is consistent with HRV reported in other pathological states with high levels of sympathetic activation and with existing understanding of ANSD in tetanus.
Sympathetic activation in tetanus is associated with increased circulating catecholamines, which are increased in proportion to disease severity. 4 These may exert direct effects on the heart and vasculature and indirect effects through reflex reduction in vagal tone. The observed reduction in HRV variables in those with ANSD is consistent with sympathetic nervous system activation. Although the reduction in high-frequency power, suggesting a reduction in vagal tone, is expected, we also observed a reduction in low-frequency power, indicative of both sympathetic and parasympathetic activation. In cases of sympathetic activation, heart rate increases and total power is reduced and, as a result, the low-frequency component may actually decrease. 14 Similarly, at high levels of sympathetic stimulation, a "ceiling effect" may occur at the sinoatrial node when further response cannot occur. 14 A significant limitation to the interpretation of our data is that our patients were all receiving sedative drugs, which may influence HRV. Although sedation is not reported to affect HRV in critically ill patients 15 and subjects in both groups received similar sedative doses, it is possible that drugs were titrated against clinical effect. Magnesium sulfate was used almost exclusively in those with ANSD. Although we have previously shown its use in tetanus is associated with a reduction in urinary catecholamine excretion, 16 limited data in myocardial infarction suggest that it has limited effect on HRV. 17 A further limitation is that we used 5-minute recordings to measure time domain variables. Guidelines recommend that these should be measured from 24-hour recordings. 5,14 Nevertheless, our values are lower than reported 5-minute "normal" values, and our primary comparison was between severity groups. 5 Heart rate control in tetanus is undoubtedly complex and influenced by many factors not measured in this study. As such, we aimed only to demonstrate that variability is related to disease severity and that alterations in HRV may aid ANSD diagnosis in patients with tetanus. Currently, ANSD diagnosis is limited by poor specificity and may be difficult to distinguish from other causes of cardiovascular instability, such as infection, ischemia, or pain. Heart rate variability could, therefore, potentially be a more sensitive and specific way of identifying those with ANSD. The low HRV observed even in patients with Grade 3 tetanus may represent a clinically less apparent category of ANSD, but one where, nevertheless, intervention may be beneficial. Furthermore, HRV changes may be early predictors of subsequent ANSD and enable earlier intervention.
Table 1. ANSD = autonomic nervous system dysfunction. Figures given are mean (SD), except males, mechanical ventilation, and mortality, which are n (%). * Ablett Grade 3: severe tetanus with spasms compromising respiration. Ablett Grade 4 is as Grade 3 but with additional signs of autonomic nervous system dysfunction.
This article has focused on using established HRV measures; however, these are likely to be relatively blunt tools with which to decipher the complex underlying physiological mechanisms, and they rely on high-quality signals that are difficult to obtain in critically ill populations in resource-limited settings. However, obtaining high-quality data may become more feasible through either the increased availability of wearable devices or the adaptation of existing equipment. 18 As newer innovative methods for analyzing data are developed, for example artificial intelligence, more sensitive analyses could emerge, providing better insight into control mechanisms and disease pathophysiology.
|
v3-fos-license
|
2024-06-06T06:17:19.816Z
|
2024-06-01T00:00:00.000
|
270256706
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rspb.2023.2791",
"pdf_hash": "6b56dc7922c747cfc2ef047d41cf51963319dfe6",
"pdf_src": "RoyalSociety",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46127",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "904dde3e658cce34948159acd468176fe6543d8c",
"year": 2024
}
|
pes2o/s2orc
|
The diversity of social complexity in termites
Sociality underpins major evolutionary transitions and significantly influences the structure and function of complex ecosystems. Social insects, seen as the pinnacle of sociality, have traits like obligate sterility that are considered ‘master traits’, used as single phenotypic measures of this complexity. However, evidence is mounting that completely aligning both phenotypic and evolutionary social complexity, and having obligate sterility central to both, is erroneous. We hypothesize that obligate and functional sterility are insufficient in explaining the diversity of phenotypic social complexity in social insects. To test this, we explore the relative importance of these sterility traits in an understudied but diverse taxon: the termites. We compile the largest termite social complexity dataset to date, using specimen and literature data. We find that although functional and obligate sterility explain a significant proportion of variance, neither trait is an adequate singular proxy for the phenotypic social complexity of termites. Further, we show both traits have only a weak association with the other social complexity traits within termites. These findings have ramifications for our general comprehension of the frameworks of phenotypic and evolutionary social complexity, and their relationship with sterility.
Introduction
Life comes in a bewildering diversity of forms. One of the most striking and essential features of living organisms is their astonishing array of social interactions. Chromosomes cooperate with other chromosomes within cells; cells associate with other cells to form multicellular organisms; and individual insects live with other, related insects to form cohesive colonies [1][2][3][4][5]. The variety and depth of these cooperations can be thought of as social complexity, which forms a framework to understand phenotypic evolution and the ecology of communities [1][2][3][4][5].
There are several ways to describe, compare and understand variation in social complexity. Here, we focus on two: evolutionary and phenotypic views of social complexity. The evolutionary viewpoint (box 1) asks why social complexity exists at all and investigates the factors determining why individual units cooperate. Under an evolutionary social complexity framework, the diversity of life can be explained by a stepwise progression whereby solitary individuals (e.g. bacteria or mason bees) exist on their own, but can form social groups (e.g. slime moulds or honeybees). In turn, social groups can come together and form interdependent fitness-maximizing individuals (all units working together as a whole). These shifts from one level of sociality to another are termed major evolutionary transitions (e.g. as seen in red algae or mound-building termites) [1].
Crucially, this stepwise progression can be recursive, creating a nested hierarchy of cooperating units, all working as part of the highest-level individual (box 1) [3]. For instance, some social insects are highly integrated societies composed of many individuals, which are themselves made up of cooperating cells, which are in turn collections of cooperating genes. Therefore, we use the term 'evolutionary social complexity' to describe the degree to which a group has transitioned to a fitness-maximizing individual at a given level. These transition stages provide us with a valuable tool for understanding the intricacies of social behaviour and how it evolves.
An alternative perspective on social complexity is a phenotypic viewpoint. Phenotypic views of social complexity focus on how social organisms look and function now, rather than on how they have evolved per se. In this sense, phenotypic social complexity can be defined as the extent to which a system is made up of many specialized and interacting parts which come together to contribute to a function (box 1) [6].
Box 1. Glossary
Conflict. This refers to evolutionary conflict of interest between members of a social group over actions like reproduction. There can be potential conflict when there are differing inclusive fitness optima, such as workers having higher relatedness to their own offspring than to their siblings. This may not become actual conflict, however (for instance, when there is worker reproduction within a colony while the queen is present).
Fertile workers. Workers that can become dispersing primary reproductives that form their own colony, or replacement reproductives able to take over a nest if a parent dies.
Foraging termites (also referred to as separate-piece or multiple-piece nesters). Species that live in a well-defined nest from which workers, at some point in the colony cycle, will leave to forage. This means that colony longevity is not limited by the availability of food. All foraging termites have true workers, which can be thought of as having reduced potential and are therefore at least functionally sterile. These are found in the Mastotermitidae, Hodotermitidae, most Rhinotermitidae and all Termitidae.
Functional sterility. When the working unit in a group can become a reproductive only under extreme circumstances. For instance, in termites the apterous (working/somatic) and nymphal (reproductive/germline) lines have incomplete separation, where workers can still become reproductives if, for example, a parent dies, but are unable to become dispersing reproductives. Some species with functional sterility, like Mastotermes and Reticulitermes, have worker-derived reproductives in the nest while the primary reproductive is still present, but the ability of a worker to become a reproductive is still much reduced compared with species with fertile workers.
Individual. A collective that adheres to the conditions needed for a major evolutionary transition (i.e. a group whose lower units are interdependent and have aligned interests).
Major evolutionary transition. A change in the way that heritable information is stored and transmitted, concentrating on transitions that lead to a new form of individual. This requires two conditions to be met: (1) entities capable of independent replication before the transition can replicate only as part of a larger unit after it (i.e. interdependence of these units); and (2) there is a lack of within-group conflict such that the larger unit can be thought of as a fitness-maximizing individual in its own right (i.e. there is long-term alignment of interests). Also called evolutionary social complexity.
Phenotypic social complexity. The extent to which a system is made from many interacting parts all coming together to contribute to a function. This is a measure of phenotypic complexity within each transitionary level and can be explained via multiple traits, defined as follows.
Colony size (CS): the larger the colony, the lower the chance of individuals becoming reproductive, so they are selected to specialize instead; there are also more individuals in the first place to interact and do different jobs. Scored from 1 to 7 as the logged number of units within a colony for a given species.
Functional sterility (FS): this trait is important for separating termite species which have workers able to disperse to become primary reproductives (fertile workers/wood-dwelling) from those that cannot (functionally and obligately sterile workers/foraging).
Helper polyphenism (HM): the greater the number of morphs of workers and soldiers, the greater the number of specialized groups in a colony; 1-4 morphs have been measured in this study.
Nest complexity (NC): an extended phenotype signifying the complexity of behaviour (polyphenisms) required to create the nests around them, scored at three separate levels: (1) no structure, (2) subterranean structure, (3) above-ground structure.
Obligate sterility (OS): individuals are committed to their role as workers and therefore will be solely selected to be the most specialized and efficient workers they can be. Scored 0 or 1, where 1 is a species whose workers are unable to take over the colony or found their own, and 0 is anything else.
Soldier. Sterile altruistic caste which is generally morphologically and behaviourally specialized for defence.
Termite, higher. Made up of only termite species within the family Termitidae, which have only non-flagellate gut symbionts.
Termite, lower. All termites other than the Termitidae. They have both flagellate and non-flagellate gut symbionts.
True workers. Individuals that have diverged and are part of a separate wingless line.
Evolutionary social complexity. The extent to which a group has become a fitness-maximizing individual in its own right, with long-term alignment of interests and complete interdependence.
Wood-dwelling termites (formally, one-piece nesters). Species where a colony will live in a single piece of wood which serves as both food and nest source. Only the winged sexuals leave the nest, and when their only food source is exhausted the colony will die. Species within this life type are thought to have highly flexible development and false workers, which can also be described as fertile workers. These are found within the Termopsidae, Kalotermitidae and some species within the Rhinotermitidae.
Many of the phenotypic measures of phenotypic social complexity have been derived from the study of social insects. Four readily quantifiable phenotypic measures are colony size, helper polyphenism (i.e. the degree of polymorphism among helper individuals), nest complexity and worker sterility. Colony size quantifies the number of individuals within a colony [5][6][7][8][9][10][11][12][13][14]. The larger the colony of fertile workers, the less likely the individuals are to reproduce and the more likely they are to be selected to specialize within a colony instead. Further, larger colony sizes lead to there being more individuals to interact and perform different jobs and generate complex behaviours and extended phenotypes [6][7][8][9][10][11][12][13]. Helper polyphenism captures the distinct physical variation observed among the workers and soldiers of a colony. The presence of a diverse array of worker and soldier morphs within a colony enables the formation of specialized groups that can tackle various tasks, thereby enhancing the overall functioning and productivity of the colony [6][7][8][9][14,15]. Nest complexity signifies the repertoire of behaviours colonies use to create and manage their nests (colony centres).
Evolutionary and phenotypic views of social complexity are thought to be linked by the concept of sterility, due to it being an indicator of high complexity in both views. Within social insects, for instance, it is assumed that there is a positive relationship between worker sterility and all the other phenotypic social complexity traits, such that all social complexity traits can be conflated [5,[16][17][18][19][20][21][22][23][24][25]. This potential relationship leads to an assumption that worker sterility can act as a proxy measure for social complexity overall. Further, it has been proposed that obligate sterility is the prerequisite of a major evolutionary transition (i.e. evolutionary social complexity). Once a species has irreversibly gained sterility, it will be selected for greater complexity in all traits in a positive feedback process [5,26]. This is due to obligate sterility removing the potential for reproductive conflict within a colony and therefore allowing for complete interdependence and alignment of interests [3][4][5]. Therefore, obligate sterility is seen as the key trait aligning both phenotypic and evolutionary views of social complexity.
Despite the importance of sterility, however, there are two key issues that complicate its use and interpretation as a general proxy for social complexity at large. The first is that phenotypic social complexity is likely to be a more complex and multivariate concept than previously thought. For instance, variation in traditional phenotypic measures of social complexity, such as colony size, colony longevity and worker size variation, within different Hymenoptera taxa demonstrates that these measures do not always correlate with each other [6][7][8][27]. Furthermore, within bumblebees, some species can display high levels of phenotypic social complexity despite the presence of fertile workers [28]. The second issue is that sterility itself is complicated. For instance, some multicellular organisms can regenerate their germline (i.e. echinoderms) and therefore have functionally sterile somatic cells, rather than obligately sterile somatic cells. This casts doubt on the importance of complete early separation of germline and soma for obligate multicellularity and, therefore, for major evolutionary transitions [19,29,30]. Multicellular species that have a functionally sterile somatic cell line, rather than an obligately sterile one, show that although there is potential reproductive conflict, it has not prevented complete interdependence and alignment of reproductive interests [1,3]. These datapoints highlight that functional sterility, rather than obligate sterility, may be of more relevance for both evolutionary and phenotypic views of social complexity and major evolutionary transitions (in both multicellularity and social insects). We require a greater understanding of atypical developmental systems if we are to fully appreciate the role of sterility in generating or maintaining social complexity [1,19].
To explore the overall importance of functional and obligate sterility in explaining phenotypic and evolutionary social complexity, we must first examine them at the highest levels of sociality within developmentally atypical systems [31,32]. The hemimetabolous termites are an extremely useful system in which to do this. Termites are of huge ecological and economic importance across the globe [33][34][35][36] and display a wide range of social complexities, from simple colonies to complex agricultural societies. Traditionally, they are grouped into the 'higher' and 'lower' termites; this distinction separates those species which have flagellates in their guts (lower termites) from those that do not (higher termites) (figure 1). This separation has been claimed as a defining biological difference. In terms of social complexity, however, this distinction is not particularly useful, as both higher and lower termites exhibit a range of complex and simple societies [31,32,37,38]. Crucially, termites have a complex relationship with sterility, making them an ideal model taxon to investigate how this trait links to other measures of social complexity [37]. Some termites retain fully fertile workers (often called 'wood-dwellers' or one-piece/single-piece nesters [39]), some have functionally sterile workers which can become fertile if necessary ('foraging' termites or separate-piece nesters [39]), and some have fully sterile workers (also 'foraging' termites or separate-piece nesters [40]; see figure 1) [31,32,37,40].
Here, we use trait data from the large termite collection of the Natural History Museum, London, supplementing pre-existing data, to produce the most comprehensive phenotypic social complexity trait dataset for termites to date. By doing so, we test whether obligate sterility, functional sterility or neither is able to explain variation in termite phenotypic social complexity. We make use of the dataset to answer the following questions that arise from the problems discussed above: (1) Can either trait (functional or obligate sterility) be used as a singular proxy for phenotypic social complexity in termites? (2) Regardless of their proxy power, are there any significantly positive associations between functional and obligate sterility and the other social complexity traits? (3) Is obligate sterility reliable as a central concept allowing phenotypic and evolutionary social complexity to be seen inseparably? Is obligate sterility necessary for higher evolutionary social complexity?
Methods (a) Data collection
We collected morphological trait data from termites preserved in ethanol at the Natural History Museum, London. We used head width, hind femur length and front tibia length to predict the number of worker and soldier morphs in a species (helper polyphenism). Head width is used as a proxy of body size, and the two leg measures capture potential limb allometries related to foraging mode and task allocation [41]. We sampled from 300 species in total, with every termite genus in the collection represented, but with several genera, such as Macrotermes, having a greater representation to allow for their geographical spread and species richness. Where possible, 30 workers and at least 10 soldiers were sampled from each species, preferably with three individual workers and one soldier from 10 different colonies to prevent colony-level bias. We photographed each termite specimen twice to allow for digital measurements of their morphology. To do this, we placed individual termites on blobs of K-Y jelly (Thornton & Ross) within a Petri dish to maintain their posture and covered them with ethanol to prevent them from drying out. We then used an Axio Zoom.V16 (Zeiss) to automate the photographing of each individual. We oriented individuals for a profile and a dorsal photograph. In total, we took 18 900 photographs. We used ImageJ version 1.53a to measure the head width, hind femur length and hind tibia length directly [42]. Once these measures were collected, we used the clustering analysis tool DBscan in R version 4.2.1 [43,44] to identify the number of distinct groups of workers and soldiers within each species using the morphometric measures (see electronic supplementary material, S1). This is a measure of helper polyphenism. Within the DBscan function, MinPts (the minimum number of samples seen together that can be defined as a cluster) was set at 4 so that potential human error in measuring a single individual would not cause a mistaken morph number. The functions kNNdist and kNNdistplot, from within the DBscan package, were used to calculate the k-nearest neighbour distances and plot them to identify the most appropriate eps value. The eps value dictates how close points should be to each other to be considered part of a single cluster. To complement our estimates of helper polyphenism, we cross-referenced them with the existing literature and changed the estimates where there was greater evidence for a different estimate within the literature. The script (named DBscan.md) and the relevant data can be found in the GitHub repository (see data availability section).
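A minimal R sketch of this clustering step is given below; it assumes a per-species data frame `spp_workers` with three morphometric columns (the column names and the eps value are placeholders, not taken from the authors' script):

```r
library(dbscan)

# scale the three morphometric measures for one species (hypothetical column names)
m <- scale(spp_workers[, c("head_width", "femur_length", "tibia_length")])

# k-nearest-neighbour distances; the "knee" of this plot suggests a suitable eps
kNNdistplot(m, k = 4)

# cluster with minPts = 4, as described in the text; eps = 0.5 is purely illustrative
cl <- dbscan(m, eps = 0.5, minPts = 4)

# number of worker morphs = number of clusters, excluding noise points (cluster 0)
n_morphs <- length(setdiff(unique(cl$cluster), 0))
```

The same call would be repeated for soldiers, with the resulting morph counts then cross-checked against the literature as described above.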
The acquisition of data for obligate sterility (OS), functional sterility (FS), colony size (CS) and nest complexity (NC) involved a systematic search of Web of Science for each species that was measured from the museum collections. This search was expanded to genus level when there were no data at species level. In total, 99 of the 300 species photographed could be used for these analyses, as many had incomplete data relating to these traits.
Most colony size data were acquired from a single paper which has compiled the existing literature on this trait [45]. To process the data, we rounded the maximum colony size values to the nearest power of 10 and then applied the logarithm base 10 function to them. Much of the data on sterility was also previously acquired from studies compiling developmental plasticity and worker fertility data across the termites [37,46]. We define obligate sterility as a species which has workers unable to take over the colony or found their own under any circumstances (box 1). Functional sterility is defined as species which have workers able to become reproductives only under extreme circumstances, for instance where the primary reproductives have died (box 1). There are varying levels of reliability within the data: some are long-term observational data stating that no replacement reproductives were present, some were field-based colony orphaning experiments, and others laboratory-based colony orphaning [37,46]. We can only be totally certain that a species has actual obligate sterility when doing these in-depth colony orphaning experiments. These experiments help with understanding absolute developmental potential, not with understanding whether these species naturally produce replacement reproductives from workers in the field. The trait of functional sterility includes any species where the workers cannot become a primary founding reproductive, therefore including obligately sterile species. Nest complexity data were taken from the literature and defined as three separate levels: no structure, subterranean structure, and above-ground structure. Species which reside in wood make use of pre-existing structures, so they do not require as many building behaviours to create their nest, whereas the subterranean structures of soil-dwelling termites certainly require construction behaviours, as well as the related behaviours needed when creating their own nest and some defence. The creation of above-ground structures requires substantially more construction behaviours, as well as defence behaviours, due to the nest potentially being more vulnerable to predators. The complete trait data matrix for the 99 species can be found in the GitHub repository (see data availability section).
(b) Phylogeny
We estimated a termite phylogeny from 637 termite species and nine outgroup Cryptocercus cockroach species, created using PyPHLAWD [47]. This is an open-source Python package that creates molecular tree-building datasets from publicly available genetic data from GenBank [48] and NCBI BLAST [49], and uses a Markov clustering approach [50] to infer a RAxML tree [51]. This allowed us to combine DNA from many genes for all termites that had a genus and species name in GenBank. Recent termite phylogenies informed the constraints needed on the tree [52][53][54][55][56][57]: Mastotermitidae, Hodotermitidae, Stolotermitidae, Kalotermitidae, Serritermitidae, Stylotermitidae and Termitidae are monophyletic families; Rhinotermitidae and Archotermopsidae are not. The subfamilies Rhinotermitinae, Apicotermitinae, Foraminitermitinae, Cubitermitinae, Syntermitinae, Macrotermitinae and Nasutitermitinae form monophyletic groups; Heterotermitinae and Termitinae do not. Following tree reconstruction, 14 taxa that were known to be incorrectly placed, probably due to previous misidentification, were trimmed (figure 1). Species present in the tree but not in the data were then removed. Any remaining species missing from the tree but present in our data were manually added: either to an already present genus (48 sister species added), giving them the same branch length as their sister species, or, when no other member of the same genus was present, by making use of pre-existing termite phylogenies to place them (11 newly added genera). This was done using TreeGraph 2 [58]. Consequently, the final phylogeny we used for further analysis included 99 species [52][53][54][55][56][57]. Their branch lengths were equal to those of their single sister species, or an average of multiple closely related species already present in the tree (figure 2). The scripts tree_creation.md and Figure_2_creation.md (a script to visualize the phylogeny and trait data) and the relevant data can be found in the GitHub repository (see data availability section).
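To illustrate the grafting step, a hedged sketch using phytools::bind.tip is shown below; the file and species names are placeholders, and the authors' actual additions were performed in TreeGraph 2, so this is only a programmatic analogue:

```r
library(ape)
library(phytools)

tree <- read.tree("termite_backbone.tre")   # hypothetical file name

# graft a species absent from the tree next to a congeneric tip, giving both
# sister tips equal branch lengths from their new common node
sister <- "Macrotermes_bellicosus"          # placeholder congener already in the tree
tip_id <- which(tree$tip.label == sister)
blen   <- tree$edge.length[tree$edge[, 2] == tip_id]

tree2 <- bind.tip(tree,
                  tip.label   = "Macrotermes_subhyalinus",  # placeholder new species
                  where       = tip_id,
                  position    = blen / 2,   # attach halfway down the sister's branch
                  edge.length = blen / 2)
```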
(c) Statistical analysis
All analyses used R version 4.2.1 [44]. We investigated the ability of the functional and obligate sterility traits to explain the variance in the other social complexity traits, as a test of their proxy power, by running a phylogenetic MANOVA where functional and obligate sterility (binary traits) were each in turn the explanatory variable and the response variables were helper polyphenism, colony size and nest complexity (discrete traits). First, we used principal coordinate analyses (PCoAs) to summarize the discrete response data as three continuous axes of variation using the ape package [59]. These axes were used as response variables in a phylogenetic MANOVA, which required the phytools and vegan packages [60,61]. The script (named Phylogenetic_MANOVA.md) and the relevant data can be found in the GitHub repository (see data availability section).
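A sketch of this pipeline is given below. The PCoA step follows ape::pcoa as named in the text; because the exact phytools/vegan call for the MANOVA is not reproduced here, the final line uses geiger::aov.phylo purely as a comparable simulation-based stand-in (an assumption, not the authors' code), and the trait column names are hypothetical:

```r
library(ape)
library(geiger)

# distance matrix over the discrete response traits (hypothetical column names),
# then classical principal coordinate analysis; cluster::daisy() could be used
# instead for a Gower distance on mixed-type traits
d  <- dist(traits[, c("polyphenism", "colony_size", "nest_complexity")])
pc <- pcoa(d)
Y  <- pc$vectors[, 1:3]            # three continuous axes of variation
rownames(Y) <- rownames(traits)    # row names must match the tree tip labels

# simulation-based phylogenetic MANOVA with functional sterility as the predictor
fs  <- setNames(as.factor(traits$functional_sterility), rownames(traits))
fit <- aov.phylo(Y ~ fs, phy = tree, nsim = 1000)
```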
We also tested the associations between functional and obligate sterility and the other social complexity traits (nest complexity, colony size and helper polyphenism). To account for non-independence due to common ancestry and the discrete nature of the data, we used a Bayesian phylogenetic mixed model approach from the package MCMCglmm version 2.34 [62]. This package uses a Markov chain Monte Carlo (MCMC) estimation approach and places the phylogenetic relationships among species as a random variable to account for the non-independence of closely related species [63]. The number of iterations, thinning and burn-in period for each pairwise comparison was by default 100 000, 50 and 5000, respectively. We used a mixed model with a threshold distribution where functional and obligate sterility are the response variables and the predictors are helper polyphenism, colony size and nest complexity. A weakly informative Gelman prior was used for fixed effects and an inverse Wishart prior for random effects, fixing the residual variance to 1 as this cannot be estimated from binary data [64][65][66]. We ran multiple chains and tested for convergence using the gelman.plot function from the coda package [67]. We report the significance of a relationship using overlap of the upper and lower 95% CLs with 0. The script (named MCMCglmm.md) and the relevant data can be found in the GitHub repository (see data availability section).
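A hedged sketch of one such model (functional sterility as the response) is shown below; the data frame, column names and prior values are illustrative rather than the authors' exact settings, and `traits` is assumed to contain an `animal` column matching the tree's tip labels with the response coded as a two-level factor:

```r
library(MCMCglmm)

inv_phylo <- inverseA(tree, nodes = "TIPS", scale = TRUE)

prior <- list(
  B = list(mu = rep(0, 4), V = diag(4) * (1 + pi^2 / 3)),  # weakly informative fixed-effect prior
  G = list(G1 = list(V = 1, nu = 0.002)),                  # inverse-Wishart prior on the phylogenetic variance
  R = list(V = 1, fix = 1)                                 # residual variance fixed to 1 (binary response)
)

fit <- MCMCglmm(functional_sterility ~ polyphenism + colony_size + nest_complexity,
                random   = ~ animal,
                family   = "threshold",
                ginverse = list(animal = inv_phylo$Ainv),
                prior    = prior,
                data     = traits,                         # must include an 'animal' column
                nitt     = 100000, thin = 50, burnin = 5000)

summary(fit)
# convergence across repeated runs, e.g.:
# coda::gelman.plot(coda::mcmc.list(fit1$Sol, fit2$Sol, fit3$Sol))
```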
Results
We found that the functional sterility trait accounts for approximately 40% of the variation in the other social complexity traits (R2 = 0.404; table 1). However, obligate sterility explains much less, with only 13% of the variation in these social complexity traits explained (R2 = 0.131; table 1). Although neither can be seen as a sufficient singular proxy for understanding termites broadly, functional sterility does explain a significant proportion of variance in the other social complexity traits. Certainly, it explains much more than obligate sterility.
Genera that have high overall social complexity in all the traits include the Macrotermitinae and Syntermitinae (figure 2), whereas families with low overall social complexity include Kalotermitidae, Stolotermitidae and Archotermopsidae (figure 2).
There are examples of species (such as Trinervitermes bettonianus) which have a large colony size and high polyphenism but no obligately sterile workers. We also see species, such as those in the Apicotermitinae (figure 2), with small colony sizes and low polyphenism (due to their loss of the soldier caste) but workers with obligate sterility. We statistically explored the individual relationships of the different social complexity traits using MCMCglmm analysis. We found that functional sterility had a significantly positive relationship only with nest complexity, with 95% CLs not overlapping 0 (l-95%, 1.114; u-95%, 3.428) (figure 3 and table 2). Obligate sterility had no significant relationship with any of the other traits (figure 3 and table 2).
Discussion (a) Can sterility type explain termite phenotypic social complexity?
Here, we found that functional sterility explains a more significant proportion of variance in social complexity traits across termites than obligate sterility. However, it is clear that neither trait can be used as a single proxy to explain the diversity of all other social complexity traits (table 1). Instead, we argue that we should use a multivariate view of phenotypic social complexity. This is already gaining traction within the Hymenoptera literature, and clearly would be beneficial in general across the social insects and potentially across all forms of sociality [6][7][8][27]. The usefulness of complexity traits such as obligate sterility to represent the phenotypic social complexity of species is clearly exaggerated. It would be better to use functional sterility to explain the complexity of termites, but a more nuanced multivariate approach would capture the most variance. An overall phenotypic social complexity score for each species, based upon bringing together the complexity traits we have explored, may allow us to incorporate this greater detail. However, before this can be done in termites, traits such as colony longevity, queen-worker dimorphism and age polyethism will need to be more comprehensively collected. This will allow the truest picture possible of overall phenotypic social complexity [6][7][8][27]. Further exploration is needed to determine whether to assign different weights to the traits when developing this score [6][7][8][27]. It is likely that there will be no single approach for every question using these frameworks and data. Instead, traits may be weighted differently depending on the question of interest. But only through greater discourse are we able to decide the relative importance of each trait in evolutionary social complexity. Finally, to make our results more robust, we should try to gain more termite species trait data to reduce the problems of having only a relatively small number of independent transitions to obligate sterility.
(b) The relationships of functional and obligate sterility with other social complexity traits
Although our data suggest that the functional and obligate sterility traits cannot be used as a single proxy to explain the diversity of social complexity in termites, it could still be the case that they have significant relationships with individual traits. We found that this is only the case for nest complexity and functional sterility, however. The strong positive relationship we found between functional sterility and nest complexity is likely due, in part, to the first category of nest complexity being 'nests which have no structure', which is the case in all wood-dwelling species. This highlights the importance of reduced worker reproductive capacity in the evolution of more complex nesting capabilities. The lack of significance in the relationships between obligate sterility and the other phenotypic traits goes against several studies that assert that obligate sterility should cause an increase in complexity or that complexity is needed to achieve obligate sterility [5,[16][17][18][19]. We cannot completely rule out the possibility that the lack of association between obligate sterility and these other traits could be due to having so few examples of obligate sterility in the present study compared with species with functional sterility or fertile workers. Further, it may well be the case that the presence of obligate sterility is still important for evolutionary social complexity by being the major prerequisite for a major evolutionary transition. However, based on the current data we have at hand, the lack of any relationship with these traits means that it has no fundamental value for quantifying phenotypic social complexity in termites.
(c) Redefining the importance of sterility in phenotypic social complexity, and therefore questioning the inseparability of phenotypic and evolutionary social complexity
The difference in the importance of obligate sterility to phenotypic and evolutionary views of social complexity has already been highlighted [26]. It has been hypothesized that high phenotypic social complexity is not necessary for an increase in evolutionary social complexity (MET) to occur if obligate sterility is present [26]. This is an important and necessary step towards separating these two concepts. We clearly find that there is not a significant association between a species' level of overall phenotypic social complexity and their likelihood of having transitioned to a higher level of evolutionary social complexity, especially when this is defined by obligate sterility. Some species within the Apicotermitinae (figure 2), which have only the worker caste, small colony size and low nest complexity, have obligate sterility. It could be the case that they acquired obligate sterility while having low phenotypic social complexity. Alternatively, it could be that phenotypic complexity was high but was secondarily reduced after acquiring obligate sterility. Either way, this means there is no clear permanent positive relationship between phenotypic and evolutionary social complexity. We should instead view these as related but separate measures of complexity. By doing so, we are better able to incorporate species that do not conform to the rigid view that phenotypic and evolutionary social complexity fully align at every step. A colony that has a small group size and only one morph with little to no nest structure could still have interdependence and an alignment of interests and therefore be seen as a higher individual [3]. It seems likely that each trait has its own independent selective pressures which cause the changes in their respective complexities. Further analyses exploring the relative importance of biotic and abiotic factors in selecting for these social complexity traits could provide greater understanding of the adaptive reasons for species evolving traits such as worker reproductivity or high levels of polyphenism [14].
(d) Inclusivity of atypical organisms sheds light on evolutionary social complexity
We must be more inclusive to developmentally atypical organisms when outlining the prerequisites for a major evolutionary transition if we are to create a generalized framework for all life [1,19].This includes systems that seemingly harbour potential conflict, but which have found ways to prevent this from becoming actual conflict over evolutionary time while still allowing for increases in evolutionary social complexity [1,19].Within multicellular groups, the presence of units able to become germline or soma at any point in the group's life traditionally would prevent them from being defined at a higher level of individuality [19].However, it has been shown in some metazoan lineages that species which clearly have interdependence and aligned interests also do not always have complete segregation of germline and soma, i.e. have early separation of a germline [19].This is also the case in termites, which often have workers within highly complex colonies able to reproduce under extreme circumstances.We have shown in this study that obligate sterility is not an adequate singular proxy for phenotypic social complexity, nor does it correlate with any of the present social complexity traits within termites.We have outlined the need for decoupling phenotypic social complexity from evolutionary social complexity as a consequence.However, it may be necessary to go a step further; it may also be the case that obligate sterility is unnecessary to allow complete interdependence and long-term alignment of interests within the group.The closed nature of these systems with functional sterility (note that some wood-dwelling termites with fertile workers exhibit colony fusion [68]), whereby the replacement reproduction by a worker is only occurring under extreme circumstances like their parents dying, means there is less potential conflict compared with replacement worker reproduction seen in Hymenoptera [4].Consequently, the evolutionary retention of worker reproductivity could be a group level adaptation to variable environments, present in many atypical systems like the termites and basal metazoans.
Furthermore, the alternative strategies for interdependence and alignment of interests which are not reliant on obligate sterility, shown in developmentally modular organisms like plants, should also be incorporated into our frameworks [19,29,30]. It may be the case that the highly modular but extremely complex siphonophores, hydrozoans within the phylum Cnidaria, are another novel example of a higher individual [4,69,70]. They appear to have complete interdependence and aligned interests of multicellular replicated units (zooids) coming together for the higher individual. The more inclusive we are of these developmentally atypical organisms, the more inclusive we become of alternative strategies which can produce a higher individual. In the future, we must include as many clades as possible to understand the true spectrum of individuality [1]. Wood-dwelling termites have been shown to generally have the lowest levels of social complexity in all the traits we have explored in this study, but this is only in comparison with other termite species. Comparisons that include the full spectrum of sociality within the Blattodea may shed more light on this discussion.
Conclusion
Creating a framework where we can compare the phenotypic and evolutionary processes by which the complexity of life on earth has evolved is invaluable.However, such a unifying concept cannot be explained so simply if we are to include the diversity of all life.Here, we have shown that the diversity of phenotypic social complexity traits such as colony size, nest complexity and worker polyphenism cannot be explained fully by functional and obligate sterility traits.Therefore, instead of a singular proxy for phenotypic social complexity, we must use a multivariate approach to explain its true diversity within termites and more broadly across all sociality if we are to step closer towards this unifying concept.Furthermore, we find that there is a lack of significant association between either functional or obligate sterility with the other social complexity traits, so we should not conflate these traits.Consequently, we outline that phenotypic and evolutionary social complexity (based on individuality) are not necessarily fully in line but instead should be seen as distinct but interacting frameworks if we are to fully understand what is required to transition to higher individuality.By turning our sights to the developmentally atypical termites, we broaden this understanding, which allows us to find greater and more accurate parallels across major evolutionary transitions, such as that between termites and siphonophores as superorganisms and plants, and some metazoan lineages as multicellular organisms.
Figure 1 .
Figure 1. A cladogram showing the higher and lower termites as well as the major families within the termites and whether they are wood-dwelling (yellow) or foraging (blue). Cryptocercidae is the outgroup sister taxon to all termites. Taxa with asterisks are non-monophyletic.
Figure 2 .
Figure 2. Phylogeny with the 99 species of termites used in analyses and their status in each of the five complexity traits, standardized from 0 to 1, where yellow is 0 (low complexity) and blue is 1 (high complexity) in each trait: HM, helper polyphenism; NC, nest complexity; CS, colony size; FS, functional sterility; OS, obligate sterility. Abbreviated sub/family names: M, Mastotermitidae; S, Stolotermitidae; ARC, Archotermopsidae; H, Hodotermitidae; SYN, Syntermitinae. Photographs are from some of the specimens used in this study from NHM London.
Figure 3
Figure 3. Effect plots with explanatory variables on the x-axis and effect size on the y-axis. The mean effect size with 95% CIs is plotted, where an overlap with 0 signifies a lack of significance in the variable's relationship with the response variable (circle, obligate sterility; triangle, functional sterility). This shows results from all three chains for each relationship. Summaries of results for one chain, allowing for pMCMC values, are presented in table 2.
Table 1 .
Phylogenetic MANOVA. Helper polyphenism, colony size and nest complexity are the response variables for both explanatory variables.
Table 2 .
Summary results from an MCMCglmm chain. (a) Results where the response variable is functional sterility. (b) Results where the response variable is obligate sterility. ** indicates a pMCMC value below 0.01.
|
v3-fos-license
|
2016-03-14T22:51:50.573Z
|
2016-03-01T00:00:00.000
|
5070426
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/17/3/320/pdf?version=1456824299",
"pdf_hash": "b54cdfb6b14ad14e302623ef627fd4b30e3c963f",
"pdf_src": "Crawler",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46129",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b54cdfb6b14ad14e302623ef627fd4b30e3c963f",
"year": 2016
}
|
pes2o/s2orc
|
CD86+/CD206+, Diametrically Polarized Tumor-Associated Macrophages, Predict Hepatocellular Carcinoma Patient Prognosis
Tumor-associated macrophages (TAMs), the most abundant infiltrating immune cells in tumor microenvironment, have distinct functions in hepatocellular carcinoma (HCC) progression. CD68+ TAMs represent multiple polarized immune cells mainly containing CD86+ antitumoral M1 macrophages and CD206+ protumoral M2 macrophages. TAMs expression and density were assessed by immunohistochemical staining of CD68, CD86, and CD206 in tissue microarrays from 253 HCC patients. Clinicopathologic features and prognostic value of these markers were evaluated. We found that CD68+ TAMs were not associated with clinicopathologic characteristics and prognosis in HCC. Low presence of CD86+ TAMs and high presence of CD206+ TAMs were markedly correlated with aggressive tumor phenotypes, such as multiple tumor number and advanced tumor-node-metastasis (TNM) stage; and were associated with poor overall survival (OS) (p = 0.027 and p = 0.024, respectively) and increased time to recurrence (TTR) (p = 0.037 and p = 0.031, respectively). In addition, combined analysis of CD86 and CD206 provided a better indicator for OS (p = 0.011) and TTR (p = 0.024) in HCC than individual analysis of CD86 and CD206. Moreover, CD86+/CD206+ TAMs predictive model also had significant prognosis value in α-fetoprotein (AFP)-negative patients (OS: p = 0.002, TTR: p = 0.005). Thus, these results suggest that combined analysis of immune biomarkers CD86 and CD206 could be a promising HCC prognostic biomarker.
Introduction
With an increasing incidence rate, hepatocellular carcinoma (HCC) is ranked the second cause of cancer-related deaths around the world [1].Currently, surgical resection is the preferred treatment for HCC.However, large cohorts of HCC patients suffer from postoperative recurrence, and have a poor response to systemic chemotherapeutic treatments, with a five-year survival rate of only 30%-40% [2].Still worse, current clinicopathologic factors, such as α-fetoprotein (AFP), tumor-node-metastasis (TNM) stage, and Barcelona clinic liver cancer (BCLC) stage, cannot accurately predict the outcome of HCC patients.Novel prognostic markers need to be developed for more customized HCC treatment.HCC is a typical inflammation-related cancer.Chronic inflammation provides a favorable surrounding to facilitate HCC progression [3,4].Accumulating evidence indicates that tumor microenvironment plays a vital role in tumor progression and metastasis [5].HCC microenvironment could be rich resources for identifying novel powerful prognostic biomarkers.
Macrophage is a main cellular ingredient in human tumor microenvironment, and is commonly known as tumor-associated macrophages (TAMs) [6].Different types of macrophage have distinct functions in tumor progression.M1 macrophages, which activate tumor-killing mechanisms, as well as amplify Th1 immunocytes responses, provide a resistant role in tumorigenesis.On the other hand, M2 macrophages, via suppressing tumor-specific immune responses, mainly act to enhance tumor growth and metastasis [7].
Accumulating evidence indicates that CD68 was expressed in all macrophages, and is labeled as a pan-macrophage biomarker [8].However, CD68 cannot effectively distinguish between M1 and M2 subtype macrophages.Previous study implied that M1 macrophages expressed high level of CD86 and tumor necrosis factor α (TNF-α), while M2 macrophages expressed relatively high level of CD206, CD163 and IL-10 [9,10].Interestingly, Tan et al. reported that in HCC, M1 macrophages expressed increased level of CD86 relative to TNF-α and IL-12, while M2 macrophages expressed increased level of CD206 relative to IL-10, and transforming growth factor β (TGF-β) [11].Recent studies have demonstrated that TAMs were associated with HCC progression, and may act as a promising prognostic factor and therapeutic target [12,13].
In this study, we showed that CD68 + TAMs alone had no prognostic value in HCC patients, indicating that total macrophages had no impact on HCC prognosis.Low presence of CD86 + and high presence of CD206 + TAMs were clearly correlated with aggressive tumor phenotypes, such as multiple tumor number and advanced TNM stage; and were associated with a poor prognosis in survival and recurrence.Furthermore, combined analysis of CD86 and CD206 provided a better prognostic indicator for HCC patients than individual analysis of CD86 and CD206.Furthermore, CD86 + /CD206 + TAMs predictive model also showed strong prognosis value in AFP-negative patients.
Characterization of Tumor-Associated Macrophages in Hepatocellular Carcinoma (HCC) Patients
Immunohistochemistry was performed to assess the expression and presence of macrophages in tumor tissues from 253 HCC patients who had undergone curative resection.CD68, CD86, and CD206 positive staining were mainly located in the cytoplasm of macrophages (Figure 1).In tumor tissues, the number of CD68 positive cells (median, 67 cells/field) was higher than CD86 positive (median, 37 cells/field, p < 0.001) and CD206 positive cells (median, 33 cells/field, p < 0.001, Figure 2 and Table S1).
Figure 1. Immunohistochemical staining of CD68+, CD86+ and CD206+ macrophages in HCC tumor tissues: (A-C) low staining presence of CD68+, CD86+ and CD206+ macrophages; Case 32 (D-F) high staining presence of CD68+, CD86+ and CD206+ macrophages; Case 158 (G-I) high staining presence of CD68+ and CD86+ macrophages, but low staining presence of CD206+ macrophages; Case 92 (J-L) high staining presence of CD68+ and CD206+ macrophages, but low staining presence of CD86+ macrophages.
Association between Macrophage Markers Presence (CD68, CD86 and CD206) and Clinicopathologic Characteristics in HCC Patients
We next investigated the association between macrophage markers (CD68, CD86 and CD206) and patients' clinicopathologic characteristics. The 253 patients were divided into two groups (low and high) based on the median value of CD68, CD86, and CD206 staining cells, respectively. As summarized in Table 1, the CD68 positive staining count in tumor had no relationship with any clinicopathologic features. However, lower infiltration of CD86+ TAMs was associated with aggressive tumor phenotypes, such as multiple tumor number (p = 0.006), high-grade TNM stage (p = 0.001) and elevated alanine transaminase (ALT) (p = 0.020). Interestingly, higher infiltration of CD206+ TAMs was also positively correlated with multiple tumor number (p = 0.038), presence of vascular invasion (p = 0.011), appearance of tumor capsulation (p = 0.004), and advanced TNM stage (p = 0.005).
Analysis of Macrophages Immune Marker (CD68, CD86 and CD206) Prognostic Value in HCC Patients
We further investigated the clinical prognostic value of TAM markers in this cohort of 253 patients (Figure 2). CD68+ TAMs had no prognostic value in HCC patients (Figure 3A,D). Patients with low CD86+ TAM staining counts had a significantly shorter median overall survival (OS) and time to recurrence (TTR) (OS, 41.3 months; TTR, 36.3 months) than those with high staining counts (OS, 49.1 months, p = 0.027; TTR, 43.2 months, p = 0.037) (Figure 3B,E). Conversely, the low-presence CD206+ TAM group had a markedly longer median OS and TTR (OS, 46.2 months; TTR, 41.7 months) when compared with the high-presence group (OS, 40.1 months, p = 0.024; TTR, 34.0 months, p = 0.031) (Figure 3C,F). As summarized in Table 2, univariate analyses suggested that low CD86+ TAMs and high CD206+ TAMs were significantly associated with decreased survival and a high risk of recurrence in HCC patients after curative resection. Furthermore, multivariate Cox regression analysis, after backward stepwise variable selection, suggested that apart from tumor size, tumor differentiation, and vascular invasion, infiltration of CD86+ and CD206+ TAMs remained independent prognostic factors in HCC patients for both OS (HR = 2.178, p = 0.040 and HR = 1.584, p = 0.027) and TTR (HR = 1.810, p = 0.006 and HR = 1.872, p = 0.030). Collectively, these data implied that CD86 and CD206 were valuable prognostic biomarkers in HCC patients.
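A schematic R version of this type of analysis, using the survival package, is sketched below; the data frame and column names (`hcc`, `os_months`, `death`, and so on) are hypothetical and the covariate list is abridged, so this illustrates the general workflow rather than the authors' exact model:

```r
library(survival)

# median split of CD86+ (and, analogously, CD68+/CD206+) TAM counts into low/high groups
hcc$cd86_grp  <- ifelse(hcc$cd86_count  >= median(hcc$cd86_count),  "high", "low")
hcc$cd206_grp <- ifelse(hcc$cd206_count >= median(hcc$cd206_count), "high", "low")

# Kaplan-Meier curves and log-rank test for overall survival by CD86 group
km <- survfit(Surv(os_months, death) ~ cd86_grp, data = hcc)
plot(km, xlab = "Months", ylab = "Overall survival")
survdiff(Surv(os_months, death) ~ cd86_grp, data = hcc)

# multivariate Cox model; covariates abridged from the reported analysis
cox <- coxph(Surv(os_months, death) ~ cd86_grp + cd206_grp + tumor_size +
               differentiation + vascular_invasion, data = hcc)
summary(cox)   # hazard ratios with 95% confidence intervals
```

The same structure would be repeated with time to recurrence as the outcome.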
Integrated Analysis of Immune Markers CD86 and CD206 Provides More Powerful Prognostic Value in HCC Patients
As important tumor microenvironment components, antitumoral M1 and protumoral M2 immunophenotype macrophages both influence tumor development and progression. Thus, we hypothesized that combined analysis of CD86 and CD206 may better predict the prognosis of HCC patients by evaluating both M1 and M2 immunophenotype macrophages. Based on CD86+ and CD206+ TAM presence, patients were classified into four groups: Group I, CD86 high and CD206 low; Group II, CD86 low and CD206 low; Group III, CD86 high and CD206 high; and Group IV, CD86 low and CD206 high. The median OS for Groups I, II, III, and IV was 52.5, 43.0, 45.2 and 37.8 months, respectively (Figure 4A). The median TTR for Groups I, II, III, and IV was 47.4, 38.7, 37.4 and 31.9 months, respectively (Figure 4B). Significant differences in OS and TTR were found among the four groups (OS: p = 0.011, TTR: p = 0.024, Figure 4A,B). Collectively, these data evidently suggested that combined analysis of CD86 and CD206 served as a better indicator of survival and recurrence in HCC patients than analyzing individual factors.
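The four-group classification can be coded directly from the two median-split indicators; a brief sketch, reusing the hypothetical columns from the sketch above:

```r
# four CD86/CD206 combinations (Groups I-IV in the text)
hcc$tam_group <- interaction(hcc$cd86_grp, hcc$cd206_grp, sep = "/")

# log-rank comparison of overall survival across the four groups
survdiff(Surv(os_months, death) ~ tam_group, data = hcc)
```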
TAMs Predictive Model for α-Fetoprotein (AFP) Negative HCC Patients
Several clinical studies have demonstrated that preoperative serum AFP level is a promising predictor of HCC prognosis [14][15][16]. A low AFP level is generally associated with favorable prognosis. Nevertheless, some patients with negative AFP progress rapidly. Still worse, there is no reliable biomarker to differentiate the prognosis of AFP-negative HCC patients. To test the prognostic value of the TAM markers in AFP-negative patients (cut-off point 20 ng/mL) [17,18], 99 patients were selected from the abovementioned cohort. In this AFP-negative cohort, patients with CD86 low and CD206 high status had the worst OS and TTR (OS: 40.5 months, p = 0.002, TTR: 32.6 months, p = 0.005, Figure 5A,B) compared with the three other groups (Group I, CD86 high but CD206 low, OS: 57.1 months, TTR: 54.1 months; Group II, CD86 low and CD206 low, OS: 54.8 months, TTR: 48.8 months; Group III, CD86 high but CD206 high, OS: 52.8 months, TTR: 37.4 months).
Discussion
Tumor milieu consists of many cell types. Cross-talk between tumor cells and the ambient microenvironment plays a pivotal role in tumor progression and metastasis. In particular, the TAM is a major cellular component that infiltrates most tumors, establishing a bridge between tumor cells and the immune microenvironment [19]. Accumulating evidence indicates that the subtypes of macrophages, the M1 and M2 phenotypes, perform exactly opposite roles in tumor progression and metastasis [10,20,21]. M1 acts as a proinflammatory factor against microorganisms and tumor cells [22], while M2 plays an immunosuppressive role, promoting tissue repair and tumor progression [23]. In this study, CD68 was used as a pan-macrophage immune marker, while CD86 and CD206 were used as markers for M1 and M2, respectively. Other molecules, such as CD11c and CD163, are also expressed in M1 and M2 macrophages, respectively, but at a lower level compared with the former two in HCC. This makes CD86 and CD206 promising candidates as prognostic biomarkers compared with other macrophage biomarkers.
This study indicated that the presence of CD68+ TAMs had no impact on HCC prognosis. This may be attributable to the functional counterbalance regulated by M1 and M2 macrophages. Previous studies in HCC, colon cancer and gastric cancer implied that the abundance of CD68+ TAMs infiltrating tumor tissue was not associated with patient prognosis after curative cancer tissue resection [24][25][26]. However, other research reported that CD68+ TAMs in tumor stroma were an independent prognostic factor for poor OS and TTR in breast cancer, cholangiocarcinoma and Hodgkin lymphoma [27][28][29]. The relationship between CD68+ TAMs and tumor prognosis is therefore not clear-cut. Different CD68+ TAM polarization status may indicate different prognosis.
In contrast to CD68 + TAMs infiltration, the low presence of CD86 + TAMs and high presence of CD206 + TAMs markedly correlated with poor HCC prognosis.Although plentiful studies described CD86 and CD206 as a cell surface immune marker of M1 and M2 macrophage respectively, only a few reports emphasized the clinic significance of CD86 + and CD206 + TAMs in tumors.In colorectal cancer, the infiltration of CD86 + TAMs indicated a favorable prognosis [30].In multiple myeloma patients, CD86 + TAMs did not show correlation with tumor progression [31].In prostate adenocarcinoma, the high presence of CD206 + TAMs infiltration was associated with poor prognosis [32].Similarly, Xu et al. reported that CD206 + TAMs was a promising indicator for poor survival in renal cell carcinoma patients [33].In gastric cancer, infiltration of polarized CD206 + TAMs in tumor indicated poor survival after surgical resection [26].These results, as well as findings from this study, indicated that polarized M1 and M2 play a vital role in tumor prognosis.
Many reports have highlighted the prognostic value of individual M1 or M2 markers in cancers, but did not perform a combined analysis of M1 and M2. It should be noted that M1 and M2 polarization are the two ends of a macrophage polarization spectrum; most macrophages take on a mixed M1/M2 phenotype [34,35]. Combined analysis of M1/M2 phenotype TAMs therefore seems more appropriate in cancer patients. In the present study, patients from the CD86 low/CD206 low and CD86 high/CD206 high groups had intermediate OS and TTR, possibly attributable to the functional counterbalance regulated by CD86 and CD206. The CD86 high/CD206 low group implied an immune profile of M1 macrophage polarization and served as a favorable prognostic factor for OS and TTR. On the other hand, the CD86 low/CD206 high group showed an immune profile of M2 macrophage polarization and was associated with a poor HCC prognosis. Our results further emphasized the opposite functions of M1 and M2 macrophages in HCC prognosis.
Previous reports revealed that about 40% of early-stage HCC patients and 15%-20% of late-stage HCC patients are AFP-negative [36]. Generally, HCC patients with negative AFP are associated with a favorable prognosis. However, some AFP-negative patients progress rapidly, with poor prognosis. Thus far, there is no satisfactory prognostic factor for AFP-negative patients. It is urgent to develop a novel prognostic factor for patients with negative AFP. In our study, when applied to preoperative serum AFP-negative patients, CD86 low but CD206 high TAM status could effectively differentiate patients with poor prognosis. Thus, to some extent, this predictive model could be a powerful tool for making rational treatment decisions in AFP-negative patients.
In summary, this study indicated that the CD86 high/CD206 low group, implying M1 immunophenotype macrophages, and the CD86 low/CD206 high group, implying M2 immunophenotype macrophages, indicate favorable and poor prognosis of HCC, respectively. Combined analysis of CD86 and CD206 could therefore be a promising prognostic biomarker for HCC.
Figure 3 .
Figure 3. Kaplan-Meier curves for overall survival (A-C) and time to recurrence (D-F) of hepatocellular carcinoma (HCC) patients according to the staining presence of CD68 + , CD86 + and CD206 + macrophages in the cohort (n = 253).
Figure 4 .
Figure 4.Kaplan-Meier curves for overall survival (A) and time to recurrence (B) according to the comprehensive analysis of the staining presence of CD86 + and CD206 + macrophages in the above cohort (n = 253).Group I, high staining presence of CD86 + but low CD206 + macrophages; Group II, both low staining presence; Group III, both high staining presence; and Group IV, low staining presence of CD86 + but high CD206 + macrophages.
Table 1 .
Correlation between immunohistochemical variables and clinicopathologic features of HCC patients in the cohort (n = 253).
Table 2 .
Univariate and multivariate analysis of factors related to OS and TTR of HCC patients in the cohort (n = 253).
HCC, hepatocellular carcinoma; OS, overall survival; TTR, time to recurrence; HBsAg, hepatitis B surface antigen; HCVAb, hepatitis C virus antibody; AFP, alpha-fetoprotein; ALT, alanine transaminase; γ-GT, γ-glutamyltransferase; TNM, tumor-node-metastasis; HR, hazard ratio; CI, confidence interval; NA, not applicable; NS, not significant. Univariate analysis was performed by the Kaplan-Meier method (log-rank test). Multivariate analysis was calculated using the Cox multivariate proportional hazards regression model in a stepwise manner. a Patients were divided into four groups based on their staining densities of CD86 and CD206 positive TAMs: Group I, high expression of CD86 but low expression of CD206; Group II, both low expressions; Group III, both high expressions; and Group IV, low expression of CD86 but high expression of CD206; b Control group.
|
v3-fos-license
|
2021-01-18T14:25:58.740Z
|
2021-01-18T00:00:00.000
|
231629124
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2020.563475/pdf",
"pdf_hash": "28f14cf4ab2c97bd6fa9653f82cdb7b6b0c171dd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46130",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"sha1": "28f14cf4ab2c97bd6fa9653f82cdb7b6b0c171dd",
"year": 2020
}
|
pes2o/s2orc
|
Increased Reward-Related Activation in the Ventral Striatum During Stress Exposure Associated With Positive Affect in the Daily Life of Young Adults With a Family History of Depression. Preliminary Findings
Background: Being the offspring of a parent with major depression disorder (MDD) is a strong predictor for developing MDD. Blunted striatal responses to reward were identified in individuals with MDD and in asymptomatic individuals with family history of depression (FHD). Stress is a major etiological factor for MDD and was also reported to reduce the striatal responses to reward. The stress-reward interactions in FHD individuals has not been explored yet. Extending neuroimaging results into daily-life experience, self-reported ambulatory measures of positive affect (PA) were shown to be associated with striatal activation during reward processing. A reduction of self-reported PA in daily life is consistently reported in individuals with current MDD. Here, we aimed to test (1) whether increased family risk of depression is associated with blunted neural and self-reported reward responses. (2) the stress-reward interactions at the neural level. We expected a stronger reduction of reward-related striatal activation under stress in FHD individuals compared to HC. (3) the associations between fMRI and daily life self-reported data on reward and stress experiences, with a specific interest in the striatum as a crucial region for reward processing. Method: Participants were 16 asymptomatic young adults with FHD and 16 controls (HC). They performed the Fribourg Reward Task with and without stress induction, using event-related fMRI. We conducted whole-brain analyses comparing the two groups for the main effect of reward (rewarded > not-rewarded) during reward feedback in control (no-stress) and stress conditions. Beta weights extracted from significant activation in this contrast were correlated with self-reported PA and negative affect (NA) assessed over 1 week. Results: Under stress induction, the reward-related activation in the ventral striatum (VS) was higher in the FHD group than in the HC group. Unexpectedly, we did not find significant group differences in the self-reported daily life PA measures. During stress induction, VS reward-related activation correlated positively with PA in both groups and negatively with NA in the HC group. Conclusion: As expected, our results indicate that increased family risk of depression was associated with specific striatum reactivity to reward in a stress condition, and support previous findings that ventral striatal reward-related response is associated with PA. A new unexpected finding is the negative association between NA and reward-related ventral striatal activation in the HC group.
INTRODUCTION
Major depression disorder (MDD) is a leading cause of disability worldwide, and a research priority in mental health. Having a family history of depression (FHD) is a strong and consistent predictor of MDD development (1)(2)(3). In particular, the offspring of parents with MDD have a higher probability of experiencing poorer physical, psychological, or social health (4), as well as a two-to five-fold increased risk of experiencing an episode of MDD, and an increased risk of earlier onset of MDD (i.e., adolescence) (5).
Anhedonia, i.e., the reduced ability to enjoy once-pleasurable activities, is a core feature of MDD (6) that could be partially explained by blunted responses to reward at the neural level (7)(8)(9). Neural responses to reward are processed by a system of cortical and subcortical structures, including, among others, the striatum, the orbitofrontal and medio-prefrontal cortex, as well as the anterior cingulate gyrus, with the striatum, in particular the ventral striatum, being one crucial region involved in the anticipation, consumption, and learning from rewarding stimuli (10)(11)(12)(13)(14). The term ventral striatum was coined by Heimer (15) and encompasses the continuity between the nucleus accumbens and the ventral parts of the putamen and caudate, as well as the rostral internal capsule, the olfactory tubercle, and the rostrolateral part of the lateral olfactory tract in primates. In the context of reward, the ventral striatum includes the nucleus accumbens, the medial/ventral caudate nucleus, and the medial and ventral putamen (16). A large number of neuroimaging studies reported that individuals with MDD exhibit reduced reward-related activity in the ventral striatum (VS) (17)(18)(19)(20). Interestingly, a similar reduced VS activity in response to reward was also found in individuals with FHD before they have met the criteria for a first episode of MDD (21)(22)(23)(24). For instance, reduced striatal activation in response to monetary rewards was evidenced in asymptomatic adolescents and children of parents with MDD compared to age- and gender-matched control groups without FHD (25,26). Thus, blunted striatal response to reward has been postulated to be a potential endophenotype related to MDD (27).
A growing amount of evidence indicates that stress exposure and stress sensitivity are strongly associated with the onset of MDD (28)(29)(30)(31)(32). Stress experiences have been shown to affect striatal reward processing in the context of early-life stress, childhood emotional neglect (33,34), recent life stress (35), and experimental acute stress (36)(37)(38). In most cases, stress experiences reduced the activation of the striatum in response to reward. It has been hypothesized that an imbalance between stress and reward reactivity could be a predictor for the development of psychopathology in general (39,40) and for MDD in particular (9). In line with that hypothesis, a recent study indicated that reward responsiveness measured with event-related potentials had a moderator effect on the relationship between life-stress exposure and depressive symptoms in a large sample of young adults (41). Further findings showed that higher VS response to reward was associated with more reported positive affect (PA) in daily life (21,35,42), and supporting evidence suggests that this association could buffer the effect of stress sensitivity [e.g., (43,44)].
Combined findings from daily life measures and neuroimaging techniques, including functional Magnetic Resonance Imaging (fMRI) and positron emission tomography (PET scan), support the idea that dopaminergic activity in the VS related to reward response is associated with self-reported PA in daily life (21,45,46). The experience sampling method (ESM) is used to collect self-report measures at multiple points in time in natural settings. It offers the opportunity to capture daily life dynamics related to cognitive and affective experiences, including in individuals with MDD (47-49). PA and negative affect (NA) are traits related to the propensity to experience positive (e.g., happy, confident, joyful) or negative (e.g., sad, angry, ashamed, anxious, lonely) affective states (50) and can be measured with the ESM. PA and NA have been analyzed as both predictors and outcomes of mental health status (51,52). Whereas NA is commonly experienced in almost every mental health disorder (52), there has been an increasing interest in PA in terms of both its role in daily life and the neuroscientific understanding of psychopathology development and treatment, notably in MDD (21,45,51,(53)(54)(55)(56). In that context, Forbes et al. (21) showed that reduced reward-related striatal response in adolescents with MDD compared to healthy participants was associated with lower subjective PA in everyday life. In addition, the frequency of reported PA has been conceptualized as an indicator of reward reactivity in daily life (57). Therefore, recording PA in daily life in association with neural measures of reward and stress seems a promising way to investigate the effects of the stress-reward interaction on the development of MDD symptoms, in particular in vulnerable individuals. To our knowledge, one study has examined first-degree relatives of individuals with psychotic disorders (58), but none has investigated first-degree relatives of individuals with MDD.
Based on the above considerations, we propose here an innovative way to investigate the complexity of family risk of MDD by combining neuroimaging measures of reward processing with everyday life reward-related measures, using an ESM protocol in association with fMRI measurements. The aims of this study were: (1) To investigate whether increased family risk of depression is associated with blunted neural and self-reported reward responses. We expected lower neural and self-reported reward sensitivity in individuals with FHD in comparison to healthy controls (HC). (2) To test the stress-reward interactions at the neural level. We expected a stronger reduction of reward-related striatal activation under stress in FHD individuals compared to HC. (3) To explore associations between fMRI and daily life self-reported data on reward and stress experiences, with a specific interest in the striatum as a crucial region for reward processing. Based on the results of (21), we expected positive correlations between PA and reward-related striatal activation to be more accentuated in HC than in FHD participants, as well as negative correlations with NA and self-reported stress that would be more accentuated in the FHD group than in the HC group. We focused here on the striatum, in particular the VS, because (1) it is a crucial region in all phases of reward processing (12), (2) it is a region in which differences were reported in the reward-related neural activation between depressed and not-depressed participants (17,19) as well as between individuals with a family history of depression and controls (22,23), and (3) this region was reported to be correlated with positive emotions in everyday life (45). We focused on the reward-related activation during the outcome phase, because a recent meta-analysis indicated that differences in the reward-related striatal activation between depressed and control participants were mostly observed during the outcome phase (or reward delivery phase) (59) and because robust striatal differences between FHD and healthy participants have been evidenced in this phase in particular (27).
Participants
Sixteen asymptomatic first-degree relatives with a family history of MDD (FHD; 12 females, mean age = 24.31 years, SD = 4.08), and sixteen age-, gender- and socioeconomic status (SES)-matched healthy controls (HC; 12 females, mean age = 25.19 years, SD = 4.79) with no parental history of mental disorder were recruited from the local community by advertisement at the University of Fribourg. The participants of the control group were selected from a larger sample [see (36)] to match for age and gender the group of participants with increased family risk of depression. Participation was compensated with money and/or experimental hours credited toward study programs. The inclusion criteria were: age between 18 and 40 years; good health; good understanding of French; compliance with the study procedure; and, for the FHD group, having a first-degree relative with a diagnosed major depressive disorder (MDD), or, for the HC group, having no family history of mental disorder, as assessed with the Family Interview for Genetic Studies (FIGS) (60). General exclusion criteria were: current or past history of any mental disorder, as determined by the Mini International Neuropsychiatric Interview (MINI) (61); history of any endocrinological condition; history of any neurological condition, epilepsy or head injury; use of psychoactive substances, including alcohol (CAGE) (62), tobacco (Fagerström Test for Nicotine Dependence) (63), and cannabis (CAST) (64); being at risk for pathological gambling (Lie/bet) (65); non-removable metal elements in or on the body; pregnancy, which was confirmed by a urine test on the day of the scan; and being left-handed, as determined with the Edinburgh Handedness Inventory-short form (EHI) (66). Participants were mainly university students (FHD: 87%, HC: 81%) from the Swiss middle-class population. Table 1 shows that the groups did not differ significantly in socioeconomic status (SES). Depressive symptoms were assessed with the Beck depression inventory II (BDI-II) (69) and the Montgomery and Asberg depression rating scale (MADRS) (68), and state and trait anxiety were assessed with the Spielberger State-Trait Anxiety Inventory (STAI) (70). This study was approved by the local ethical review boards of the Vaud and Fribourg region (Commission cantonale d'éthique de la recherche sur l'être humain (CER-VD), Study Number 261/14) as well as that of the Bern region (Kantonale Ethikkommission Bern (KEK BE), Study Number 337/14). All participants provided written informed consent that conformed to the guidelines set out in the Declaration of Helsinki (2013).
Procedure
The first meeting included assessment of the inclusion/exclusion criteria. Participants then received detailed explanations of the ESM protocol and we planned the MRI session. ESM material included an iPod 5 Touch (Apple © ) with the iDialogPad (Mutz © ) app, for collecting real-time, self-reported data over seven consecutive days (from Monday to Sunday). This decision was made to enable participants to follow the more consistent rhythm of a standard week (71). An alarm was programmed to emit a signal ("beep") at four precise times during the day: 11:00 a.m. (T1), 2:00 p.m. (T2), 6:00 p.m. (T3), and 9:00 p.m. (T4). Participants self-reported their affective states and subjective stress 30 min after waking in the morning (T0). In most cases, ESM data collection started the week after the initial meeting and the scan session. A final clinical interview was conducted to ensure that participants finished without any outstanding questions or inconveniences related to their participation.
ESM Measurements
A total of 1,062 observations were collected, which represents a 95% participant compliance rate. The lowest individual participation was 25 self-reported observations (71%), which still satisfied the criteria for a representative sample of data (72). Affective states were rated by participants using statements that began with: "At the moment, emotionally I feel. . . ." These were rated on 7-point Likert scales (1 = Not true at all to 7 = Totally true). Items were selected from the PANAS-X (73) and from Wichers et al. (74). We included an additional item, "vulnerable," to reflect a negative low-dominance affective state. The items were "confident" and "happy" for positive affect (PA; α = 0.74) and "irritable," "alone," "angry," "depressed," "vulnerable," "ashamed," and "anxious" for negative affect (NA; α = 0.89). Subjective stress was rated by participants on a 10-point scale with the item "Now, I evaluate my stress at. . ." (0 = No stress to 9 = Extremely stressed) (75). Aggregated mean scores were computed as individual traits for subjective stress. Positive affect (PA) was computed as the mean score of the items "confident" and "happy," and then aggregated into a PA trait score. Negative affect (NA) was computed as the mean score of the items "irritable," "alone," "angry," "depressed," "vulnerable," and "anxious," and then aggregated into an NA trait score.
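To make the aggregation explicit, the sketch below shows one way the PA, NA, and subjective-stress trait scores described above could be computed from a long-format table of prompts. It is an illustration only, not the authors' script; the file name and the column names (participant_id, stress, and the individual affect items) are hypothetical.

```python
import pandas as pd

# Hypothetical long-format ESM data: one row per participant x prompt,
# with the 1-7 Likert ratings for each affect item and the 0-9 stress rating.
esm = pd.read_csv("esm_observations.csv")

PA_ITEMS = ["confident", "happy"]
NA_ITEMS = ["irritable", "alone", "angry", "depressed", "vulnerable", "anxious"]

# Momentary (state) scores: mean of the items at each prompt.
esm["pa_state"] = esm[PA_ITEMS].mean(axis=1)
esm["na_state"] = esm[NA_ITEMS].mean(axis=1)

# Trait scores: aggregate each participant's momentary scores over the 7 days.
traits = (
    esm.groupby("participant_id")[["pa_state", "na_state", "stress"]]
    .mean()
    .rename(columns={"pa_state": "PA_trait",
                     "na_state": "NA_trait",
                     "stress": "stress_trait"})
)
print(traits.head())
```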
The Fribourg Reward Task
The Fribourg Reward Task is a monetary incentive delay task that was previously shown to elicit striatal activation (36). Participants performed a spatial delayed recall task with two levels of cognitive load (low = 3 circles and high = 7 circles) differentiated by the number of circles to be remembered (see Figure 1). At the onset of each trial, a visual cue showed the level of cognitive load and the monetary reward associated with performance ("blank screen" = no reward or "$$" = reward). Participants then saw a fixation cross (500 ms), followed by an array of yellow circles (3 or 7 circles) (1,500 ms). A fixation cross was then displayed (3,000 ms) before the presentation of the target blue circle, which appeared at any position on the screen during 1,500 ms. With a response box in their right hand, participants responded "yes" or "no" to the question of whether this blue circle occupied a position previously occupied by yellow circles, and did so as quickly as possible. Participants had a maximum of 1,500 ms to respond. After that, a blank screen was displayed during a variable jittered inter-stimulus interval (ISI; 0 or 2,000 ms) and the feedback was displayed (1,000 ms): a "blank screen" for no reward or "1 CHF" for reward gain. A final display (1,000 ms) showed a blank screen or the "accumulated amount of gain." Every four trials, participants rated their mood and stress levels (max. 20 s). Task-related mood and stress were rated by participants on a 10-point Likert scale (0 = Emoticon with very negative mood and 9 = Emoticon with very positive mood), as was current stress (0 = "--" No stress and 9 = "++" Extremely stressed), all within a maximum of 20 s (see Figure 1). Correct responses were rewarded in the reward condition ("$$"), but not in the no-reward condition ("blank screen"). Each participant performed two distinct block sessions. In the second block, we added an experimental stress condition with six unpredictable mild electric shocks, previously adjusted to the participant's level of sensitivity. At the beginning of the second block, participants were informed that they would receive electrical shocks unrelated to the task and that they might receive electrical shocks at any time during the block. Before entering the scanner, every participant practiced the task to ensure a good understanding of it and answered questions. The task was implemented using E-Prime Professional (Version 2.0.10.353, Psychology Software Tools, Inc.). Stimuli were presented via goggles (VisualStimDigital MR-compatible video goggles; Resonance Technology Inc., Northridge, CA, USA) with a visual angle of 60°, a resolution of 800 × 600 pixels, and a 60 Hz refresh rate. In this current study, we considered only the reward (reward vs. no-reward) factor of the experiment in our analyses to test our a priori hypotheses.
Acute Experimental Stress Manipulation
We induced an acute stress condition in participants during the second block of our experimental design with an unpredictable mild electric shock on the external side of the left hand. The electrical shock intensity was calibrated to each participant before they entered the scanner with a standard shock workup procedure, starting at the lowest level and increasing the intensity until the participant identified an "aversive, but not painful" feeling (77). Electric shocks were induced through an electrical pain stimulator using the PsychLab © measuring system, with MRI-compatible electrodes and cables. The highest allowable shock intensity level was 5 mA (milliamperes).
MRI Data Acquisition
Magnetic resonance imaging (MRI) was performed at the Department of Diagnostic and Interventional Neuroradiology of the University Hospital of Bern, Switzerland. The functional MRI images were acquired using a Siemens (Erlangen, Germany) TrioTim syngo 3.0-Tesla whole-body scanner equipped with a radio frequency 32-channel head coil. MRI acquisition included functional (echo-planar) and structural sequences.
fMRI Data Analysis
fMRI data were analyzed using Statistical Parametric Mapping software (SPM12; https://www.fil.ion.ucl.ac.uk/spm/). The echo-planar images were realigned to the 37th volume, slice-timing corrected, coregistered to the structural MR image, spatially normalized to standard Montreal Neurological Institute (MNI) 152 coordinate space, resampled into 3 × 3 × 3 mm voxels, and smoothed with an isotropic 6-mm full-width half-maximum Gaussian kernel. Statistical analysis was performed within the framework of the general linear model. We considered only the reward delivery phase, as robust striatal differences between FHD and healthy participants have been evidenced in this phase in particular (27). Because the main focus of this article was on the relationship between neural activation and ESM measures, we focused our analyses on a specific contrast (reward vs. no reward during the reward feedback phase) to limit the number of analyses, in particular with respect to the small sample size. For this reason, we report here only the results related to the whole-brain and ROI analyses in response to reward during reward feedback and their association with the ESM measures. Other data related to this study and this sample have been reported elsewhere, in particular the results related to the anticipation phase (76). For each participant, four distinct events were modeled as separate regressors in an event-related manner for the duration of each phase: (a) trial cue (2,000 ms); (b) stimulus presentation (6,000 ms); (c) feedback (2,000 ms); and (d) mood and stress rating (20,000 ms). Subsequently, these regressors were convolved with the canonical hemodynamic response function implemented in SPM12. The six movement parameters (three translations and three rotations) obtained from the realignment procedure were also included in the model. We used a high-pass filter with a cut-off frequency of 1/128 Hz. Only trials with correct responses were analyzed. Statistical analyses of single-subject fMRI data were implemented using a general linear model (GLM) with a total of 20 regressors corresponding to the six movement parameters and the conditions, Stress (control/stress) × Load (high/low) × Reward (no/rewarded), across the four events. Note that only high-reward vs. not-rewarded trials were used in the analysis to increase contrast. A second-level (random-effects) model analysis was performed with independent t-tests for group analyses. Contrast maps were constructed for the main effect of Reward (high reward > not rewarded), Stress (no-stress vs. stress), and Load (high vs. low), as well as interaction effects for Reward × Stress, Reward × Load, and Stress × Load, for both the anticipation and feedback delivery phases. These contrast maps were used both for region of interest (ROI)-based statistical analyses and for whole-brain main effects analyses. For ROI-based analyses, a mask was created with the automated anatomical labeling (AAL2) template (78,79) for the bilateral caudate, putamen, and pallidum regions, with two added parcellations for the bilateral nucleus accumbens (Nacc), to create a mask of striatal regions typically involved in reward processing based on (16).
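The analysis itself was run in SPM12. Purely as an illustration of the modeling step described above, the sketch below builds task regressors by convolving boxcars of event onsets with a simplified double-gamma HRF and stacks them with motion regressors into a design matrix; the TR, scan count, onsets, and HRF parameters are assumed values, not those of the study.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                       # repetition time in seconds (assumed)
N_SCANS = 300                  # number of volumes (assumed)

def canonical_hrf(tr, duration=32.0):
    """Simplified double-gamma HRF sampled at the TR."""
    t = np.arange(0.0, duration, tr)
    peak = gamma.pdf(t, 6)             # positive lobe peaking around 5-6 s
    undershoot = gamma.pdf(t, 16)      # late undershoot
    hrf = peak - 0.35 * undershoot
    return hrf / hrf.sum()

def task_regressor(onsets_s, duration_s, tr, n_scans):
    """Boxcar for one event type (e.g., 2 s feedback) convolved with the HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets_s:
        start = int(round(onset / tr))
        stop = int(round((onset + duration_s) / tr)) + 1
        boxcar[start:stop] = 1.0
    return np.convolve(boxcar, canonical_hrf(tr), mode="full")[:n_scans]

# Hypothetical onsets (in seconds) for rewarded and not-rewarded feedback events.
reg_rewarded = task_regressor([20, 80, 150, 260], 2.0, TR, N_SCANS)
reg_not_rewarded = task_regressor([50, 110, 200, 300], 2.0, TR, N_SCANS)

# Design matrix: task regressors, 6 motion parameters (placeholders), intercept.
motion = np.zeros((N_SCANS, 6))        # would come from the realignment step
X = np.column_stack([reg_rewarded, reg_not_rewarded, motion, np.ones(N_SCANS)])
print(X.shape)                          # (300, 9)
```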
An alpha of 0.05 was used with correction for multiple non-independent comparisons using Gaussian random field theory (80) and suprathreshold cluster-size statistics (81). The initial voxel-level threshold for all analyses was set at p < 0.001, uncorrected. We used conservative whole-brain correction and kept clusters that reached significance after family-wise correction (FWC) at p < 0.05. Parameter estimates (beta weights) were extracted from coordinates that showed significant activation after FWC at p < 0.05, based on the average activation within the ROI using the MarsBaR toolbox (http://marsbar.sourceforge.net), and labeled according to the AAL2 atlas (78,79), for the main effect of Reward (i.e., reward condition vs. no reward condition) during the outcome phase in the control condition and in the stress condition.
To control the effects of the reward task, we performed a 2 × 2 × 2 × 2 repeated measures ANOVA including Group (FHD vs. HC) as the between-subject factor, and Stress (no- vs. threat-of-shock), Reward (no- vs. reward), and Load (high vs. low) as within-subject factors for response accuracy, reaction times (RT), and self-reported mood and stress scores during the task. Results were adjusted with Bonferroni correction for multiple comparisons. We expected faster RT, higher accuracy, and higher mood scores during reward, as well as an effect of stress on these variables. In particular, we expected higher self-reported stress scores during the stress condition.
Correlations with ESM measures were performed using the beta weights obtained for the contrast of interest and the self-reported means for PA, NA, and subjective stress over 7 days. We used SPSS (IBM SPSS Statistics, Version 25.0, Armonk, NY, USA) for descriptive analyses of the participants, independent t-tests, and χ2 analyses.
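As a concrete sketch of that correlation step (not the authors' script), the snippet below computes Spearman correlations between per-participant VS beta weights and the aggregated ESM trait scores; the arrays are random placeholders standing in for the real per-participant values.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 32                          # 16 FHD + 16 HC participants
vs_betas = rng.normal(size=n)   # placeholder: VS betas (rewarded > not rewarded, stress condition)
pa_trait = rng.normal(size=n)   # placeholder: aggregated daily-life PA
na_trait = rng.normal(size=n)   # placeholder: aggregated daily-life NA

rho_pa, p_pa = spearmanr(vs_betas, pa_trait)
rho_na, p_na = spearmanr(vs_betas, na_trait)
print(f"VS beta vs. PA: rho = {rho_pa:.2f}, p = {p_pa:.3f}")
print(f"VS beta vs. NA: rho = {rho_na:.2f}, p = {p_na:.3f}")
```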
Participants
The socio-demographic and clinical description of the participants is presented in Table 1. The FHD group did not differ significantly from the HC group in terms of gender, age, or socioeconomic status. Both groups were mainly composed of students (87 and 81%, respectively). The results of the semi-structured interview for depressive symptoms (MADRS) (68), as well as the self-reports for depressive symptom severity (BDI-II) (69) and for state and trait anxiety (STAI) (82), did not differ significantly between the FHD and HC groups. In each group, one participant reached a BDI-II (69) score above the clinical threshold. This was not the case for the MADRS (68) scores.
Our results showed that 44% reported currently living with the parent with the history of MDD. Nearly all participants (94%) had lived with their depressive parent. Parents with a history of MDD were mainly mothers (75%); one participant reported that both parents had a history of MDD. Table 2 presents the detailed results for the behavioral data analyses for the task.
Reaction Time and Accuracy
For RT, we found significant main effects of the Stress factor [F(1, 30); detailed statistics are reported in Table 2]. No significant group differences were found for RT or accuracy. [Table 2 legend: Results are corrected for multiple comparisons by applying a Bonferroni correction. Bold indicates two-tailed (p < 0.05) and one-tailed (p < 0.05/2) significant results. RT, reaction time; R, rewarded; NR, not rewarded; H, high; L, low. Partial eta squared (η2) values range from 0 to 1 and represent the proportion of total variance accounted for by the factor(s), while excluding other factors from the total explained variance (i.e., non-error variation) in the repeated measures ANOVA (83).]
Self-Reported Mood and Stress
For the self-reported mood scores, our results show a significant main effect of the Reward factor [F(1, 30); detailed statistics are reported in Table 2]. With regard to the stress ratings, we did not find any significant results.
ESM Protocol: Group Comparisons
Aggregated means and standard deviation of the daily life measurements are reported in Table 1. Results of the PA and NA mean score comparison between the FHD and HC groups showed no significant differences (p = 0.74 and 0.78 respectively). Similarly, no group difference was found for the reported daily life stress (p = 0.69).
fMRI Results
Table 3 presents the results of the whole-brain analyses in the contrast of interest. To control for the effect of the stress condition, we also report the regions activated in the main contrast comparing the stress vs. no-stress condition.
Striatal Activation During Feedback: Group Comparison
The whole-brain analysis for group comparison showed a significant difference in BOLD response in part of the VS, i.e., in the left putamen region between FHD and HC group during feedback delivery for the main effect of reward (reward vs. no reward condition in the control condition, see Table 3) at p < 0.005 FWE that remains significant in the stress condition, i.e., comparison of reward vs. no reward condition in the stress condition (see Figure 2).
VS Reward-Response Under Stress Association With ESM
Spearman correlations were performed between the beta parameter estimates extracted in the VS based on the striatal mask, whose peak activation was located in the ventral striatum around the left medial caudate (see Table 3), and the self-reported PA, NA, and subjective stress scores aggregated over the week. During stress induction, VS reward-related activation correlated positively with PA in both groups and negatively with NA in the HC group (see Figure 3).
Additional Regions Activated During Feedback
The whole-brain analysis for the main effect of reward showed significant differences in BOLD response in the comparison of the reward condition vs. the no-reward condition bilaterally in the occipital cortex, the anterior cingulate cortex, and the inferior frontal gyrus, as well as in the right parietal cortex, right middle cingulate gyrus, right middle and superior frontal gyrus, right periaqueductal area, right thalamus, right hippocampus, and in the left insula, left orbitofrontal cortex, and left cerebellum in the HC participants. In the FHD group, we found significant differences in BOLD response bilaterally in the anterior cingulate gyrus, the insula, and the parietal cortex, as well as in the right orbitofrontal cortex, right middle frontal gyrus, and left occipital cortex (see Table 3).
Regions Activated in Response to Stress
The whole-brain analysis for the main effect of stress showed significant differences in BOLD response in the comparison of the stress condition vs. the no-stress condition in the right superior parietal cortex, right lateral occipital cortex, right precuneus, and right caudate, as well as in the left superior frontal cortex and left insula in the healthy controls. In the FHD group, our results evidenced significant bilateral differences in BOLD responses in the parietal cortex, which was also significantly more activated in the group comparison.
DISCUSSION
To our knowledge, this may be the first study to report a significantly increased ventral striatal neural response to reward delivery received during stress exposure in individuals with FHD compared to healthy controls. These results run counter to our hypothesis and to previous findings on the blunting effect of stress on hedonic capacity (84)(85)(86). Another remarkable finding is the association of the observed ventral striatal activation with daily life measures of PA in FHD participants and healthy participants, as well as a negative correlation with daily life measures of NA that was significant only in the healthy control group.
Unexpectedly, there was no significant difference in the striatal activation during reward delivery between FHD and HC in the condition without stress. This differs from previous findings on blunted striatal responses to reward in high-risk individuals (24)(25)(26)(27). This could be related to a lack of power; the sample may have been too small to detect differences between the FHD and HC groups. However, McCabe et al. (22) did not report any difference in striatal response to reward between groups with high and low risk of MDD. A common factor shared by our study and McCabe et al.'s (22) previous research is the mean age of the sample, which is older in our study (above 20 years). Striatal development studies have shown an important change between childhood and early adulthood in healthy individuals (87) and in individuals with FHD (27). In addition, evidence demonstrates that neural response sensitivity to monetary and social reward changes across developmental stages (88). A further explanation could be related to the design, since participants might have been expecting the stress condition, and the condition without stress cannot be considered without taking into account the stress condition.
The increased sensitivity to reward outcomes during stress exposure for the FHD group compared to the HC group is consistent with a heuristic model of depression and the specific influence of stress on reward processing (9), as well as with psychobiological mechanisms of resilience and vulnerability (89). In our sample, the increased sensitivity to reward in the stress condition could be interpreted as a sign of a specific resilience marker in a brain region (i.e., the putamen) previously related to vulnerability to family risk of MDD (27). Putamen activation has been suggested to play a unique role in the intergenerational risk of depression, with evidence of an association between maternal and daughter putamen responses to anticipation of loss (90). Since we excluded participants with a previous history of mental disorder and since our sample was composed of young adults and not of adolescents, we might have included resilient individuals, i.e., individuals who had passed through the high-risk phase of adolescence without developing MDD or another psychopathology. This hypothesis is supported by the finding that the groups did not differ with regard to their subjective stress ratings, PA, and NA measures in everyday life. Thus, in our results, the increased VS response to reward delivery under stress could be a marker of a resilient profile. This interpretation should, however, be taken with caution due to the small sample of participants and because we did not use a longitudinal setting.
In line with that hypothesis, our significant association between increased ventral striatal reward reactivity and PA in daily life could be interpreted as a protective factor. Previous findings showed that the VS response to reward was associated with PA in daily life (35,91). A higher VS response to winning has been reported as a resilience marker in adolescent girls with unknown parental mental health histories (92). High sensitivity to reward experiences in daily life has been shown to increase resilience after environmental adversity (57). More PA after stress events has been shown to mediate the relation between sensitivity to reward and trait resilience (93). More broadly, increased reward response could buffer and blunt stress responses more quickly in a less predictable environment [for a review of reward pathways buffering stress, see (94)]. In that context, our unexpected finding that self-reported reward sensitivity (measured as PA) was not reduced in the FHD group could be associated with the hypothesis that we might have included resilient individuals, i.e., individuals who did not develop psychopathological problems during the high-risk period of adolescence. An addition to the existing literature comes from our finding of a significant negative correlation between daily life NA and ventral striatal activation to reward that was specific to the HC group. To our knowledge, no study has investigated the correlation between neural reward reaction and NA.
In addition to the results observed in striatal regions, we also found in both groups significant reward-related activations in regions that have been typically associated with the cerebral reward system (12), including the orbitofrontal and medio-prefrontal cortex and the anterior cingulate gyrus. Interestingly, our results also evidenced significant reward-related BOLD responses in the occipital and the parietal cortex. This is in line with previous studies showing, for instance, increased responses in the occipital cortex to rewarded tasks, especially in tasks involving visual attention (95). Activation in the parietal cortex was reported in response to reward tasks, in particular in tasks involving several levels of reward (96), as is the case in our task. However, we found no significant group difference in any of these regions, but regions of the parietal cortex were also significantly more activated in the stress condition, and this activation was also more accentuated in the FHD group than in the HC group. Increased activation in parietal regions in response to acute experimental stress has been documented in previous studies [for instance (97)] and interpreted as augmented cognitive control under stress conditions. This increased activation in regions associated with cognitive control could therefore also be associated with the observed better performance during the task (e.g., faster reaction times and increased accuracy) in the stress condition.
FIGURE 2 | Left ventral striatal (VS, i.e., putamen) region BOLD activation for the comparison of the FHD and HC groups during reward feedback in the stress condition for the contrast rewarded > not rewarded (p < 0.005 FWE). Parameter estimates (beta weights) were extracted from coordinates that showed significant activation after FWE correction at p < 0.05 in the ROI analyses for the main effect of reward.
Our study has some limitations. First, the small sample size of this preliminary study did not allow us to investigate participants' age in relation to parental onset of MDD, or to use years lived with depressed parents to predict striatal activation. Secondly, our design did not counterbalance the no-stress (control) and stress (unpredictable threat-of-shock) conditions. In that context, the observed stress main effect in reaction times and accuracy could reflect a learning effect rather than a stress effect. The lack of counterbalancing cannot, however, explain the lack of group difference in the condition without stress, since the same potential flaw was balanced out in the group comparison. Thirdly, our results did not evidence differences in stress ratings between the control and the stress conditions. This could be related to the small sample size, as results obtained in a larger associated sample evidenced significant stress rating differences between the conditions (36). In addition, the different levels of cognitive load could have induced stress and be a confounding factor. Fourthly, in both groups of participants, one participant evidenced BDI scores above the clinical threshold. This could indicate that we included participants with increased depressive symptomatology in both groups, or this could be related to a misunderstanding of some questions of the BDI-II, since no participant had MADRS scores above the clinical threshold and no participant fulfilled the depression criteria as determined by the MINI (61). Self-report questionnaires tend to overreport symptoms, and clinician-based measures are thus the gold standard. Fifthly, the fact that a blank screen was presented in the no-reward condition in the feedback phase did not allow us to control for brain activation related to the processing of salience, visual attention, and reading processes. Sixthly, the observed activation differences between the groups in the putamen were significant at a reduced threshold (p < 0.005). Seventhly, using average scores for the ESM data analysis might have obscured some important features of the experience sampling data. Measures of variability might have taken better advantage of the rich dataset and provided a better measure of emotional lability in everyday life. Finally, our results showed only associations, and a prospective design would be needed to enable the accumulation of causal and predictive evidence. Altogether, our results should be taken as preliminary and as a first step toward thinking about new pathways for studying the psychophysiological dynamics of reward processes within the laboratory and daily life environments.
FIGURE 3 | Graphical presentation of the statistical relationships between (A) mean positive affect and (B) mean negative affect, respectively, and left ventral striatal (VS) region BOLD activation during reward feedback in the stress condition for the contrast rewarded > not rewarded. Parameter estimates (beta weights) were extracted from coordinates that showed significant activation after FWC at p < 0.05 in the ROI analyses with peak activation in the caudate. Results are presented for the entire group, the FHD group, and the HC group. r, Spearman correlation coefficient; n.s., not significant.
CONCLUSION
Our results indicate that an increased family risk of depression was associated with specific striatum reactivity to reward in a stress condition. This is in line with previous studies showing atypical responses to reward in individuals at risk of depression. This finding extends the literature by investigating the stress-reward interaction in these individuals. Our results support previous findings that the ventral striatal reward-related response is associated with PA in daily life (46). A new finding is the negative association between NA in daily life and reward-related ventral striatal activation that was observed in the HC group but not in the FHD participants. Due to the small sample size, these results must be considered preliminary. We suggest that our integrative approach might be a promising way to tackle subtle processes and differences in the field of vulnerability research.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by commission d'éthique du Canton de Vaud. The patients/participants provided their written informed consent to participate in this study.
|
v3-fos-license
|
2024-05-26T15:09:51.129Z
|
2024-05-24T00:00:00.000
|
270016096
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.spiedigitallibrary.org/journals/advanced-photonics-nexus/volume-3/issue-4/044001/Silicon-thermo-optic-phase-shifters--a-review-of-configurations/10.1117/1.APN.3.4.044001.pdf",
"pdf_hash": "fba9b6b191f161429a44b5934a0b8c9f5146f437",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46131",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"sha1": "1ef04eef478ed6dd310c43c37a7c5c4356534d3e",
"year": 2024
}
|
pes2o/s2orc
|
Silicon thermo-optic phase shifters: a review of configurations and optimization strategies
Silicon photonics (SiPh) has emerged as the predominant platform across a wide range of integrated photonics applications, encompassing not only mainstream fields such as optical communications and microwave signal processing but also burgeoning areas such as artificial intelligence and quantum processing. A vital component in most SiPh applications is the optical phase shifter, which is essential for varying the phase of light with minimal optical loss. Historically, SiPh phase shifters have primarily utilized the thermo-optic coefficient of silicon for their operation. Thermo-optic phase shifters (TOPSs) offer significant advantages, including excellent compatibility with complementary metal-oxide-semiconductor technology and the potential for negligible optical loss, making them highly scalable. However, the inherent heating mechanism of TOPSs renders them power-hungry and slow, which is a drawback for many applications. We thoroughly examine the principal configurations and optimization strategies that have been proposed for achieving energy-efficient and fast TOPSs. Furthermore, we compare TOPSs with other electro-optic mechanisms and technologies poised to revolutionize phase shifter development on the SiPh platform.
Introduction
The use of silicon photonics (SiPh) [1][2][3][4] has witnessed exponential growth over the past decade. This increase is driven by the relentless and explosive expansion of consumer data, the necessity for real-time processing of wideband signals, and the significant energy demands of the data center industry, which consumed between 1% and 5% of global power in 2020. 5 Photonic integrated circuits (PICs) present effective solutions to these challenges, offering solutions where there is a demand for energy efficiency and high computational throughput in disruptive technologies, including optical communications transceivers, 6,7 lidar systems, 8 quantum optics devices, 9 and optical sensors. 10 In addition, emerging computing architectures for artificial intelligence and neuromorphic computing, leveraging SiPh, have shown numerous benefits, such as multiwavelength capabilities, ultrahigh speeds, and low power consumption, that address the limitations of complexity, cost, and footprint associated with traditional electronic computing components. 11 Both mainstream and emerging applications necessitate the development of highly complex PICs that incorporate an extensive library of on-chip components such as (de)multiplexers, phase shifters, modulators, laser sources, photodetectors, and fiber-to-chip couplers. Among these, phase shifters stand out as a pivotal component in most PICs, enabling the manipulation of the real part of the effective refractive index with minimal (ideally zero) alteration to the imaginary part. The demand for components that combine ultralow optical loss with a compact footprint is critical for ensuring the scalability of advanced PICs and meeting the rigorous requirements of emerging applications. In this context, silicon thermo-optic phase shifters (TOPSs) have emerged as the prevalent method. TOPSs utilize the variation of the refractive index of silicon (where light is predominantly confined) with temperature. Silicon TOPSs have become the cornerstone for the development of sophisticated PICs, showcasing the vast potential of SiPh technology across various application domains. Notable examples include optical reconfigurable and multipurpose photonic circuits, 9,12 phased arrays for lidar systems, 13 optical neural networks, 14 and Fourier transforming for optical spectrometry, 15 with demonstrators integrating from ∼50 to 176 TOPSs. 9 However, the intrinsic heating mechanism of TOPSs often results in high power consumption and slow operation. As a consequence, various optimization configurations and strategies have been proposed to enhance power efficiency, switching speed, or both, making the topic of TOPSs a blooming area of research over the past decade.
In this review, we explore the configurations and optimization strategies that have been proposed for TOPSs in SiPh. Our discussion begins with an examination of the fundamental principles underlying thermo-optic tuning in silicon waveguides, along with basic design guidelines and the trade-offs required for achieving optimal performance. Subsequently, we delve into the advancements in various TOPS technologies, highlighting developments in metallic heaters, transparent heaters, doped silicon, folded waveguide structures, and multipass waveguide configurations. Finally, TOPSs are compared with alternative technologies, providing a comparative analysis. A concluding section is dedicated to discussing prospective technological advancements and the future outlook for TOPSs in SiPh.
Fundamentals
Thermo-optic phase tuning in silicon waveguides is achieved by applying localized heat and exploiting the large thermo-optic coefficient of silicon, ∼1.8-1.9 × 10^-4 K^-1. 16,17 It is important to note that for devices utilizing SiO2 as the waveguide cladding, the thermo-optic effect of SiO2 is typically disregarded since it is an order of magnitude lower than that of silicon, ∼9 × 10^-6 K^-1. 18 The phase shift variation Δϕ in a waveguide can be expressed as

Δϕ = (2π/λ) Δn_eff L, (1)

where λ is the wavelength, Δn_eff is the variation in the effective refractive index, and L is the path length. When the phase shift is induced by a change in the waveguide temperature, it is described by

Δϕ = (2π/λ) (∂n_eff/∂T) ΔT L, (2)

where ∂n_eff/∂T is the thermo-optic coefficient of the optical mode, and ΔT represents the temperature increase. According to joule heating, the temperature increase is directly proportional to the power consumed by the microheater, denoted as ΔT ∝ P_elec. Consequently, the power consumption of TOPSs, specifically the power required to induce a phase shift of π (P_π), can be formulated as

P_π = λ / (2 L ∂n_eff/∂P_elec), (3)

where ∂n_eff/∂P_elec represents the variation of the effective refractive index with the electrical power applied to the microheater. For TOPSs that are invariant in the propagation direction, such variation is inversely proportional to the active length of the heater.
Hence, in Eq. (3), the value of P_π does not significantly vary with the length of the phase shifter. This implies that the same phase shift can be achieved using either short but intensely heated active heaters or longer but mildly driven heaters, with the electrical power required to reach the desired phase shift remaining constant. However, if the phase shifter architecture is designed to vary along the direction of light propagation, it is possible to disrupt this relationship and achieve higher thermo-optic efficiencies while maintaining the same active footprint.
To assess the performance of TOPSs, the following figure of merit (FOM) is commonly employed and aimed to be minimized:

FOM = P_π × τ, (4)

where P_π represents the power required to induce a phase shift of π, typically expressed in milliwatts (mW), and τ denotes the switching time, measured in microseconds (μs). On the other hand, to experimentally determine the performance metrics of the phase shifters, these devices are often integrated into interferometric structures, such as Mach-Zehnder interferometers (MZIs), microring resonators (MRRs), or multimode interferometers (MMIs).
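Since the FOM is simply the product of the two quantities just defined, it can be evaluated directly from reported device numbers. The short sketch below does this for a few of the devices discussed later in this section; the third entry's FOM is not quoted explicitly in the text and is just the computed product.

```python
def fom_mw_us(p_pi_mw: float, tau_us: float) -> float:
    """Thermo-optic phase shifter figure of merit: P_pi x tau in mW*us (lower is better)."""
    return p_pi_mw * tau_us

# (P_pi in mW, tau in us) for devices discussed later in this section.
devices = {
    "Cr/Au heater MZI switch (Espinola et al.)": (50.0, 3.5),   # FOM = 175 mW*us
    "NiSi side heaters on a rib waveguide":      (20.0, 3.0),   # FOM = 60 mW*us
    "Heater directly on a silicon microdisk":    (12.0, 2.9),   # FOM ~ 35 mW*us
}
for name, (p_pi, tau) in devices.items():
    print(f"{name}: FOM = {fom_mw_us(p_pi, tau):.0f} mW*us")
```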
Basic Configurations
The fundamental design of a TOPS typically involves a straight silicon waveguide accompanied by a parallel heater, resulting in a device that is invariant along the propagation direction. The heater is constructed from an electrically conductive material, designed to allow the flow of an electrical current and consequently generate joule heating, described by the equation P_elec = I_h^2 R_h, where I_h represents the current flowing through the heater, and R_h denotes the resistance of the heater. In addition, an alternative approach to heater design involves doping the silicon waveguide itself, thereby enabling the waveguide to function as the heater by facilitating electrical conductivity and heat generation directly within the silicon.
In the context of a propagation-invariant configuration for TOPSs, the power consumption can be analytically approximated, as detailed by Jacques et al.,19 by an expression [Eq. (5)] in which G represents the thermal conductance between the heated waveguide and the surrounding materials, and A denotes the area through which the heat flow occurs. Similarly, an analytical expression for the switching speed, τ, can be derived [Eq. (6)], indicating its dependence on the thermal properties and geometry of the system,19 in which H, the heat capacity of the heated waveguide, is proportional to the product of the area, A, and the length, L, of the waveguide (H ∝ AL).
To minimize power consumption in TOPSs, it is crucial to surround the waveguide with materials of low thermal conductivity and to minimize the distance between the waveguide and the heater. However, reducing the distance between the heater and the waveguide often results in a trade-off, as it may increase optical loss due to heater absorption. Conversely, using materials with low thermal conductivity can indeed reduce power consumption, but at the cost of slower switching speed. Therefore, unless the gap between the heater and the waveguide is diminished, a distinct trade-off between power consumption and switching speed exists. According to Eqs. (5) and (6), one potential strategy to achieve faster switching speeds without escalating power consumption involves decreasing the heat capacity of the waveguide, which suggests the use of shorter active lengths. However, this approach entails challenges. By analyzing Eq. (2), it is evident that L_π ∝ 1/ΔT_π. In this regard, opting for short heater lengths can give rise to critical temperature values. High temperatures can compromise the performance of the heater because of the self-heating phenomenon produced by the increase of the heater resistance with temperature. 18 Therefore, the actual temperature increase is lower than expected assuming a constant heater resistance, thereby yielding a different phase shift. In addition, employing such compact phase shifters increases the susceptibility of adjacent structures to thermal cross talk, potentially affecting the overall device performance.
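The trade-off described above can be made concrete with a minimal lumped-element sketch. It treats the phase shifter as a single thermal conductance and heat capacity, so it is only a rough stand-in for the analysis of Jacques et al., not a reproduction of Eqs. (5) and (6), and all numerical values are assumptions chosen for illustration: lowering the total thermal conductance reduces the power needed for a π shift but increases the thermal time constant.

```python
import numpy as np

# Lumped thermal model of a propagation-invariant TOPS (illustrative values only).
wavelength = 1.55e-6      # m
dneff_dT = 1.8e-4         # 1/K, approximate thermo-optic coefficient of the mode
length = 100e-6           # m, active length
H = 2e-9                  # J/K, assumed heat capacity of the heated region

dT_pi = wavelength / (2 * length * dneff_dT)   # temperature rise needed for a pi shift

# Sweep the total thermal conductance to the surroundings (W/K).
for G in np.logspace(-5, -3, 5):
    P_pi = dT_pi * G          # steady-state electrical power for a pi shift
    tau = H / G               # thermal time constant (switching-speed proxy)
    print(f"G = {G:.1e} W/K  ->  P_pi = {P_pi*1e3:6.2f} mW, "
          f"tau = {tau*1e6:6.2f} us, FOM = {P_pi*1e3*tau*1e6:6.1f} mW*us")
```

In this toy model the product P_π·τ is fixed by ΔT_π and the heat capacity alone, which is one way to see why the strategies reviewed below target the heated volume and the heater-waveguide gap rather than the conductance by itself.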
Several optimization strategies to enhance power consumption, switching speed, or both, have been explored in the literature, as we discuss in Secs. 3.1-3.3. Initially, we examine the use of metallic heaters to decrease power consumption by reducing the thermal conductance of the surrounding waveguide environment. This approach, however, results in a slower switching speed. Subsequently, we explore the application of transparent heaters, which aim to diminish the gap between the heater and the waveguide, i.e., the area A traversed by the heat flow [refer to Eqs. (5) and (6)], without penalizing the optical loss of the device. The final approach involves direct heating of the silicon waveguide through doping, thereby transforming it into a resistive element. This technique offers significant improvements in both power consumption and switching speed by minimizing the value of A, though it introduces optical loss due to free carriers. It is important to note that this direct heating approach is specific to the SiPh platform and is not applicable to other emerging photonics platforms, such as silicon nitride. Unless specified otherwise, the results discussed herein pertain to transverse electric (TE) polarization at a wavelength of ∼1550 nm.
Metallic Heaters
The most commonly employed method for inducing localized heating in a silicon waveguide or structure involves the use of metallic heaters and the principle of joule heating [Fig. 1(a)]. Such resistive heaters are typically configured as metal wires placed atop the silicon structure, separated by an intermediate dielectric layer, such as SiO2, to mitigate optical loss [Fig. 1(b)]. The thickness of these heaters is generally on the order of ∼100 nm, determined by standard fabrication techniques, including lift-off procedures. In addition, a diverse range of metals or metallic compounds compatible with complementary metal-oxide-semiconductor (CMOS) fabrication technology can be utilized for the heaters. These materials include copper (Cu), nickel silicide (NiSi), platinum (Pt), titanium (Ti), titanium nitride (TiN), and tungsten (W). Figure 1(c) shows the temperature distribution within a typical TOPS based on a metallic heater, featuring a 1-μm-thick oxide cladding layer situated between the silicon waveguide and the metallic heater, whereas Fig. 1(d) shows the temperature of the TOPS upon a square electrical signal applied to the heater with (solid blue line) and without (dotted red line) employing pulse pre-emphasis. The considered TOPS comprises a 500 nm × 220 nm Si waveguide with a 2 μm × 100 nm Ti heater on top. The gap between the waveguide and the heater is 1 μm. The temperature distribution in the cross section was obtained by solving the conductive heat equation using the COMSOL Multiphysics simulation tool. We considered the thermal constants reported in the literature. 20 A nonuniform tetrahedral mesh, with element sizes ranging from 1 to 500 nm, was employed. A conductive heat flux boundary condition with a heat transfer coefficient of 5 W/(m^2 K) was set on the surface. The temperature of the remaining boundaries was fixed at 293.15 K (cold).
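The cross-sectional temperature map in Fig. 1(c) was obtained with COMSOL. The sketch below is not that model, but a deliberately coarse 2D finite-difference analogue (uniform grid, Jacobi iterations, fixed-temperature outer boundaries, waveguide omitted) that can be used to get a feel for how the heat from a narrow heater spreads through the oxide; the grid size, material constants, and assumed drive power are illustrative.

```python
import numpy as np

# Coarse 2D steady-state heat conduction for a heater buried in SiO2 cladding
# above a silicon substrate. Jacobi iteration with fixed-temperature (ambient)
# outer boundaries; k is treated as locally uniform in each update (a simplification).
nx, ny, h = 200, 120, 50e-9                 # grid points and 50 nm cell size
k = np.full((ny, nx), 1.4)                  # SiO2 thermal conductivity, W/(m K)
k[:20, :] = 130.0                           # silicon substrate along the bottom rows
T = np.full((ny, nx), 293.15)               # ambient temperature everywhere (K)

# Heat source: a 2 um x 100 nm heater dissipating an assumed 20 mW over 100 um.
q = np.zeros((ny, nx))                      # volumetric heat source, W/m^3
heater_rows, heater_cols = slice(80, 82), slice(80, 120)
heater_area = (82 - 80) * (120 - 80) * h * h
q[heater_rows, heater_cols] = (20e-3 / 100e-6) / heater_area

for _ in range(30000):                      # interior updates; boundaries stay cold
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
                            + q[1:-1, 1:-1] * h * h / k[1:-1, 1:-1])
print(f"Peak temperature rise: {T.max() - 293.15:.1f} K")
```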
Table 1 surveys the experimental works that have employed metallic heaters alongside various generic optimization strategies to develop phase shifters in straight silicon waveguides.It is important to note that while the focus of these studies is on the use of metallic heaters, the optimization strategies outlined are versatile and can be applied to other methodologies discussed in subsequent sections.
Espinola et al. 21 provided one of the pioneering experimental demonstrations of TOPSs on silicon nearly two decades ago. The design featured a silicon waveguide with a Cr/Au heater measuring 14 μm in width and 100 nm in thickness, positioned atop the waveguide. The phase shifter spanned a length of 700 μm, separated from the heater by a 1-μm-thick layer of SiO2. Integrated within an MZI to function as a switch, the device exhibited significant optical loss (32 dB), which the authors attributed primarily to scattering caused by considerable sidewall roughness in the waveguide. Despite its status as one of the initial experimental reports in this field, the device demonstrated a power consumption of 50 mW and a switching time of 3.5 μs, resulting in a FOM of 175 mW μs. Notably, subsequent studies have reported similar, or at times, inferior performance metrics. 22,23 On the application side, the capabilities of TOPSs have been harnessed for switching purposes by cascading 1 × 2 MZI switches to implement 1 × N configurations. 23 A significant advantage of these switches lies in their compact design, with the phase shifter elements measuring only 40 μm in length. Nonetheless, these devices were characterized by considerable power consumption and slow switching speeds, reported at 90 mW and 100 μs, respectively. The primary factor contributing to such a suboptimal performance is the substantial width of the heaters, ∼20 μm, which enlarges the cross-sectional area A of the phase shifter, as shown in Eqs. (5) and (6). A notable improvement in power consumption and switching speed, to 40 mW and 30 μs, respectively, can be achieved by reducing the heater width to 5 μm, as demonstrated in subsequent studies. 25 Atabaki et al. 26 have highlighted the substantial influence of the heater width and the intermediate layer on the performance of TOPSs equipped with metallic heaters atop silicon waveguides. Narrow heaters, with widths of less than ∼2 μm, are shown to enable faster switching times (∼4 μs) and lower power consumption (∼16 mW), attributed to the reduced volume of heating. However, reducing the heater width below 2 μm does not yield significant further improvements, primarily due to the lateral heat diffusion, which spans ∼1 to 2 μm, thus becoming comparable to the microheater's dimensions.
Furthermore, the selection of material for the waveguide cladding plays a critical role in modulating both power consumption and switching speed, establishing a trade-off with the thermal conductivity of the cladding material. Enhancing the thermal conductivity, while keeping the specific heat capacity constant, accelerates the phase shifter's response but increases power requirements [refer to Eqs. (5) and (6)]. Substituting SiO2 with SiN is one strategy to enhance switching speed. Moreover, applying high-energy pulsed drive signals can further decrease switching time, potentially to submicrosecond scales, as demonstrated by the use of a pre-emphasis pulse [illustrated in Fig. 1(d)]. This approach swiftly achieves steady-state operation, although the inherent delay in heat transfer from the heater to the silicon waveguide sets a lower bound on the achievable switching time.
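To illustrate the pre-emphasis idea numerically, the sketch below drives a simple first-order (single time constant) thermal model with and without a short high-power kick at turn-on. The time constant, pulse amplitudes, and durations are assumed values, and the boost is tuned by hand so the temperature lands near its steady-state level when the kick ends.

```python
import numpy as np

dt, t_end = 1e-8, 40e-6                  # time step and simulated window (s)
t = np.arange(0.0, t_end, dt)
tau_th = 4e-6                            # assumed thermal time constant (s)

def drive(t, pre_emphasis=False):
    p = np.where(t < 20e-6, 1.0, 0.0)            # 20 us ON pulse (normalized power)
    if pre_emphasis:
        p = p + np.where(t < 0.5e-6, 7.5, 0.0)   # short high-energy kick at turn-on
    return p

def thermal_response(p):
    temp = np.zeros_like(p)                      # normalized temperature rise
    for i in range(1, len(p)):                   # dT/dt = (P - T) / tau
        temp[i] = temp[i - 1] + dt * (p[i - 1] - temp[i - 1]) / tau_th
    return temp

for name, resp in [("plain drive", thermal_response(drive(t))),
                   ("pre-emphasis", thermal_response(drive(t, pre_emphasis=True)))]:
    idx = int(np.argmax(resp >= 0.9))            # first sample reaching 90% of target
    print(f"{name}: reaches 90% of steady state after {t[idx]*1e6:.2f} us")
```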
The employment of parallel heaters alongside the silicon waveguide has been showcased as a method to realize low-loss, energy-efficient, and fast phase shifters. 29 This approach utilizes a rib waveguide configuration instead of the conventional strip design, with heaters positioned on both sides of the waveguide's thin bottom slab. In Ref. 28, the heaters were composed of a 20-nm-thick NiSi layer, featuring widths varying from 500 nm to 3 μm. Notably, a layer of SiN is deposited atop the silicon waveguide prior to heater formation to inhibit silicide development within the waveguide structure. By setting the distance between the heaters and the waveguide at 500 nm, a balance between low optical loss and a remarkable FOM of 60 mW μs was attained, accompanied by a power consumption of 20 mW and a switching time of 3 μs. Despite the phase shifter's relatively high propagation loss of 25 dB/cm, its compact length (40 μm) resulted in an insertion loss of less than 1 dB.
Lower FOM values have also been reported through the strategic placement of metallic heaters directly atop the silicon structure, leveraging silicon's thermal conductivity, 32 achieving a power consumption of merely 12 mW and a switching time of 2.9 μs.To circumvent optical losses associated with NiCr heaters, a microdisk with a 4 μm diameter was utilized as the phase-shifting element, minimizing metal-light interactions to less than 1 dB of loss due to the evanescent nature of the optical mode toward the device's center.
The application of the pre-emphasis technique, as previously mentioned, 26 further reduces the switching time to 85 ns (FOM ≈ 1 mW μs), enhancing the responsiveness of ON/OFF switching devices based on thermal phase shifters.Such devices benefit from differential or balanced architectures, enabling optical changes by selectively heating one of the optical paths.However, the primary challenge lies in the cooling period required for the heaters, as simultaneous cooling of both paths is essential before initiating the next switch to prevent continuous device heating.
The selection of an appropriate metal for the heaters is crucial not only from the perspective of minimizing optical loss but also to ensure that electrical power dissipation occurs predominantly within the heater rather than in the interconnections. While the optical loss may not be significantly affected by the choice of heater metal, the efficiency of power dissipation is paramount. The integration of the heater metal into a CMOS process flow is a critical consideration when selecting the optimal material for the heater. Although tin- and nickel-based alloys can be patterned as heaters within a CMOS process, foundries often prefer Cu and W due to their more desirable characteristics.
W, in particular, is favored for its relatively high resistivity and melting point, offering enhanced stability for the heaters.33 This stability is beneficial for devices that require consistent performance over time. In addition, W heaters can be electrically interconnected with Cu wires, taking advantage of Cu's lower resistivity to ensure that most of the heat is dissipated in the W heater. This configuration maximizes the thermal efficiency of the device.
Masood et al.33 demonstrated the effectiveness of W heaters in a silicon waveguide, fabricated using a CMOS-like layer stack without further optimization. The devices exhibited power consumption levels of around 22 mW and a switching time of ∼40 μs. The optical loss was reported to be less than 1 dB, with excellent electrical stability observed over 750 switching cycles.
Thermal cross talk is a critical consideration in densely packed PICs, where the proximity of devices can lead to undesirable interference due to heat diffusion. Depending on the TOPS configuration, the device separation required to keep thermal cross talk negligible can range from less than 10 μm up to 50 μm.19,34 Although utilizing longer heaters can decrease the temperature difference required to achieve a phase shift of π, as indicated by Eq. (2), this approach also expands the device's footprint and potentially increases optical loss. Thus, achieving an optimal balance among device specifications necessitates careful consideration and judicious optimization.
A strategy to mitigate parasitic thermal phase shifts involves the implementation of deep trenches between the aggressor (source of thermal interference) and victim (affected device) components.19 This technique effectively isolates devices thermally, minimizing cross talk without compromising the compactness or performance of the circuit. By employing such structural modifications, PIC designers can enhance device integration density while maintaining control over thermal effects, ensuring that each component functions as intended with minimal interference.
The thermal isolation of phase shifters, achieved through the implementation of air trenches or by detaching the structure from the substrate via an undercut [illustrated in Figs. 2(a) and 2(b)], significantly decreases power consumption. This reduction is due to the air's thermal conductivity being nearly 2 orders of magnitude lower than that of SiO2 (∼0.025 W m⁻¹ K⁻¹), thereby concentrating and elevating the temperature within the silicon waveguide, as depicted in Fig. 2(c). However, it is important to note that this approach leads to an increase in switching time [as indicated by Eqs. (5) and (6)]. Despite this drawback, such thermal optimization strategies are particularly beneficial for deploying multiple phase shifters within applications where moderate total power consumption is prioritized over rapid switching speeds.
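A lumped, single-node thermal model makes this trade-off explicit: the holding power scales with the thermal conductance to the surroundings while the time constant scales inversely with it, so their product is roughly conserved. The conductance and heat-capacity values in the sketch below are assumed, order-of-magnitude numbers, not extracted from the cited works.

```python
# Lumped single-node sketch of the isolation trade-off: P_pi = G * dT_pi and
# tau = C / G, so lowering the thermal conductance G (air trenches, undercut)
# cuts the holding power but slows the response, leaving FOM = P_pi * tau
# roughly unchanged. All numbers are assumed, order-of-magnitude values.

dT_pi = 6.0        # assumed temperature rise for a pi shift [K]
C     = 1.0e-8     # assumed heat capacity of the heated volume [J/K]

cases = {
    "SiO2-clad, attached   ": 3.0e-3,   # assumed thermal conductance G [W/K]
    "undercut, air-isolated": 1.5e-4,
}

for name, G in cases.items():
    P_pi = G * dT_pi          # [W]
    tau  = C / G              # [s]
    fom  = P_pi * tau         # [W s]; 1 W s = 1e9 mW us
    print(f"{name}: P_pi = {P_pi*1e3:5.2f} mW, tau = {tau*1e6:6.1f} us, "
          f"FOM = {fom*1e9:5.1f} mW us")
```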
A straightforward method for achieving thermal isolation involves deep etching on both sides of the waveguide, preserving the conventional heater-waveguide layout. Following this approach, devices have demonstrated power consumption and switching speeds of around 10 mW and 10 μs, respectively.24 Moreover, submilliwatt power consumption (0.54 mW) has been reported for waveguides released from the substrate.27 These freestanding phase shifters, supported by two SiO2 struts across a 320-μm-long released waveguide, exhibit mechanical stability. However, this configuration extends the switching time from 39 μs in the attached version to 141 μs upon release. Recent studies have reported similar outcomes for released switching structures,28,30,31 underscoring the trade-offs between power efficiency, switching speed, and structural design in the development of TOPSs.
Transparent Heaters
Transparent heaters, i.e., electrically conductive materials with minimal optical loss in the near-infrared region, provide a strategic avenue to mitigate the trade-off between optical loss, power consumption, and switching speed in TOPSs. This approach facilitates placing the heater in close proximity to the silicon waveguide, as illustrated in Figs. 3(a) and 3(b), significantly reducing both the temperature gradient and the diffusion time between the waveguide and the heater. Consequently, this configuration not only improves the efficiency of heat transfer but also enhances the switching time of the phase shifter by shortening the thermal diffusion pathway.

Transparent heaters can be constructed using either two-dimensional (2D) materials or transparent conducting oxides (TCOs). 2D materials, such as graphene and carbon nanotubes (CNTs), offer the advantage of low optical loss due to their exceptional optical properties and atomic-scale thickness while also being electrically conductive. However, fabricating heaters from graphene presents challenges not encountered with traditional metal heaters. Typically, graphene heaters are produced by synthesizing a monolayer through chemical vapor deposition and subsequently transferring it onto the photonic chip, followed by precise patterning. It is important to note that the optical and electrical characteristics of graphene heaters are significantly influenced by the quality of the graphene sheet.

By contrast, TCOs such as indium tin oxide (ITO) are widely utilized in various optoelectronic applications, including photovoltaic cells and displays, due to their well-established and mature fabrication techniques, such as sputtering. TCOs combine transparency in the visible to near-infrared range with good electrical conductivity, making them suitable for integration into photonic devices.
Table 2 summarizes the main specifications for experimental TOPSs in silicon that utilize transparent materials for heating.
Graphene, renowned for its electrical conductivity, also boasts a remarkable thermal conductivity of ∼5000 W m⁻¹ K⁻¹.42 Initial propositions for incorporating graphene into silicon waveguides for thermo-optic tuning aimed to exploit its thermal conductance, envisioning a graphene layer to bridge the metallic heater and the silicon waveguide for more effective heat transfer.35 Despite these efforts, experimental outcomes indicated power consumption exceeding 50 mW and a moderate switching speed of 20 μs, failing to surpass the performance of conventional metal-based phase shifters. In addition, numerical simulations revealed that the graphene layer could induce optical losses around 5 dB, further challenging its practicality for this application. Subsequent advancements were made by adopting a similar approach and silicon structure to that outlined in Ref. 31, where a graphene heater was implemented atop a silicon microdisk, replacing the metallic counterpart.43 This configuration achieved a power consumption of 23.5 mW and a switching speed of ∼10 μs, with the insertion loss attributable to the graphene heater being negligible (<2 × 10⁻⁴ dB/μm). This minimal interaction between the heater and the optical mode of the microdisk resonator contributed to the device's enhanced performance.
A breakthrough was reported with the use of a graphene heater on a silicon waveguide, achieving a record FOM value of less than 40 mW μs (Pπ = 11 mW and τ = 3.5 μs).36 The design included two intermediate layers, HSQ and Al2O3, positioned between the silicon waveguide and the graphene heater, with a meticulously optimized gap of 240 nm to maximize performance while minimizing optical loss. It is noteworthy that the reported power consumption was characterized at a wavelength of λ = 1310 nm, with potential variations at λ = 1550 nm due to differences in optical mode confinement.
Beyond graphene, CNTs have been proposed as an alternative for crafting transparent heaters, offering the principal advantage of lower absorption in the near-infrared spectrum. Direct integration of CNTs atop silicon waveguides has been explored for thermo-optic tuning purposes.39 Despite their promising optical properties, a significant limitation of CNTs is their incompatibility with standard CMOS fabrication processes. Moreover, the performance metrics reported, including a power consumption of 14.5 mW and a switching speed of 4.5 μs, do not exhibit marked improvements over analogous devices based on graphene.
Transition-metal dichalcogenides, particularly a single layer of MoS2 (molybdenum disulfide), have shown better prospects as heater materials when positioned in close proximity (30 nm) to the silicon waveguide.40 This configuration yielded an impressively low power consumption of 7.5 mW in a 283-μm-long MoS2 microheater, alongside a minimal insertion loss of ∼0.42 dB. However, the relatively slow response time of the phase shifter, around 25 μs, can be attributed to the Schottky contact formed between the MoS2 layer and the Au electrical pads. Future enhancements could potentially be realized by establishing ohmic contacts with low resistance, optimizing the device's performance further.
The synergy between transparent heaters and the augmentation of light-matter interactions through slow-light phenomena offers a pathway to substantial improvements in the power efficiency and speed of TOPSs. The slow-light effect, facilitated by the elevated group index in photonic crystal waveguides (PhCWs), enhances tuning efficiency dramatically. As a result, switching times under 1 μs and power consumption as low as 2 mW (yielding a FOM of less than 2 mW μs) have been achieved in ultracompact phase shifters, measuring merely 20 μm in length, based on a PhCW integrated with a graphene heater.37 The minimal gap of only 11 nm between the heater and the PhCW contributes to this high efficiency, despite the graphene layer inducing an optical loss of 1.1 dB.
Furthermore, ultracompact switching devices can be realized through the development of a photonic crystal cavity (PhCC).38 This innovative approach allows the switching power, defined as the energy required to transition from a low-loss state to a high-loss state, to be less than 2 mW, coupled with a switching speed of ∼1.5 μs for a device with a footprint of only 5 μm.
TCO-based microheaters stand out for their CMOS-compatible manufacturing processes and thermo-optical characteristics.
A key advantage of TCOs, such as ITO, resides in their capacity to modulate the concentration of mobile electrons within the near-infrared spectrum. This unique property enables these materials to function akin to metals with minimal loss at the operational wavelengths of devices, thus mitigating the optical losses typically associated with metal-based heaters. As a result, the spacer between the silicon waveguide and the heater can be substantially reduced, enhancing power efficiency and switching speed without incurring the significant optical losses characteristic of thinner metal gaps.20 Specifically, a compact ITO/Si TOPS, measuring only 50 μm in length, demonstrated a power consumption of 9.7 mW and a switching time of 5.2 μs.
Further advancements were achieved with the introduction of a hydrogen-doped indium oxide (IHO) microheater, implemented directly atop the waveguide.41 This 10-μm-long IHO heater not only showcased an insertion loss of ∼0.5 dB but also achieved a submicrosecond switching speed (0.98 μs) while consuming 9.6 mW. Consequently, this led to an exceptionally low FOM of 9.41 mW μs.
Doped Silicon
Doped silicon serves a dual purpose in the context of TOPSs, acting simultaneously as both the heater resistor and the silicon waveguide. The doping process, which can involve n-type or p-type dopants such as arsenic (As), boron (B), or phosphorus (P), introduces free carriers into the silicon, leading to inherent optical losses. This effect creates a fundamental trade-off between the resistivity of the heaters and the optical absorption they introduce. To achieve a balance that minimizes optical losses while ensuring resistance values are compatible with electrical drivers and intended applications, silicon is typically doped to a carrier concentration of ∼10¹⁸ cm⁻³. In addition, employing multiple heater resistors in parallel is a common strategy to lower the total resistance, enhancing the device's compatibility with electrical systems [illustrated in Fig. 4(a)].
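A back-of-the-envelope sizing of such parallel doped-silicon resistors is sketched below; the resistivity (taken as an order-of-magnitude value for silicon doped near 10¹⁸ cm⁻³), the strip geometry, and the target power are assumptions for illustration only.

```python
# Back-of-the-envelope sizing of parallel doped-silicon heater resistors.
# The resistivity is an assumed, order-of-magnitude value for silicon doped
# near 1e18 cm^-3; the strip geometry and target power are illustrative.

rho = 1.0e-4       # assumed resistivity [ohm m] (~0.01 ohm cm)
L   = 100e-6       # strip length along the waveguide [m]
w   = 2.0e-6       # doped strip width [m]
t   = 0.22e-6      # silicon layer thickness [m]

R_single = rho * L / (w * t)          # R = rho L / A for one strip

for n in (1, 2, 4, 8):
    R_tot = R_single / n              # n identical strips in electrical parallel
    V = (20e-3 * R_tot) ** 0.5        # drive voltage for 20 mW total (P = V^2/R)
    print(f"{n} strip(s): R_total = {R_tot/1e3:6.2f} kohm, V for 20 mW = {V:5.2f} V")
```

The point of the parallel arrangement is visible directly: adding strips divides the total resistance and brings the required drive voltage down toward values that standard electronic drivers can supply.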
It is important to note that doped silicon heaters exhibit specificity toward the silicon photonic platform and may not be directly transferable to other photonic materials such as silicon nitride. Table 3 compiles experimental studies that have utilized doped silicon as the heating element, detailing their main specifications.
Employing doped silicon wires as heaters presents a viable alternative to traditional metallic heaters. Such resistive elements are typically built by doping the edges of a rib waveguide, maintaining a distance of less than 1 μm from the core to mitigate excessive optical loss, while the central region of the waveguide remains undoped. Consequently, the electrical current flows parallel to the waveguide's length. This configuration allows for power consumption levels comparable to those of metallic heaters positioned atop the waveguide (∼20 mW) but offers the advantage of faster switching speeds (ranging from 2 to 5 μs). The enhanced speed is attributable to the reduced distance over which heat must propagate.19,47

On the other hand, doped silicon waveguides can facilitate even faster switching through direct current injection. This approach enables heat generation directly within the waveguide itself, as depicted in Fig. 4(b), effectively bypassing the limitations associated with heat propagation from external sources. In addition, this approach offers a slight reduction in power consumption compared with parallel heaters adjacent to the silicon waveguide. Rib waveguides, characterized by heavily doped edges and a lightly doped center, are essential for facilitating electrical current injection into the waveguide, as depicted in Fig. 4(c). This doping configuration ensures an optimal overlap between the thermal profile and the optical mode, minimizing the optical loss due to free carriers.
The phase shifter may also be designed as a series of individual resistors in parallel, allowing for customization of the device's resistance and driving voltage/current by adjusting the number of unit cells independently of its length. Such configurations have achieved insertion losses as low as 0.2 dB, power consumption of around 25 mW, and switching times of ∼3 μs.46 Optimizing the waveguide geometry further reduces power consumption without significantly affecting optical loss or switching speed. Notably, power consumption was minimized to 12.7 mW using a compact silicon-doped heater, ∼10 μm in length, integrated directly into the waveguide. An adiabatic bend was employed to minimize optical loss from free-carrier absorption and avoid optical mismatch, thereby preventing undesired reflections or the excitation of higher-order modes.45

Moreover, leveraging the field pattern distribution in MMI devices facilitates achieving low insertion loss, compact footprints, and fast switching. Electrical connections are strategically placed at positions corresponding to field pattern minima within the MMI. A 35-μm-long device demonstrated power consumption and switching time of 29 mW and 2 μs, respectively, with a moderate insertion loss of 2 dB.51 Subsequent improvements reduced the insertion loss to below 1 dB by minimizing the number of electrical connections, while the switching speed was enhanced to 500 ns through the incorporation of a thin Al heat sink.52

Integrating a pn junction within a silicon waveguide, as illustrated in Fig. 4(d), enhances the operational stability of TOPSs. The saturated I-V response characteristic of pn junctions serves as a safeguard against thermal runaway by inherently limiting the current flow. Furthermore, the diode-like behavior of the junction facilitates the independent driving of multiple heaters using the same electrical pads.50 This configuration involves two diode heaters arranged in parallel, with the cathode of one heater connected to the anode of the other and vice versa, allowing for selective heating by simply reversing the voltage polarity. Reported configurations demonstrated power consumption of ∼21 mW and switching speeds nearing 100 μs.50 To decrease the overall resistance and, consequently, the required driving voltage, a total of eight diode heaters were placed in parallel, each 50 μm in length (8 μm p-doped) and 1.2 μm wide, placed 0.75 μm from the waveguide in the same plane.
To address the inherent challenge of nonlinear phase shift responses to applied voltage in diode heaters, the authors in Ref. 50 developed a linear response technique through the utilization of pulse-width modulation (PWM). By fixing the PWM signal amplitude above the diode heater's threshold voltage and modulating the signal's duty cycle, power delivery was linearized and controlled effectively. This diode heater configuration has been successfully applied to manage larger silicon photonic circuits, allowing for the digital control of matrix topologies comprising N rows and M columns by connecting N × M heaters.53 Employing PWM signals and time-multiplexing across different channels, the system obviates the need for digital-to-analog converters, requiring only M + N wires for comprehensive circuit control. An experimental demonstration controlling a 3 × 5 matrix with a 1 × 16 power splitter tree and 15 TOPSs via eight bond pads showcased this concept's effectiveness.53 For further acceleration of switching time, the pn junction can be directly integrated into the silicon waveguide, enhancing speed to the microsecond range49 or even down to hundreds of nanoseconds.48 However, this direct integration method results in a notable increase in the optical loss of the phase shifter, ∼2 dB.49
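The duty-cycle linearization described above can be sketched as follows; the supply voltage, diode drop, heater resistance, and Pπ used here are assumed values, not the parameters of Ref. 50.

```python
import math

# Sketch of duty-cycle (PWM) linearization for a diode heater: with the drive
# amplitude fixed above the diode threshold, the heater dissipates a roughly
# constant power P_on while the pulse is high, so the average power (and hence
# the phase) is linear in the duty cycle. All device numbers are assumed.

V_drive  = 3.0      # fixed PWM amplitude [V], above the diode threshold
V_diode  = 0.8      # assumed diode forward drop [V]
R_heater = 150.0    # assumed series heater resistance [ohm]
P_pi     = 21e-3    # assumed power for a pi phase shift [W]

I_on = (V_drive - V_diode) / R_heater    # current while the pulse is high
P_on = V_drive * I_on                    # power drawn from the driver

def duty_cycle_for_phase(phi):
    """Duty cycle delivering the average power for a target phase in [0, pi]."""
    p_avg = P_pi * phi / math.pi         # phase is linear in average power
    return min(p_avg / P_on, 1.0)

for frac in (0.25, 0.5, 0.75, 1.0):
    d = duty_cycle_for_phase(frac * math.pi)
    print(f"target phase = {frac:.2f} pi -> duty cycle = {d*100:5.1f} %")
```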
Advanced Configurations
Advanced configurations in TOPSs aim to decouple the traditionally correlated lengths of the heater and the light path to enhance energy efficiency. This approach is characterized by extending the light-path length while maintaining the heater's length constant, thereby facilitating a greater phase shift for the same level of power consumption. The primary limitation of this strategy, however, lies in the requirement for larger device footprints to significantly reduce power consumption.
Folded Waveguides
Folded waveguides provide a straightforward method to extend the waveguide path length. By folding the silicon waveguide multiple times beneath the heater, for example, in a spiral configuration [illustrated in Figs. 5(a) and 5(b)], significant increases in path length can be achieved. Densmore et al.54 reported the fabrication of a waveguide spiral comprising a total of 59 folds. To mitigate coupling, the separation between adjacent waveguides was maintained at 2 μm. A meander Cr/Au heater, separated from the photonic spiral by a 1.5-μm-thick SiO2 layer, facilitated a temperature change ΔTπ = 0.67°C across an active length of 6.3 mm for the TM polarization, resulting in a power consumption of ∼6.5 mW.54 When compared with a phase shifter employing a straight waveguide, the folded configuration demonstrated a fivefold reduction in power consumption (from 36 mW). The switching time was observed to be 14 μs, constrained by the thickness of the SiO2 cladding surrounding the waveguide. Employing varying widths between adjacent waveguides can further mitigate phase matching and subsequent coupling.55 In addition, releasing the entire phase shifter structure can minimize power consumption to as low as 0.095 mW, albeit at the cost of a prolonged switching time of ∼1 ms (Table 4).

Additional optimization in folded TOPSs has been achieved through the incorporation of noncircular clothoid bends and the optimization of the heater's width and position.56 This design facilitates a more efficient harnessing of the generated heat. Peripheral waveguides are utilized to recollect residual heat energy, thereby enhancing the efficiency of the phase shifter without resorting to thermal isolation techniques such as air trenches or undercuts. This approach has demonstrated a power consumption of 2.56 mW and a switching speed of ∼35 μs. Subsequent research has yielded even higher performance, with a reported power consumption as low as 3 mW and a fast switching time of 11 μs.57 In addition, optical losses in such devices have been minimized to 0.9 dB, achieved by introducing a slight offset at the junction between the bend and straight waveguide segments to mitigate the excitation of higher-order modes.
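The reported ΔTπ for the 6.3-mm spiral is consistent with a simple estimate from the thermo-optic phase relation, assuming operation near λ = 1.55 μm and a bulk silicon thermo-optic coefficient of about 1.8 × 10⁻⁴ K⁻¹ (the effective-index coefficient of the guided TM mode may differ slightly):

```latex
\Delta\varphi = \frac{2\pi}{\lambda}\,\frac{dn}{dT}\,\Delta T\,L
\quad\Rightarrow\quad
\Delta T_{\pi} = \frac{\lambda}{2\,L\,(dn/dT)}
\approx \frac{1.55\times10^{-6}~\mathrm{m}}{2\,(6.3\times10^{-3}~\mathrm{m})(1.8\times10^{-4}~\mathrm{K^{-1}})}
\approx 0.68~\mathrm{K}
```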
Multipass Waveguides
A recent innovative TOPS configuration relies on a multipass photonic architecture, enhancing the effective path length of light through a mode multiplexing approach. This strategy reduces the power consumption of the phase shifter while preserving high switching speed and, more importantly, broadband operation.58 Indeed, while conventional resonant cavities enhance the effectiveness of phase shifters, this approach comes at the cost of narrowing the optical bandwidth. By contrast, the multipass strategy utilizes spatial mode multiplexing to circulate light multiple times through the phase shifter, with each pass converting the light to a higher-order orthogonal spatial mode. This method increases the effective path length without the need for a resonant cavity. It operates on the premise that the effective refractive indices of higher-order modes exhibit greater sensitivity to temperature changes due to their stronger dispersion. Thus, by integrating a TOPS into this multipass structure, light accumulates significant phase shifts from all passes.
The working principle is illustrated in Fig. 6: light is launched into the multipass structure in the TE0 mode. As detailed in Ref. 58, the light is converted to the TE1 mode upon exiting the multimode waveguide through a mode converter consisting of an adiabatic directional coupler. The TE1 mode then circulates within the multimode waveguide in the opposite direction. Subsequently, light exits the multimode waveguide to be converted into the TE2 mode and is sent back to the multimode waveguide in the forward direction, and the process continues. Ultimately, the fundamental TE0 mode is output from the structure.
This design was experimentally realized with a 360-μm-long Pt heater placed atop the multimode waveguide and separated by an intermediate 1-μm-thick SiO2 layer. The device exhibited a switching time of 6.5 μs. Interestingly, the number of passes does not influence the device's switching time but does affect power consumption and optical loss. The effective path length, and consequently the optical loss, increases with the number of passes due to the greater number of adiabatic couplers involved. For a three-pass phase shifter, the power consumption and insertion loss were measured at 4.6 mW and 1.2 dB, respectively. Increasing the passes to seven reduced the power consumption to 1.7 mW, albeit with an elevated loss of almost 5 dB.
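Using only the figures quoted above, and taking the switching time as the reported 6.5 μs in both cases, the pass-count trade-off can be tabulated directly:

```python
# Tabulating the multipass trade-off using only the values reported above;
# the switching time is taken as the quoted 6.5 us for both devices.
tau_us = 6.5

devices = {3: {"P_mW": 4.6, "loss_dB": 1.2},
           7: {"P_mW": 1.7, "loss_dB": 5.0}}   # "almost 5 dB" rounded to 5.0

for n, d in devices.items():
    fom = d["P_mW"] * tau_us               # power-delay product [mW us]
    print(f"{n}-pass: P = {d['P_mW']:.1f} mW, FOM = {fom:4.1f} mW us, "
          f"loss = {d['loss_dB']:.1f} dB")
```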
Other Phase Shifter Mechanisms and Technologies
In addition to leveraging the silicon thermo-optic effect, various mechanisms and technologies have been proposed to address the inherent limitations of TOPSs, including energy consumption, switching speed, and device footprint. Table 5 provides a comprehensive summary of both established and emerging electro-optical phase shifter technologies within the realm of SiPh.
Silicon Plasma-Dispersion Effect
The plasma-dispersion effect in silicon offers a well-established approach for implementing phase shifters. The underlying physical phenomenon is inherently rapid (on the order of hundreds of picoseconds) and can be realized through n-/p-doping of the silicon waveguide, utilizing the same fabrication processes available in microelectronic CMOS foundries.59,60 In addition, the power consumption associated with such phase shifters is moderately low, typically in the microwatt range. However, these devices face two primary limitations. First, the plasma-dispersion effect alters both the real and imaginary components of the silicon refractive index,61 leading to relatively high optical losses (>1 dB) in these phase shifters. Second, the comparatively weak index modulation calls for millimeter-scale device lengths to accumulate a π phase shift.64-67 To mitigate the issue of large footprints, resonant structures such as MRRs have been explored.70-73
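For context, the magnitude of this effect at λ ≈ 1.55 μm is usually estimated with the empirical Soref-Bennett fits below (electron and hole densities ΔN_e and ΔN_h in cm⁻³, Δα in cm⁻¹); these are commonly cited textbook approximations rather than values taken from the references above, and they illustrate why index modulation is always accompanied by free-carrier absorption:

```latex
\Delta n \approx -\,8.8\times10^{-22}\,\Delta N_e \;-\; 8.5\times10^{-18}\,(\Delta N_h)^{0.8}
\qquad
\Delta\alpha \approx 8.5\times10^{-18}\,\Delta N_e \;+\; 6.0\times10^{-18}\,\Delta N_h
```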
Silicon Microelectromechanical Systems (MEMSs)
Over recent decades, silicon MEMS technology has achieved maturity, offering promising avenues for mechanical devices in photonics.75-79 These mechanical devices function by altering the modal cross section of a suspended silicon waveguide through geometrical adjustments, facilitated by a MEMS actuator. The application of a voltage bias between the movable shuttle and a fixed, anchored electrode generates an attractive force within the actuator. This force diminishes the gap between the sets of teeth, causing displacement of the free-hanging shuttle. Consequently, a phase shift is achieved due to changes in the effective refractive index of the guided mode, resulting from this geometrical tuning. The induced optical losses are minimal, primarily originating from optical mismatches caused by structural transitions.
The primary challenges associated with MEMS-based phase shifters include their switching speed (ranging from ∼0.1 to 1 MHz), the relatively high driving voltage (exceeding 20 V), and the complexity of fabrication. Although MEMS technology is compatible with microelectronic industry manufacturing standards, the fabrication processes involved are intricate.
Plasmonics
Plasmonic phase shifters, typically combining metallic structures with organic electro-optic materials, promise ultracompact footprints and ultrafast operation.81-83 Nonetheless, a significant challenge of this approach is the very high optical loss, typically exceeding 5 to 10 dB, which stands as a principal limitation. In addition, the reliance on non-CMOS-compatible metals such as Au hinders the mass production of plasmonic devices. The long-term reliability and stability of the organic polymers used also necessitate further investigation.84 Transparent conducting oxides have been explored as an alternative active material in this context.86,87 Notably, the significant free-carrier dispersion effect of ITO has been exploited to realize subwavelength-long phase shifters capable of subnanosecond switching speeds. This is achieved by electrostatically tuning the ITO carrier concentration close to, but not within, the high-loss epsilon-near-zero plasmonic region.88 Despite these advancements, further optimization is required, as the insertion loss associated with these devices remains substantial (>5 dB).
Ferroelectrics
Ferroelectric materials are recognized for their capacity to enable high-performance electro-optic devices by harnessing the Pockels effect. Unlike silicon, which lacks the Pockels effect due to its material symmetry, ferroelectrics offer ultrafast operational speeds (on the order of picoseconds) without contributing to optical loss. In recent years, various platforms have been proposed to utilize these distinctive properties for the development of ferroelectric-based phase shifters, ensuring compatibility with silicon photonic devices. Predominantly, these efforts have centered around lithium niobate (LN), a material with a longstanding history in commercial fiber-based electro-optic modulators.89,90 Innovations in phase-shifting devices have led to the demonstration of both ultralow-loss, ultrafast standalone LN thin films91 and hybrid LN/Si phase shifters,92 noted for their high energy efficiency (sub-picojoule).
Barium titanate (BTO) has also attracted considerable attention as an alternative ferroelectric for silicon photonics, owing to its large Pockels coefficient.97-100 Recent advancements include the development of a multilevel nonvolatile phase shifter based on BTO/Si.101 The direct growth of BTO on silicon highlights its potential for monolithic integration with electronic circuits and mass manufacturing within silicon photonic platforms. Furthermore, wafer-scale production has also been showcased in standalone LN on insulator102 and LN on silicon nitride through heterogeneous integration.103
Phase-Change Materials (PCMs)
PCMs are distinguished by their dramatic optical refractive index change, facilitating the development of photonic devices with ultracompact footprints spanning only a few micrometers. The predominant PCMs utilized in photonics are chalcogenides,104 capable of nonvolatile transitions between amorphous and crystalline states. This attribute may significantly decrease power consumption, as no static power is needed to maintain the material state.105 State switching is typically achieved by locally heating the PCM through photothermal excitation with optical pulses or joule heating via microheaters,104 leading to comparatively slower switching times (on the order of microseconds). Among various chalcogenide compounds, Ge2Sb2Te5 (GST) has been extensively used.106 However, GST's high optical absorption in both material states positions it as an ideal candidate for absorption-based devices such as optical memories107,108 but limits its use in phase-based devices. Low-loss chalcogenides such as Sb2Se3 have therefore been pursued for phase-only tuning.111-116 In this regard, Sb2Se3/Si phase shifters have achieved an insertion loss of merely 0.36 dB with phase modulation up to 0.09π/μm.107 However, the long-term reliability and endurance of PCMs in photonics remain challenging, attributed to material property degradation after numerous switching cycles.117 Reversible switching operation up to only 10⁴ cycles has been recently demonstrated in a Sb2Se3/Si phase-shifter device.118 Thus, the application of PCMs in phase shifters might be confined to scenarios not demanding extensive cycling over time.
This review has provided a comprehensive overview of the current landscape of PIC technology based on TOPSs. It has examined the most relevant heater technologies and advanced waveguide-heater configurations, highlighting the prevalent use of metallic heaters as the standard in SiPh due to their compatibility with CMOS foundry processes. Despite their widespread adoption, metallic heaters have been criticized for their high power consumption and slow response time. An alternative strategy, involving the release of the silicon waveguide, has been shown to significantly reduce power consumption, albeit at the cost of device speed.
The exploration of transparent materials, such as graphene and TCOs, offers promising avenues for enhancing performance by enabling closer placement of the heater to the waveguide. Nevertheless, the literature on these innovative approaches remains limited, underscoring a need for further investigation, particularly regarding their practical application and integration into silicon photonic foundry fabrication processes.

Doping the silicon waveguide emerges as a preferable option for phase shifters requiring swift operation and minimal power consumption, as it facilitates internal heat generation within the waveguide. However, this method introduces optical losses due to free carriers. In addition, its application is confined to silicon waveguides, precluding its adoption in other photonic platforms, such as silicon nitride.

Addressing these open questions and challenges is crucial for advancing the field of TOPSs in PICs. Future efforts should aim at demonstrating the practical applications of these technologies and exploring their integration into standard fabrication processes, thereby paving the way for more efficient, faster, and versatile photonic devices.

Advanced waveguide-heater configurations present a promising avenue to augment the capabilities of conventional TOPS schemes. While existing implementations predominantly utilize metal heaters, the exploration of alternative materials, such as those based on transparent heaters, holds the potential to further capitalize on the advantages offered by these configurations. Notably, advanced approaches, including folded waveguides and light recycling, aim at minimizing power consumption without adversely affecting switching speed and optical bandwidth. This contrasts with strategies involving released waveguides, where power efficiency improvements often come at the cost of reduced operational speed.

A critical challenge associated with these advanced configurations is the inverse relationship between power consumption reduction and the TOPS footprint. In scenarios demanding high device density, such as in the deployment of deep neural networks, the increased footprint could impose significant constraints. Consequently, there is a pressing need for novel strategies that concurrently optimize speed, power efficiency, and device compactness. Such developments would not only overcome existing limitations but also enable broader application of TOPSs in densely packed PICs.
This review has also explored various alternative mechanisms and technologies for phase shifters, each presenting unique advantages, limitations, and potential application scopes. The silicon plasma-dispersion effect offers significantly faster operation speeds (below a nanosecond) while retaining fabrication compatibility with CMOS foundries. However, this approach incurs moderate insertion losses (>1 dB) and necessitates millimeter-scale footprints due to free-carrier effects and the inherently weak modulation mechanism.

Hybrid ferroelectric-SiPh platforms, utilizing materials such as LN or BTO, propose an avenue for ultralow-loss (<1 dB) phase shifters capable of ultrafast speeds (on the order of picoseconds). Despite these advantages, their millimeter-long footprints may limit their applicability in densely integrated systems.

MEMS-based phase shifters emerge as a compact alternative (∼100 μm), featuring low optical losses and ultralow power consumption (in the nanowatt range). Their operation, predicated on the mechanical displacement of released silicon waveguides via an external electric field, leverages CMOS-compatible fabrication processes. Nonetheless, the slow operational speeds (on the order of microseconds) and the necessity for high voltages, which are incompatible with standard CMOS voltages, pose significant drawbacks.

Plasmonic phase shifters have demonstrated potential for energy-efficient and ultrafast operation within ultracompact footprints. The primary challenge for plasmonics lies in their very high optical losses (>5 dB), constraining scalability and suitability for certain applications, such as quantum optics.

PCMs stand out for applications requiring ultracompact devices or benefiting from nonvolatile phase tuning, offering the advantage of zero static energy consumption. However, the principal challenge for PCMs is ensuring long-term stable operation across numerous switching cycles, a critical requirement for many applications.

In summary, silicon's relatively high thermo-optic coefficient, alongside the potential for negligible insertion losses, positions thermal tuning as the most versatile and widely applicable approach in the vast array of integrated photonic applications, spanning fields from computing and quantum technologies to artificial intelligence. The choice of TOPS optimization strategy and configuration will inevitably be guided by the specific requirements of each application, considering the inherent trade-offs among power consumption, speed, and ease of fabrication. Consequently, additional research efforts are crucial for overcoming these challenges. Emerging technologies that offer alternative methods for implementing integrated phase shifters within the SiPh platform present a promising avenue for superseding traditional TOPSs. However, the determination of which technology will ultimately be embraced by existing CMOS foundries remains an open question, underscoring the dynamic and evolving nature of this field.
Fig. 1
Fig. 1 (a) Illustration of a TOPS using a metallic heater on top of the waveguide. (b) Cross section of the TOPS. (c) Simulated temperature distribution of the TOPS. (d) Temporal response of the TOPS upon a square electrical signal applied to the heater with (solid blue line) and without (dotted red line) employing pulse pre-emphasis. The considered TOPS comprises a 500 nm × 220 nm Si waveguide with a 2 μm × 100 nm Ti heater on top. The gap between the waveguide and the heater is 1 μm. The temperature distribution in the cross section was obtained by solving the conductive heat equation using the COMSOL Multiphysics simulation tool. We considered the thermal constants reported in the literature.20 A nonuniform tetrahedral mesh, with element sizes ranging from 1 to 500 nm, was employed. A conductive heat flux boundary condition with a heat transfer coefficient of 5 W/(m² K) was set on the surface. The temperature of the remaining boundaries was fixed at 293.15 K (cold).
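For readers who want to reproduce this kind of cross-sectional estimate without a commercial solver, the sketch below solves the steady conductive heat equation on a coarse finite-difference grid for a simplified version of the geometry in this caption; the grid, material constants, heater power, and boundary treatment are assumptions chosen for illustration, not the authors' COMSOL setup.

```python
import numpy as np

# Coarse finite-difference solve of the steady conductive heat equation,
# div(k grad T) + q = 0, on a simplified waveguide cross section loosely
# mimicking the setup described in the caption. Geometry, material constants,
# heater power, and boundary conditions are illustrative assumptions.

dx = 0.1e-6                           # grid spacing [m]
nx, ny = 120, 80                      # 12 um x 8 um domain (x = width, y = height)
k = np.full((ny, nx), 1.4)            # SiO2 conductivity [W/m/K] everywhere...
k[:20, :] = 130.0                     # ...except the Si substrate (bottom 2 um)
k[40:42, 57:62] = 130.0               # ...and the 500 x 220 nm Si waveguide core

q = np.zeros((ny, nx))                # volumetric heat source [W/m^3]
P, L_heater = 20e-3, 100e-6           # assumed heater power and heater length
q[52, 50:70] = P / (L_heater * 2e-6 * 0.1e-6)   # 2 um x 100 nm heater, 1 um above core

# Face-averaged conductivities for the interior update
kN = 0.5 * (k[1:-1, 1:-1] + k[2:, 1:-1])
kS = 0.5 * (k[1:-1, 1:-1] + k[:-2, 1:-1])
kE = 0.5 * (k[1:-1, 1:-1] + k[1:-1, 2:])
kW = 0.5 * (k[1:-1, 1:-1] + k[1:-1, :-2])

T = np.zeros((ny, nx))                # temperature rise above ambient [K]
h = 5.0                               # convective coefficient at the top surface [W/m^2/K]

for _ in range(20000):                # Jacobi relaxation (slow but simple)
    T[1:-1, 1:-1] = (kN * T[2:, 1:-1] + kS * T[:-2, 1:-1] +
                     kE * T[1:-1, 2:] + kW * T[1:-1, :-2] +
                     q[1:-1, 1:-1] * dx**2) / (kN + kS + kE + kW)
    # bottom/left/right boundaries stay clamped at ambient (T = 0);
    # top surface: -k dT/dy = h T  ->  T_top = T_below / (1 + h dx / k)
    T[-1, :] = T[-2, :] / (1.0 + h * dx / k[-1, :])

print(f"peak temperature rise      : {T.max():.1f} K")
print(f"rise at the waveguide core : {T[40:42, 57:62].mean():.1f} K")
```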
Fig. 2
Fig. 2 (a) Illustration of a TOPS using a metallic heater on top of the waveguide with thermal isolation by etching the top cladding and buried oxide. (b) Cross section of the free-standing TOPS. (c) Simulated temperature distribution of the free-standing TOPS. The considered TOPS comprises a 500 nm × 220 nm silicon waveguide with a 2 μm × 100 nm Ti heater on top. The gap between the waveguide and the heater is 1 μm. The temperature distribution in the cross section was obtained by solving the conductive heat equation using the COMSOL Multiphysics simulation tool. We considered the thermal constants reported in the literature.20 A nonuniform tetrahedral mesh, with element sizes ranging from 1 to 500 nm, was employed. A conductive heat flux boundary condition with a heat transfer coefficient of 5 W/(m² K) was set on the boundaries in contact with air. The temperature of the remaining boundaries was fixed at 293.15 K (cold).
Fig. 3
Fig. 3 (a) Illustration of a TOPS using a transparent heater directly on top of the waveguide. (b) Cross section of the TOPS. (c) Simulated temperature distribution of the TOPS using an ITO heater. The considered TOPS comprises a 500 nm × 220 nm silicon waveguide with a 2 μm × 100 nm ITO heater on top. The gap between the waveguide and the heater is 100 nm. The temperature distribution in the cross section was obtained by solving the conductive heat equation using the COMSOL Multiphysics simulation tool. We considered the thermal constants reported in the literature.20 A nonuniform tetrahedral mesh, with element sizes ranging from 1 to 500 nm, was employed. A conductive heat flux boundary condition with a heat transfer coefficient of 5 W/(m² K) was set on the surface. The temperature of the remaining boundaries was fixed at 293.15 K (cold).
Fig. 4
Fig. 4 (a) Illustration of a TOPS utilizing a silicon-doped heater, where the heat generation occurs within the doped silicon waveguide. In this configuration, the waveguide is of the rib type, with several silicon-doped heaters arranged in electrical parallel to minimize total resistance. Metallic contacts are linked to the silicon waveguide via silicon-doped strips. (b) Simulated temperature distribution within the TOPS, consisting of a 500 nm × 220 nm silicon waveguide atop a 100-nm-thick slab, with 1-μm-thick SiO2 cladding. Temperature distribution analysis was performed by solving the conductive heat equation with the COMSOL Multiphysics simulation tool, considering the waveguide core as the heat source, based on thermal constants from the literature.20 A nonuniform tetrahedral mesh, with element sizes ranging from 1 to 500 nm, was employed. A conductive heat flux boundary condition, with a heat transfer coefficient of 5 W/(m² K), was applied on the surface, while the temperature for all other boundaries was fixed at 293.15 K (cold). (c), (d) Cross-sectional views of the TOPS featuring (c) direct current injection and (d) a pn junction setup.
Fig. 5
Fig. 5 (a) Illustration of a TOPS using folded waveguides based on a spiral waveguide with a wide heater on top. (b) Cross section of the folded TOPS. The folded waveguide needs to be designed to avoid cross-coupling between adjacent waveguides.
Fig. 6
Fig. 6 (a) Illustration of a TOPS utilizing a multimode waveguide where light is recycled N times through a multipass structure, demonstrating how power consumption decreases as the number of passes increases.(b) Cross section of the TOPS within the multimode waveguide.(c) Depiction of optical mode conversion as a function of the multipass structure's length.Light enters the structure in the fundamental mode and, after N passes, is converted to the Nth-order mode before being output from the structure and reverted to the fundamental mode.
Table 1
Summary of basic experimental TOPSs using metallic heaters in SiPh.
Table 2
Summary of basic experimental TOPSs using transparent heaters in SiPh.
Table 3
Summary of basic experimental TOPSs using doped silicon heaters in SiPh.
Table 4
Summary of advanced experimental TOPSs using folded waveguides and metallic heaters in SiPh.
Table 5
Comparison of mainstream and emerging electro-optic technologies for implementing phase shifters in SiPh.
|
v3-fos-license
|
2023-07-16T15:12:05.282Z
|
2023-07-01T00:00:00.000
|
259910980
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2227-9067/10/7/1223/pdf?version=1689335776",
"pdf_hash": "1bee9ff469af88254dd10480dfeddde9323f1371",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46132",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6ba65bf0236615d2bd4bccd0abca4b06477d11dd",
"year": 2023
}
|
pes2o/s2orc
|
Spoken Expressive Vocabulary in 2-Year-Old Children with Hearing Loss: A Community Study
Through a cross-sectional community study of 2044 children aged 2 years, we (1) examine the impact of hearing loss on early spoken expressive vocabulary outcomes and (2) investigate how early intervention-related factors impact expressive vocabulary outcomes in children with hearing loss predominantly identified through universal newborn hearing screening. We used validated parent/caregiver-reported checklists from two longitudinal cohorts (302 children with unilateral or bilateral hearing loss, 1742 children without hearing loss) representing the same population in Victoria, Australia. The impact of hearing loss and amplification-related factors on vocabulary was estimated using g-computation and multivariable linear regression. Children with versus without hearing loss had poorer expressive vocabulary scores, with mean scores for bilateral loss 0.5 (mild loss) to 0.9 (profound loss) standard deviations lower and for unilateral loss marginally (0.1 to 0.3 standard deviations) lower. For children with hearing loss, early intervention and amplification by 3 months, rather than by 6 months or older, resulted in higher expressive vocabulary scores. Children with hearing loss demonstrated delayed spoken expressive vocabulary despite whole-state systems of early detection and intervention. Our findings align with calls to achieve a 1-2-3 month timeline for early hearing detection and intervention benchmarks for screening, identification, and intervention.
Introduction
Universal newborn hearing screening (UNHS) effectively reduces the median age of congenital hearing loss detection [1,2]. This enables earlier diagnosis, hearing amplification, and intervention for deaf and hard-of-hearing (DHH) children, with improved language outcomes reported at school age compared to those detected or amplified later [3,4]. However, these improved language outcomes remain poorer than normative expectations and population means [5]. Therefore, we should continue to seek modifiable factors that may improve early language outcomes for DHH children, over and above well-established early detection, to reduce this gap between children with and without hearing loss to the greatest extent possible.
There are relatively few reports of the spoken language outcomes of young DHH children. Early intervention (EI)-based studies show mixed results, either demonstrating children can achieve age-appropriate speech/language outcomes by three years of age [6]
Study Design and Participants
This was a cross-sectional study of spoken expressive vocabulary in 2-year-old children from two longitudinal cohorts: DHH children from the Victorian Childhood Hearing Longitudinal Databank (VicCHILD) [15], and children without hearing loss from the Early Language in Victoria Study (ELVS) [16], both born or residing in the state of Victoria (population 6.7 million), Australia. Combining these two cohorts formed a single community sample enriched for hearing loss that represented expressive vocabulary skills across the full range of hearing. The studies were approved by the Ethics Committees of The Royal Children's Hospital (VicCHILD and ELVS) and La Trobe University (ELVS), with parents/caregivers having provided written informed consent.
Participants with Permanent Hearing Loss
VicCHILD is a population-level data repository for over 1100 children with any degree or type of permanent hearing loss. All children identified with hearing loss through Victoria's UNHS program (99% uptake, 1.8% loss to follow-up) are invited to participate, as are children who attend the Royal Children's Hospital Caring for Hearing In Children Clinic for congenital or late-onset hearing loss. Most VicCHILD participants are under one year of age at enrollment. All participants also have access to government-supported hearing amplification and EI programs. Data are collected longitudinally with repeated measures as the child grows. Further information on VicCHILD's methodology is detailed elsewhere [15]. This study included all VicCHILD participants born between 2013 and 2019, with hearing, demographic, early language, and service use data from the first two collection points (enrollment and age around 2 years).
Participants without Known Permanent Hearing Loss
ELVS has documented the speech and language development of a community sample of children without known permanent hearing loss, developmental delays, or serious disabilities when recruited in 2003-4 at ages 8-10 months [17]. Over 1900 children were recruited from six of Victoria's 31 metropolitan local government areas, selected to represent children from different socio-economic backgrounds, and have been followed in successive waves. Demographic and spoken vocabulary outcomes data from participants at around 2 years of age were used, when the participant retention rate was 91.1% [16]. Further information on ELVS's methodology is detailed elsewhere [17].
Outcome
The primary outcome was parent-/caregiver-reported spoken expressive vocabulary at around 2 years, collected in both cohorts using closely related measures. ELVS used the 680-word vocabulary checklist in the MacArthur Bates Communicative Development Inventory (MCDI) Words and Sentences test [18], whereas VicCHILD used the 100-word checklist in the Sure Start Language Measure (SSLM) [19], a validated measure based on the MCDI: UK Short Form [20]. Both the MCDI and SSLM have mean standard scores of 100 (standard deviation of 15) and, when compared, show high reliability and concurrent validity [19]. The study included DHH children aged 18-30 months at assessment whose parent/caregiver completed the SSLM and children without hearing loss aged 23.5-25.5 months whose parent/caregiver completed the MCDI.
The expressive vocabulary outcome was derived from standardized SSLM scores for both cohorts. For the ELVS cohort, the SSLM raw score was calculated for the 100 items in the MCDI common to the SSLM. For any combination of words, such as "sofa/couch" being a single item on the SSLM but two items on the MCDI, a single score was assigned in the SSLM raw score calculation if at least one of the words was selected on the MCDI. Based on this process, MCDI scores were converted to SSLM raw score equivalents, then to standardized scores based on age (in months) and sex.
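A sketch of this scoring conversion is given below; the item list, the combined-item mapping, and the age/sex norm table are hypothetical placeholders (the real checklists and norms are not reproduced here), and only the counting and standardization logic follows the procedure described above.

```python
# Sketch of the MCDI -> SSLM conversion described above. Item names, the
# combined-item map, and the age/sex norms below are hypothetical placeholders;
# only the scoring logic follows the text.

# Each SSLM item maps to one or more MCDI words (e.g., "sofa/couch").
SSLM_ITEMS = {
    "sofa/couch": {"sofa", "couch"},
    "dog": {"dog"},
    "ball": {"ball"},
    # ... the remaining items of the 100-word checklist
}

# Placeholder norm table: (mean, sd) of the SSLM raw score by age band and sex.
NORMS = {("24-25", "F"): (55.0, 20.0), ("24-25", "M"): (48.0, 21.0)}

def sslm_raw_from_mcdi(words_said):
    """Score 1 per SSLM item if the child says at least one of its MCDI words."""
    said = set(words_said)
    return sum(1 for variants in SSLM_ITEMS.values() if said & variants)

def sslm_standardized(raw, age_band, sex, mean=100, sd=15):
    """Convert a raw score to the standard-score scale (mean 100, SD 15)."""
    norm_mean, norm_sd = NORMS[(age_band, sex)]
    return mean + sd * (raw - norm_mean) / norm_sd

child_words = ["dog", "couch"]                 # words reported on the MCDI checklist
raw = sslm_raw_from_mcdi(child_words)          # -> 2 (dog + sofa/couch counted once)
print(raw, round(sslm_standardized(raw, "24-25", "F"), 1))
```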
Exposures
The exposure for aim 1 was hearing loss, defined as no hearing loss or a combination of the degree of loss (mild (21-40 dB)/moderate (41-60 dB)/severe (61-90 dB)/profound (>91 dB)) and presence in one/both ears (unilateral/bilateral). Degree of loss, calculated using three or four frequency averages, was obtained from UNHS records or parent/caregiver-supplied audiology results, classified using national decibel ranges for the affected ear (unilateral) or better hearing ear (bilateral) [21].
For aim 2, intervention-related factors were considered as separate exposures to explore the impact of intervening on single characteristics individually. Within the bilateral hearing loss cohort, exposures were hearing amplification status at survey completion (amplified vs. unamplified), frequency of hearing device use (never/rarely (no device or use <4 h/day), sometimes/often (4-8 h/day) or always (>8 h/day) derived from parent/caregiver-estimated hours of daily use at time of assessment), and age first enrolled into an EI program (≤3 months, 3.1-6 months, >6 months). Age at first hearing amplification fitting (≤3 months, 3.1-6 months, >6 months) was an exposure for children ever fitted with hearing amplification.
Potential Confounders
Directed acyclic graphs were developed to model assumptions about the causal structures (see Figure S1). For both aims, we identified possible demographic-related confounders as child sex, parent education level (completed at least undergraduate education: yes/no), primary language at home (English only: yes/no), and social disadvantage (measured using the Australian census-based Socio-Economic Indexes for Area (SEIFA), national mean 1000, SD 100; higher scores represent less disadvantage) [20]. Possible birth-related confounders were Neonatal Intensive Care Unit (NICU) admission and gestational age.
Statistical Analyses
All analyses were completed in statistical software R version 4.0.2 [22] and conducted for children with complete data for the respective aim only.
Aim 1
For aim 1, the samples of individuals with and without hearing loss were combined. We used a causal modeling approach to estimate the impact of hearing loss on expressive vocabulary at age 2 years, outlined as follows: Linear regression models were fitted to the data, modeling the outcome conditional on exposure, with three different models considered. Model 1 was an unadjusted model. Model 2 included demographic confounders as covariates, with Model 3 additionally including birth-related factors as covariates. The latter two models included interaction terms where appropriate (see Appendix A for further detail). Models 2 and 3 were fitted to consider different confounding adjustment sets and, additionally, the trade-off between potential bias from the exclusion of birth-related confounders (Model 2) and from sparse data in confounder substrata (Model 3). Estimates of the mean difference in standardized SSLM scores between hearing loss groups were obtained by standardizing (i.e., g-computation [23]) over the hearing loss sample to estimate the effect of hearing loss on SSLM score within the hearing loss population. Confidence intervals and standard errors were obtained via non-parametric bootstrapping. Estimates from Model 3 were interpreted as the primary analysis, which adjusted for demographic and birth-related potential confounders aiming to minimize confounding bias in causal effect estimates, with results under all models also provided. Further details on the statistical analysis methods can be found in Appendix A and Supplementary Figure S1.
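The standardization step can be sketched as follows, here in Python with statsmodels for illustration (the study itself used R); the data frame, column names, and model formula are hypothetical, and the actual models additionally included the interaction terms noted above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# g-computation sketch: fit the outcome regression, then predict each child's
# score under every exposure level, average over the hearing-loss sample, and
# contrast against the reference level. Column names are hypothetical.

def g_computation(df, formula, exposure, ref_level, target):
    model = smf.ols(formula, data=df).fit()
    means = {}
    for level in df[exposure].dropna().unique():
        cf = df.loc[target].copy()
        cf[exposure] = level                    # set everyone to this level
        means[level] = model.predict(cf).mean()
    return {lvl: m - means[ref_level] for lvl, m in means.items()}

def bootstrap_ci(df, n_boot=1000, seed=0, **kw):
    """Non-parametric bootstrap percentile intervals for the contrasts."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        boot = df.sample(len(df), replace=True,
                         random_state=int(rng.integers(0, 2**31 - 1)))
        draws.append(g_computation(boot, target=boot["has_hl"], **kw))
    return pd.DataFrame(draws).quantile([0.025, 0.975])

# Hypothetical usage, given a data frame `data` with these columns:
# effects = g_computation(
#     data,
#     formula="sslm_std ~ C(hl_group) + sex + parent_uni + english_only"
#             " + seifa + nicu + gest_age",
#     exposure="hl_group", ref_level="none", target=data["has_hl"],
# )
```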
Aim 2
For aim 2, hearing-related characteristics of individuals with any hearing loss were described. When estimating the impact of intervention-related factors on expressive vocabulary, we hypothesized the impact would be different for individuals with unilateral or bilateral loss. Also, considering the small sample size of individuals with unilateral hearing loss, we focused on the bilateral group when estimating this impact. Intervention-related factors were considered separate exposure variables and analyzed individually. Three regression models were again considered for each exposure, with the same covariates included as were used in aim 1 based on the same assumption around potential confounders and causal structures. Models 2 and 3 were also adjusted for bilateral hearing loss severity, as we considered this to be an influential factor in the expressive vocabulary of individuals. As with aim 1, Model 3 was interpreted as the primary analysis. For categorical variables with more than two levels, estimates of the mean difference in SSLM scores were calculated compared to a baseline level. We selected the baseline to reflect the "best case scenario", i.e., earlier intervention, earlier age of detection.
Sample Characteristics
The sample included 302 DHH children (75.5% of VicCHILD participants approached) and 1742 children from the ELVS cohort who were considered to not have permanent hearing loss (Figure 1). The mean age at assessment was similar between groups (25.4 months and 24.2 months, respectively), with more children without hearing loss being from an English-speaking-only household (Table 1). Children without hearing loss tended to be less disadvantaged than their DHH peers (mean SEIFA index 1037.6 versus 1015.6), although both groups were above the national SEIFA mean of 1000. Of the DHH children, 209 (69.2%) had bilateral hearing loss, with a relatively even distribution across degrees of loss.
Aim 1: Impact of Hearing Loss on Expressive Vocabulary
Estimates from Model 3 are presented in Table 2 as the primary analysis, along with estimates from Models 1 and 2. Adjusted mean expressive vocabulary scores were estimated to be lower for DHH children (of any severity and in either ear) compared to children with no hearing loss (Table 2). For bilateral hearing loss, the impact became more substantial as the degree increased. Compared to no hearing loss, adjusted mean expressive vocabulary scores ranged from 7.3 points (or 0.5 of a standard deviation) lower for mild bilateral loss (95% CI −11.4 to −2.5, p < 0.01) to 13.5 points (0.9 standard deviations) lower for profound bilateral loss (95% CI −18.5 to −8.4, p < 0.01). For unilateral hearing losses, the impact was less substantial, with adjusted mean scores ranging between 1.5 and 4.4 points lower (0.1 and 0.3 of a standard deviation, respectively) (Table 2).
Aim 2: Impact of Intervention-Related Factors on Expressive Vocabulary in Children with Hearing Loss
Estimates from Model 3 are again presented, with the distribution of hearing loss under each exposure group presented in Table S1. Two hundred and sixteen children (71.5% of the 302 children from aim 1) used hearing amplification at survey completion, with 86.6% of these having bilateral hearing loss (187/216). Of the 22 children with unaided bilateral hearing loss, 81.8% had mild loss (Figure 2). For unilateral hearing loss, 31.2% were fitted with hearing amplification at the time of the survey (29/93) (Table 3). Unlike bilateral losses, there was no obvious relationship between the degree of unilateral loss and the distribution of intervention-related factors. For children with bilateral hearing loss, adjusted mean expressive vocabulary scores were lower if hearing amplification was used at the time of assessment compared to the scores of children without hearing amplification (adjusted mean difference 5.4 points, 95% CI −2.5 to 13.2 points, p = 0.18) (Table 4). Reflective of a well-functioning UNHS environment, the median age at diagnosis was 1.0 month for children with any hearing loss. Most children received hearing amplification early (median age 3 months; Table 3), with fitting generally earlier if the degree of bilateral hearing loss was greater (Figure 2), although 46% of children with mild bilateral loss were fitted after age 6 months. Hearing amplification at 3 months or younger was associated with higher mean expressive vocabulary scores compared to first amplification when older than 3 months for individuals with bilateral hearing loss (adjusted mean difference of 8.6 points, 95% CI 3.1 to 14.1, p < 0.01 and 4.3 points, 95% CI −1.7 to 10.3, p = 0.16 for ages 3.1 to 6 months and >6 months, respectively; Table 4).
The relationship between hearing amplification use and degree of loss appeared stronger for children with bilateral loss than for those with unilateral loss; children rated as always wearing their hearing device tended to have greater degrees of bilateral loss compared to other use categories (Figure 2). The highest expressive vocabulary scores were estimated for children with bilateral hearing loss who never/rarely wore hearing amplification (rated as <4 h daily average use or no hearing device fitted), followed by those who always wore it (>8 h daily average use), with those who sometimes/often wore hearing amplification (4-8 h daily average use) demonstrating the lowest expressive vocabulary scores (Table 4).
Most DHH children (with either unilateral or bilateral hearing loss) were enrolled in EI services early (median age 6 months) and were enrolled with a service at survey completion (Table 3). Children enrolled at older ages tended to have mild or moderate bilateral hearing loss (Figure 2). Mean SSLM scores for children with bilateral hearing loss were higher if enrolment was in the first 3 months of life compared to older ages (adjusted mean difference of 5.4 points, 95% CI −0.1 to 11.0, p = 0.05, and 10.0 points, 95% CI 4.2 to 15.7, p < 0.01 for enrolment at ages 3.1-6 months and >6 months, respectively; Table 4).
Principal Findings
Despite Victoria's sophisticated early identification and intervention systems reaching essentially all children, the spoken expressive vocabulary of 2-year-old DHH children lagged behind that of children without hearing loss. The spoken expressive vocabulary of children with bilateral losses was increasingly impacted as the degree of hearing loss increased, ranging from 0.5 to 0.9 standard deviations below expected levels after adjustment for potential demographic and birth-related confounders. Children with unilateral losses had an expressive vocabulary closer to (yet, on average, still poorer than) that of children without hearing loss, without a clear relationship between outcome and hearing loss severity.
Importantly, we demonstrated that enrolment in EI by 3 months of age resulted in higher spoken expressive vocabulary scores. A similar pattern towards higher expressive vocabulary scores was also observed with earlier amplification. This may provide support for the narrower 1-2-3 month alternative timeline to the 1-3-6 EHDI indicators [24]. However, the association between hearing amplification use and language outcomes was not straightforward. We describe a U-shaped relationship in which children rated as either "never/rarely" or "always" using hearing amplification showed greater expressive vocabulary scores than those children rated as "sometimes/often" using their hearing device(s).
Strengths of the Study
Our 96% UNHS-identified community cohort represents the common early diagnosis pathway occurring in countries with well-resourced EHDI systems. Including children with any degree of hearing loss in either ear, as well as those with unamplified hearing losses, enables a more generalizable estimate of expressive vocabulary performance than previous studies of early intervention cohorts (which excluded unilateral losses [9], only included children with hearing amplification [9], or were restricted to children with very low birthweight [25]). Despite sample sizes restricting the precision of estimates, our causal modeling approach is more flexible than other analysis approaches used elsewhere [9] (see Appendix A).
Limitations
Our statistical approach attempted to minimize potential confounding biases. When interpreting the results, we acknowledge the trade-off between sparse data in confounder substrata (under Model 3) and the potential for additional unmeasured confounding bias (under Model 2). Therefore, results are presented under all models to allow the reader to interpret the estimates while acknowledging these potential limitations. Unmeasured residual confounding remains possible, for example from nonverbal IQ and from differences between our hearing and hearing loss groups in the inclusion of children with developmental delays. Limited by sample size, we analyzed exposures separately for aim 2. This meant we were unable to consider the potential combined effects of these exposures.
Our study was limited to examining spoken vocabulary as a measure of expressive language in DHH children, acknowledging that sign language is an important form of communication for DHH children. Among our VicCHILD participant children, less than 3% were reported to use Australian sign language (Auslan) as the predominant mode of communication. Moreover, there are no standardized or validated tools to measure sign language outcomes in preschool-aged children.
Children with hearing loss were drawn from our population-level data repository, with 60% of participant parents/caregivers reporting their child had no additional special health need or medical diagnosis. This is in keeping with other studies reporting that around 40% of DHH children have concomitant medical comorbidities; however, our study did not collect adequate information about medical comorbidities to determine whether this could be a factor influencing language outcomes.
For aim 2, we assumed the impact of intervention-related factors on expressive vocabulary would be different between individuals with unilateral or bilateral loss and therefore presented the estimated impact for bilateral losses only. A secondary analysis was conducted to estimate the impact for individuals with any permanent hearing loss-unilateral or bilateral. This secondary analysis indicated a similar relationship to that observed for bilateral hearing loss only, with attenuated effect sizes. However, this analysis assumed a constant causal effect across children with both lateralities of hearing loss, which we believe may not be reflective of the true behavior, and therefore have presented bilateral loss only within the main paper.
Our study design surveyed hearing amplification use at a single timepoint, which may miss capturing the variance in and accumulative effect of hearing amplification use over time [26]. All children with a permanent hearing loss who have Australian citizenship are eligible for hearing devices at no cost. We included children with no hearing device fitted in the never/rarely device use category for the aim 2 analysis. This approach may not be as relevant in populations with less generous and equitable access to hearing devices.
While born into the same geographic population, expressive vocabulary assessment occurred around a decade earlier for participants without known hearing loss compared to our DHH cohort. However, we believe our language performance comparisons remain valid since we do not expect typically developing children's language outcomes to have changed greatly across time.
Interpretation in Light of Other Studies
Our findings support and extend findings from other studies by including children with unilateral loss and not limiting eligibility to those enrolled in EI programs. Like other existing studies of older children, we have demonstrated that children with even a mild degree of hearing loss have an expressive vocabulary below what is expected for their peers without hearing loss [27,28]. As also seen elsewhere [9,29], children with bilateral hearing loss demonstrated poorer expressive vocabulary as their degree of hearing loss increased. Slight delays for children with unilateral hearing loss align with reports for similar-age children with minimal hearing losses (unilateral and mild bilateral) [25,30].
We found no clear association between frequency of hearing amplification use and mean expressive vocabulary at age 2 years. The U-shaped performance curve where high vocabulary scores were found both for children reported to wear hearing amplification "never/rarely" and "always" warrants exploration. High hearing amplification use in children with high vocabulary scores may reflect the positive impact earlier detection through UNHS has on language outcomes [31] and the influence of hearing amplification use across time [29]. Alternatively, this group may represent a subgroup of children with hearing loss more likely to score highly in outcomes and adhere to hearing amplification use recommendations due to unmeasured factors. Reverse causation [32] cannot be ruled out for children with high vocabulary scores and low reported hearing amplification use, whereby parents/caregivers cease to enforce hearing aid use for children who are doing well and appear to be hearing without their amplification. This would also explain better expressive vocabulary scores in children with unamplified hearing loss at age 2 years.
There is some evidence that children without hearing loss with larger spoken expressive vocabularies at age 2 will, at age 5, show better performance in academic and behavioral outcomes compared to those with smaller spoken expressive vocabularies [33]. Early identification of spoken expressive vocabulary delays could personalize management, focusing on learning activities and experiences that help optimize skills for children with hearing loss across early childhood. Future research should explore underlying etiological or genetic markers that could predict language trajectories to direct resources to those DHH children in most need of intensive intervention.
Conclusions and Implications
This study provides population-level evidence of delayed spoken expressive vocabulary in DHH children at age 2 years, even with early detection through UNHS. We confirmed that the earlier the enrolment in EI programs and access to hearing amplification, the better the spoken expressive vocabulary outcomes. Specifically, our data support the 1-2-3-month goal to screen, identify, and enter intervention rather than the original 1-3-6 EHDI benchmarks. While the extent of gains that could still be made is unknown, it is clear that interventions are not yet optimized for DHH children, as their early spoken expressive vocabulary outcomes are still poorer than those of their hearing peers. Moving forward, we must aim to more precisely understand the factors that impact language development in DHH children so that intervention can be targeted at those who need it most.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/children10071223/s1, Figure S1: Statistical approach with directed acyclic graphs; Table S1: Frequencies of hearing loss across exposures used for aim 2.
Author Contributions: P.C. conceptualized and designed the study, contributed to data interpretation, and drafted and revised the manuscript; he takes overall responsibility for all aspects of the study; D.A.S. conceptualized and designed the study, was responsible for the analysis and interpretation of the data, and drafted and revised the manuscript; L.S. was responsible for acquisition of the data and reviewed and revised the manuscript; T.H. provided guidance regarding data collection instruments and reviewed and revised the manuscript; M.L. assisted with designing the study, provided guidance regarding data collection instruments, and reviewed and revised the manuscript; E.L.B. and S.R. established ELVS, contributed to data analysis, and reviewed and revised the manuscript; M.W. established VicCHILD and ELVS, conceptualized and designed the study, and reviewed and revised the manuscript; V.S. conceptualized and designed the study, provided guidance regarding the data collection instruments, and reviewed and revised the manuscript. All authors have read and agreed to the published version of the manuscript.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the participants to publish this paper.
Data Availability Statement:
All available data supporting the reported results can be found within this publication and its Supplementary Files. Data are not publicly available because not all VicCHILD and ELVS participants have provided consent for data sharing, and data sharing is limited to ethically approved research.
Interaction terms were included between exposure (hearing loss) and two confounders (sex and parent education), determined a priori based on content knowledge. G-computation was used to estimate the causal effect of interest, relaxing the strict assumption of a constant causal effect across confounder substrata under the traditional regression approach (used in previous studies). For aim 2, no interaction terms were included in the model due to the limited sample size under a complete case analysis.
|
v3-fos-license
|
2019-05-25T13:03:01.272Z
|
2019-05-01T00:00:00.000
|
163167805
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1128/mra.00038-19",
"pdf_hash": "5f67f51b5d33a33b33e4c0762d5aaf16794d716e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46133",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "5f67f51b5d33a33b33e4c0762d5aaf16794d716e",
"year": 2019
}
|
pes2o/s2orc
|
Complete Genome Sequences of Two Melissococcus plutonius Strains with Different Virulence Profiles, Obtained by PacBio Sequencing
Melissococcus plutonius attacks honeybee larvae, causing European foulbrood. Based on their virulence toward larvae, M. plutonius isolates were classified into three types: highly virulent, moderately virulent, and avirulent.
The causative agent of European foulbrood, Melissococcus plutonius, infects honeybee larvae, with serious impacts on bee health (1). Based on multilocus sequence typing analysis, M. plutonius isolates were classified into three clonal complexes (CCs), CC3, CC12, and CC13 (2). These CCs exhibited different virulence profiles toward honeybee larvae in experimental infections; CC12 and CC3 strains were extremely and moderately virulent, respectively, whereas the representative CC13 strain was avirulent (3). To clarify the genetic basis of the distinct pathological characteristics of each CC, we performed complete genome sequencing of M. plutonius DAT606 and DAT585, which are representative CC3 and CC13 strains, respectively. Previously, we sequenced the genomes of two M. plutonius strains, one type strain and one highly virulent strain belonging to CC12 (4,5). Taken together with the previous genomic data, we have covered all virulence profiles of M. plutonius.
M. plutonius DAT606 and DAT585 were isolated from diseased European honeybee (Apis mellifera) larvae in Japan (6) and cultured anaerobically on brain heart infusion agar supplemented with KH2PO4 and starch (KSBHI agar) for 4 days at 35°C. Then, genomic DNA was extracted as described previously, with a slight modification (6); proteinase K treatment was not performed.
Whole-genome sequencing of M. plutonius DAT585 and DAT606 was performed on the PacBio (Menlo Park, CA, USA) RS II platform. The library was prepared using single-molecule real-time (SMRT) cell 8Pac V3 and the P6 DNA polymerase binding kit (PacBio), according to the manufacturer's instructions. Reads were filtered and assembled using SMRT Analysis v2.3 (PacBio) with default settings. The DAT585 genome yielded 100,098 reads encompassing 950,202,716 bp; the mean subread length and N50 value were 9,492 bp and 13,788 bp, respectively. The DAT606 genome yielded 76,697 reads covering 615,765,788 bp; the mean subread length and N50 value were 8,028 bp and 12,056 bp, respectively. Subsequently, the filtered reads for the two genomes were assembled de novo, producing two circular contigs. As reported previously (5), virulent strains possess an additional plasmid, pMP19; therefore, for the virulent strain DAT606, Sanger sequencing was conducted using conventional primer walking, followed by sequence assembly with Sequencher 5.2 software (Gene Codes, Ann Arbor, MI, USA). Primary coding sequence extraction and initial functional assignment were performed using the automated annotation server RASTtk (7). To verify the annotation, the data were inspected and revised manually using the MolecularCloning software v7.07 (In Silico Biology, Kanagawa, Japan). To search for phage DNA components in the DAT585 and DAT606 genomes, we used the Web server PHASTER (8).
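As an aside, the mean subread length and N50 reported above are simple summaries of the read-length distribution; the sketch below (in Python, with made-up lengths rather than the actual PacBio output) illustrates how an N50 is computed.

```python
# Illustrative N50 calculation from subread lengths (values are made up, not the actual run).
def n50(lengths):
    """Smallest length L such that subreads of length >= L contain at least half of all bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

subread_lengths = [15000, 13000, 12000, 9000, 7000, 4000]  # hypothetical
print("mean subread length:", sum(subread_lengths) / len(subread_lengths))  # 10000.0
print("N50:", n50(subread_lengths))                                         # 12000
```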
The chromosomes of both strains contain 60 tRNA genes for all amino acids and four rRNA operons. Additionally, both chromosomes harbor two prophages, one intact and one incomplete. The DAT606 genome contains two plasmids, pMP1 and pMP19, although pMP19 was partially sequenced because of long repeated sequences in a plasmid gene. However, the avirulent strain, DAT585, harbors the pMP1 plasmid only (Table 1).
Data availability. The whole-genome sequences of the chromosome and two plasmids of M. plutonius DAT585 and DAT606 were deposited in DDBJ/GenBank under accession numbers AP018524 to AP018528 (Table 1). The raw sequence reads were deposited in the DDBJ Sequence Read Archive (DRA)/NCBI SRA under accession numbers DRA008260 and DRA008261 (Table 1).
ACKNOWLEDGMENTS
This study was supported by a Grant-in-Aid for Scientific Research (C) (17K08818) from the Japan Society for the Promotion of Science.
D.T. designed the study, and K.O. and D.T. determined the sequences. K.O. deposited the data in DDBJ and GenBank. All authors contributed to data analysis and preparation of the manuscript and approved the final version.
We declare no competing interests.
|
v3-fos-license
|
2019-03-07T14:18:14.247Z
|
2015-11-02T00:00:00.000
|
70591601
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/archive/2015/617074.pdf",
"pdf_hash": "1acc1126573f5fefa748b21c68b08aad0cf15c96",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46137",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1acc1126573f5fefa748b21c68b08aad0cf15c96",
"year": 2015
}
|
pes2o/s2orc
|
Effect of Low Dose Dexmedetomidine on Emergence Delirium and Recovery Profile following Sevoflurane Induction in Pediatric Cataract Surgeries
This randomized trial was conducted to assess the efficacy and recovery profile of low dose intravenous dexmedetomidine in prevention of post-sevoflurane emergence delirium in children undergoing cataract surgery. Sixty-three children aged 1–6 years were included. Anesthesia was induced with sevoflurane and the airway was maintained with an LMA. They were randomized to group D 0.15 (received intravenous dexmedetomidine 0.15 μg/kg), group D 0.3 (received dexmedetomidine 0.3 μg/kg), or group NS (received normal saline). The incidence of emergence delirium, intraoperative haemodynamic variables, Aldrete scoring, pain scoring, rescue medication, and discharge time were recorded. Emergence delirium was significantly reduced in the dexmedetomidine treated groups, with an incidence of 10% in group D 0.15, none in group D 0.3, and 35% in the NS group (p = 0.002). Significantly lower PAED scores were observed in the D 0.15 and D 0.3 groups compared to the NS group (p = 0.004). Discharge time was significantly prolonged in the NS group compared to D 0.15 and D 0.3 (45.1 min ± 4.4 versus 36.8 min ± 3.8 versus 34.4 min ± 4.6; p < 0.02). Intravenous dexmedetomidine in low doses (0.3 and 0.15 μg/kg) was found to be effective in reducing emergence delirium in children undergoing unilateral cataract surgery.
Introduction
Emergence delirium is often witnessed after sevoflurane anesthesia, with an incidence of approximately 20%-60% [1,2]. It is characterized by mental disturbance consisting of hallucinations, delusions, and confusion manifested by moaning, restlessness, involuntary physical activity, and thrashing about in bed during the recovery from general anaesthesia [3]. While emergence delirium remains a poorly understood phenomenon, a variety of potential etiologies including pain, stressful induction, hypoxemia, rapid awakening in a hostile environment, and physical stimulation (noise) have been implicated [4]. It carries a significant risk of bleeding from the site of surgery, psychological trauma to the parents, and delayed discharge of the patient from the postanesthesia care unit (PACU) [5].
Dexmedetomidine (DEX), an alpha-2 agonist, is a recent drug used to prevent post-sevoflurane agitation. However, due to the variation in the dose used in various studies, to date there has been no consensus on the dose of DEX used for prevention of emergence agitation [11][12][13]. Doses of 0.5 μg/kg and above have been effectively used to reduce post-sevoflurane agitation but were associated with an increased incidence of side effects such as reductions in heart rate and blood pressure, delayed emergence, and delayed extubation [12].
Anxiety before and during induction of anesthesia has often been associated with an increased risk of postoperative negative behavioural changes apart from emergence delirium [7]; as a result, avoiding premedication drugs may not be desirable in routine clinical practice. However, the majority of studies evaluated children without premedication, which made previous studies less realistic.
In the literature, there is a considerable overlap in the terminologies of emergence delirium and agitation. Most of the previous studies have evaluated emergence agitation instead of emergence delirium, resulting in inconsistent and variable results.
Oral midazolam 0.5 mg/kg is routinely used as a premedicant in our institution 20-30 minutes before surgery. We hypothesized that two lower doses of intravenous (IV) dexmedetomidine, that is, 0.15 μg/kg and 0.3 μg/kg, given in conjunction with oral midazolam, might be more effective than placebo in reducing the incidence of post-sevoflurane agitation in children undergoing elective cataract surgery.
We, therefore, aimed to study dexmedetomidine 0.15 μg/kg and 0.3 μg/kg in children premedicated with midazolam (0.5 mg/kg) using a validated scale.
Materials and Methods
This study was conducted in the ophthalmic centre of a tertiary care institute from July 2010 to December 2011. After obtaining the local ethical committee approval and written informed consent from parents/legal guardians of the children, 63 American Society of Anesthesiologists physical status I and II children, aged between 1 and 6 years, undergoing elective cataract surgery were included in the randomized, controlled, double-blind study. Children with a history of allergy to anesthetic agents, seizures, mental retardation, endocrine disorder, psychiatric disorder, and emergency procedure or history of a previous episode of emergence delirium or who refused to take premedication were excluded from the study.
All children were kept fasting according to the NPO guidelines. They were premedicated with oral midazolam syrup 0.5 mg/kg, 30 minutes before induction of anaesthesia. A 4-point scale for parental separation score and induction score was noted (Appendices A and B) [14] prior to shifting the child to the operating room. Inside the operating room, routine anesthesia monitoring was established while taking care not to stimulate the child. Anesthesia was induced with 5-8% sevoflurane and 100% oxygen. After induction of anesthesia, venous cannulation was established and, once the adequate depth of anesthesia (MAC = 2.0) was ensured, a laryngeal mask airway (LMA) of appropriate size was inserted for the maintenance of the airway.
Anesthesia was maintained with 1-1.5% sevoflurane with 60% nitrous oxide in oxygen with spontaneous breathing. Patients were randomized into 3 groups using a computer based randomization chart. Group NS (n = 20) received normal saline, group D 0.15 (n = 20) received IV dexmedetomidine 0.15 μg/kg, and group D 0.3 (n = 23) received IV dexmedetomidine 0.3 μg/kg, and the results of randomization were concealed in opaque envelopes. The study drug was prepared by an independent anesthetist not participating in the study after opening a sealed envelope. The anesthetist, surgeon, and observer were all blinded. The master code was held by a person not participating in the study. All the study drugs were diluted in 5 mL of normal saline and administered over a 5-minute duration by a blinded anesthetist. Analgesia was given in the form of a sub-Tenon block with 0.1 mL/kg of 0.5% bupivacaine, administered by the surgeon, and supplemented with IV paracetamol 15 mg/kg. Ventilation was assisted if end-tidal carbon dioxide (EtCO2) levels increased above 45 mm Hg. At the end of the procedure, the LMA was removed and sevoflurane was switched off after the removal of the LMA. The child was shifted to a calm, quiet, mildly illuminated postanesthesia care unit (PACU) recovery room, with one of the parents allowed to stay in the PACU with the child.
All children were monitored continuously until their discharge from the PACU, and oxygen saturation, heart rate, and noninvasive blood pressure were recorded every 15 minutes. The same trained independent PACU nurse, blinded to the anaesthetic technique, repeatedly recorded the degree of agitation every 15 min up to 1 hour after admission. All assessors were trained and experienced in the application of the assessment scales. The state of agitation was recorded using the Pediatric Anaesthetic Emergence Delirium (PAED) scale (Appendix C) [14,15], and a score ≥10/20 was considered to indicate delirium. If agitation was present, the first measure was consolation of the child by the parent; if the child remained inconsolable for 5 minutes, rescue medication with fentanyl 0.5 μg/kg was used. Postoperative pain was assessed using the Face, Legs, Activity, Cry, Consolability (FLACC) pain scale (0-10 score range) (Appendix D) [16]. If the FLACC score was more than five, intravenous fentanyl 0.5 μg/kg was given [16]; a pain score of >5 was considered the cut-off point for rescue medication with fentanyl. Modified Aldrete scores (Appendix E) were recorded during the PACU stay [17]. Children were considered ready for discharge from the PACU with an Aldrete score ≥9. The primary outcome of the study was the incidence of postoperative delirium as measured with the PAED score. Secondary end points included intraoperative haemodynamic variables, Aldrete scoring, pain scoring, rescue medication, and discharge time.
The sample size was based on a pilot study. Assuming the incidence of emergence delirium to be 40%, a reduction to 8% (80% reduction) was considered clinically significant, justifying the addition of dexmedetomidine. Based on these assumptions, with an α error of 0.05 (one-sided) and power of 80%, 18 patients were required in each group. However, to compensate for possible dropouts, 63 patients were enrolled.
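For illustration, the following Python sketch applies the standard unpooled normal-approximation formula for comparing two proportions under the stated assumptions (40% versus 8%, one-sided α = 0.05, power 80%); it returns roughly 19 per group, close to the 18 reported, with the small difference attributable to the particular approximation or software used.

```python
# Approximate per-group sample size for comparing two proportions
# (unpooled normal approximation; assumptions taken from the text above).
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha)   # one-sided alpha
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_group(0.40, 0.08))      # ~19 per group, close to the 18 reported
```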
SPSS (SPSS Inc., Chicago, IL, version 16.0 for Windows) was used for statistical analysis. Demographic data (continuous variables) of the 3 groups were expressed as mean ± SD and were analysed by ANOVA. Nonparametric data (scores) were expressed as median ± IQR and analysed using the Kruskal-Wallis test. If found significant, the Mann-Whitney test was used for pairwise comparison. Serial changes in intraoperative parameters (heart rate, systolic BP, diastolic BP, mean BP, EtCO2, and respiratory rate) and postoperative heart rate and postoperative BP were analysed using two-way ANOVA. In this analysis, qualitative and quantitative variables were recorded repeatedly over time for each subject. The paired Student's t-test was used to compare baseline variables at different time intervals. A p value of < 0.05 was taken to be significant.
Data were tested for normality using the Kolmogorov test, for homogeneity of between-groups variance using Levene's test, and for sphericity using the Mauchly test. If the Mauchly test was significant, indicating violation of the assumption of sphericity, we used the Greenhouse-Geisser correction for within-subject effects.
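A compact Python sketch of the score comparisons described above (omnibus Kruskal-Wallis test followed by pairwise Mann-Whitney tests only when the omnibus test is significant) is shown below; the group score vectors are placeholders, not study data, and the actual analysis was performed in SPSS.

```python
# Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney U tests (placeholder data).
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {                       # hypothetical PAED scores per group
    "NS":    [12, 10, 14, 9, 11, 8, 13],
    "D0.15": [6, 7, 5, 9, 4, 8, 6],
    "D0.3":  [4, 5, 3, 6, 4, 5, 2],
}

stat, p_omnibus = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {stat:.2f}, p = {p_omnibus:.4f}")

if p_omnibus < 0.05:             # pairwise comparisons only when the omnibus test is significant
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        u, p = mannwhitneyu(a, b, alternative="two-sided")
        print(f"{name_a} vs {name_b}: U = {u:.1f}, p = {p:.4f}")
```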
Results
Out of the sixty-nine children enrolled in the study, the guardians of six patients refused to give consent; therefore, sixty-three children completed the study and were analysed (Figure 1). Children in the three groups were comparable with respect to their age, gender, weight, duration of anesthesia, preoperative parental separation scores, and induction scores (Table 1).
Seven children (35%) in the normal saline group developed clinically significant emergence delirium with a PAED score ≥10. The incidence was significantly greater than that encountered in the dexmedetomidine groups (2 of the 20 patients in group D 0.15, but none in group D 0.3, developed a PAED score ≥10). The PAED scale score for the first 15 minutes postoperatively was significantly different among the three groups (p < 0.05, Kruskal-Wallis) (Table 3). Pairwise comparison revealed significantly lower PAED scores in groups D 0.3 and D 0.15 compared to group NS in the first fifteen minutes postoperatively (p < 0.05, Mann-Whitney). However, scores were comparable between groups D 0.3 and D 0.15. After 15 min in the PACU, the PAED scale scoring was comparable in all the three groups (p > 0.05, Mann-Whitney).
No differences were found between the study groups with respect to pain, and no patient in any of these groups attained a pain score >5 requiring rescue medication (Table 2). However, rescue medication was given in 35% of patients in the NS group and 10% of patients in the D 0.15 group for a PAED score ≥10/20, as these patients were refractory to parental counselling. Time to meet discharge criteria was comparable between group D 0.3 (34.4 min ± 4.6) and group D 0.15 (36.8 min ± 3.8), p > 0.05. However, the discharge time was significantly longer in group NS compared to D 0.15 and D 0.3, p = 0.029.
No significant reduction of heart rate, blood pressure, respiratory rate, or saturation was noted in the three groups. None of the children developed bradycardia, hypotension, or desaturation throughout the study period. Vomiting was observed in 2 children, 1 each in groups D 0.15 and NS, in the postoperative period, which responded to IV ondansetron.
Discussion
The results of our study suggest that intravenous dexmedetomidine (0.3 μg/kg) effectively reduces emergence agitation in children premedicated with oral midazolam and undergoing cataract surgery. The incidence of emergence delirium in the control group in our study was 35%, in accordance with Ghai et al. [10], who reported an incidence of emergence delirium of 27.5% in the control group in pediatric cataract surgery.
There is no clear definition of emergence delirium, and it has often been used interchangeably with emergence agitation (EA). The two have been distinguished by Sikich and Lerman, who defined emergence delirium (ED) as "a disturbance in a child's awareness of attention to his/her environment with disorientation and perceptual alterations including hypersensitivity to stimuli and hyperactive motor behaviour in the immediate postanesthesia period." This primarily cognitive disturbance needs to be distinguished from EA, where pain, previous underlying anxiety, and other unspecified factors contribute to the restlessness of the child in the postoperative period [18][19][20].
In our study, an attempt to eliminate this confounding factor was made with an effective sub-Tenon block with local anesthetic and intravenous paracetamol supplementation, which has been reported to provide adequate analgesia in pediatric cataract surgeries, and none of our patients required intraoperative rescue medication for inappropriate pain therapy.
Although the use of dexmedetomidine to prevent and treat post-sevoflurane agitation [11][12][13] has been documented in the literature, the evaluation of its effect on emergence delirium, which now forms a distinct entity, remains limited. The exact mechanism of α2 agonists in the prevention of emergence delirium is yet to be elucidated. The proposed mechanism is a reduction in noradrenergic output from the locus ceruleus, thereby facilitating the firing of inhibitory neurons such as those of the gamma-aminobutyric acid (GABA) system [20].
Shukry et al. studied the effects of a continuous perioperative infusion of 0.2 μg·kg−1·h−1 dexmedetomidine on the incidence of ED in 50 children aged 1-10 years scheduled for sevoflurane-based GA. They found a significant reduction in the incidence of ED with dexmedetomidine (p = 0.036). Additionally, the number of episodes of ED was lower with dexmedetomidine (p < 0.017). However, the pain scores, times to extubation, and discharge from the PACU were the same [2].
Patel et al. [11] evaluated postoperative emergence delirium as well as the analgesic sparing effect of 2 mcg/kg dexmedetomidine, compared to 1 mcg/kg fentanyl, in children undergoing adenotonsillectomy. Emergence delirium was evaluated by the PAED scale as well as the Cole scale [21]. The results demonstrated a significant reduction in emergence agitation in the dexmedetomidine treated group (incidence of EA, 18% compared to 45.9% with fentanyl). However, there was a significant drop in heart rate as well as blood pressure in the dexmedetomidine treated group compared to fentanyl (p < 0.001).
On the other hand, Bong et al. failed to demonstrate any effect of 0.3 mcg/kg dexmedetomidine on the incidence of emergence delirium in children undergoing general anaesthesia for magnetic resonance imaging.
In our study, we used 0.3 mcg/kg and 0.15 mcg/kg doses of dexmedetomidine in midazolam premedicated children aged 1-6 years undergoing cataract surgery. Children in the age group of 1-6 years have the highest incidence of emergence agitation and delirium [4,5,7]. Notably, in our study, emergence delirium was reduced from 35% in the NS group to 0% and 10% in groups D 0.3 and D 0.15, respectively. This is in contrast to the studies by Bong and Ng and by Ibacache et al. [13,16] using similar doses. The difference in results could be attributed to the lack of premedication and the difference in the nature of the procedures performed in the abovementioned trials. Midazolam premedication is routinely used in our setup for children undergoing cataract surgery. Preoperative anxiety results in restless recovery [11,22]. Midazolam premedication decreases preoperative anxiety and calms the child, thus facilitating parental separation. It is also reported to decrease emergence agitation following sevoflurane anesthesia with no delay in discharge [22]. In contrast, other studies have shown no effect of midazolam premedication on reduction of emergence agitation after sevoflurane anesthesia [23]. Aouad and Nasr [24] speculated that midazolam, a short acting premedicant, may exert a residual effect at the end of a short procedure and decrease the incidence of EA, while the serum level might be too low to sedate a child after longer procedures. As cataract surgery is a short procedure, we assume a residual effect of midazolam in the reduction of agitation.
Dexmedetomidine (0.15 and 0.3 μg/kg) was associated with a shorter discharge time compared to the NS group. This could be due to the greater number of children receiving rescue medication in the form of intravenous fentanyl postoperatively in the NS group (35% compared to 10% and 0% in groups D 0.15 and D 0.3, respectively). Our results are contrary to the meta-analysis which showed extended discharge time with dexmedetomidine compared to placebo [25]. The eight randomized trials evaluated in the meta-analysis had used comparatively higher doses of dexmedetomidine or a bolus injection followed by an infusion dose, which could be a reason for the difference in the results.
In our study, we used the PAED scale, the only validated scale for rating emergence agitation. The investigators who developed the PAED scale assessed children 10 min after awakening (the child remained awake thereafter). This was a problem in the early stages of our study, as children who were asleep were receiving a rating of 4 on the first 3 items of the PAED scale (i.e., they were not able to make eye contact, they were not aware of the surroundings, and their actions were not purposeful). Therefore, we had to modify the scoring on the scale and rate these items as zero. It is obvious that children who were asleep were not agitated. A similar observation was reported by Patel et al. [11] in their study. We used a PAED score ≥10 as the indicator of agitation. A score of ≥10 on the PAED scale has been defined as the best discriminator between the presence and absence of clinical agitation, as reported by Bong and Ng [16].
Rapid awakening in a hostile environment might frighten the children and may provoke agitation.In order to eliminate this, we allowed parental presence in a quiet and warm postanesthesia care unit.
It is difficult to completely discriminate between pain-related agitated behavior and emergence delirium in nonverbal and preschool children, as there is some overlap of categories in scales assessing pain and emergence delirium. The lack of evaluation of the correlation between PAED and FLACC scores can be taken as a shortcoming of our study. However, as the pain scores were comparable in all the three study groups and none of our patients required intraoperative rescue medication for inappropriate pain therapy, bias due to this seems unlikely.
In conclusion, intravenous dexmedetomidine 0.3 μg/kg is effective for reducing agitation after sevoflurane anesthesia in children premedicated with 0.5 mg/kg of midazolam undergoing cataract surgery, without causing adverse side effects such as increased sedation, hypotension, bradycardia, or delayed discharge. Intravenous dexmedetomidine 0.15 μg/kg was also effective, with no delay in discharge time; however, it did not completely eliminate agitation.
Table 1: Demographic and baseline variables. Group NS: intravenous normal saline. Data are presented as mean ± SD except when stated otherwise.
Table 5: FLACC score. Minimum score 0 and maximum score 10. Intensity of pain increases with increasing score.
|
v3-fos-license
|
2018-04-03T01:48:28.881Z
|
2017-08-13T00:00:00.000
|
12292794
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/omcl/2017/9080869.pdf",
"pdf_hash": "30a40ff2ba406b7f89424c98b4b0d5ec94968ae1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46138",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "9620734ef76e3fa3e080954b586fdd5712d81561",
"year": 2017
}
|
pes2o/s2orc
|
Cellular and Molecular Mechanisms of Diabetic Atherosclerosis: Herbal Medicines as a Potential Therapeutic Approach
An increasing number of patients diagnosed with diabetes mellitus eventually develop severe coronary atherosclerosis disease. Both type 1 and type 2 diabetes mellitus increase the risk of cardiovascular disease associated with atherosclerosis. The cellular and molecular mechanisms affecting the incidence of diabetic atherosclerosis are still unclear, as are appropriate strategies for the prevention and treatment of diabetic atherosclerosis. In this review, we discuss progress in the study of herbs as potential therapeutic agents for diabetic atherosclerosis.
Introduction
Cardiovascular diseases (CVDs), including atherosclerosis, are important complications of diabetes and the leading causes of mortality in patients with diabetes. Systemic factors accompanying diabetes, such as dyslipidaemia and hypertension, are thought to affect the development of diabetic vascular diseases. In addition, insulin resistance and excess production of advanced glycosylation end products (AGEs) contribute to disorders of lipid metabolism, oxidative stress, endothelial dysfunction, monocyte recruitment, foam cell formation, phenotype changes in vascular smooth muscle cells (VSMCs), and thrombosis formation [1]. Moreover, diabetes and atherosclerosis exhibit common pathologies, although the underlying mechanisms are still being explored.
Botanical and natural drugs have a long, documented history in treating diabetes and vascular diseases. The vascular-protective properties of herbal medicines include their ability to scavenge free radicals, inhibit apoptosis, and reduce inflammation and platelet aggregation [2]. Most recently, Li et al. [3] uncovered the antidiabetes effect of artemisinins, the mechanism of which involves driving the in vivo conversion of pancreatic α cells into functional β-like cells by enhancing GABA signalling. Furthermore, owing to the multitarget effects and comprehensive sources of herbal medicines, it remains of utmost importance to improve our understanding of their potential use in the treatment of diabetes and diabetic atherosclerosis despite their reported side effects. Since diabetic atherosclerosis is a multifactorial disease, in this review, we first discuss the cellular and molecular mechanisms underlying the pathogenesis of diabetic atherosclerosis and then the progress in the study of herbs as potential therapeutic agents for diabetic atherosclerosis.
Cell Types Involved in Diabetic Atherosclerosis
2.1. Myeloid Cells. Diabetes and atherosclerosis are chronic inflammatory conditions. Myeloid cells (neutrophils, monocytes, and macrophages) are involved in both atherosclerosis and diabetes. The migration of circulating monocytes into the vessel wall is critical for the development of diabetic atherosclerosis. Moreover, intercellular adhesion molecule-1 (ICAM-1), monocyte chemoattractant protein-1 (MCP-1), and macrophage migration inhibitory factor (MIF), which regulate the adhesion of monocytes, are dysregulated in hyperglycemia-induced atherosclerosis in animal models. Increased numbers of foam cells derived from macrophages promote the acceleration of atherosclerotic lesions in diabetic ApoE−/− mice [4,5]. A more inflammatory monocyte/macrophage phenotype, with secretion of higher levels of proinflammatory cytokines, was detected in both animal models and patients with diabetes mellitus [6]. Increases in long-chain acyl-CoA synthetase 1 (ACSL1), toll-like receptor (TLR) 2, and TLR4 contribute to the increased inflammatory monocyte/macrophage phenotype in the context of diabetes [6]. Neutrophil infiltration also has a role in diabetic atherosclerosis. In addition, T-cell function is closely related to atherosclerosis in the diabetic environment, and inflammatory monocytes have been shown to activate Th17 cells under diabetic conditions [7].
Endothelial Cells.
Endothelial dysfunction due to inflammation and oxidative stress is a crucial characteristic of diabetes mellitus-linked atherosclerosis. Endothelial dysfunction is associated with decreased nitric oxide (NO) availability, either through loss of NO production or of NO biological activity [8,9]. The excess generation of free oxygen radicals leads to apoptosis in endothelial cells [9]. In hyperglycemia, chronic inflammation increases vascular permeability, promotes the generation of adhesion molecules and chemokines, and stimulates the accumulation of monocytes in the artery wall. The interleukin-1 (IL-1) receptor antagonist anakinra improves endothelial dysfunction in diabetic animals via attenuation of the proinflammatory enzymes cyclooxygenase (COX) and inducible nitric oxide synthase (iNOS) triggered by diabetes in the vascular wall [10,11].
Smooth Muscle Cells.
Proliferation and accumulation of smooth muscle cells are detected in both type 1 and type 2 diabetes mellitus. However, it is still unclear whether changes in smooth muscle cells are a direct result of the diabetic environment or are caused by endothelial injury and macrophage recruitment. According to a report by Chen et al. [12], various concentrations of glucose (5.6, 11.1, 16.7, and 22.2 mM) increase the proliferation of vascular smooth muscle cells (VSMCs) in a concentration-dependent manner after 48 h of incubation. Another study showed that after initial injury, growth factors and cytokines released by endothelial cells, inflammatory cells, and platelets promote changes in VSMC phenotypes, thereby enhancing VSMC proliferation and migration [13]. Additionally, aortic smooth muscle cells isolated from NOX−/−ApoE−/− mice exhibit a dedifferentiated phenotype, including loss of contractile gene expression [14].
Platelets.
Accumulating evidence has shown that platelet hyperreactivity is a crucial cause of diabetic atherosclerosis in both animal models and diabetic patients. Enhanced platelet aggregation and synthesis of thromboxane A2 were detected within days of streptozotocin-(STZ-) dependent induction of diabetes in a rat model [15]. Platelets from patients with diabetes have been shown to have decreased sensitivity to antiaggregation agents, such as prostacyclin (PGI2) and NO [16]. Glycated low-density lipoprotein-(GlyLDL-) and hyperinsulinemia-induced impairment of calcium homeostasis, activation of protein kinase C (PKC), increased generation of reactive oxygen species (ROS), and decreased NO bioactivity result in hyperactivation of platelets [17]. According to a report by Wang et al., a significant correlation between plasma CTRP9 concentrations (a novel adiponectin paralog) and platelet aggregation amplitude was observed in high-fat diet-induced diabetic C57BL/6J mice. Enhancing CTRP9 production and/or exogenous supplementation of CTRP9 may protect against diabetic cardiovascular injury via inhibition of abnormal platelet activity [18].
AGEs and RAGE. Schmidt et al. found that the interactions between AGEs and RAGEs enhance the adhesion of monocytes to endothelial cells via stimulating the expression of nuclear factor-κB (NF-κB)-dependent proinflammatory and prothrombotic molecules [21]. Consequently, the AGE/RAGE axis contributes to diabetic atherosclerosis by attracting monocytes to the vascular intima, increasing oxidative stress, inducing endothelial dysfunction, and promoting vascular wall remodeling [22,23]. Menini et al. have shown that d-carnosine-octylester-(DCO-) attenuated AGE formation is related to its reactive carbonyl species-(RCS-) quenching activity [24]. Moreover, they also revealed that DCO treatment attenuated lesion size, necrotic area, and apoptotic cells in diabetic ApoE-null mice. These protective effects were more effectively achieved by early treatment (60 mg/kg body weight, from weeks 1 to 11, DCO early) than by late treatment (60 mg/kg body weight, from weeks 9 to 19, DCO late) [25]. Zhu et al. [26] demonstrated that immunizing diabetic ApoE−/− and low-density lipoprotein (LDL) receptor knockout (LDLR)−/− mice with AGE-LDL significantly reduced atherosclerosis, indicating that vaccination with AGE-LDL may offer a novel approach for the treatment of atherosclerosis in patients with diabetes. Inhibition of RAGE using murine-soluble RAGE (sRAGE) attenuates atherosclerotic lesions in STZ-induced diabetic ApoE−/− mice and ApoE−/−/db/db mice [27]. These findings further support the roles of AGEs and RAGE in the macrovascular complications of diabetes, and blockade of RAGE may be a potential therapeutic strategy in diabetic atherosclerosis.
ACSL1.
A recent study showed that monocytes and macrophages express increased levels of ACSL1 (an enzyme that catalyzes the thioesterification of fatty acids) in both diabetic mouse models and human subjects [6]. ACSL1 is markedly induced by the TLR4 ligand lipopolysaccharide (LPS) in isolated macrophages, suggesting that ACSL1 may be a downstream effector of the TLR4 cascade in macrophages [28]. Myeloid-specific ACSL1 deficiency results in a specific reduction in 20:4-CoA levels and completely prevents the increased release of prostaglandin E2 (PGE2) and the increased inflammatory phenotype in monocytes and macrophages from diabetic mice, suggesting that the inflammatory phenotype is associated with increased expression of ACSL1. In addition, the increased chemokine (C-C motif) ligand 2 (CCL2) secretion from macrophages in diabetic mice is completely prevented by ACSL1 deficiency, supporting the notion that monocyte recruitment is reduced by ACSL1 deficiency [6]. Kanter et al. [29] demonstrated that ACSL1 could directly influence ATP-binding cassette transporter A1 (ABCA1) levels and cholesterol efflux in mouse macrophages. Mouse macrophages deficient in ACSL1 displayed increased ABCA1 levels and increased apolipoprotein A-I-dependent cholesterol efflux in the presence of unsaturated fatty acids compared with those of wild-type mouse macrophages. Conversely, overexpression of ACSL1 led to reduced ABCA1 levels and reduced cholesterol efflux in the presence of unsaturated fatty acids. Taken together, the reduced levels of cholesterol efflux and expression of ABCA1 in mouse macrophages in the context of diabetes and elevated fatty acid load are partly mediated by ACSL1.
Paraoxonase (PON1).
In the state of high oxidative stress, such as STZ-induced diabetes, serum PON1 and arylesterase activities were reduced [30,31]. Decreased serum PON activity is related to glycation and glycol oxidation of high-density lipoprotein (HDL) in the hyperglycemic state, thus leading to impairment of HDL activity, such as protection of LDL from oxidation, cholesterol efflux from cells, and inhibition of monocyte migration to endothelial cells [32,33]. Taş et al. have demonstrated that vitamin B6 supplementation enhances serum PON1 and arylesterase activities, which could be related to the potential direct effects of this vitamin on the enzyme and/or to its ability to reduce oxidative stress [34]. These results suggested that protection of PON1 from inactivation may be a potential therapeutic approach for the treatment of diabetic atherosclerosis.
3.1.4. Insulinotropic Polypeptide (GIP). As illustrated previously, high glucose accelerates atherosclerosis and foam cell formation. GIP potently stimulates insulin release from the pancreas under conditions of normal glucose tolerance. However, under diabetic conditions, the activity of GIP is reduced [35]. Thus, GIP is thought to be involved in diabetic atherosclerosis. According to Nogi [37].
3.1.5. MicroRNAs. Villeneuve et al. [38] first demonstrated the role of miR-125b in vascular complications of diabetes. They verified that miR-125b, which targets SUV39H1, led to increased levels of inflammatory genes, such as IL-6 and MCP-1. A miR-125b mimic significantly increased monocyte binding to smooth muscle cells in db/db mice. According to a study by Reddy et al. [39], the expression levels of miR-200b and miR-200c were increased, whereas Zeb1 protein levels were decreased, in VSMCs and aortas from db/db mice relative to those in control db/+ mice. Transfection with a miR-200 mimic downregulated Zeb1, upregulated the inflammatory genes COX-2 and MCP-1, and promoted monocyte binding in db/+ VSMCs. Both miR mimics and Zeb1 siRNA increased the proinflammatory response in db/db VSMCs. In contrast, miR-200 inhibitors reversed the enhanced monocyte binding of db/db VSMCs. Moreover, miR-504 was significantly upregulated in db/db VSMCs compared with db/+ VSMCs [40]. miR-504 may enhance extracellular regulated protein kinase 1/2 (ERK1/2) activation by targeting Grb10 and thereby contribute to changes in the VSMC phenotype. According to Xu et al. [41], higher miR-138 levels and reduced expression of silent information regulator 1 (SIRT1) were observed in SMCs isolated from db/db mice. Additionally, miR-138 promotes smooth muscle cell proliferation and migration in db/db mice through downregulation of SIRT1, whereas transfection with a miR-138 inhibitor reverses these effects.
3.1.6. Tribbles Homolog 3 (TRIB3). The expression of TRIB3, a protein made up of 358 amino acids, is increased in patients and animals with type 2 diabetes [42]. Endoplasmic reticulum stress, an important feature of diabetes, has also been shown to increase TRIB3 expression, thus promoting cell death in response to endoplasmic reticulum stress [43]. TRIB3 impairs insulin metabolic signalling by increasing serine phosphorylation of insulin receptor substrate 1 (IRS-1), reducing activation of phosphatidylinositol 3-kinase (PI3K)/Akt [44], or directly inhibiting the phosphorylation of Akt [45,46]. Several mechanisms are involved in TRIB3-dependent promotion of atherosclerotic lesions. TRIB3 impairs insulin-mediated IRS-1/Akt signalling in endothelial cells [47], leading to reduced endothelial nitric oxide synthase (eNOS) and NO bioavailability [46], which is associated with endothelial dysfunction and increased leukocyte adhesion to endothelial cells, important steps for atherosclerotic lesion formation (Figure 1) [9]. In addition, TRIB3 is involved in lipid metabolism and macrophage apoptosis, an important feature of vulnerable plaques [48,49]. According to a study by Wang et al. [49], silencing of TRIB3 in STZ plus diet-induced diabetic ApoE−/−/LDLR−/− mice significantly decreases insulin resistance and blood glucose and reduces the numbers of apoptotic cells and macrophages in atherosclerotic lesions. Consequently, silencing of TRIB3 attenuates the atherosclerosis burden and promotes plaque stability in diabetic mice. Thus, TRIB3 is a promising target for the treatment of diabetic atherosclerosis.
3.2.1. The Janus Kinase (JAK)/Signal Transducers and Activators of Transcription (STAT) Cascade. JAK/STAT is an essential intracellular pathway that regulates leukocyte recruitment, foam cell formation, and the proliferation and migration of VSMCs, which are important features of atherosclerosis [50][51][52]. STAT isoforms have been found in atherosclerotic lesions in both humans and animal models [53,54]. The STAT signalling cascade contributes to macrophage apoptosis in advanced atherosclerotic plaques [55]. Inhibition of JAK2, STAT1, and STAT3 reduces lesion size and neointimal hyperplasia [56,57]. Moreover, JAK/STAT is also a pivotal inflammatory mechanism through which hyperglycemia contributes to the pathogenesis of diabetes mellitus and its vascular complications [58][59][60]. High glucose stimulates endothelial IL-6 secretion via redox-dependent mechanisms, which may consequently induce STAT3 activation and ICAM-1 expression; the specific STAT3 inhibitor SI-201 (20 μM) suppresses high glucose-induced ICAM expression in cultured human umbilical vein endothelial cells (HUVECs) [61]. The suppressor of cytokine signalling (SOCS) family regulates JAK/STAT signalling through STAT binding, kinase inhibition, targeting for proteasomal degradation, or direct suppression of JAK tyrosine kinase activity [62,63]. According to a report by Recio et al. [64], SOCS1 peptide inhibits STAT1/STAT3 activation and target gene expression in VSMCs and macrophages and blocks the migration and adhesion of macrophages in vitro. Their results showed that intraperitoneal injection of SOCS1 peptide into STZ-induced diabetic ApoE −/− mice (ages 8 and 22 weeks) for 6-10 weeks suppressed STAT1/STAT3 activation in atherosclerotic plaques and significantly attenuated lesion size for both early and advanced lesions. The accumulation of lipids, macrophages, and T lymphocytes was decreased following treatment with the SOCS1 peptide, whereas collagen and smooth muscle cell content were significantly increased. Thus, the SOCS/JAK/STAT cascade is a key molecular mechanism through which diabetes promotes atherosclerotic plaque formation, and endogenous SOCS1 protein may be a feasible target for modulating inflammation-related complications of diabetes mellitus. Approaches to supplement SOCS1 or mimic native SOCS1 function may have therapeutic effects on accelerated atherosclerosis in diabetes.
3.2.2. The eNOS/NO Pathway. NO produced by eNOS is an important vasodilator that possesses multiple antiatherosclerotic properties. eNOS-derived NO has been shown to inhibit platelet aggregation, block vascular inflammation by inhibiting the activation of NF-κB [65], and suppress VSMC proliferation. As illustrated previously, decreased bioactivity of NO is associated with exposure of endothelial cells to high glucose concentrations [9]. High glucose levels block endothelial injury repair by circulating endothelial progenitor cells (EPCs) through decreasing eNOS/NO bioavailability [9,66,67]. According to a study by Sun et al. [68], the Akt kinase inhibitor GSK690693 inhibited Akt and eNOS phosphorylation, suggesting that Akt may be necessary to activate eNOS. The PI3K inhibitor LY294002 inhibits vaspin-induced eNOS and Akt phosphorylation, suggesting that PI3K acts upstream of Akt activation and eNOS. Additionally, vaspin induces endothelial protective effects via the PI3K/Akt/eNOS pathway. Ouchi et al. [69] showed that adiponectin-induced Akt phosphorylation, eNOS phosphorylation, and cell migration and differentiation in HUVECs were abolished when the cells were transduced with a dominant-negative form of AMP-activated protein kinase (AMPK). However, AMPK phosphorylation was not affected by dominant-negative transduction in HUVECs, suggesting that AMPK acts upstream of Akt in the Akt/eNOS/NO pathway to regulate endothelial function under conditions of hyperglycemia. SIRT1 is a class III histone deacetylase that has been shown to stimulate NO production by deacetylating eNOS at lysine residues [70] or by mediating the activation of AMPK [71]. Yang et al. [70] found that SIRT1 had a positive role in improving the expression of eNOS impaired by high glucose, and the low level of NO in endothelial cells cultured in the presence of high glucose may be partly related to the decreased expression of SIRT1 (Figure 1).
3.2.3. The Mitogen-Activated Protein Kinase (MAPK) Pathway. The MAPK pathway, including the p38 MAPK, ERK, and c-Jun N-terminal kinase (JNK) branches, is involved in vascular inflammation. ERKs are typically initiated by Ras, which can be stimulated by inflammatory cytokines from high glucose-injured endothelial cells, leading to the proliferation of SMCs [72][73][74][75]. p38 MAPK activation is associated with diabetes and its complications, and the detrimental effects of high glucose can be blocked by coincubation with a p38 MAPK inhibitor [67]. Moreover, p38 MAPK has been shown to be involved in diabetic atherosclerosis. Hyperglycemic culture conditions accelerate the onset of EPC senescence, leading to impairment of endothelial repair, potentially through the activity of the p38 MAPK pathway [76]. Microparticles (MPs), which are submicron membrane vesicles (0.1-1 μm) shed from the plasma membrane of activated or apoptotic cells, are significantly increased in the presence of high glucose levels. According to a study by Jansen et al. [77], p38 MAPK is activated to phospho-p38 MAPK within 30 min when human coronary artery endothelial cells (HCAECs) are treated with "injured" EMP (iEMP, MPs derived from glucose-treated HCAECs), whereas there is no change following treatment with normal endothelial cell-derived MPs (EMPs). iEMP-induced expression of ICAM-1 and VCAM-1 and monocyte adhesion to HCAECs were significantly reduced by pretreatment of HCAECs with the p38 MAPK inhibitor SB-203580 (1 μM). Consequently, these results showed that p38 MAPK is involved in iEMP-induced endothelial dysfunction and monocyte adhesion.
Further studies have shown that iEMP increases ROS production through NADPH oxidase (NOX) activation in endothelial cells, thus leading to activation of p38 MAPK. The p38 MAPK signalling pathway in diabetic atherosclerosis is illustrated in Figure 1.
3.2.4. The Protein Kinase C (PKC) Pathway. High concentrations of glucose and nonesterified fatty acids result in activation of PKC. Active PKC is involved in vascular inflammation through the generation of proinflammatory cytokines and chemokines. PKC activates NOX, the major source of ROS production under high-glucose stress, thus leading to the activation of signalling pathways such as ERK, p38 MAPK, and NF-κB and to decreased NO bioavailability (Figure 1) [78,79]. ROS not only activate p38 MAPK but also act as an agonist to activate the nucleotide-binding domain-like receptor 3 (NLRP3) inflammasome, further disrupting endothelial function. These effects could be prevented by AMPK [80]. Durpès et al. [78] showed that PKCβ decreases the expression of IL-18-binding protein (IL-18BP), a molecule involved in a negative feedback mechanism in response to elevated IL-18 production, thus enhancing the production of cytokines and cellular adhesion molecules, which promote atherosclerotic plaque formation and instability in STZ-induced diabetic ApoE −/− mice. Kong et al. [81] found that activated plasma membrane-bound PKCβ is elevated in the aortas of low-dose STZ-induced hyperglycemic ApoE −/− mice and that pharmacological inhibition of PKCβ attenuates atherosclerotic lesions in hyperglycemic ApoE −/− mice. Deficiency of PKCβ blocks the upregulation of Egr-1, ERK1/2, and JNK and results in diminished lesional macrophages and CD11c-expressing cells in diabetic ApoE −/− mice. In vitro, inhibitors of PKCβ and ERK1/2 significantly decrease high glucose-induced expression of CD11c, CCL2, and IL-1β in U937 macrophages. These studies suggest that selective PKCβ inhibitors may have potential therapeutic effects in diabetes-associated atherosclerosis.
3.2.5. The Peroxisome Proliferator-Activated Receptor (PPAR)γ Signalling Pathway. Accumulating evidence has shown that PPARγ has protective effects in both diabetes and atherosclerosis. In a combined diabetes/atherosclerosis mouse model, PPARγ agonists were found to exert antiatherogenic effects independent of a reduction in insulin resistance and plasma glucose [82], indicating that attenuation of insulin resistance is not the only mechanism through which PPARγ functions as an antiatherogenic agent. PPARγ agonists activate AMPK, which in turn increases the bioactivity of eNOS and prevents the PKC-mediated NOX activation caused by high glucose [80,83]. Pioglitazone downregulates RAGE expression and inhibits ROS production and NF-κB activation via PPARγ activation, which may prevent the inflammatory effects of the AGE/RAGE system in diabetes [84]. Recent studies have shown that pioglitazone attenuates platelet-derived growth factor (PDGF)-induced VSMC proliferation through AMPK-dependent and -independent inhibition of mammalian target of rapamycin (mTOR)/p70S6K and ERK signalling [85]. Furthermore, PPARγ agonists have been reported to promote cholesterol efflux from macrophages via upregulation of ABCA1 expression [86,87]. The PPARγ signalling pathway in antiatherosclerosis under hyperglycemic conditions is illustrated in Figure 1.
3.2.6. The Nuclear Factor of Activated T Cells (NFAT) Signalling Pathway. NFAT proteins are a family of Ca2+/calcineurin-dependent transcription factors first characterized in T lymphocytes as inducers of cytokine gene expression. There are four well-characterized members of the NFAT family, which function in VSMC proliferation in the context of atherosclerosis and hypertension and have roles in glucose and insulin homeostasis [88]. According to a study by Nilsson et al., in intact cerebral arteries, raising the extracellular glucose concentration from 11.5 mM (control) to 20 mM (high glucose) for 30 min significantly increases NFAT nuclear accumulation, accompanied by enhanced transcriptional activity. UTP and UDP mediate glucose-induced NFAT activation via P2Y receptors. High glucose concentrations downregulate glycogen synthase kinase 3β (GSK-3β) and JNK activity, leading to decreased export of NFATc3 from the nucleus and enhanced NFATc3 nuclear accumulation, representing another mechanism for glucose-induced NFAT activation [89]. NFATc3 is activated by hyperglycemia, thereby inducing the expression of osteopontin (OPN), a cytokine that promotes diabetic atherosclerosis [90]. Zetterqvist et al. demonstrated a link between NFAT activation and diabetic atherosclerosis using STZ-induced diabetic ApoE −/− mice. In vivo treatment with the NFAT inhibitor A285222 (0.29 mg/kg/day i.p.) for 4 weeks prevented diabetes-associated atherosclerotic lesions in the aortic arch independent of blood glucose lowering, accompanied by decreased expression of IL-6, OPN, MCP-1, and ICAM-1 and the macrophage markers CD68 and tissue factor (TF) in the aortic arch. These findings revealed that the NFAT signalling pathway may be a promising target for the treatment of diabetes-associated atherosclerosis [91].
3.2.7. The Nrf2 Signalling Pathway. Ungvari et al. demonstrated the vasoprotective role of Nrf2 in diabetes using Nrf2 −/− mice. They showed that the expression of Nrf2 downstream genes was significantly upregulated in diabetic Nrf2 +/+ mice, but not in diabetic Nrf2 −/− mice [92]. Under normal conditions, Nrf2 constitutively interacts with Keap1, a negative regulator, for ubiquitination and degradation in the cytosol. Under high-glucose stress, Nrf2 is released from Keap1, translocates to the nucleus, and subsequently binds to antioxidant-responsive elements (ARE); this results in increased transcription of genes such as NAD(P)H:quinone oxidoreductase 1 (NQO1), heme oxygenase-1 (HO-1), superoxide dismutase (SOD), and catalase (CAT). These antioxidant enzymes decrease the levels of ROS, thus attenuating diabetic atherosclerosis (Figure 2) [93]. These results suggest that Nrf2 activators may have efficacy in the management of diabetic atherosclerosis.
4. Herbal Medicines: Promising Therapeutic Agents for the Management of Diabetic Atherosclerosis
4.1. Ginkgo biloba. Ginkgo biloba is a dioecious tree with a history of use in traditional Chinese medicine and has many pharmacologic effects. Ginkgo has vascular protective functions due to its antioxidant effects, free radical scavenging activity, stabilization of membranes, and inhibition of platelet-activating factor. Ginkgo biloba extract (GBE), produced from Ginkgo biloba leaves, is commonly used in dietary supplements for ailments and has shown excellent clinical effects in many cases. GBE contains terpenoids, flavonoids, alkylphenols, polyprenols, and organic acids. Terpenoids (including ginkgolides and bilobalide) and flavonoids are the two major groups of active substances in Ginkgo leaves. The basic structures of ginkgolides, bilobalide, and Ginkgo biloba flavonol aglycones are shown in Figures 3(a)-3(c). There have been several reports showing that EGb761, a standard GBE, improves glucose homeostasis, possibly because of increased plasma insulin levels, via protection of pancreatic β-cells and/or stimulation of insulin secretion. Cheng et al. reported that GBE (100, 200, and 300 mg/kg) administered orally once a day for 30 days caused a significant dose- and time-dependent reduction in blood glucose levels in diabetic rats. In their study, GBE increased the activities of SOD, CAT, and glutathione peroxidase (GSH-Px) in diabetic rats and resulted in protection of pancreatic β-cells [94]. In addition, several reports have shown that GBE lowers blood glucose by improving insulin resistance [95][96][97]. Thus, GBE may attenuate atherosclerosis in the context of diabetes. According to a study by Lim et al. [98], neointimal formation in balloon-injured carotid arteries is significantly reduced when insulin-resistant rats are treated with EGb761 (100 or 200 mg/kg/day) for 6 weeks, resulting in reduced proliferation and migration of VSMCs. EGb761 (50-200 μg/mL) decreases the proliferation of rat aortic SMCs in a concentration-dependent manner in vitro. In addition, EGb761 at both 100 and 200 μg/mL suppresses the expression of ICAM and VCAM in HUVECs. Zhao et al. [99] found that GBE improves SOD activity and reduces the rate of apoptosis of EPCs in the peripheral blood of diabetic patients in a dose-dependent manner. According to Tsai et al. [100], GBE inhibits high glucose-induced ROS generation, adhesion molecule expression, and monocyte adhesiveness in human aortic endothelial cells (HAECs) via the Akt/eNOS and p38 MAPK pathways. Another study showed that ginkgolide A at 10, 15, and 20 μM inhibits high glucose-induced IL-4, IL-6, and IL-3 expression in HUVECs. Ginkgolide A attenuates vascular inflammation by regulating the STAT3-mediated pathway [61]. According to a study by Wang et al. [101], treatment with rutin (30 and 100 μM) significantly restores NO production by decreasing NOX4 mRNA and protein levels and reducing the generation of ROS in HUVECs under high-glucose conditions. Furthermore, rutin at doses of 35 and 70 mg/kg improves endothelial function by restoring impaired NO generation in glucose-triggered endothelial cells and ameliorating the endothelial contraction and relaxation response in the thoracic aortas of rats fed a high-glucose diet. The potential mechanisms of GBE in the treatment of diabetic atherosclerosis are shown in Figure 4.
4.2. Tetramethylpyrazine (TMP). TMP is a biologically active compound isolated from the rhizomes of Ligusticum chuanxiong, a traditional Chinese medicine (Figure 3(d)). Several studies have shown that TMP exerts antiatherosclerotic effects through promotion of endothelial protection, inhibition of VSMC proliferation, reduction of oxidative stress, and suppression of inflammation and apoptosis. The link between TMP and NO generation has been verified by several researchers. For example, Lv et al. [102] demonstrated that TMP pretreatment in vivo enhances Akt and eNOS phosphorylation. Additionally, Xu et al. reported that Qiong Huo Yi Hao (QHYH), which consists of several herbs based on the "clearing heat and detoxifying" principle of traditional Chinese medicine, is a potent antioxidant acting to scavenge superoxide anions in endothelial cells treated with high concentrations of glucose [103]. TMP, an active compound in QHYH, has been shown to be the strongest component of QHYH in the prevention of ROS production, acting to preserve Akt/eNOS phosphorylation and NO generation in endothelial cells treated with high concentrations of glucose [104]. Xu et al. further demonstrated that TMP ameliorates high glucose-induced endothelial dysfunction by increasing mitochondrial biogenesis through reversing high glucose-induced suppression of SIRT1 [105]. These findings provide evidence for the endothelial protective function of TMP in the context of hyperglycemia. Studies have shown that TMP can suppress the proliferation of VSMCs [106], and the ERK and p38 MAPK pathways may be involved in this process [107]. Additionally, TMP can block LPS-induced IL-8 overexpression in HUVECs at both the protein and mRNA levels, which could be attributed to inhibition of the ERK and p38 MAPK pathways and the inactivation of NF-κB [108]. The antiapoptotic function of TMP can be attributed to the inhibition of JAK/STAT signal transduction [109]. Importantly, Lee et al. [110] investigated the effects of TMP on lipid peroxidation in STZ-induced diabetic mice. The results showed that TMP dose-dependently reduced glucose concentrations, blood urea nitrogen elevation, and the degree of lipid peroxidation. Thus, TMP may be an effective agent for the treatment of diabetes and related vascular complications. The mechanisms through which TMP protects against diabetes are shown in Figure 5.
4.3. Danggui. Danggui-Buxue-Tang (DBT) is a well-known traditional formula. Zhang et al. [111] found that oral administration of DBT (3 or 6 g/kg/day for 4 weeks) decreased the concentrations of C-reactive protein and tumour necrosis factor-α and resulted in higher survival rates and lower body weight loss in diabetic GK rats; diabetic atherosclerosis was induced in these rats by NO synthesis inhibition (L-NAME in drinking water, 1 mg/mL) plus a high-fat diet. They also investigated the effects of DBT on blood lipids and the expression of genes related to foam cell formation during the early stage of atherosclerosis in diabetic GK rats. The results demonstrated that DBT could regulate blood lipids and inhibit the expression of the MCP, ICAM-1, and CD36 genes in the aorta [112]. Galgeun-dang-gwi-tang (GGDGT), a Korean herbal medicine, has traditionally been prescribed for the treatment of diabetes. In a study by Lee et al. [113], lipid metabolism and insulin resistance were shown to be improved by GGDGT in ApoE −/− mice fed a Western diet. Immunohistochemical staining showed that GGDGT suppressed ICAM expression, whereas the expression of eNOS and IRS-1 was restored by GGDGT in the thoracic aorta and skeletal muscle. GGDGT attenuates endothelial dysfunction via improvement of the NO-cyclic guanosine monophosphate signalling pathway and promotes insulin sensitivity in diabetic atherosclerosis.
4.4. Salvia miltiorrhiza (Danshen) and Salvianolic Acid. Salvia miltiorrhiza (Danshen), a traditional Chinese herbal medicine, is commonly used for the prevention and treatment of cardiovascular disease. Salvianolic acid B is the most abundant water-soluble compound extracted from Danshen (Figure 3(e)). Inhibition of inflammation, improvement of antioxidative effects, regulation of leukocyte-endothelial adhesion, and modulation of NO production in endothelial cells are involved in the cardiovascular protection mechanisms of Danshen and its bioactive compounds [114,115]. Danshen extract and purified salvianolic acid B exert anti-inflammatory effects by inhibiting iNOS expression and NO production induced by LPS in RAW264.7 macrophages by inducing Nrf2-mediated HO-1 expression [114,116]. Lee et al. [117] also demonstrated that salvianolic acid B inhibits platelet-derived growth factor-induced neointimal hyperplasia in arteries through induction of Nrf2-dependent HO-1. In addition, salvianolic acid B increases NO production in the endothelium of isolated mouse aortas via inhibition of arginase activity [114]. According to Raoufi et al., administration of salvianolic acid B at doses of 20 or 40 mg/kg/day (i.p.) for 3 weeks significantly decreases serum glucose and improves oral glucose tolerance test (OGTT) results in STZ-induced diabetic rats via attenuation of oxidative stress and apoptosis and augmentation of the antioxidant system [118]. The vascular endothelial protective function of Salvia miltiorrhiza and salvianolic acid B under high-glucose conditions has been verified both in vitro and in vivo. According to Qian et al. [119], Salvia miltiorrhiza (10 μg/mL) significantly decreases vascular endothelial ROS formation in human microvascular endothelial cells exposed to 30 mM glucose. Ren et al. [120] demonstrated that salvianolic acid B significantly restores eNOS in STZ-induced diabetic rats and decreases the levels of NOX and endothelial cell apoptosis. The mechanism through which salvianolic acid B protects against diabetic atherosclerosis is shown in Figure 2.
4.5. Catalpol. Catalpol is the most abundant bioactive component in the roots of Rehmannia glutinosa (Figure 3(f)). Catalpol ameliorates plasma glucose in STZ-induced diabetic rats [121], and total cholesterol, triglycerides, and LDL cholesterol are reduced, whereas HDL cholesterol is elevated, when rats fed high-cholesterol chow are treated with catalpol [122]. Additionally, atherosclerotic lesions and inflammatory markers are markedly reduced in the catalpol group, and catalpol attenuates atherosclerotic lesions and delays the progression of atherosclerosis in alloxan-induced diabetic rabbits. These protective effects are associated with regulation of glucose-insulin homeostasis and inhibition of oxidative stress and inflammation [123].
4.6. Resveratrol. Resveratrol (trans-3,5,4′-trihydroxystilbene) is a natural polyphenol phytoalexin (Figure 3(g)) with various biological effects. Its beneficial cardiovascular effects are attributable to its anti-inflammatory, antioxidative, endothelial-protective, antiplatelet, and insulin-sensitizing actions [124]. Resveratrol increases NO bioavailability by regulating SIRT1, AMPK, and ROS. According to a study by Yang et al. [125], resveratrol restores the NO bioavailability impaired by high glucose in human endothelial cells in a SIRT1-dependent manner. Other studies have shown that resveratrol downregulates NF-κB induced by high glucose in smooth muscle cells and decreases the proliferation and migration of smooth muscle cells, a function similar to miR-138 inhibitors, which result in upregulation of SIRT1 [41]. Zhang et al. [126] demonstrated that resveratrol prevents AGE-induced impairment of macrophage lipid homeostasis partially by suppressing RAGE via PPARγ activation, thus providing new insights into the protective roles of resveratrol against diabetic atherosclerosis. Furthermore, resveratrol lowers lipid levels and decreases hepatic lipid accumulation by stimulating AMPK in a SIRT1-dependent manner. These findings suggest that resveratrol may have therapeutic potential for dyslipidaemia-associated atherosclerosis in diabetes by targeting SIRT1/AMPK signalling [127,128]. The underlying antiatherosclerotic mechanisms of resveratrol in the context of diabetes are illustrated in Figure 6.
4.7. Curcumin. curcumin twice daily for 28 days [129]. Usharani et al. [130] showed that administration of a standardized preparation of curcuminoids (NCB-02, two capsules containing 150 mg curcumin, twice daily) for 8 weeks significantly improved the endothelial function of patients with type 2 diabetes mellitus. Curcumin also blocks oxidative stress and inflammation by modulating PPARγ and Nrf2 activity [131]. Zheng et al. showed that the curcumin analogue L3 alleviates dyslipidaemia and hyperglycemia and reduces oxidative stress in diabetic mice induced by STZ and a high-fat diet. Additionally, L3 effectively decreases lectin-like oxidized low-density lipoprotein receptor-1 expression in the aortic arch. These results suggested that curcumin ameliorates diabetic atherosclerosis through multiple mechanisms [132].
Future Perspectives
Insulin resistance and hyperglycemia are associated with diabetic atherosclerosis, and endothelial dysfunction, vascular inflammation, myeloid cell recruitment, oxidative stress, VSMC phenotype changes, and platelet hyperreactivity all contribute to diabetic atherosclerosis. As reported recently, crosstalk between macrophage polarization and autophagy may be involved in diabetes and related atherosclerosis complications [133,134]. Extensive preclinical studies have identified molecular targets, and herbs that act on these targets, as potential therapeutic agents for the management of diabetic atherosclerosis (see Table 2). However, most clinical studies to date have small sample sizes and are not performed using a randomised design. The lack of high-quality clinical trials hampers the application of herbal medicines in patients with diabetic atherosclerosis. Therefore, more rigorous clinical trials of herbs for diabetic atherosclerosis, with large sample sizes and a randomised, controlled design, are needed. Furthermore, identification of new molecules and signalling cascades that regulate diabetes and atherosclerosis will help to improve treatment approaches, owing to the multifaceted characteristics of diabetic atherosclerosis. Investigation of the mechanisms underlying the multitargeted effects of herbs will also help to establish novel drugs for the treatment of diabetes and diabetic atherosclerosis. In the future, the combination of herbal and Western medicine may also facilitate the treatment of diabetic atherosclerosis. Thus, further studies on drug interactions and safety are needed.
Conflicts of Interest
The authors have no conflict of interest to declare.
Authors' Contributions
Yue Liu conceived the topic and helped in drafting the paper. Shuzheng Lyu helped in revising the manuscript. Jinfan Tian searched the literature and wrote the manuscript together with Yanfei Liu, and they are co-first authors. Keji Chen helped in drafting the manuscript. All authors read and approved the final manuscript.
|
v3-fos-license
|
2022-07-07T13:30:34.186Z
|
2022-07-07T00:00:00.000
|
256554070
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41377-022-00882-w.pdf",
"pdf_hash": "6ce9a7134b721d99d15e793f715d92736aef0bc0",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46139",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "ed867fafdccc6c679b50944f91f519acd3724115",
"year": 2022
}
|
pes2o/s2orc
|
Liquid crystal between two distributed Bragg reflectors enables multispectral small-pitch spatial light modulator
The ability to control the phase of light at the subwavelength scale can be a game-changer owing to the extraordinarily wide angular range it offers in wavefront shaping. By combining two conventional ingredients, liquid crystals and distributed Bragg reflectors, we are getting close to this ultimate goal.
In 1801, Thomas Young demonstrated the wave behavior of light using the double-slit experiment, in which waves passing through each slit interfered constructively or destructively, forming a fringe pattern of bright and dark stripes.
The key principle underlying this phenomenon can be elucidated by the concept of the phase of light. The wave function at a certain point at a specific time is given by the superposition of the waves propagating from the sources, and bright spots are formed where the phase difference between the interfering waves is an integer multiple of 2π.
The ability to manipulate the phase of light at each source with external control signals allows us to generate on-demand wavefronts. The spatial light modulator (SLM) is a device that enables such phase control at will; it is composed of a one- or two-dimensional array of individual pixels that can change the amplitude or phase of reflected or transmitted light. Most conventional approaches to SLMs rely on liquid crystals (LCs) or micro-electromechanical systems. Numerous applications are based on SLMs, including digital holographic systems, optical communication, and biomedical imaging, to name a few [1].
Let us return to Young's double-slit experiment. One can imagine that, as we decrease the gap between the two slits, the distance between the bright and dark stripes increases. The angular range between two consecutive bright or dark fringes is called the field of view, and it is a crucial metric because a wider field of view enables enhanced performance in most applications, for example, a larger eyebox in holography and an increased sensing area in light detection and ranging (LiDAR) [2]. Consequently, there has been considerable research on expanding the field of view by reducing the pixel size. Conventional LC-based SLMs, however, face a limitation in reducing the pixel size. This is because they require sufficient vertical thickness, called the cell gap, to accumulate a full 2π propagation phase. Reducing the pixel size in the horizontal dimension below a certain value therefore gives rise to fringing electric fields, which in turn cause deficient phase expression.
As an alternative approach, there has been considerable research on reducing pixel sizes by invoking reconfigurable metasurfaces [2][3][4][5]. Metasurfaces are arrays of optical scatterers with strong light-matter interaction that allow extremely localized optical response with substantially suppressed crosstalk. By adding time-dependent variation of these responses through active materials with tunable refractive indices, one could implement novel SLMs with pixel sizes in the subwavelength regime. However, most metasurface-based SLMs to date still suffer from limitations such as a phase-change range far below 2π and a narrow operating bandwidth.
A recent research paper in Light: Science & Applications, entitled "High Resolution Multispectral Spatial Light Modulators based on Tunable Fabry-Perot Nanocavities", by Kuznetsov's group, introduces remarkable progress in the search for an SLM solution with a small pixel, a wide phase-change range, and a multispectral response [6]. The configuration comprises an LC layer encapsulated between upper and lower distributed Bragg reflectors, forming a Fabry-Pérot cavity (Fig. 1). The incident beam from the upper side couples into the resonator and runs back and forth, and an over-coupled resonance occurs when the round-trip phase becomes an integer multiple of 2π. The over-coupling dynamics allows a 2π spectral phase change as one sweeps the wavelength. The large birefringence Δn = 0.29 of the nematic LC, QYPDLC-001C, enables the experimental demonstration of a nearly 2π phase range under an applied bias V_rms of 8 V.
The proposed Fabry-Pérot SLM features two distinguishing points: three simultaneous operating wavelengths in the visible regime (multispectral response) and a small pixel size of 1.14 µm. Indeed, these two factors are in a trade-off relationship. The secret ingredient is a judiciously designed thickness of the LC-filled cavity. If the authors had used the fundamental mode of the Fabry-Pérot cavity, the required thickness would be given by the half wavelength in the medium, t_c = λ_0/(2n), around 150 nm, where t_c is the cavity thickness, λ_0 is the operating wavelength, and n is the refractive index of the LC. Such a thin cavity would be advantageous for a small pixel size and a wide field of view, because it suppresses fringing-field effects between neighboring pixels. Kuznetsov and his colleagues intentionally increased the cavity thickness to 530 nm (the original design being 750 nm), which allowed them to employ higher-order Fabry-Pérot resonances at three wavelength regimes in the visible: red (λ_0 of 640 nm) for the 4th order, orange (λ_0 of 596 nm) for the 5th order, and blue (λ_0 of 503 nm) for the 6th order.
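As a rough back-of-envelope check of the half-wave thickness quoted above, the short Python sketch below evaluates t_c = λ_0/(2n) at the three operating wavelengths; the average LC refractive index of 1.6 is an assumed value, and the wavelength-dependent penetration phase of the DBR mirrors is deliberately ignored, which is precisely why the real device relies on higher-order modes.

def half_wave_thickness(wavelength_nm: float, n_lc: float = 1.6) -> float:
    """Fundamental (m = 1) Fabry-Perot cavity thickness, t_c = lambda0 / (2 n)."""
    return wavelength_nm / (2.0 * n_lc)

for wl in (503.0, 596.0, 640.0):
    print(f"lambda0 = {wl:5.0f} nm -> t_c ~ {half_wave_thickness(wl):5.0f} nm")
# Roughly 150-200 nm, i.e., far thinner than the 530 nm cavity chosen
# to host several higher-order resonances simultaneously.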
Despite the increased cavity thickness of 530 nm for the higher-order modes, the total thickness of each pixel between the upper and lower electrodes is around 2 µm, which is far smaller than that of conventional LC-based SLMs (~5 µm). This small thickness allowed the authors to reduce the pixel size down to 1.14 µm. They demonstrate multispectral programmable beam steering with a field of view of ~18° as well as multispectral varifocal lensing.
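For intuition about the reported ~18° field of view, the sketch below applies the first-order grating relation sin θ = λ/Λ to a two-pixel supercell; treating two pixels as the smallest blazed period is an illustrative assumption rather than a detail taken from the paper.

import math

pixel_pitch_um = 1.14                  # reported electrode pitch
supercell_um = 2 * pixel_pitch_um      # assumed smallest blazed period (two pixels)
for wl_um in (0.503, 0.596, 0.640):
    theta = math.degrees(math.asin(wl_um / supercell_um))
    print(f"lambda = {wl_um * 1e3:3.0f} nm -> first-order angle ~ {theta:4.1f} deg")
# Mid-teens of degrees, consistent in magnitude with the demonstrated ~18 deg field of view.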
Despite the pioneering achievements in this work, there are still remaining tasks to be solved. Although the phase response versus applied bias in the non-pixelated structures shows a successful near-2π phase sweep, the beam-steering results in the real pixel arrays still exhibit non-vanishing side lobes. This kind of performance degradation in the migration from non-pixelated unit-cell characterization to pixelated array operation is observed quite often even in state-of-the-art studies, but it should be resolved for real applications. It may be ascribed to potential crosstalk between the 1.14-µm-pitch pixels. If we define the pixel size not by its physical appearance (the pitch of the electrodes) but by its functioning unit, i.e., the pitch of pixels for which a 2-pixel supercell shows a side-mode suppression ratio of more than 10 dB, for example, the claimed smallest pixel of 1.14 µm could be slightly increased. To be used in real-life applications, further efforts are needed to suppress the undesirable side lobes. In addition, the response time or switching speed, which was not comprehensively studied in this work, also remains to be addressed. Nevertheless, the proposed platform of LCs between two distributed Bragg reflectors is a significant contribution to small-pixel SLMs with multispectral response and could be extended to two-dimensional arrays or a transmissive type in the future (Table 1).
Table 1. Reported SLM pixel pitches. One-dimensional: 1.14 µm (this work), 1.14 µm [3], 1.60 µm [7], 2 µm [8]. Two-dimensional: 3.74 × 3.74 µm² [9], 36 × 36 µm² [11], 1 × 9 µm² [10].
|
v3-fos-license
|
2024-04-27T06:18:04.368Z
|
2024-04-25T00:00:00.000
|
269385331
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "73a6d91e88857d462466c694145a5a4ecb16a51e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46144",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "bdc8a139450d450b1387a9e2a88bf4cb4208afb5",
"year": 2024
}
|
pes2o/s2orc
|
Predictive modeling of initiation and delayed mental health contact for depression
Background Depression is prevalent among Operation Enduring Freedom and Operation Iraqi Freedom (OEF/OIF) Veterans, yet rates of Veteran mental health care utilization remain modest. The current study examined (1) factors in electronic health records (EHR) associated with lack of treatment initiation and treatment delay and (2) the accuracy of regression and machine learning models in predicting initiation of treatment. Methods We obtained data from the VA Corporate Data Warehouse (CDW). EHR data were extracted for 127,423 Veterans who deployed to Iraq/Afghanistan after 9/11 with a positive depression screen and a first depression diagnosis between 2001 and 2021. We also obtained 12-month pre-diagnosis and post-diagnosis patient data. Retrospective cohort analysis was employed to test whether predictors could reliably differentiate patients who initiated, delayed, or received no mental health treatment associated with their depression diagnosis. Results Of these Veterans with depression, 108,457 initiated depression-related care (55,492 of whom delayed treatment beyond one month). Veterans who were male, without VA disability benefits, with a mild depression diagnosis, or with a history of psychotherapy were less likely to initiate treatment. Among those who initiated care, Veterans with single, mild depressive episodes at baseline and those with either PTSD or no comorbidities were more likely to delay treatment for depression. A history of mental health treatment, a history of an anxiety disorder, and a positive depression screen were each related to faster treatment initiation. Classification of patients was modest (ROC AUC = 0.59, 95% CI = 0.586–0.602; machine learning F-measure = 0.46). Conclusions Having VA disability benefits was the strongest predictor of treatment initiation after a depression diagnosis, and a history of mental health treatment was the strongest predictor of delayed initiation of treatment. The complex relationships of VA benefits and mental health treatment history with treatment initiation after a depression diagnosis are discussed further. Modest classification accuracy with currently known predictors suggests the need to identify additional predictors of successful depression management. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-024-10870-y.
Background
Depression is highly prevalent worldwide and rates are especially high among Veterans who returned from Operation Enduring Freedom and Operation Iraqi Freedom (OEF/OIF), with some reports showing rates as high as nearly 60% [1]. Relative to past cohorts of Veterans, rates of depression in OEF/OIF have also increased over the past decade [2]. To reduce depression-related morbidity and mortality and barriers to care [3], the Veterans Health Administration (VHA) responded to Veterans' increasing mental health needs by implementing annual depression screens, embedding psychologists and psychiatrists in primary care to decrease response time, and encouraging same-day referrals for depressed Veterans. Although these changes have increased mental health treatment utilization over time (e.g., psychotherapy utilization rose from 20% in 2004 to 26% in 2010) [4], mental health care utilization for depression among Veterans has remained modest (e.g., 10-26%) [4,5].
The VA's substantial efforts to increase access to care underscore a continuing need to address patients' personal and attitudinal barriers. Specifically, Mojtabai and colleagues [6] highlight the need for a re-evaluation of how we identify patients who do not seek treatment. Negative correlates of receiving mental health treatment over a 12-month period include male gender or being single [7]. Clinical factors have also been implicated in treatment behaviors. For example, Veterans with a history of prior treatment or who have experienced longer, more intense depressive episodes are more likely to seek treatment [8].
In this study, we sought to further understand treatment contact correlates in a cohort of OEF/OIF patients with a positive depression screen and a depression diagnosis who received VA services at any time between 2001 and 2021. Our objectives were to: (1) examine factors reliably present in EHR that are associated with lack of treatment initiation and delay in treatment; and (2) evaluate the accuracy of proposed models to predict lack of treatment in the future using both regression and machine learning strategies.
Data source and cohort selection
We obtained data from the VA Corporate Data Warehouse (CDW) through the Veterans Affairs Informatics and Computing Infrastructure (VINCI), which contains EHRs of all VHA patients in the United States of America. Inclusion criteria were as follows: Veterans from the OEF/OIF cohort between January 2001 and January 2021 (N = 1,419,000), with at least one depression diagnosis and one positive depression screen at any time in the study period (N = 293,265). Once the primary inclusion criteria were applied, patients were excluded for the following reasons: fewer than 365 days in the VA system before their first depression diagnosis (n = 98,984); receipt of psychotherapy (based on Current Procedural Terminology (CPT) codes) or a dispensed antidepressant prescription in the year before their first depression diagnosis (n = 51,195), to ascertain that patients were depression-treatment free for a full year [9]; fewer than 365 days in the VA system after their first depression treatment, or after their last depression diagnosis if no treatment occurred (n = 9,561), to allow at least one full year of follow-up for all patients; and a bipolar, schizophrenia/schizoaffective, or personality disorder diagnosis (n = 4,586) or an intellectual disability or dementia diagnosis (n = 1,516), given that these patients likely went through a different treatment receipt process than the average patient with depression.
Measures
Patient demographics
Extracted CDW data included age, gender, race, Hispanic ethnicity, distance from the patient's residence to the nearest primary, secondary, and tertiary care VA facilities, and rurality or urbanity of the patient's residence. Patient VA benefit status was recorded as either service connected (> 0-100%) or not (0%). Finally, patients' 12-month healthcare cost incurred by the VA was extracted. All baseline characteristics were anchored on the date of the first depression diagnosis.
Patient clinical characteristics
International Classification of Diseases, Ninth and Tenth Revision (ICD-9/10) codes were extracted to ascertain the presence of at least one depression diagnosis. The first depression diagnosis on file was coded as the index episode marking the baseline of each patient. Episode qualifiers available in the ICD codes were coded into separate variables identifying first (versus recurrent) and mild (versus moderate and severe) episodes. ICD codes were also used to extract baseline 12-month comorbidities dating to the year prior to the first depression diagnosis: anxiety and adjustment disorders, alcohol and substance use disorders, and posttraumatic stress disorder diagnoses.
We employed Nosos scores to risk-adjust for clinical comorbidities [10], computed based on information from the Centers for Medicare and Medicaid Services Hierarchical Condition Categories (HCC) version 21 using ICD-9 codes, age, and gender. The risk scores are then adjusted by incorporating patient pharmacy records and VA-specific factors (e.g., VA priority and costs). The adjusted Nosos score estimates are rescaled to a population mean of one [10].
PHQ-9 and PHQ-2 scores obtained during any health care visit at the VA were extracted for the month preceding the first depression diagnosis. Scores were qualified as positive if the PHQ-2 score was above 2 and the PHQ-9 score was above 9, respectively [11]. The PHQ-9 is a nine-item instrument that assesses the symptoms of depression corresponding to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnostic criteria for a major depressive episode, and the PHQ-2 is the short version capturing only the anhedonia and mood items. Item responses are on a four-point scale (from occurring "not at all" to "nearly every day" over the past two weeks), resulting in a score range of 0 to 27 for the PHQ-9 and 0 to 6 for the PHQ-2.
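For clarity, the minimal Python sketch below encodes the screening rule described here, reading the sentence as each instrument having its own cut-off (PHQ-2 > 2, PHQ-9 > 9); the function and argument names are illustrative rather than taken from the study code.

from typing import Optional

def positive_depression_screen(phq2: Optional[int] = None, phq9: Optional[int] = None) -> bool:
    """Return True if any available screen exceeds its cut-off (PHQ-2 > 2, PHQ-9 > 9)."""
    if phq9 is not None and phq9 > 9:
        return True
    if phq2 is not None and phq2 > 2:
        return True
    return False

assert positive_depression_screen(phq9=12) is True
assert positive_depression_screen(phq2=2) is False      # the score must exceed 2
assert positive_depression_screen(phq2=1, phq9=9) is False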
Mental health treatment for depression
Treatment initiation was measured as at least one completed psychotherapy visit (extracted via CPT codes) associated with a depression diagnosis or a dispensed prescription for any antidepressant within 180 days of a depression diagnosis.This variable was coded as Treatment not initiated = 1 and Treatment initiated = 0.
Treatment delay was defined as the number of days from the date of the first depression diagnosis to the date of the first treatment for depression. Based on VA goals for initiating contact with patients in need of mental healthcare, the continuous variable was transformed into a categorical variable based on the following preset groups, coded as: up to 1 week = 6, 1 week − 1 month = 5, 1 month − 3 months = 4, 3 months − 6 months = 3, 6 months − 12 months = 2, and over 1 year = 1.
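A minimal Python sketch of this recoding is shown below; the bin edges follow the text, while the handling of exact boundary days is an assumption.

def delay_category(days: int) -> int:
    """Map days from first depression diagnosis to first treatment onto the preset codes."""
    if days <= 7:
        return 6   # up to 1 week
    if days <= 30:
        return 5   # 1 week - 1 month
    if days <= 90:
        return 4   # 1 month - 3 months
    if days <= 180:
        return 3   # 3 months - 6 months
    if days <= 365:
        return 2   # 6 months - 12 months
    return 1       # over 1 year

print([delay_category(d) for d in (3, 20, 60, 400)])  # -> [6, 5, 4, 1]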
Statistical analysis
Descriptive statistics for sociodemographic and clinical characteristics were tabulated for the full cohort of included Veterans. Next, the cohort was randomly split into 70% (n = 89,142) testing and 30% (n = 38,281) validation participant subsamples to control for spurious findings; effect sizes across variables were less than 0.05, confirming comparability of the randomly created subsamples [12].
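A minimal sketch of the split and the comparability check is given below, using a standardized mean difference as the effect size; the DataFrame and column names are hypothetical stand-ins for the analytic file, not the study's actual variables.

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
cohort = pd.DataFrame({"age": rng.normal(35, 9, 1000),                     # synthetic stand-in
                       "male": rng.integers(0, 2, 1000).astype(float)})

mask = rng.random(len(cohort)) < 0.70                                      # ~70% testing, ~30% validation
testing, validation = cohort[mask], cohort[~mask]

def smd(a: pd.Series, b: pd.Series) -> float:
    """Absolute standardized mean difference between two samples of one covariate."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd

for col in cohort.columns:
    print(col, round(smd(testing[col], validation[col]), 3))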
Univariate and multivariate analyses identified predictors of two outcomes: lack of treatment initiation and treatment initiation delay. Final models were tested using regression adjusting for nested data with Generalized Estimating Equations (GEE) for count, binomial, and multinomial distributions. The best-fitting model based on QIC indices was applied to the validation subsample, and those results are presented in this manuscript. Next, we conducted a series of sensitivity analyses. Specifically, we tested the predictive value of the PHQ-9 score and of dummy variables identifying those with a positive PHQ-9 score (i.e., PHQ-9 score > 9), those reporting any anhedonia, and those reporting any suicidal ideation. Sensitivity analyses were performed among patients with a PHQ-9 assessment up to a month prior to their depression diagnosis (n = 13,610), a timeframe used in other studies to provide a reliable evaluation of baseline depression severity. All predictors were evaluated based on ORs < 0.90 or > 1.1 to avoid multiplicity and on 95% CIs excluding OR = 1. Given the coding, for treatment initiation models, ORs < 0.90 reflect odds of initiating treatment and ORs > 1.1 reflect odds of not initiating treatment. Conversely, for treatment delay, ORs < 0.90 reflect odds of longer delay and ORs > 1.1 reflect odds of more timely treatment.
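As a concrete illustration of this modelling step, the sketch below fits a GEE logistic model in Python with statsmodels on synthetic data. The variable names, the choice of facility as the clustering unit, and the exchangeable working correlation are assumptions made for illustration only; the authors' actual models also covered count and multinomial outcomes.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the analytic file: one row per Veteran (hypothetical columns).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "no_treatment": rng.integers(0, 2, n),        # 1 = never initiated treatment
    "male": rng.integers(0, 2, n),
    "service_connected": rng.integers(0, 2, n),
    "prior_psychotherapy": rng.integers(0, 2, n),
    "facility_id": rng.integers(1, 40, n),        # assumed clustering unit
})

model = smf.gee(
    "no_treatment ~ male + service_connected + prior_psychotherapy",
    groups="facility_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print("QIC, QICu:", result.qic())             # criterion used to compare candidate models (recent statsmodels versions)
print("Odds ratios:\n", np.exp(result.params))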
Using the best-fitting model, individual Veterans' predicted values were saved as the probability of treatment initiation. Based on these probabilities, the area under the receiver operating characteristic curve (AUC ROC) evaluated the predictive accuracy of the model and determined an optimal cut point to accurately identify patients with depression who would not initiate mental health treatment.
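A minimal sketch of this ROC step is given below, using scikit-learn and synthetic scores; the use of Youden's J statistic to pick the cut point is an assumption, since the text does not name the cut-point criterion.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)                         # 1 = never initiated treatment
y_score = np.clip(0.2 * y_true + rng.random(500), 0, 1)  # saved predicted probabilities

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)                              # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, candidate cut point = {thresholds[best]:.3f}")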
In addition to the GEE regression models, we constructed a machine learning model to perform a multi-method prediction precision evaluation. This study used an updated version of the C4.5 algorithm developed by Quinlan [13], C5.0. The C5.0 application was compiled from the General Public License (GPL) C code distributed freely by Quinlan [13]. Specifically, we used decision trees with 10 × 10 cross-validation on the full cohort; we constructed several models with varying values for parameters such as pruning algorithms, minimum number of cases per branch, boosting trials, and probabilistic branching. The best model employed pruning based on error rates for each branch, with a minimum of 4 cases per branch. Boosting and probabilistic branching did not improve the models. We evaluated our model using traditional precision, recall, and F-measure (see Supplement for additional details).
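Because C5.0 itself is not distributed with common Python libraries, the sketch below uses scikit-learn's CART implementation as a stand-in to illustrate the 10 × 10 cross-validation and the precision/recall/F-measure evaluation; mapping the "minimum of 4 cases per branch" setting to min_samples_leaf is a loose analogy rather than the authors' exact configuration, and the data are synthetic.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

rng = np.random.default_rng(2)
X = rng.random((1000, 6))                                            # baseline EHR features (synthetic)
y = (X[:, 0] + 0.3 * rng.standard_normal(1000) > 0.9).astype(int)    # 1 = never initiated treatment

clf = DecisionTreeClassifier(min_samples_leaf=4, ccp_alpha=0.001, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_validate(clf, X, y, cv=cv, scoring=("precision", "recall", "f1"))
for metric in ("precision", "recall", "f1"):
    print(metric, round(scores[f"test_{metric}"].mean(), 3))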
Patient characteristics
See Tables 1 and 2 for the demographic and clinical characteristics of the sample who met inclusion criteria (N = 127,423).
Predictors of lack of treatment initiation
Among the patients who met study criteria, 15% (n = 18,966) never initiated treatment over the 20-year study period. The odds of not starting treatment increased with lack of service connection, having received psychotherapy in the past, male gender, never having been married, and receiving a mild depression diagnosis or other depressive disorder diagnosis (adjusted ORs > 1.1) (Table 3). A predictor that decreased the odds of never starting treatment was having had a past-year substance use disorder comorbidity (adjusted OR < 0.9) (Table 3).
Predictors of delayed treatment
Among those who eventually initiated treatment, 6% (n = 6,752) delayed beyond six months and an additional 19% (n = 20,615) delayed beyond one year after the initial diagnosis. Clinical predictors such as having a trauma-related diagnosis (PTSD, adjustment disorder) or no comorbidities and having a mild or single first depressive episode were all significant contributors to delayed treatment initiation. Factors that contributed to starting treatment earlier rather than later (ORs > 1.1) were, in order of importance: having received treatment for depression in the past (psychotherapy and/or an antidepressant), not being service connected, and having an anxiety disorder (Table 4). Sensitivity analyses evaluating PHQ-9 features as potential predictors indicated that having a positive depression screen accelerated treatment initiation for depression (OR = 1.294, 95% CI = 1.047-1.601).
Identification of patients who never initiated treatment
We tested the accuracy with which we could predict, at first diagnosis, whether a patient would not start treatment. The set of demographic and clinical characteristics present at a patient's first depression diagnosis led to modest identification of individuals who never initiated depression treatment (AUC = 0.594, 95% CI = 0.586-0.602), given that an AUC of 0.50 is no better than chance.
The resulting machine learning tree was small, with former psychotherapy as the sole predictor. Specifically, patients who had received psychotherapy in the past were more likely to never seek treatment again. The overall prediction accuracy of this model was 88.2% on the ten sets of holdout data (Table 5). Despite the high predictive accuracy of this model, its prediction of when patients would seek treatment (F-measure = 0.93) was much better than its prediction of when they would not (F-measure = 0.46) (Table 5).
The precision scores were roughly comparable for patients who initiated and did not initiate treatment for depression (0.87 vs. 0.95), indicating that the model was effective at distinguishing true positives from false positives. With a recall of 0.99, the model was especially effective at identifying patients who eventually sought treatment but very poor at identifying patients who did not (recall = 0.30) (Table 5).
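As a quick arithmetic check, the reported F-measures follow from the reported precision and recall via F = 2PR/(P + R); the pairing of 0.87/0.99 with treatment initiators and 0.95/0.30 with non-initiators is inferred from the text.

def f_measure(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.87, 0.99), 2))  # ~0.93 for patients who initiated treatment
print(round(f_measure(0.95, 0.30), 2))  # ~0.46 for patients who never initiated treatment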
Discussion
Treatment delays and treatment underutilization for depression remain an all-too-common problem that is associated with increased morbidity and mortality [3]. Our study sought to understand correlates of treatment initiation in a cohort of OEF/OIF Veterans with a depression diagnosis during nearly two decades of VHA services. Our findings provide an initial evaluation of (1) factors associated with absence of treatment for depression or delay in treatment initiation and (2) the accuracy of identifying those patients who do not initiate treatment. Our analyses were guided by the larger goal of learning whether routine EHR data can be leveraged to develop predictive tools that identify Veteran treatment choices. One encouraging finding is that OEF/OIF Veterans with a depression diagnosis initiate mental health care for depression at higher rates than patients in other large health care systems [9]. For example, at three months post initial depression diagnosis, our current cohort showed a nearly 50% higher initiation rate relative to an equally large primary care community cohort (e.g., 35.7%) [9]. This is likely due to the VHA's efforts to increase same-day access for patients with mental health concerns. By a year post initial diagnosis, the treatment initiation rate had increased by another 50%, so that nearly 70% of our cohort had connected with mental health care to initiate psychotherapy or antidepressant medication. Our study also found larger initiation rates than other Veteran studies [4,5], likely in part due to our evaluation of both antidepressant and psychotherapy initiation, whereas former studies focused on one treatment type. Still, we found that about one third of patients either took more than a year or never initiated treatment despite receiving a depression diagnosis. This reinforced our effort to understand factors that led to treatment delay, given the deleterious effects of deferring treatment for depression [14].
Demographic factors that interfered with receiving treatment or delayed treatment
Our findings aligned with previous work showing that demographic factors play a role in limiting treatment initiation. First, the current study substantiates earlier indications that male gender is often associated with lower odds of receiving mental health treatment [7]. It is possible that acculturation to the military may heighten this possibility. Military training may impact treatment seeking by instilling the value of emotional control while under stress to promote survival and mission completion [15]. These beliefs, taken to the extreme, can promote emotional avoidance and may inadvertently delay treatment seeking [16,17].
Counter to prior findings [9,18], our rates of treatment initiation and even delay among Veterans were not related to minority status. This discrepancy may be due to increased access to mental health care among Veterans relative to adults in the general population, where access to mental health care is reduced for most minority populations [18]. For example, having insurance or expanded insurance coverage diminished the difference between Latino and non-Latino white populations when evaluating differences in utilization of mental health services [18]. In fact, in our study, over 90% of the cohort was service connected for a disability and therefore receiving VA benefits. This factor was the strongest predictor of whether a patient initiated any treatment for depression. While being service connected for a disability was associated with higher rates of treatment initiation, lack of service connection was associated with faster treatment initiation. This may be because all Veterans, regardless of discharge status (which may impact access to VA healthcare), are eligible for one year of mental health services. Therefore, it is possible that Veterans who have more precarious or uncertain long-term access to VA benefits still take advantage of the mental health services that are available and therefore initiate care faster.
Clinical factors that predict not initiating treatment or delaying treatment
In addition to demographic factors, we found that clinical characteristics contributed to lack of treatment or delayed care in three ways. Firstly, patients were more likely to not receive mental health treatment for depression if their first depressive episode was qualified as mild, unspecified, other, or a single episode, or the patient had a past-month negative screen, all possibly indicating a recent depression onset. This set of clinical characteristics points to a profile of a less severe, recent diagnosis that may be clinically appropriate for watchful waiting and psychoeducation. Level of illness severity may impact perceived need and thus lead to possible refusal of care; for example, in one study, low perceived need was more often a reason for not seeking treatment among individuals with mild (57.0%) or moderate (39.3%) than severe (25.9%) disorders [6]. Secondly, mental health treatment history appeared to differentially impact treatment initiation and treatment delay among those patients who initiated mental health care after a depression diagnosis. Specifically, a history of psychotherapy was associated both with higher odds of never initiating treatment after a depression diagnosis and with higher odds of initiating treatment faster if treatment was initiated at all. In the absence of further information about types of treatment and their effectiveness, receipt of preferred treatment [19], and therapeutic alliance between provider and patient during treatment [20] in these different groups of patients, it is rather difficult to discern what may have led to this differential impact. Based on prior work, there are at least two scenarios for each group. Those patients who never initiated treatment for depression despite a history of psychotherapy either had a poor experience, which led to avoiding psychotherapy despite a new diagnosis, or felt confident enough to reapply strategies learned in past therapy sessions without initiating a new round of treatment. Patients who reinitiated treatment after a depression diagnosis may have done so quickly because their past experience was successful, or perhaps because they were already connected to a provider [21], or possibly because they had same-day access to services. Further work is needed in this area to more fully understand treatment dynamics over time.
Third, comorbid substance use disorders were often associated with lack of treatment for depression, and PTSD with treatment delay. This may be either because substance use disorders are notoriously associated with low or no treatment initiation [22], or because the VHA has specialty care for both PTSD and substance use disorders readily available, which likely houses most patients with these diagnoses. Although this was not the focus of the current set of analyses, understanding what other mental health services are accessed by patients with depression is an important focus for future work, as it has implications for understanding where and whether patients ultimately receive services and support.
Identification of patients who never started treatment
Finally, the second major goal of this project was to evaluate how predictors of treatment initiation performed in identifying patients who never received treatment. This work was motivated by our plan to build an automated system that uses EHR data to routinely identify patients at risk of disengaging from care. Our evaluation of predictive accuracy, using AUC ROC for the GEE models and the F-measure for machine learning, suggests that machine learning identified patients who never started treatment more accurately than GEE. Specifically, the ratio of true positives to false positives was higher in the machine learning model than in the GEE model. Unfortunately, both the GEE prediction precision and the machine learning analysis suggest that the study variables identified and used in our models were insufficient for an accurate decision support tool. Ideally, EHR data may ultimately be used to increase patient engagement with preferred treatments [19].
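As a point of reference for how these metrics are computed, the sketch below is an illustration rather than the study's analysis code: it derives AUC ROC, the F-measure, and the true-positive to false-positive ratio from held-out predictions for a classifier flagging patients who never start treatment. The input format and the 0.5 decision threshold are assumptions.

```python
# Illustrative only: not the study's GEE or machine learning pipeline.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

def summarize_predictions(y_true, y_score, threshold=0.5):
    """y_true: 1 = never initiated treatment, 0 = initiated; y_score: predicted risk."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "auc_roc": roc_auc_score(y_true, y_score),
        "f_measure": f1_score(y_true, y_pred),
        "tp_to_fp_ratio": tp / fp if fp else float("inf"),
    }

# Example on a small synthetic validation set:
print(summarize_predictions([1, 0, 1, 0, 1, 0], [0.8, 0.3, 0.6, 0.55, 0.9, 0.1]))
```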
The current study has some limitations. For example, the evaluation of whether treatment was initiated is based on VA services; if patients initiated services outside the VA, these data would not be captured in the current analyses. However, given that over 90% of Veterans in this cohort are receiving VA benefits for a disability, they are incentivized to use VA care. Furthermore, the information we used in modeling our outcomes (whether someone defers or delays mental health care) is based entirely on structured data in the CDW. Clinical data that may be relevant to the questions raised in this study are available in clinical notes, but these are more complex to extract. In a recent study, we proposed that patients' affective states may impact how they engage with care and that this information can be extracted reliably from clinical note text [23]. However, this line of work is in its infancy and further validation of these data is necessary.
Conclusions
Future work might consider deriving more targeted variables from EHRs as well as reconsidering key barriers for patients and providers. For example, it is worth investigating how specific debilitating depressive symptoms, such as sad mood [24], influence a Veteran's decision to seek treatment. Studies have shown that treatment initiation increased with higher depression severity but was only 53% among patients with a PHQ-9 above 9 [9]. Conversely, anhedonia, one of the most prevalent symptoms in depression [25], was recently related to lower odds of treatment initiation [26]. Finally, providers may also have insights into referral processes that are not evident in structured data from EHRs. This investigation contributes to the field in two ways. First, it reinforces the importance of understanding the barriers and clinical characteristics routinely collected in clinical care; second, it highlights the need for future work that probes more granularly into clinical characteristics and more specific features that can improve prediction of which patients need support for successful use of VA depression treatment services. Clinically, our work reinforces the importance of providing benefits to Veterans; having adequate health benefits allows patients to engage in care as needed and can have implications for long-term depression treatment utilization given heterogeneous trajectories [27].
Table 1
Baseline demographic characteristic at first depression diagnosis a
a Reporting mean (SD) for continuous variables and N (%) for categorical variables
b PC = primary care center, SC = secondary care center, TC = tertiary care center
PHQ-9 = Patient Health Questionnaire-9
Table 2
Baseline clinical characteristic at first depression diagnosis a
a Reporting mean (SD) for continuous variables and N (%) for categorical variables
b Psychotherapy or antidepressants received for disorders other than depression
PHQ-9 = Patient Health Questionnaire-9
Table 3
Baseline predictors of lack of treatment initiation for depression among Veterans (validation n = 38,281)
Table 4
Baseline predictors of Veterans delaying* treatment for depression by 1 month to more than 1 year relative to those starting treatment within the first week of a depression diagnosis (validation n = 38,281)
Table 5
Machine learning prediction precision for treatment initiation
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2015-11-05T00:00:00.000
|
18727156
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/srep16145.pdf",
"pdf_hash": "3bd1ec21ddb46bb7625deab8126109ac5f2d2361",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46145",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "f329afd51b3a2fd30b4ccee3f04f8eb8c9225cdf",
"year": 2015
}
|
pes2o/s2orc
|
Quantification of NS1 dengue biomarker in serum via optomagnetic nanocluster detection
Dengue is a tropical vector-borne disease without cure or vaccine that progressively spreads into regions with temperate climates. Diagnostic tools amenable to resource-limited settings would be highly valuable for epidemiologic control and containment during outbreaks. Here, we present a novel low-cost automated biosensing platform for detection of dengue fever biomarker NS1 and demonstrate it on NS1 spiked in human serum. Magnetic nanoparticles (MNPs) are coated with high-affinity monoclonal antibodies against NS1 via bio-orthogonal Cu-free ‘click’ chemistry on an anti-fouling surface molecular architecture. The presence of the target antigen NS1 triggers MNP agglutination and the formation of nanoclusters with rapid kinetics enhanced by external magnetic actuation. The amount and size of the nanoclusters correlate with the target concentration and can be quantified using an optomagnetic readout method. The resulting automated dengue fever assay takes just 8 minutes, requires 6 μL of serum sample and shows a limit of detection of 25 ng/mL with an upper detection range of 20000 ng/mL. The technology holds a great potential to be applied to NS1 detection in patient samples. As the assay is implemented on a low-cost microfluidic disc the platform is suited for further expansion to multiplexed detection of a wide panel of biomarkers.
The proven clinical relevance of early NS1 detection has stimulated the development of immuno-chromatographic lateral flow assays 8, which are rapid immunoassays (15-20 min) designed to provide a non-quantitative readout at the point-of-care (PoC) 9. However, in many cases the virus serotype and the infection status of patients limit the sensitivity and reliability of these tests 10, and laboratory confirmation is often required 11. Enzyme-linked immunosorbent assays (ELISAs) remain the gold standard in dengue-endemic areas, but the test can take several hours and requires specialized personnel and laboratory facilities 12.
In response to these challenges several groups have proposed biosensor technologies for NS1 quantification in formats compatible with decentralized diagnostics. Immunosensors based on immunospot assays using fluorescent nanoparticles 13 , surface plasmon resonance 14 , and electrochemical detection 15,16 have recently been presented. These technologies display a growing capacity to provide sensitive NS1 quantification. However, they require multi-step assay strategies and cannot easily be scaled to simultaneous detection of multiple biomarkers. The challenging integration therefore limits their potential for dengue diagnostics 3 .
Here we present a novel optomagnetic lab-on-a-disk technology for NS1 detection based on aggregation of magnetic nanoparticles (MNPs). Previous validation of the readout principle on a model molecular assay in buffer 17 is now extended to a one-step MNP-based homogeneous immunoassay directly in serum. A biomarker-dependent aggregation of magnetic nanoparticles in raw biological samples is very challenging as nonspecific aggregation cannot be reduced via enhanced stringency of washing steps. Endogenous proteins bind non-specifically and may thus interfere with specific recognition of the target biomarker and impair assay sensitivity. To overcome these challenges, we have designed an anti-fouling surface attachment for the antibodies by means of 'click' chemistry 18 . The passivated nanoparticles are deployed in a magnetic agglutination assay, where a few microliters of serum sample are mixed with two identical populations of MNPs functionalized, respectively, with capture (Gus11) and reporter (1H7.4) monoclonal antibodies (mAb) raised against NS1 protein. Sample incubation in a strong magnetic field (hereafter named "magnetic incubation") induces NS1-mediated MNP aggregation. As a final step, the concentration of the target analyte in solution is quantified by measuring the modulation of the transmitted light upon a magnetic field actuation of the nanoclusters 19 . The entire assay protocol has been implemented on a disc-based platform, which is suited for inclusion of blood-serum separation and for further future expansion to detect a panel of serological markers. We optimize key assay parameters (MNP concentration, incubation conditions, and sample volume) to achieve a clinically relevant NS1 sensitivity range. Ultimately, we present a dose-response curve directly in serum proving robust NS1 quantification in 8 minutes using a serum volume of only 6 μ L. The lower limit of detection is established to 25 ng/mL and the sensitivity range of NS1 extends up to 20000 ng/mL.
Results and Discussion
MNPs coated with capture (Gus11) and reporter (1H7.4) antibodies bind different epitopes on the NS1 antigen and, for this reason, the presence of NS1 causes linking and agglutination of capture and reporter MNPs, with kinetics that can be accelerated by a magnetic incubation step (Fig. 1a). During the magnetic incubation, the strong applied magnetic field causes formation of MNP chains. While in close proximity, the MNPs are subject to Brownian motion, which randomizes their relative orientation and further promotes inter-nanoparticle binding 20. The formed clusters are subsequently detected by applying an oscillating uniaxial magnetic field along the path of the laser light: the controlled movements of the formed clusters modulate the transmitted light intensity, either via rotation of individual MNPs or nanoclusters with an optical anisotropy, or via reversible formation of chains of MNPs along the magnetic field. The MNPs used in this work have a negligible remanent magnetic moment and are substantially spherical. Therefore, they do not physically rotate in an oscillating uniaxial magnetic field and there is no link between the induced magnetic moment in the particles and the optical transmissivity. Moreover, in the weak magnetic field applied during the measurements, the magnetic dipole interaction between two particles is much smaller than the thermal energy, and chain formation due to magnetic dipole interactions is therefore not energetically favorable. Hence, a homogeneous suspension of non-aggregated particles is not expected to produce a modulation of the transmitted light intensity. This agrees well with our observations on uncoated particle suspensions, where the particles are kept well separated by electrostatic repulsion (data not shown). However, when particles form dimers or larger agglomerates, there is a linked magnetic and optical anisotropy that produces a modulation of the transmitted light during a cycle of the magnetic field 21. When the field magnitude is large, the structures tend to align with their long axis along the magnetic field, whereas their orientation becomes random, due to thermal agitation, when the field is low; i.e., the transmitted light intensity peaks twice during a cycle of the magnetic field. Accordingly, the light modulation occurs at 2f and the response can be determined from the 2nd harmonic component of the signal measured from the photodetector. We have previously shown that the complex 2nd harmonic signal can be used to extract information related to the real and imaginary magnetic susceptibility of the MNPs and that the in-phase 2nd harmonic signal, V2′, versus the frequency of the applied field presents a characteristic peak related to the Brownian relaxation frequency of the MNP nanoclusters, which is inversely proportional to the hydrodynamic volume of the nanocluster 17. To illustrate the readout method, Fig. 1c shows the in-phase part of the normalized optomagnetic spectra (V2′ vs. f) for two MNP samples. The blank sample contains MNPs coated with capture and reporter antibodies, respectively, mixed in serum. This sample exhibits a weak peak at a frequency of f = 17 Hz, which is attributed to a small population of MNP clusters formed via non-specific interactions during the antibody conjugation and during the magnetic incubation step. The peak position corresponds to a hydrodynamic size compatible with the manufacturer specification 19. The NS1 positive sample, where NS1 links MNPs with capture and reporter antibodies (cf.
Fig. 1a), shows a spectrum with higher intensity and a slight peak shift to lower frequencies due to the formation of specific MNP nanoclusters. The number and size of these nanoclusters increase with the NS1 concentration over the investigated range, generating signals of increasing intensity. A dose-response curve is extracted by measuring the value of V2′ at a frequency of f = 17 Hz, corresponding to the position of the peak at low NS1 concentrations.
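The link between the peak frequency and cluster size can be made concrete with a back-of-the-envelope calculation. The sketch below is an illustration rather than the authors' analysis: the serum viscosity and temperature are assumed values, and real nanoclusters are not perfect spheres, so the resulting diameters are only indicative.

```python
# A minimal sketch converting an observed Brownian relaxation peak frequency
# into an approximate sphere-equivalent hydrodynamic diameter.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 293.15                  # temperature, K (room temperature assumed)
eta = 1.4e-3                # dynamic viscosity of serum, Pa*s (assumed)

def hydrodynamic_diameter(f_peak_hz: float) -> float:
    """Return hydrodynamic diameter (m) from a Brownian relaxation peak.

    Uses tau_B = 3*eta*V_h/(k_B*T) and f_peak = 1/(2*pi*tau_B), so
    V_h = k_B*T / (6*pi*eta*f_peak).
    """
    v_h = k_B * T / (6.0 * math.pi * eta * f_peak_hz)    # hydrodynamic volume, m^3
    return (6.0 * v_h / math.pi) ** (1.0 / 3.0)           # sphere-equivalent diameter

if __name__ == "__main__":
    for f in (17.0, 8.0):    # e.g., a blank-like peak vs. a cluster-shifted peak
        d_nm = hydrodynamic_diameter(f) * 1e9
        print(f"peak at {f:5.1f} Hz  ->  ~{d_nm:.0f} nm hydrodynamic diameter")
```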
To mitigate the formation of nonspecific aggregates during the magnetic incubation and the influence of complex biological fluids, we designed an anti-fouling magnetic surface architecture based on the generation of a monolayer of blocking protein (human serum albumin, HSA) on the surface of the nanoparticles. The protein monolayer was used to anchor affinity probes by means of bio-orthogonal Cu-free cycloaddition (Fig. 2a) 23. We have previously demonstrated that bio-orthogonal conjugation approaches can dramatically impact assay performance in complex matrices 18. In particular, coating the hydrophobic regions of polystyrene magnetic nanoparticles with HSA by means of carbodiimide chemistry helped minimize the adsorption of endogenous interfering agents present in human serum. As the acylisourea intermediate is well known 24 to be highly unstable in aqueous conditions, the molar excess of HSA required to fully saturate the carboxyl groups available on the magnetic nanoparticles had to be carefully controlled. Figure 2b shows the measured number of HSA molecules per MNP as a function of the mass of HSA used per mass of MNPs. The number of HSA molecules immobilized on the nanoparticle surface progressively increased with the input of HSA, and the available surface area was completely occupied by HSA when the reaction occurred in the presence of more than 100 μg HSA per mg of nanoparticles. This closely matches the value required for a theoretical monolayer 24. The ligation of HSA molecules on the MNP surface introduced primary amines that were subsequently used as handles for the next layer of the molecular surface architecture. Heterofunctional amine-reactive linkers were covalently bound to HSA to introduce azide moieties on the nanoparticle surface. Similarly, azide-complementary dibenzylcyclooctyne (DBCO) moieties were introduced on the antibodies. The DBCO moiety has a unique spectral fingerprint, with an absorption peak at 310 nm, which enabled the degree of modification to be monitored directly by measuring the UV absorption of the modified biomolecules. Figure 2c shows the absorption spectra obtained for unmodified Gus11 antibodies and for antibodies modified with the DBCO linker at 10× and 20× molar excess. We carefully controlled the reaction conditions to introduce 2-3 DBCO molecules per antibody and verified that the modification did not affect the affinity of the antibody for the NS1 protein 18. After the 'click' reaction between azide and DBCO, approximately 2000 antibodies per particle were flexibly linked to the HSA monolayer by means of hydrophilic linkers (Fig. 2d).
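The degree-of-modification estimate from the 310 nm DBCO peak can be illustrated with a standard UV-based degree-of-labeling calculation. The sketch below is not taken from the paper: the extinction coefficients, the 280 nm correction factor, and the absorbance readings are typical assumed values, used only to show the arithmetic.

```python
# A minimal sketch of estimating the DBCO degree of labeling (DOL) from UV
# absorbance of a DBCO-modified antibody. All constants below are assumptions
# (typical vendor values), not values reported in the paper.
EPS_DBCO_310 = 12000.0    # M^-1 cm^-1, DBCO absorbance near 310 nm (assumed)
EPS_IGG_280 = 210000.0    # M^-1 cm^-1, typical IgG at 280 nm (assumed)
CF_280 = 1.089            # DBCO contribution at 280 nm relative to 310 nm (assumed)

def dbco_per_antibody(a280: float, a310: float, path_cm: float = 1.0) -> float:
    """Estimate DBCO molecules per antibody from a UV absorbance spectrum."""
    dbco_conc = a310 / (EPS_DBCO_310 * path_cm)
    igg_conc = (a280 - CF_280 * a310) / (EPS_IGG_280 * path_cm)
    return dbco_conc / igg_conc

# Example with illustrative (hypothetical) absorbance readings:
print(f"DOL ~ {dbco_per_antibody(a280=0.70, a310=0.08):.1f} DBCO per antibody")
```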
We first validated the efficacy of the magnetic incubation step by investigating the kinetics of MNP agglutination in the presence of NS1. A blank sample of serum and a positive sample containing NS1 spiked into serum at a concentration of 1000 ng/mL were used. The MNP concentration was kept constant at 0.1 mg/mL. The reaction was monitored for 60 minutes by measuring optomagnetic spectra every five minutes. The V2′ component at f = 17 Hz, corresponding to the position of the peak for low NS1 concentrations, was monitored vs. time t for both samples, as shown in Fig. 3a. To eliminate the influence of sample-to-sample variation and to ease graphical comparison, the measurements were normalized to the first measured point (t = 0).
The first 30 minutes correspond to the capture phase, where NS1 molecules are directly captured onto the surface of the MNPs. This binding reaction triggers the formation of specific MNP nanoclusters and an increase in the intensity of the V2′ signal, which gives rise to a slope in the 1000 ng/mL sample signal vs. time. The signal increase was moderate in the capture phase since nanocluster formation is diffusion-limited and hindered by electrostatic repulsion between the MNPs. The formation of non-specific clusters in the blank sample (with zero NS1 concentration) takes place on a much longer time scale, and in 30 min the signal increase is negligible. Subsequently, the magnetic incubation step was performed for 60 cycles (180 s) to speed up the agglutination kinetics between the MNPs.
The magnetic incubation consisted of two steps: first sample incubation between the two permanent magnets to enhance formation of clusters and then a shaking step to break up non-specific MNP nanoclusters and enable re-orientation of the MNPs. The overall magnetic incubation procedure strongly accelerated the capture dynamics of NS1 as well as the nanocluster formation (Fig. 3a). This result is well in line with previous literature which shows that the application of continuous 25 or pulsed 21 magnetic field efficiently enhances MNP agglutination. The data in Fig. 3a show that magnetic incubation provides a 3-fold increase of the signal amplitude in the positive sample compared to the blank sample. Non-specific interactions mediated by magnetically induced collisions, physisorbed biomolecules from the human serum and residual hydrophobic regions (e.g., triazole groups) could occur, even in the presence of a HSA monolayer. We observed a net decrease in the signal amplitude during the subsequent 30 minutes for both the 1000 ng/mL and blank samples. Since this effect is target independent, we attribute the decrease to the loss of weakly bound nanoclusters formed due to non-specific interactions.
To maximize biomarker depletion during the incubation phase, we evaluated the assay performance as a function of temperature and time. Kinetic theory dictates that increased thermal energy results in higher motility and collision rates between biomolecules in solution. In addition, the antibody association-dissociation constant depends on temperature 26. We define Δ as the difference between the V2′(17 Hz) signals for the positive sample and the blank sample just after magnetic incubation. Figure 3b shows Δ measured for a 1000 ng/mL sample under four different incubation conditions. The MNP concentration, the sample volume and the magnetic incubation protocol were kept constant and the same as in the previous experiment. The results indicate that a higher or similar value of Δ is obtained at RT compared to 37 °C. Moreover, pre-incubation prior to the magnetic incubation is observed to result in only minor signal improvement compared to no pre-incubation. Hence, we conclude that NS1 capture and MNP clustering mainly take place during the magnetic incubation. Other experimental conditions, such as a two-step approach with capture and reporter nanoparticles incubating with the sample in successive steps, were shown to provide a lower Δ (Supplementary Information, Section S2).
We further investigated the parameters of the magnetic incubation. We found that prolonged exposure to the magnetic field resulted in focusing of the MNPs onto one side of the microfluidic chamber because the field was not completely homogeneous. For a magnetic field exposure time of 1 s, we achieved a dynamic cloud of MNPs perfusing the microfluidic chamber. The shaking step time, acceleration and speed needed to optimally redistribute the particles were optimized empirically. The optimal length of the magnetic incubation protocol for the NS1 assay was identified by varying the total number of magnetic incubation cycles. Figure 4 shows V2′(17 Hz) vs. NS1 concentration spiked in serum (total sample volume 6 μL) measured after the indicated number of magnetic incubation cycles. In all cases, MNPs were used at a concentration of 0.1 mg/mL. Without magnetic incubation, the signal was low and identical for all NS1 concentrations. After magnetic incubation, the signal increased by a factor of 3-10, with a larger increase for higher NS1 concentrations. For all investigated NS1 concentrations, the signal amplitude was highest for the longest magnetic incubation, which also produces a higher signal in the blank sample (0 ng/mL). The lines in Fig. 4 are curve fits obtained using the four-parameter logistic (4PL) model with the parameters given in Table S1 (Supplementary Information, Section S3). Analysis in terms of the 4PL model supports that 180 cycles of magnetic incubation provides the highest sensitivity. In addition, the highest V2′(17 Hz) amplitude ratio between the 100 ng/mL sample and the blank after 180 cycles also indicates that this number of cycles results in a lower LOD. To further optimize the assay conditions, the effect of the MNP concentration was studied (Supplementary Information, Section S4) and the optimal MNP concentration was established to be 0.1 mg/mL.
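For readers who want to reproduce this type of analysis, the sketch below shows a generic four-parameter logistic fit with SciPy. It is an assumed implementation rather than the authors' fitting code, and the concentrations and signal values are illustrative placeholders, not data from Fig. 4.

```python
# A minimal sketch of fitting the four-parameter logistic (4PL) model to
# V2'(17 Hz) dose-response data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = zero-dose asymptote, d = infinite-dose asymptote,
    c = EC50, b = Hill slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical NS1 concentrations (ng/mL) and normalized V2'(17 Hz) signals
conc = np.array([10, 50, 100, 500, 1000, 5000, 10000, 20000], dtype=float)
signal = np.array([0.010, 0.013, 0.018, 0.040, 0.060, 0.095, 0.105, 0.110])

p0 = [signal.min(), 1.0, 500.0, signal.max()]          # rough initial guess
params, _ = curve_fit(four_pl, conc, signal, p0=p0, maxfev=10000)
a, b, c, d = params
print(f"Hill slope = {b:.2f}, EC50 = {c:.0f} ng/mL")
```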
The performance of the proposed technology was evaluated using the above optimal assay protocol on NS1 spiked into a serum sample with a total volume of 6 μL. This corresponds to the typical volume that can be extracted from a single drop of blood obtained from, for example, a patient fingerprick. Figure 5a shows V2′ spectra obtained for the indicated NS1 concentrations. The spectra show a clear signal increase with increasing NS1 concentration, with a peak exhibiting a slight shift towards lower frequencies for high NS1 concentrations. For the highest investigated NS1 concentrations, multiple NS1 molecules are likely to bind to each MNP (an NS1 concentration of 10000 ng/mL corresponds to 500 NS1 molecules per MNP). Accordingly, larger clusters of MNPs are expected to form, with a correspondingly larger hydrodynamic size and lower Brownian relaxation frequency. Figure 5b shows V2′(17 Hz) vs. NS1 concentration obtained from triplicate experiments. The solid line shows a curve fit of the 4PL model (Supplementary Information, Section S3) to the data. The Hill coefficient for this curve was 1.03 and the EC50 value was 566 ng/mL. The coefficient of variation did not exceed 2% for any of the measured samples, indicating that the method is also robust in serum. The limit of detection (LOD), calculated using the blank signal plus three standard deviations, was 25 ng/mL. The dose-response curve showed a large dynamic range covering almost three orders of magnitude of NS1 concentrations. Saturation is reached at high concentrations partly because large MNP clusters are spun to the side of the chamber during magnetic incubation, which is confirmed by the absence of a further peak shift. If needed, the linear range could be tailored to, e.g., higher concentrations by tuning the magnetic incubation protocol and the MNP concentration.
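The blank-plus-three-standard-deviations LOD criterion can be written out explicitly. The sketch below is illustrative only: the blank replicate values and the 4PL parameters are hypothetical stand-ins, used to show how the signal cutoff is converted back into a concentration.

```python
# A minimal sketch of the LOD estimate: cutoff = mean(blank) + 3*SD(blank),
# inverted through the fitted 4PL curve to obtain a concentration.
import numpy as np

blank_replicates = np.array([0.0100, 0.0110, 0.0090])   # hypothetical V2'(17 Hz) blanks
cutoff = blank_replicates.mean() + 3 * blank_replicates.std(ddof=1)

def inverse_four_pl(y, a, b, c, d):
    """Invert y = d + (a - d)/(1 + (x/c)**b) for the concentration x."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# a, b, c, d taken from a previous 4PL fit (illustrative values only)
lod = inverse_four_pl(cutoff, a=0.010, b=1.03, c=566.0, d=0.110)
print(f"signal cutoff = {cutoff:.4f}, estimated LOD ~ {lod:.0f} ng/mL")
```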
The presented assay dynamic range and LOD are comparable to those obtained using other sensing technologies based on MNP clustering 21,25,27-29. Thanks to the efficient antibody coverage on the MNP surface, a comparatively larger dynamic range is obtained in comparison to other approaches where magnetic field actuation is likewise used to enhance particle agglutination 21,29. A lower limit of detection may potentially be reached in an MNP clustering assay with a miniaturized nuclear magnetic resonance (NMR) based readout 28,30,31. However, a simple optical readout, compatible with any optically transparent microfluidic system and requiring neither sophisticated electronic components nor complex cartridge manufacturing 32, holds a stronger potential to impact POC diagnostics of infectious diseases. It is indeed important to put these results into perspective for the detection of flavivirus-induced diseases, like dengue fever, which predominantly occur in remote areas with minimal access to well-equipped clinical laboratories. This particular diagnostic niche is dominated by ELISA and lateral flow devices due to their cost-effectiveness. ELISA delivers sensitive detection at the expense of a time-consuming multi-step procedure, whilst the latter offers simplicity with moderate analytical/clinical sensitivity 3. Our approach delivers a performance comparable to ELISA, but with a much simpler assay protocol and a much shorter total assay time on a lower sample volume. Moreover, the integration with centrifugal microfluidics enables full automation in an out-of-lab setting and is anticipated to lead to future multiplexed detection from the same drop of blood.
The levels of circulating NS1 in dengue-infected individuals can vary depending on the severity of the disease, the immunological status of the patient (primary or secondary infection) and the virus serotype. In acute cases, circulating NS1 values in the μg/mL range have been reported 7. However, values below 10 ng/mL may occur in early stages of the disease 33. Current commercial ELISA kits, the gold standard for NS1 detection, only partially cover this low range of concentrations, and poor clinical sensitivities for the DENV-2 and DENV-4 serotypes have been reported 34. An improved LOD would be beneficial for early diagnosis of dengue. The presented technology has the potential to deliver rapid and fully automated diagnosis in the field. The main factor limiting the current LOD is the nonspecific binding of MNPs. Further optimization of the MNP surface architecture and the assay protocol is likely to reduce the non-specific background. Moreover, more sophisticated analysis methods, for example taking into account the full measured frequency spectra before and after magnetic incubation rather than the value at a single frequency, may also improve the sensitivity and LOD. These improvements are essential for translation to the clinic and usage in the field and are topics of ongoing work. We are presently developing novel affinity probes capable of cross-reactive interaction across all four dengue serotypes. These will enable estimation of the clinical sensitivity and specificity by testing samples from dengue-infected patients. Precise quantification of other biomarkers, including serological markers, could help triage patients and may further provide valuable data for surveillance and epidemic control.
In conclusion, we have presented a novel biosensing platform for dengue NS1 detection spiked in human serum. The assay is fully integrated and requires only a few microliters of clinical sample to be mixed with the bioactive nanoparticles. External magnetic fields actuate the nanoparticles in solution, triggering both biomarker-induced nanocluster formation and a time-dependent transmitted light modulation profile, which correlates with the number of nanoclusters in solution. We have demonstrated minimal non-specific nanoparticle aggregation by coating nanoparticles with an anti-fouling surface molecular architecture. Bio-orthogonal 'Cu-free' click chemistry enables efficient immobilization of > 2000 high affinity antibodies against dengue NS1 protein. The result is a fast (8 minutes) and automated dengue fever diagnostic assay compatible with a fingerprick sample volume. The presented technology shows a limit of detection of 25 ng/mL and a wide dynamic range up to 20000 ng/mL.
Materials and Methods
Chemicals and materials. The generation and characterization of 1H7.4 and Gus11 antibodies have been described previously 11,14,35,36 . In brief the antibodies were produced by immunizing BALB/c mice with subcutaneous injections of NS1 protein, regularly boosted with subsequent monthly injections. Blood samples were collected by tail bleed and the antibody titer was checked by ELISA to identify the mice with the strongest immune response prior to culling. The harvested spleen cells were fused with plasmacytoma cell line SP2/0 at a ratio 10:1 and subsequent selection and characterization of the antibodies was performed by ELISA. To generate purified stocks, antibodies were isolated from ascites fluid using affinity chromatography on protein G sepharose (HiTrap Protein G HP, GE Healthcare) as per manufacturer instructions. After elution in 100 mM glycine pH 2.7, purified antibody stocks were buffer exchanged using centrifugal filtration (50kDa MWCO Amicon Ultra-15, Millipore) into phosphate buffer saline (PBS). Protein concentrations were determined using the Bicinchoninic acid (BCA) Protein Assay (Pierce).
Superparamagnetic polystyrene nanoparticles with embedded magnetic nanograins and a nominal diameter of 170 nm and COOH surface were acquired from Merck (NJ, USA). Purified recombinant Dengue Virus NS1 glycoprotein was obtained from Hawaii Biotech (Lot. 2DNS1). All chemicals were purchased from Sigma Aldrich, unless otherwise noted. MNP storage buffer was prepared with PBS and Tween 20. In this investigation, NS1 was always diluted in CRP-free serum from HyTest Ltd. (Turku, Finland).
The methods were carried out in accordance with the approved guidelines. All experimental protocols were approved by the DTU Nanotech work environment committee. The procedure for coupling of Gus11 and 1H7.4 antibodies to separate populations of MNPs is described in the Supplementary Information, Section S1. Unless otherwise stated, all MNP suspensions were 1:1 mixtures of MNPs with Gus11 and 1H7.4 antibodies and stated MNP concentrations refer to the mixture of MNPs.
Experimental Setup. Figure 1b shows a picture of the readout system previously described in refs 17 and 22. In brief, the major optical elements were a Sanyo Blu-ray optical pick-up unit (λ = 405 nm) and a ThorLabs PDA36A photodetector. The microfluidic disc was placed between two electromagnets used to generate a sinusoidal uniaxial magnetic field parallel to the laser beam direction, of fixed amplitude B0 = 2 mT at a frequency f up to 300 Hz. The pre-amplified signal from the photodetector was acquired using a National Instruments 6251 data acquisition (DAQ) card and analyzed through a software-based lock-in amplifier. The same DAQ card controlled a customized circuit, embedded in the optical pickup unit control board, used to provide the alternating current to the electromagnets. The software extracted the intensity of the 2nd harmonic signal from the photodetector and its phase lag with respect to the magnetic field excitation and presented it as the complex 2nd harmonic lock-in signal. The software also extracted the average signal from the photodetector. All signals presented below have been normalized with the average photodetector signal and are hence dimensionless. The normalized in-phase 2nd harmonic photodetector signal is given the symbol V2′. The disc was connected to a rotary stepper motor (Maxxon Motor, mod. 273756, Switzerland) used to perform the microfluidic operations. The motor was operated using a digital positioning controller (Maxxon, mod. 347717) via a custom-made Labview program (National Instruments, US). A pair of permanent magnets was placed at the radial position opposite to the two electromagnets and was used to provide a strong, approximately homogeneous field during the magnetic incubation step. The field measured at the center of the measurement chamber was around 90 mT.
Disc fabrication. Centrifugal microfluidic discs were manufactured using poly(methyl methacrylate) (PMMA, Axxicon, Netherlands) and pressure-sensitive adhesive (PSA, 90106, Adhesives Research, Ireland) as described by Donolato et al., 2014. Each microfluidic disc was made from three 600 μm thick PMMA DVD halves. Reservoirs were created by cutting through the first PMMA disc using a CO2 laser (Mini 18, 30 W, Epilog, USA). Inlets were laser machined in the second disc and the third disc was used as support. Alignment holes were also cut in all three discs. Following these steps, fluid channels were created in two PSA sheets using a blade cutter machine (Silhouette, USA). The PSA was laminated on the bottom and top discs, which were then aligned and bonded to the central disc containing the reservoirs. This rapid prototyping process allowed for the complete fabrication of a disc with eighteen microfluidic chambers in less than 20 min.
Measurement protocol. Experiments were performed as follows: each MNP population was diluted in the proper buffer to the required concentration, followed by vortexing and ultrasound treatment for 2 min (Model 150 V, Biologics, US). NS1 was spiked in human serum and aliquoted accordingly. Samples were prepared off-disc by mixing the MNP suspension with 6 μL of NS1 sample and immediately transferred into the disc loading chamber. The disc was then placed on the previously described setup and spun (18 Hz, 3 sec) to drive the sample to the detection chamber. The chamber was aligned with the Blu-ray laser and the two electromagnets, and a first optomagnetic spectrum was recorded.
An optomagnetic spectrum consisted of a measurement of V2′ vs. f for fifteen logarithmically equidistant values of f between 1 and 300 Hz and was recorded in 60 s. Subsequently, the sample was exposed to a magnetic incubation step. The magnetic incubation protocol comprised a number of cycles, each with 1 sec of incubation between the permanent magnets followed by 2 sec of mixing (one full accelerating revolution in each rotation direction). The first step enhanced magnetic agglutination of the MNPs. The second step promoted mixing of the MNPs, random MNP rotation, and random antibody-antigen encounters to accelerate the recognition 21. Magnetic incubation protocols with 60, 120 and 180 cycles were investigated. Finally, after magnetic incubation, the chamber was aligned with the laser to record the second optomagnetic spectrum.
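The two software steps described in this protocol, generating the fifteen logarithmically spaced drive frequencies and extracting the normalized in-phase 2nd harmonic V2′, can be sketched as below. This is an assumed, simplified stand-in for the LabVIEW-based lock-in: in the real instrument the phase is referenced to the coil current, and the sampling parameters here are arbitrary.

```python
# A minimal sketch of the frequency sweep and a software lock-in at 2f.
import numpy as np

frequencies = np.logspace(np.log10(1.0), np.log10(300.0), 15)   # 15 points, 1-300 Hz

def v2_prime(signal: np.ndarray, fs: float, f_drive: float) -> float:
    """In-phase 2nd harmonic lock-in amplitude of `signal`, sampled at `fs` Hz,
    for a sinusoidal field drive at `f_drive` Hz, normalized by the DC level."""
    t = np.arange(signal.size) / fs
    reference = np.cos(2.0 * np.pi * (2.0 * f_drive) * t)        # reference at 2f
    in_phase = 2.0 * np.mean(signal * reference)                 # lock-in demodulation
    return in_phase / np.mean(signal)                            # dimensionless result

# Example on a synthetic photodetector trace containing a 2f component:
fs, f_drive = 10000.0, 17.0
t = np.arange(int(fs)) / fs
trace = 1.0 + 0.02 * np.cos(2 * np.pi * 2 * f_drive * t) + 0.001 * np.random.randn(t.size)
print(f"V2' at f = {f_drive} Hz: {v2_prime(trace, fs, f_drive):.4f}")   # ~0.02
```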
|
v3-fos-license
|
2021-09-05T13:24:49.156Z
|
2021-09-04T00:00:00.000
|
237412867
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://skeletalmusclejournal.biomedcentral.com/track/pdf/10.1186/s13395-021-00277-2",
"pdf_hash": "37cacf148b4b439adc93baf6b2d2182eab936d1f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46148",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "7432bd5a1b67e58ebb8f0feb186e8b47b8fa6e60",
"year": 2021
}
|
pes2o/s2orc
|
Preservation of satellite cell number and regenerative potential with age reveals locomotory muscle bias
Background Although muscle regenerative capacity declines with age, the extent to which this is due to satellite cell-intrinsic changes vs. environmental changes has been controversial. The majority of aging studies have investigated hindlimb locomotory muscles, principally the tibialis anterior, in caged sedentary mice, where those muscles are abnormally under-exercised. Methods We analyze satellite cell numbers in 8 muscle groups representing locomotory and non-locomotory muscles in young and 2-year-old mice and perform transplantation assays of low numbers of hind limb satellite cells from young and old mice. Results We find that satellite cell density does not decline significantly by 2 years of age in most muscles, and one muscle, the masseter, shows a modest but statistically significant increase in satellite cell density with age. The tibialis anterior and extensor digitorum longus were clear exceptions, showing significant declines. We quantify self-renewal using a transplantation assay. Dose dilution revealed significant non-linearity in self-renewal above a very low threshold, suggestive of competition between satellite cells for space within the pool. Assaying within the linear range, i.e., transplanting fewer than 1000 cells, revealed no evidence of decline in cell-autonomous self-renewal or regenerative potential of 2-year-old murine satellite cells. Conclusion These data demonstrate the value of comparative muscle analysis as opposed to overreliance on locomotory muscles, which are not used physiologically in aging sedentary mice, and suggest that self-renewal impairment with age is precipitously acquired at the geriatric stage, rather than being gradual over time, as previously thought. Supplementary Information The online version contains supplementary material available at 10.1186/s13395-021-00277-2.
Background
Skeletal muscle is a highly regenerative tissue. The lifelong capacity for regeneration depends on a population of satellite cells, located peripheral to the muscle fiber but under its basal lamina [1]. Satellite cells proliferate in response to injury to produce a pool of myoblasts that will fuse to form new muscle fibers as well as replacement cells to maintain the stem cell pool [2][3][4], a process referred to as self-renewal [5]. Although the regenerative properties of satellite cells are under intensive investigation, essential questions about the regulation of the satellite cell compartment remain, for example what specifies the number of satellite cells per muscle.
Both skeletal muscle mass and the potential of muscle to regenerate after injury decline with age; however, the role of changes intrinsic to the satellite cells in these declines has been controversial. Numerous groups have documented age-related decreases in satellite cell numbers in mice [6][7][8][9][10][11][12]. Some groups have documented no significant reduction [13] or no change [14]. Pax7 is selectively expressed by satellite cells and required for their maintenance [15]; therefore in these studies, satellite cell numbers, generally from locomotory muscles, were quantified by counting Pax7+ cells in representative muscle sections or representative isolated single myofibers and extrapolating based on estimated number of sections or fibers per muscle. We considered that perhaps some of this discordance is from sampling bias inherent in methods that quantify by extrapolating from a subset, as well as the potential for systematic errors of extrapolation. For this reason, in the current study, we investigate satellite cell numbers across a variety of locomotory and non-locomotory muscles in young vs. old mice using the holistic approach of counting all of the Pax7+ cells present in particular muscles by flow cytometry using the Pax7-ZsGreen BAC transgenic strain, in which satellite cells are labeled green fluorescent [16].
The literature also contains significant discordance on whether the effects of aging are mainly mediated through alterations in the satellite cells or in their environment. There are reports demonstrating the intrinsic regenerative potential of the satellite cell pool being impaired with age [12,[17][18][19]. Other studies argue that although the number of satellite cells is decreased in aged muscle, the intrinsic myogenic potential and self-renewal capacity of satellite cells remains unaltered [11], that aged donor satellite cells are as functional as those from young donors [8,[20][21][22], and that environmental factors rather than cell intrinsic changes are responsible for the impaired regeneration in aged animals [7,14,[23][24][25][26][27][28].
In order to measure both the intrinsic self-renewal and differentiation potential of a population of satellite cells, we have employed a two-armed transplantation assay in which a defined number of satellite cells is simultaneously transplanted into both tibialis anterior (TA) muscles of immunodeficient, dystrophin-deficient NSG-mdx 4Cv [29] recipient mice. One limb is used for FACS to count the number of undifferentiated (ZsGreen+) satellite cells 1 month post-transplant, giving a quantitative value for self-renewal, the ability to contribute to the satellite cell pool, while the other is processed for histology to count dystrophin+ fibers, giving a quantitative value for the ability of the donor cells to generate fibers (differentiation potential). We apply these assays to evaluate self-renewal and differentiation potential in the context of aging.
Mice
Satellite cells were isolated from Pax7-ZsGreen male mice [16] crossed >15 generations to a C57BL/6 background or C57BL/6 mice. Transplant recipients were NSG-mdx 4Cv mice [29]. Wild-type C57BL/6 mice were obtained from the aging rodent colony of the National Institute on Aging. All procedures were carried out in accordance with protocols approved by the University of Minnesota Institutional Animal Care and Use Committee.
Mouse satellite cell harvest
Bulk isolation of satellite cells was performed as described previously [29,30]. Muscle was carefully dissected and, with a razor blade held parallel to the muscle fibers, forceps were used to separate the fibers. The muscle was incubated with shaking for 75 min in 0.2% collagenase type II (Gibco, Grand Island, NY) in high-glucose Dulbecco's modified Eagle's medium (DMEM) containing 4.00 mM L-glutamine, 4500 mg/L glucose, and sodium pyruvate (HyClone, Logan, UT), supplemented with 1% Pen/Strep (Gibco), at 37°C. Samples were washed two times with Rinsing Solution (F-10+): Ham's/F-10 medium (HyClone) supplemented with 10% Horse Serum (HyClone), 1% 1 M HEPES buffer solution (Gibco), and 1% Pen/Strep, and pulled into a sheared Pasteur pipette. The samples were centrifuged and washed again. After aspiration, the sample was resuspended in F-10+ containing collagenase type II and dispase (Gibco), vortexed, and incubated with shaking at 37°C for 30 min. Samples were vortexed again, drawn and released into a 10 mL syringe with a 16-gauge needle four times, then with an 18-gauge needle four times, to release the cells from the muscle fibers prior to passing the cell suspension through a 40-μm cell strainer (Falcon, Hanover Park, IL). The sample was drawn and released into a 10-mL syringe with the 18-gauge needle four additional times and passed through a new 40-μm cell strainer. After centrifuging, the samples were resuspended in fluorescence-activated cell sorting (FACS) staining medium: phosphate-buffered saline (PBS, Corning, Manassas, VA) containing 2% fetal bovine serum (HyClone) and 0.5 μg/mL propidium iodide, for FACS analysis and sorting on a FACSAria II (BD Biosciences, San Diego, CA).
Quantification of satellite cells from single muscle groups was performed similarly. Digested muscle samples were drawn and expelled into a 3-mL syringe four times through a 16-gauge, then four times through an 18-gauge needle. The cell suspension was passed through a 40-μm cell strainer. Three milliliters of F10+ was added to each sample to prevent over-digestion, and samples were centrifuged, then resuspended in FACS staining medium. For transplanted TAs, the samples were stained using an antibody mixture of PE-Cy7 rat anti-mouse CD31 (clone 390), PE-Cy7 rat anti-mouse CD45 (clone 30-F11), Biotin rat anti-mouse CD106 (clone 429(MVCAM.A)), and PE Streptavidin from BD Biosciences (San Diego, CA) and Itga7 647 (clone R2F2) from AbLab (Vancouver, B.C., Canada). The number of donor (ZsGreen +) satellite cells and total satellite cells (lineage negative; VCAM, Itga7 double positive cells) was determined by running the entire volume through the FACS and recording all events.
Flow cytometric counting
For quantification of satellite cells, cells from individual muscles were resuspended in 200 μL FACS staining medium, with the exception of gastrocnemius and diaphragm which were resuspended in 400 μL in order to keep total cell concentrations similar. Samples were run out completely on a BD FACS Aria II, with red (641 nm), blue (488 nm), and yellow-green (561 nm) lasers. Propidium iodide-negative (live) cells were gated into either SSC vs. ZsGreen for unstained samples, or for stained samples: APC (Itga7) vs. PE-Cy7 (Lin), gating Lin-neg cells into APC (Itga7) vs. PE (VCAM), gating double-positive cells into SSC vs. ZsGreen, and counting ZsGreen+ cells, as shown in Fig. 2b.
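For illustration, the gating hierarchy described above can be expressed as a chain of boolean filters on an exported event table. The sketch below is not the actual analysis pipeline: the column names and threshold values are assumptions, since in practice the gates are drawn interactively on the cytometer software.

```python
# A minimal sketch of the PI- -> Lin- -> Itga7+/VCAM+ -> ZsGreen+ gating logic
# applied to exported FACS event data (thresholds and column names assumed).
import pandas as pd

def count_zsgreen_satellite_cells(events: pd.DataFrame) -> int:
    """Apply the gating hierarchy and count ZsGreen+ satellite cell events."""
    live = events["PI"] < 1e3                     # propidium iodide negative (live)
    lineage_neg = events["PE_Cy7_Lin"] < 5e2      # CD31/CD45 negative
    itga7_pos = events["APC_Itga7"] > 1e3
    vcam_pos = events["PE_VCAM"] > 1e3
    zsgreen_pos = events["ZsGreen"] > 1e3
    return int((live & lineage_neg & itga7_pos & vcam_pos & zsgreen_pos).sum())

# Usage on an exported event table (e.g., a CSV of all recorded events):
# events = pd.read_csv("muscle_sample_events.csv")
# print(count_zsgreen_satellite_cells(events))
```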
Pax7 immunostaining
Slides were fixed using 4% paraformaldehyde (PFA) at room temperature for 5 min, air dried, and rehydrated using PBS for 5 min, followed by antigen retrieval in citrate buffer (1.8 mM citric acid and 8.2 mM sodium citrate in water) using a pressure cooker. Slides were boiled in a Coplin jar for 30 min, rinsed with cold tap water for 10 min, and washed twice with PBS, 5 min each. After antigen retrieval, sections were circled with nail polish, dried, then incubated with 3% H2O2 in PBS for 5 min, and washed twice with PBS, 5 min each. Sections were blocked using 0.5% PerkinElmer Blocking Reagent (Cat # FP1020) in PBS for 1 h at RT, followed by overnight incubation at 4°C with primary antibodies: mouse anti-mouse Pax7 from the Developmental Studies Hybridoma Bank and polyclonal rabbit anti-mouse laminin (Sigma-Aldrich L9393) in blocking buffer. A secondary cocktail of goat anti-mouse IgG1-biotin (Jackson Immunoresearch Cat# 115-065-205) and goat anti-rabbit IgG H&L Alexa Fluor 488 (Sigma-Aldrich Cat # A11034) in blocking buffer was applied for 2 h at RT. Following two PBS washes, slides were incubated with Vectastain ABC reagent (Vectorlab Cat # PK-6100) for 3 h, followed by a PBS wash. Finally, slides were incubated for 10 min in the dark with tyramide signal amplification (TSA Cyanine 3, Cat # NEL744) in blocking buffer, washed with PBS, and mounted with anti-fade ProLong Gold with DAPI.
Muscle injury and transplantation
As described previously [29], 48 h prior to intramuscular transplantation of cells, approximately 4-month-old NSG-mdx 4Cv mice were anesthetized with ketamine and xylazine and both hind limbs were subjected to a 1200 cGy dose of irradiation using an RS 2000 Biological Research Irradiator (Rad Source Technologies, Inc., Suwanee, GA). Lead shields limited exposure to the hind limbs only. Twenty-four hours prior to transplant, 15 μL of cardiotoxin (10 μM in PBS, Sigma, Saint Louis, MO) was injected into the right and left TAs of each mouse with a Hamilton syringe to induce injury. Twenty-four hours later, a defined number of satellite cells was collected by FACS and resuspended in a volume such that the indicated number of cells per injection was present in 10 μL of sterile saline. Each mouse was then injected with 10 μL into each TA. Four weeks after transplantation, one transplanted TA of each mouse was harvested and prepared for sectioning and staining as described previously [29], while the other transplanted TA was prepared for FACS analysis as described above.
Plates were cultured for 8 days under physiological oxygen growth conditions (5% O2, 5% CO2, 90% N2) at 37°C in a tissue culture incubator. Wells containing colonies were identified and fixed with 4% paraformaldehyde for 20 min at room temperature. For immunostaining, cells were permeabilized with 0.3% Triton X-100 for 20 min at room temperature, washed once with PBS, and blocked with 3% bovine serum albumin (BSA) in PBS for 1 h at room temperature. Colonies were stained with a 1:20 dilution of MF 20 antibody supernatant (recognizing sarcomeric myosin; obtained from the Developmental Studies Hybridoma Bank, University of Iowa) in 3% BSA in PBS overnight at 4°C. Plates were washed three times with PBS and incubated with a 1:500 dilution of Alexa Fluor 555 goat anti-mouse secondary antibody (Life Technologies) for 45 min at room temperature. The plates were washed four additional times with PBS before the addition of a 1:1000 dilution of 4′,6-diamidino-2-phenylindole (DAPI) in PBS for 20 min at room temperature. After one final wash with PBS, the cells were covered with PBS. Images at 10× magnification were taken on a Zeiss Observer.Z1 inverted microscope equipped with an AxioCam MRm camera (Thornwood, NY). Merged images were analyzed with the open-source algorithm G-Tool [31]. This software was used to determine the number of nuclei in a satellite cell colony and the percentage of nuclei in myosin heavy chain (MHC)-positive cytoplasm.
Histology and dystrophin/laminin immunofluorescence
The TA was removed and placed in OCT Compound (Scigen Scientific, Gardena, CA), frozen in liquid nitrogen-cooled 2-methylbutane (Sigma), and stored at −80°C. Ten-micrometer cryosections were cut on a Leica CM3050 S cryostat (Leica Microsystems, Buffalo Grove, IL). Cryosections were fixed with ice-cold acetone, air dried, rehydrated with PBS, and blocked for 1 h with 3% BSA in PBS. After incubating slides for 1 h at RT with a rabbit polyclonal antibody to dystrophin (Abcam, Cambridge, MA) and a mouse monoclonal antibody to laminin (Sigma, clone LAM-89), the sections were incubated for 45 min at RT with Alexa Fluor 555 goat anti-rabbit and Alexa Fluor 488 goat anti-mouse IgG antibodies (Life Technologies, Grand Island, NY). Coverslips were mounted with Immu-Mount (Thermo Scientific, Kalamazoo, MI). Slides were imaged with a Zeiss Axio Imager.M2 with an AxioCam MRm camera (Carl Zeiss Microscopy, LLC, Thornwood, NY). The numbers of donor (dystrophin+) and total (laminin+) muscle fibers were determined after merging tiled images of the entire cross-section from transplanted TAs using Photoshop and manually counting the fibers.
In vitro assessment of TA muscle
As described previously [32,33], mice were anesthetized with an intraperitoneal injection of Avertin (250 mg/kg). The distal tendon of the TA muscle and the knee were attached by silk suture to a force transducer. TA muscles were then transferred into an organ bath containing mammalian Ringer solution (120.5 mM NaCl; 20.4 mM NaHCO3; 10 mM glucose; 4.8 mM KCl; 1.6 mM CaCl2; 1.2 mM MgSO4; 1.2 mM NaH2PO4; 1.0 mM pyruvate; adjusted to pH 7.4) perfused continuously with 95% O2/5% CO2. The isolated TA muscles were stimulated by an electric field generated between two platinum electrodes placed longitudinally on either side of the TA muscles (using square wave pulses of 25 V amplitude, 0.2 ms in duration, 150 Hz). Muscles were maintained at the optimum length (Lo) during the determination of isometric twitch force, with a 5-min recovery period between stimulations. Optimal muscle length (Lo) and stimulation voltage (25 V) were chosen based on muscle length manipulation and a series of twitch contractions that generated the maximum isometric twitch force. After adjusting the optimal muscle length (Lo) and measuring the maximum isometric tetanic force, total muscle cross-sectional area (CSA) was calculated by dividing muscle mass (mg) by the product of muscle length (mm) and 1.06 mg/mm3, the mammalian skeletal muscle density. Specific force (sFo) was then calculated by normalizing maximum isometric tetanic force to CSA.
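The normalization described in the last two sentences amounts to a short calculation, sketched below with illustrative input values; the force, mass, and length are hypothetical, and only the 1.06 mg/mm3 density and the CSA formula come from the text.

```python
# A minimal sketch of the specific-force normalization: CSA is estimated from
# muscle mass, optimal length Lo, and the assumed mammalian muscle density,
# and maximum tetanic force is divided by CSA.
MUSCLE_DENSITY = 1.06  # mg/mm^3

def specific_force(max_tetanic_force_mn: float, mass_mg: float, lo_mm: float) -> float:
    """Return specific force (mN/mm^2) = Fo / CSA, with CSA = mass/(Lo*density)."""
    csa_mm2 = mass_mg / (lo_mm * MUSCLE_DENSITY)
    return max_tetanic_force_mn / csa_mm2

# Example with illustrative values for a mouse TA:
print(f"sFo ~ {specific_force(max_tetanic_force_mn=900.0, mass_mg=45.0, lo_mm=12.0):.0f} mN/mm^2")
```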
Statistics
Data were analyzed by two-tailed t-tests and reported as means with standard errors. Differences were considered significant at the α<0.05 level.
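As a minimal illustration of this analysis, the sketch below runs a two-tailed unpaired t-test on hypothetical young vs. old satellite cell densities and reports means with standard errors; the numbers are invented for illustration and are not data from the study.

```python
# A minimal sketch of the statistical comparison described above.
import numpy as np
from scipy import stats

young = np.array([950.0, 1020.0, 880.0, 1005.0, 970.0])   # cells per mg, hypothetical
old = np.array([600.0, 720.0, 650.0, 690.0, 610.0])        # cells per mg, hypothetical

t_stat, p_value = stats.ttest_ind(young, old)               # two-tailed by default
print(f"young: {young.mean():.0f} +/- {stats.sem(young):.0f} (mean +/- SEM)")
print(f"old:   {old.mean():.0f} +/- {stats.sem(old):.0f} (mean +/- SEM)")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at alpha = 0.05: {p_value < 0.05}")
```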
Results
Changes in satellite cell number over eight muscles with age
In order to obtain a comprehensive view of the size of the satellite cell compartment and its changes with age, we investigated eight muscle groups representing different activities (locomotory, masticatory, and respiratory) in young (4 months) and old (2 years) male C57BL/6 mice (Fig. 1). Importantly, each individual muscle was processed in such a way that there was no residual tissue after processing, i.e., every cell and fiber fragment was suspended, and the sample drained, so that the entire muscle digest passed through the flow cytometer. Representative FACS profiles used for the quantification of the total number of satellite cells in the different muscle groups from 4-month- and 2-year-old Pax7-ZsGreen mice are shown in Additional file 1. In most muscles, both satellite cell number (Fig. 1a) and muscle mass (Fig. 1b) declined modestly, and in concert, at 2 years of age. The masseter, surprisingly, trended toward an increase in total satellite cell number (>25% increase, p < 0.0951, Fig. 1a) in aged mice. Only two muscles, the TA and EDL (extensor digitorum longus), showed large declines in satellite cell number that were statistically significant, while the remaining limb muscles and the diaphragm showed modest declines of between 5 and 20% that were not statistically significant. We also investigated age-related loss of muscle mass (sarcopenia) and found that only limb muscles displayed reduced average mass, with the TA, EDL, gastrocnemius, and triceps significantly reduced (by about 35%) and the soleus and psoas trending lower with age (Fig. 1b). It is notable that the most pronounced changes occur in the locomotory muscles, and given that caged mice are sedentary, their aging in the laboratory might not reflect the course of aging under normal conditions. When satellite cell number was normalized to muscle mass to obtain a measure of satellite cell density, only the TA and EDL showed significant declines, while the diaphragm trended lower in the 2-year-old samples (p=0.1) (Fig. 1c). No change in satellite cell density was observed in the soleus and psoas samples; density trended higher with age in the gastrocnemius and triceps, and remarkably, satellite cell density was significantly increased (p<0.0001) in the masseter (Fig. 1c).
To validate the unexpected finding that masseter showed an increased number of satellite cells in aged mice, we used an independent assay (immunohistochemical staining for Pax7 and laminin) on the TA and masseter muscles of an independent cohort of 3-month and 22-month-old wild-type mice from another institution (Additional file 2). Immunostaining for Pax7 in this cohort revealed again an increase in satellite cell density with age in the masseter, in contrast to the decline in satellite cell density seen in the TA muscle (Fig. 1d).
Ex vivo clonal assays of aged vs. young cells
We began investigating cell-autonomous functional differences by evaluating the colony-forming potential of single cells from each muscle group at both ages. Cloning efficiency, an integrated measurement of the ability of cells to survive, proliferate, and form colonies in vitro, decreased significantly in almost every muscle between 4 months and 2 years of age, including strong trends in the soleus, diaphragm, and masseter muscle (Additional file 3 a). Differences in colony size were more notable between muscles than between ages, with diaphragm satellite cells giving the largest colonies, as was observed previously [31]. A significant decline in colony size with age was observed in the gastrocnemius, diaphragm, psoas, and triceps, while a moderate increase was seen in the TA (Additional file 3 b). The rate of differentiation (% of nuclei in MHC+ cytoplasm) was unchanged in most muscles, but was significantly reduced with age in the TA, EDL, and psoas muscles (Additional file 3c).
Assaying self-renewal in vivo
To develop a quantitative readout of self-renewal based on the ability of transplanted satellite cells to contribute to the stem cell compartment of engrafted muscle, we first evaluated the age-dependence of surface markers that have been described to define this compartment. Satellite cells from Pax7-ZsGreen/BL6 mice were evaluated for either alpha7-integrin (Itga7) and VCAM [34,35]; Itga7 and CD34 [4]; or beta1-integrin (Itgb1) and CXCR4 [36] in conjunction with lineage-negative staining for hematopoietic and endothelial markers. We observed that 97% of the ZsGreen+ satellite cells were Itga7/VCAM double positive and >91% were double positive for the other combinations, Itga7/CD34 or Itgb1/CXCR4 (Fig. 2a). The inverse analysis gave similar results, with >90% of double positive (DP) cells being ZsGreen+ (Additional file 4). We next examined whether the surface marker profile of satellite cells changed in aged mice. ZsGreen+ satellite cells isolated from 2-year-old Pax7-ZsGreen mice (Additional file 5) showed a similar surface marker profile to that of 4-month-old mice. For the studies below involving transplantation of ZsGreen+ cells, we used Itga7/VCAM to identify the total (host + donor) pool.
To monitor both self-renewal and differentiation potential of transplanted cell populations, we irradiated and injured both TA muscles of immunodeficient, dystrophin-deficient NSG-mdx 4Cv mice and transplanted each with 300 Pax7-ZsGreen cells. The 15 μl cardiotoxin injection causes the destruction and regeneration of approximately 40% of the entire TA muscle (Additional file 6), providing ample space for injected satellite cells to contribute to both regeneration and self-renewal. One month after transplantation, both TA muscles were dissected; one was processed for FACS and the other for histology. FACS analysis demonstrated that a donor ZsGreen+ cell subpopulation was present within the Lin-VCAM+Itga7+ satellite cell pool; thus some portion of the progeny of the donor cells contributed to renewal of the satellite cell pool (Fig. 2b). Histology demonstrated many dystrophin+ fibers in the other TA, revealing the differentiation potential of the transplanted cells (Fig. 2c).
Skeletal muscle transplantation experiments frequently incorporate irradiation to blunt the response of the host satellite cells, resulting in greater contribution of both fibers and associated satellite cells [20,21]. Irradiation also leads to extensive proliferation of donor stem cells and their progeny (myoblasts) [37]. Recent work suggests that activation of the innate immune system within irradiated muscle plays a role in enhancing donor satellite cell engraftment [38]. To measure the necessity of irradiation in our system, we performed transplantations of 300 cells with and without irradiation and found that both contribution to the satellite cell pool and to new fibers were significantly reduced without irradiation (8-fold and 16-fold respectively, Fig. 2d, e). The effect of irradiation hints at the possibility of competition between satellite cells for contribution to the post-regeneration satellite cell pool, although this has never formally been demonstrated. An accurate assay for the intrinsic self-renewal potential of a cell population requires that contribution to the satellite cell pool not be confounded with the effects of competition between donor cells. To determine whether and how this assay could be used to read out intrinsic self-renewal potential, we performed a dose-response transplantation experiment, starting with 100 donor cells, going up in 3-fold steps to 8100 cells (Fig. 2f). This revealed a severe nonlinearity in contribution at cell numbers 900 or greater, i.e., the increase in contribution of 300 cells over 100 cells is close to 300%, while the increase in contribution of 900 cells over 300 cells is only about 50%, and even less at 2700 and so on. This indicates two important points: first, this assay needs to be performed at numbers close to 300 cells or fewer in order for changes in the contribution rate to correlate linearly with changes in the rate of self-renewal. At 900 cells and higher, significant changes in self-renewal read out as minor changes in ZsGreen+ cell numbers. Second, the donor cells are competing with each other for contribution to the most accessible space. To avoid internal competition, the number of transplanted cells needs to be smaller than the space available, hence our use of 300 cells per TA muscle.
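The linearity argument can be made concrete with a short sketch. The engraftment numbers below are hypothetical and serve only to show how fold-increases in donor dose are compared with fold-increases in satellite cell contribution to decide whether the assay is in its linear range:

```python
# Hypothetical dose-response data: donor cells transplanted vs. donor-derived
# ZsGreen+ cells recovered from the satellite cell pool (illustrative values only).
doses        = [100, 300, 900, 2700, 8100]
contribution = [40, 150, 230, 290, 320]

# Compare the fold-increase in output with the fold-increase in input between
# successive doses; a ratio near 1 indicates the assay is in its linear range.
for (d1, c1), (d2, c2) in zip(zip(doses, contribution),
                              zip(doses[1:], contribution[1:])):
    input_fold = d2 / d1
    output_fold = c2 / c1
    print(f"{d1} -> {d2} cells: input x{input_fold:.1f}, "
          f"output x{output_fold:.2f}, linearity ratio {output_fold / input_fold:.2f}")
```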
Aging to 2 years does not significantly impair in vivo self-renewal potential of C57BL/6 satellite cells
We next applied this assay to measure the self-renewal potential of satellite cells from aged (2-year-old) vs. young (4-month-old) C57BL/6 mice. We first tested two doses, both in the linear range: 100 and 300 cells into irradiated, cardiotoxin-injured NSG-mdx 4Cv mice (Fig. 3a-c). Neither dose demonstrated a statistically significant reduction in self-renewal potential or in fiber differentiation potential. These low numbers of donor cells do not produce a detectable increase in contractile function of transplanted muscles [29]; therefore, we also performed transplantations of 2700 cells. Similar self-renewal and fiber differentiation potentials were seen at this cell number (Fig. 3d, e), and we found no difference in maximal tetanic or specific force in TA muscle regenerated with old cells vs. TA muscle regenerated with young cells (Fig. 3f, g).
We considered that because our mice are sedentary during their lifespans, their limb muscles might not be representative of muscles undergoing normal aging. Therefore, we decided to transplant satellite cells from a muscle under constant use-the diaphragm. Nine hundred ZsGreen+ satellite cells isolated from the diaphragm of young or old mice were transplanted into preconditioned TAs which were harvested either 6 weeks (Additional file 7 a, b) or 15 weeks (Additional file 7 c, d) after transplant. No significant differences in self-renewal or fiber contribution were observed at either time point, mirroring the results with hind limb muscle satellite cells, above.
Discussion
In the present study, we use flow cytometry to quantify the number of undifferentiated Pax7-ZsGreen+ cells in individual muscles, both in the context of unmanipulated muscles of aged vs. young mice and in the context of transplantation, where the Pax7-ZsGreen reporter serves both as an indicator of donor origin and of undifferentiated status. The FACS-based approach is very efficient; thus, we were able to quantify the satellite cell content of eight different muscles from each animal studied in the aging cohorts. As with any method, the FACS-based approach has caveats, principally that extraction needs to be efficient and that it may be affected by the extent of ECM or fibrosis. Since aged muscle has greater ECM content, inefficient extraction might lead to underestimation of cell number in aged specimens. We found that few muscle groups showed a statistically significant decline in mean numbers of satellite cells at 2 years of age. The exceptions were the TA and EDL, which experienced large (>50%), statistically significant declines at 2 years of age. Using other methodologies, previous studies have quantified the number of Pax7+ cells in freshly isolated myofibers from the EDL [8,9,11,22,39], soleus [39], and TA [2]. Consistent with our results, these studies have found lower numbers of satellite cells in TA and EDL muscles of aged mice, and this has led to the generally accepted notion in the field that satellite cell numbers decline significantly with age. However, considering a broader sampling of muscles, it is now clear that the TA and EDL are not typical. This is unfortunate, because these two muscles are probably the most-studied muscles in the mouse system. Furthermore, age-related loss of muscle mass was much more extreme in the locomotory muscles, which is disconcerting because caged mice are abnormally sedentary. This raises the possibility that changes in satellite cell compartment size in locomotory muscles of caged mice are driven more by lack of normal use than by aging per se. An increase in satellite cell content has been reported after endurance training in old rats [40] and mice [41]. After 8 weeks of progressive endurance training, the TAs of an old exercised group of mice had significantly more satellite cells than those of an old sedentary group [42]. At the very least, this implicates exercise as relevant to the loss of satellite cells seen in locomotory muscles in caged rodents. The diaphragm and masseter, muscles used more physiologically under laboratory conditions, did not lose mass with age and showed only a very modest decline, or indeed in the case of the masseter an increase, in satellite cell numbers. Thus, our studies are not consistent with the notion of a gradual and progressive satellite cell number decline with age, at least not in the male C57BL/6 mouse model aged for 2 years.
In older humans, a greater loss of muscle mass is observed in the lower limbs, possibly due to a detraining effect resulting from a significant reduction in physical activity [43]. Looking specifically at human satellite cells, their activation in response to damaging and non-damaging exercise as well as their reduction in the context of disuse has been noted (see [44] for review). Aging in humans leads to preferential atrophy of type II muscle fibers, and type II fiber-associated satellite cells have been shown to decline with age [44]. A recent longitudinal study in humans found a decline in satellite cell density in females at the time of menopause, supporting the notion that an aging-related environmental change in older humans could be responsible for some degree of satellite cell decline [26].
Although not as severely constrained as caged mice, humans are also fairly sedentary, and it will be interesting to see whether the notion of relatively stable homeostatic maintenance of satellite cells within normally used muscles applies in the human context.
Comparing the ability of young and old satellite cells to contribute to the satellite cell pool after transplantation into injured muscle, i.e., measuring self-renewal potential, we found no difference between hind limb satellite cells from 4-month-old and 2-year-old mice. Although this result is consistent with original satellite cell aging literature [8,11,14,23], it was nonetheless somewhat surprising, as several studies have suggested that intrinsic self-renewal potential declines with age [12,[17][18][19]. However, there are important distinctions to point out regarding the series of studies mentioned. The first is that one of the studies [12] found significant declines in "geriatric" mice of 2.5 years, but found by most parameters, including regeneration posttransplantation, that satellite cells from "old" mice (2 years, the typical aged cohort) were not significantly different from cells from young mice. This study showed that somewhere between 2 and 2.5 years, many satellite cells become senescent, which is quite distinct from a model in which cell intrinsic self-renewal potential declines gradually in non-senescent cells as they age. In showing little difference by 2 years, this study is actually consistent with our results and is not inconsistent with the interpretations of the earlier literature. Of the others, one of the studies derives results from transplants of 10,000 cells [19], and our dose-response data clearly shows that this is far outside of the linear range; thus, the differences may have had more to do with competition for space than with intrinsic self-renewal. To address the question of how much space is available for engraftment, we evaluated regeneration in unirradiated TA muscles injected with 15 μl of cardiotoxin and found that this space is at least 40% of the TA. Transplantation of myofiber-associated satellite cells, coupled with an induced muscle injury has been shown to engraft the entire TA [45]. Further implicating competition, in the study of Bernet and colleagues [17], recipient mice were not irradiated; thus, different rates of engraftment into the satellite cell compartment may relate to intrinsic differences in competitive ability rather than self-renewal per se. It is also reasonable to consider that there may be an environmental component to the rate of aging, whereby mice from some laboratories have satellite cells that enter the geriatric, senescence-prone phase slightly earlier than mice from other laboratories. Nevertheless, the data presented in the current study is consistent with the notion that up until the most geriatric stage, satellite cells show little age-related loss in their intrinsic ability to self-renew or regenerate.
Conclusions
This work provides a comprehensive comparison of eight different muscle groups and shows that except for abnormally under-utilized locomotory muscles, aging to 2 years on the C57BL/6 background does not involve a gradual decline in satellite cell density. Upon transplantation into tibialis anterior muscles of immunodeficient dystrophic mice, transplanted cells contribute to both the satellite cell compartment and to newly generated dystrophin+ myofibers. There is negligible decline with age in the ability of transplanted cells to generate new satellite cells or new muscle fibers.
Additional file 1. FACS plots for the quantification of the total number of satellite cells in different muscle groups from four-month- and two-year-old Pax7-ZsGreen mice. Representative FACS profiles for the gating of ZsGreen+ cells from eight different muscle groups - TA, EDL, soleus, gastrocnemius, diaphragm, psoas, triceps and masseter (n=6, except diaphragm where n=4).
|
v3-fos-license
|
2020-07-16T15:11:11.877Z
|
2020-07-16T00:00:00.000
|
220543317
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://sjtrem.biomedcentral.com/track/pdf/10.1186/s13049-020-00756-3",
"pdf_hash": "7906370867769ce9f8c52d809aa0b21d98423a1a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46151",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7906370867769ce9f8c52d809aa0b21d98423a1a",
"year": 2020
}
|
pes2o/s2orc
|
Quality indicators for a geriatric emergency care (GeriQ-ED) – an evidence-based delphi consensus approach to improve the care of geriatric patients in the emergency department
Introduction In emergency care, geriatric requirements and risks are often not taken sufficiently into account. In addition, there are neither evidence-based recommendations nor scientifically developed quality indicators (QI) for geriatric emergency care in German emergency departments. As part of the GeriQ-ED© research project, quality indicators for geriatric emergency medicine in Germany have been developed using the QUALIFY-instruments. Methods Using a triangulation methodology, a) clinical experience-based quality aspects were identified and verified, b) research-based quality statements were formulated and assessed for relevance, and c) preliminary quality indicators were operationalized and evaluated in order to recommend a feasible set of final quality indicators. Results Initially, 41 quality statements were identified and assessed as relevant. Sixty-seven QI (33 process, 29 structure and 5 outcome indicators) were extrapolated and operationalised. In order to facilitate implementation into daily practice, the following five quality statements were defined as the GeriQ-ED© TOP 5: screening for delirium, taking a full medications history including an assessment of the indications, education of geriatric knowledge and skills to emergency staff, screening for patients with geriatric needs, and identification of patients with risk of falls/ recurrent falls. Discussion QIs are regarded as gold standard to measure, benchmark and improve emergency care. GeriQ-ED© QI focused on clinical experience- and research-based recommendations and describe for the first time a standard for geriatric emergency care in Germany. GeriQ-ED© TOP 5 should be implemented as a minimum standard in geriatric emergency care.
Introduction
Every third patient admitted to prehospital or clinical emergency medicine is older than 65 years [1][2][3]. Demographic changes have led to unique challenges for emergency care.
Functional decline, cognitive impairments such as delirium or dementia, multiple comorbidities, frailty, falls and polypharmacy often result in negative health outcomes [4][5][6][7][8]. It is known that in geriatric emergency patients, the risk of adverse outcomes such as hospital (re)admission, institutionalisation and mortality is increased compared to younger patients [9,10].
The American College of Emergency Physicians (ACEP), the American Geriatrics Society (AGS), the Emergency Nurses Association (ENA) and the Society for Academic Emergency Medicine (SAEM) have developed guidelines for the care of older people in the emergency department (ED) [11]. However, in Australia and Europe, there is currently no consensus on which aspects of care should be included [7,8,12,13]. To bring together both disciplines, geriatrics and emergency medicine, a European curriculum in geriatric emergency medicine was developed and approved by the European Union of Medical Specialists (UEMS) [14]. Additionally, a position paper by the German Society of Emergency Medicine (DGINA), the German Society of Geriatrics (DGG), the German Society of Gerontology and Geriatrics (DGGG), the Austrian Society of Geriatrics and Gerontology (ÖGGG) and the Swiss Society for Geriatrics (SFGG) has identified the need for further research and objective quality indicators (QIs) for geriatric emergency care [15]. A recent review highlighted that "a balanced, methodologically robust set of QIs for care of older persons in the ED" is needed [16]. Well-defined QIs will enable the assessment, benchmarking, and improvement of quality of care for geriatric emergency care patients [17].
During the development of the QIs, the following quality criteria were considered: scientific character, relevance and feasibility [18].
The aim of this paper is to describe the development process of QIs for the management of geriatric emergency patients and to provide a set of structure, process and outcome QIs (GeriQ-ED©).
Methods
Triangulation methodology was applied for the development of the quality indicators, based on exploration of current evidence through a systematic literature search, and expert opinion from an interdisciplinary and interprofessional expert panel.
Action steps (Fig. 1): (a) clinical experience-based quality aspects (QA) were identified and verified, (b) evidence-based quality statements (QS) were formulated and assessed for relevance, and (c) preliminary quality indicators (QI) were operationalized and evaluated in order to recommend a feasible set of final quality indicators.
An exploratory literature review was conducted between 09/2014 and 10/2014, and an expert panel (n = 11) was established to contribute its expertise on geriatric emergency care through a Delphi process [19]. The expert panel consisted of three emergency physicians, three specially trained emergency nurses, a geriatrician, a pharmacologist, a health economist and two participants who represented the views of older emergency patients.
At the first expert meeting (11/2014), a qualitative group discussion among the expert panel was conducted to identify relevant quality aspects of care for geriatric emergency patients. These quality aspects were evaluated using qualitative content analysis according to Mayring, supported by MAXQDA [20]. A second, systematic literature review (12/2014-03/2015) [search terms: 'geriatric OR elderly OR senior' AND 'emergency department'; databases: PubMed and CINAHL; inclusion criteria: published scientific papers, reviews, systematic reviews and meta-analyses between 2010 and 2015] was conducted to explore evidence for the potentially relevant quality aspects identified by the expert panel. Another aim of this systematic literature review was to verify the clinical experience-based quality aspects and to formulate evidence-based quality statements. During the second expert meeting (03/2015), an anonymized assessment of the relevance of all quality statements was conducted by the panel using a four-staged Likert scale. The assessment took into consideration the importance, benefit and risk of each quality statement, based on the QUALIFY instrument [19]. During the operationalisation process (third and fourth expert meetings, 05/2015 and 06/2015), preliminary quality indicators (structural, process or outcome indicators), including respective reference ranges, were defined for every quality statement that was classified as relevant.
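As a rough illustration of the relevance-rating step (not the project's actual tooling or data), mean panel ratings on the four-staged scale could be aggregated as sketched below; the statements, scores, and the cutoff of 3.0 are hypothetical:

```python
# Hypothetical panel ratings (1 = not relevant ... 4 = highly relevant) for a
# few example quality statements; 11 raters per statement as in the GeriQ-ED panel.
ratings = {
    "screening for delirium":             [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3],
    "medication history with indication": [4, 4, 4, 3, 4, 4, 4, 4, 3, 4, 4],
    "separate geriatric waiting area":    [2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2],
}

RELEVANCE_CUTOFF = 3.0  # assumed threshold, for illustration only

for statement, scores in ratings.items():
    mean = sum(scores) / len(scores)
    verdict = "relevant" if mean >= RELEVANCE_CUTOFF else "not relevant"
    print(f"{statement}: mean {mean:.2f} -> {verdict}")
```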
To facilitate implementation of the preliminary quality indicators (QIs) into daily practice, QIs were assessed for their feasibility. To find a consensus during the fifth meeting (12/2015), experts used the anonymized two-step approach by RAND UCLA [21]. Finally, the panel was asked to define the QIs of five quality statements they regarded to be most important. These were prioritized as the "top five".
Results
The explorative literature review identified defined topics of geriatric emergency care [7,8], QIs for selected areas in the field [13] and guidelines for geriatric emergency departments (EDs) [11]. The potentially relevant quality aspects that were discussed during the first expert meeting were summarized into twelve different categories: education, staff, equipment, communication/information transfer, nursing care, medical treatment, geriatric screening, and risk factors such as falls, pain, cognitive impairment, medication and care needs (incontinence and the development of pressure sores). The systematic literature review of potentially relevant quality aspects identified nine reviews, seven systematic reviews and two meta-analyses. Based on these results, 41 quality statements were formulated. At the second meeting of the expert panel, all 41 quality statements were assessed as being relevant. The following quality statements were rated as most relevant (X = mean value): screening for delirium (X = 3.93), professional training requirements for emergency care staff (X = 3.90), barrier-free access to toilets with the possibility of supported transfer (X = 3.90), and repetitive pain assessment including appropriate use of analgesics (X = 3.90). During their third and fourth meetings, the expert panel operationalized the 41 quality statements into 69 QIs. Apart from the statement 'to implement a separate waiting area for geriatric patients', the expert panel considered all other QIs as feasible at the fifth expert meeting.
Finally, a set of 67 clinical experience- and evidence-based GeriQ-ED© QIs (33 process QIs, 29 structural QIs and 5 outcome QIs), which were relevant and feasible, was developed and operationalized (an English translation of GeriQ-ED© is available as additional online material). In 2017, the GeriQ-ED© QIs were published and are available for free on the website of the German Society of Emergency Medicine (DGINA) [22]. Table 1 shows an example of a GeriQ-ED© quality indicator regarding cognitive impairment/delirium. In order to facilitate implementation into daily practice, the following five quality statements (associated with twelve quality indicators [22] https://www.dgina.de/news/geriq-c-quality-indicators-for-geriatric-emergency-care-entwicklung-von-qualitatsindikatoren-fur-die-versorgung-von-geriatrischen-notfallpatienten_63) were defined as the GeriQ-ED© TOP 5:
1. screening for delirium
2. taking a full medication history including an assessment of the indications
3. education of geriatric knowledge and skills to emergency staff
4. screening for patients with geriatric needs
5. identification of patients with risk of falls/recurrent falls
TOP 1: screening for delirium
Consequences of an undetected delirium include progressive deterioration of functional and cognitive impairment and an increased risk of mortality [23,24]. Studies show a strong association between the duration of delirium and mortality [25,26]. Thus, early detection of delirium in the emergency care setting is essential. Currently, only a few screening tools are validated and feasible in daily ED practice, such as the Confusion Assessment Method (CAM), the modified CAM-ED (mCAM-ED) [27,28] and the 4-AT [29]. According to GeriQ-ED©, standardized screening for delirium is recommended using a validated instrument that is feasible in the department's setting. Although the exact timing of the screening in the emergency care process was not defined by the expert panel, delirium should be screened for at the earliest time that is feasible in the ED management of the patient. In patients directly discharged from the ED, screening should be conducted prior to discharge. In addition, GeriQ-ED© recommends the implementation of standardized management for patients at risk of delirium or patients with delirium, including the documentation of risk factors as well as initial risk-reduction measures as feasible in the ED [22] (https://www.dgina.de/news/geriq-c-quality-indicators-for-geriatric-emergency-care-entwicklung-von-qualitatsindikatoren-fur-die-versorgung-von-geriatrischen-notfallpatienten_63).
TOP 2: medication history including indications
Polypharmacy is common among older adults and is associated with an increased risk of adverse outcomes such as adverse drug reactions (ADRs) or medication errors. ADRs are a major cause of ED visits among older people [8,[30][31][32]. Nevertheless, most ADRs are not detected. Studies have shown that up to 60% of all ADRs are potentially avoidable [33]. Special attention should be given to the intake of anticoagulants, benzodiazepines, non-steroidal anti-inflammatory drugs, diuretics and antidepressants. These classes of drugs have in many cases been associated with complaints of older people admitted to the ED [32,[34][35][36][37].
Good clinical practice for the detection and prevention of ADRs in vulnerable patients includes detailed documentation and regular review of prescribed as well as over-the-counter medication by using a standardized medication reconciliation [38].
GeriQ-ED© recommends the implementation of a comprehensive medication management, including a detailed documentation of the current medication as well as a possible indication for each medication. Medication history and possible missing information on current medication should also be documented in the ED [22] (https://www.dgina.de/news/geriq-c-quality-indicators-for-geriatric-emergency-care-entwicklung-von-qualitatsindikatoren-fur-die-versorgung-von-geriatrischen-notfallpatienten_63).
TOP 3: staff education on geriatric knowledge and skills
Staff education level affects clinical outcomes in emergency management [39]. In 2015, the Geriatric Section of the European Society for Emergency Medicine (EUSEM), together with the European Geriatric Medicine Society (EUGMS), established a joint task force to develop a curriculum for the care of older emergency patients (European Taskforce on Geriatric Emergency Medicine, ETFGEM). The aim was to outline relevant competencies in the care of older people, especially those with frailty. The curriculum incorporates knowledge on the physiology of ageing, common and atypical complaints, and the identification of geriatric syndromes or psychiatric needs of geriatric patients [14].
GeriQ-ED© confirms the need for an improvement in relevant competencies (knowledge and skills) of staff members who are involved in the care of older emergency patients and recommends that at least 60% of the ED staff (physicians and nurses) participate in at least one special geriatric training every year [22] (https://www.dgina.de/news/geriq-c-quality-indicators-for-geriatric-emergency-care-entwicklung-von-qualitatsindikatoren-fur-die-versorgung-von-geriatrischen-notfallpatienten_63).
TOP 4: screening for patients with geriatric needs
A recent meta-analysis showed that risk stratification of geriatric emergency patients is strongly limited by the lack of feasible and validated instruments. Existing instruments designed for risk stratification of older ED patients do not distinguish precisely between high- and low-risk groups [40]. However, as long as no better screening instruments are developed, it is recommended to use established and validated instruments [41].
GeriQ-ED© proposes the use of one of the currently recommended evidence-based screening tools in the ED to identify geriatric needs for action. Comprehensive geriatric assessment and the management derived from it have been shown to improve the outcome of older multimorbid people [42]. Further, GeriQ-ED© recommends a standardized implementation of management including screening of geriatric needs, and accurate documentation and information transfer. The timing of screening for geriatric needs was not defined [22] (https://www.dgina.de/news/geriq-c-quality-indicators-for-geriatric-emergency-care-entwicklung-von-qualitatsindikatoren-fur-die-versorgung-von-geriatrischen-notfallpatienten_63).
TOP 5: identification of patients with risk of falls/ recurrent falls
Appropriate evaluation of a fallen patient not only implies a thorough assessment for traumatic injuries, but also an assessment of potential causes and a stratification of future risk of falling [43,44]. A proper assessment often requires a multidisciplinary team approach. Currently, no specific tools are recommended for the identification of potential risk factors [11]. The German Expert's Standard for Fall and Fracture Prevention recommends an evaluation of person-, medication- and environment-related risk factors such as fall history, the use of walking aids, depression, cognitive impairment and the long-term use of more than six different drugs [45].
GeriQ-ED© recommends the assessment and documentation of risk factors for falling during the patient's stay in the ED. The corresponding quality indicator recommends documentation of these risk factors in >80% of all cases of ED patients older than 70 years. Furthermore, it is recommended that every year more than 80% of the emergency nurses are trained on risk factors for falls [22] (https://www.dgina.de/news/geriq-c-quality-indicators-for-geriatric-emergency-care-entwicklung-von-qualitatsindikatoren-fur-die-versorgung-von-geriatrischen-notfallpatienten_63).
Discussion
High-quality geriatric emergency care is needed to ensure patient safety for this high-risk group. QIs are regarded as the gold standard to measure, benchmark and improve emergency care. GeriQ-ED© focused on clinical experience- and evidence-based recommendations and addressed the knowledge gap in this area. The proposed set of 67 GeriQ-ED© QIs serves as guidance for geriatric emergency care to ensure quality of care [7,8,46] and meets the recommendations made by the German position paper. For the first time, QIs were developed that cover comprehensive geriatric emergency care and not only selected syndromes or fields of interest among geriatric emergency patients [13,25,47]. The operationalisation of quality statements into QIs enables their integration into existing documentation systems. The classification of quality aspects into twelve categories facilitates a thematic selection for specific nursing or medical care issues.
In order to facilitate the implementation of QIs for older patients' emergency care, the expert panel defined the top 5 out of the 67 assigned QIs.
Implications for emergency care
GeriQ-ED© provides a set of 67 QIs, including 33 process, 29 structure and 5 outcome indicators. They are intended as a framework for the provision of high-quality geriatric emergency medicine adapted to German emergency care. The QIs give EDs the opportunity to assess their own geriatric emergency care and to benchmark against other EDs. They also provide the opportunity to set individual goals for quality improvement in geriatric emergency care and to document the improvement accordingly.
To implement the 67 GeriQ-ED© QIs in the emergency care setting, further structural adaptations will be necessary. Individualised care of geriatric patients in order to improve the quality of care will require an adapted calculation of staff numbers in the EDs. Hospital management, leaders of EDs as well as ED nurse managers need to recognise that geriatric emergency patients ought to be considered as a highly vulnerable patient group with special needs that have to be addressed differently from usual care.
Limitation
The process to develop the GeriQ-ED© QIs started in 2014, and in 2017 the QIs were published in German [22]. Although the GeriQ-ED© QIs refer to screening tools based on current evidence (e.g. for screening for delirium or identification of geriatric needs), the literature review underlying the QIs had to be updated. In a recent systematic literature review (02/2020), no additional QIs were identified [search terms: 'emergency care' AND 'geriatrics'; database: PubMed; inclusion criteria: published between 2015 and 2020].
The majority of the 67 GeriQ-ED© QIs are process-or structure indicators. The small number of outcome indicators was discussed with an expert for QI development. It was agreed that in the ED setting it is difficult to define outcome indicators due to the short stay of the patients and also the limited influence on the care received beyond the ED. Therefore, the development of outcome indicators in the field of emergency medicine is only possible with restrictions [12].
Conclusions
Demographic changes imply major challenges for emergency care. QIs for this special setting offer a way to improve geriatric emergency care and patient safety. For the first time, GeriQ-ED© provides a comprehensive set of 67 QIs which addresses the specialist care needs of older people in the ED to improve patient care.
The methodical approach used for the development of GeriQ-ED© corresponds to required methodical quality criteria. They are evidence-based, relevant and feasible. GeriQ-ED© is based on a consensus among experts in the field. A prospective study is planned to evaluate the QIs in daily practice with a special focus on measuring criteria and feasibility.
However, in German EDs, the GeriQ-ED© TOP 5 should be implemented as a minimum standard in geriatric emergency care.
Authors' contributions
The author(s) read and approved the final manuscript.
|
v3-fos-license
|
2022-05-28T15:13:42.342Z
|
2022-05-25T00:00:00.000
|
249119569
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/23/11/5947/pdf?version=1653649153",
"pdf_hash": "b8325e0bbdaca2de7ce2952533ae64b5ebb165d8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46152",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "870dc03afa66fd64a20d5dd75b464da366250857",
"year": 2022
}
|
pes2o/s2orc
|
Comparative Metabolic Study of Two Contrasting Chinese Cabbage Genotypes under Mild and Severe Drought Stress
Chinese cabbage (Brassica rapa L. ssp. pekinensis) is an important leafy vegetable crop cultivated worldwide. Drought is one of the most important limiting factors for the growth, production and quality of Chinese cabbage due to its weak drought tolerance. In order to deepen the understanding of drought stress response in Chinese cabbage, metabolomics studies were conducted in drought−tolerant (DT) and drought−susceptible (DS) genotypes of Chinese cabbage under water deficit−simulated mild and severe drought stress conditions. A total of 777 metabolites were detected, wherein 90 of them were proposed as the drought−responsive metabolites in Chinese cabbage, with abscisic acid (ABA), serine, choline alfoscerate, and sphingosine as potential representative drought stress biomarkers. We also found that drought−tolerant and drought−susceptible genotypes showed differential metabolic accumulation patterns with contrasting drought response mechanisms. Notably, constitutively high levels of ABA and glutathione were detected in drought−tolerant genotype in all tested and control conditions. In addition, proline, sucrose, γ−aminobutyric acid, and glutathione were also found to be highly correlated to drought tolerance. This study is the first metabolomic study on how Chinese cabbage responds to drought stress, and could provide insights on how to develop and cultivate new drought−resistant varieties.
Introduction
Drought is one of the major environmental factors affecting agricultural production and food security, especially in arid and semi−arid areas where water supply is a major challenge [1,2]. Thus, developing new crop varieties with low water consumption is critical for sustaining agriculture and the environment [1].
Chinese cabbage, a fresh leafy vegetable with a high leaf water content and a shallow root system, is widely cultivated and consumed around the world, especially in East Asia, where the shortage of fresh water is a major challenge for agriculture. Therefore, it is of great importance to increase the drought tolerance of Chinese cabbage for its stable production. Genetic and molecular breeding has become increasingly important for improving the drought tolerance of crops, including Chinese cabbage [3,4], and an understanding of the plant's drought response would provide a theoretical framework for breeding drought−resistant varieties.
Plants produce huge numbers of metabolites in order to sustain their growth and reproduction, as well as to adapt to biotic and abiotic stresses. As plants are sessile and cannot move, their metabolic responses are key survival tools for dealing with various environmental stresses. In recent years, technological advances have made omics−based approaches widely available for plant research, and metabolomics has become an efficient and important approach to gain panoramic views of how plants respond to stresses at the whole−metabolism level [5,6]. However, only RNA−seq studies of Chinese cabbage's responses to drought stress at the transcriptional level are available, while the metabolic responses at the whole−metabolome level have not been reported yet [7][8][9].
A large number of metabolomics studies have been conducted to reveal drought stress responses in different plant species, and their metabolisms have been significantly affected by drought stress [10][11][12][13]. In Chinese cabbage, previous studies have shown that drought significantly influences the accumulation of multiple diverse groups of metabolites including glucosinolates, polyphenols, flavonoids, total antioxidant enzyme activities, catalases, and peroxidases in Chinese cabbage [9,14]. Therefore, we propose that Chinese cabbage may also systemically respond to drought stress at the metabolic level.
In this study, we characterized metabolic responses in two Chinese cabbage genotypes with contrasting drought tolerance under mild and severe drought by widely targeted metabolome technology. This study aimed to (i) identify drought−responsive metabolites and potential drought stress biomarkers in Chinese cabbage; (ii) compare the metabolic responses of Chinese cabbage genotypes with contrasting drought tolerance under mild and severe drought; and (iii) explore potential metabolites associated with increased drought tolerance and propose a metabolic response framework in Chinese cabbage under drought stress.
Screening for Drought Tolerant and Susceptible Genotypes of Chinese Cabbage
The seed germination rate of 27 Chinese cabbage inbred lines was observed under 20% polyethylene glycol (PEG−6000), and two lines with extreme phenotypic variation were identified, with 14S837 having the highest germination rate (~80%) and 88S148 showing no seed germination (Figure S1). These two lines were selected as candidates representing drought−tolerant (DT) and drought−susceptible (DS) genotypes, respectively.
The phenotypes of the candidate lines were further studied and validated at the seedling stage under mild and severe drought conditions. Drought stress intensity was determined by monitoring soil water content (SWC); mild drought (50~55% SWC) and severe drought (30~35% SWC) were reached after 3 days and 5 days without watering in our controlled growth chamber, respectively (Figure 1a). We found that these two lines had no visible phenotypic alterations when exposed to mild drought, though the leaf water content of 88S148 decreased significantly compared with that of the control plants. Under severe drought conditions, no visible phenotypic changes were observed in 14S837, but its leaf water content began to decrease. In direct contrast, 88S148 showed severe wilting symptoms and a lower leaf water content (Figure 1b,c). Additionally, water loss in detached shoots was measured, and the data showed that the shoots of 14S837 lost water more slowly than those of 88S148 (Figure 1d). Based on the above seed and seedling phenotypes under drought stress, 14S837 and 88S148 were selected as the DT and DS genotypes of Chinese cabbage, respectively, for subsequent metabolomics studies.
Metabolome Profiling
To investigate the metabolic response to drought stress in Chinese cabbage, leaf samples of DS and DT were collected from day 3 and day 5 drought−treatment plants alongside the controls, and total plant metabolites were extracted and analyzed using a widely targeted metabolome analysis based on an ultra−performance liquid chromatography−tandem mass spectrometry (UPLC−MS/MS) platform. A total of 777 metabolites were detected in all 24 samples in our drought assay, and they could be further divided into 14 classes. The total ion current (TIC) of different quality control (QC) samples showed highly overlapping patterns in retention time and peak intensity, confirming that the data were stable and repeatable at the tested time points (Figure S2). Principal component analysis (PCA) was carried out to visualize the overall metabolic differences and relationships among samples. The PCA of all samples (including QC samples) showed little variation within each group, but large variation between groups (Figure 2b). The two major components of the PCA explained 47.4% of the total variance, and the first principal component (PC1) explained 29.72% of the total variation, thus separating the two contrasting groups of DS and DT unambiguously. Separation between the drought treatment and control groups indicated that drought treatment significantly affected the metabolism of the two lines, suggesting our methodology correctly reveals the metabolic basis of drought stress. A heatmap based on Pearson's correlation coefficient between all samples (including QC samples) was also constructed (Figure 2c), showing a highly significant positive correlation among the three tested biological replicates. On the whole, our experimental design and data are solid and well suited for downstream analysis.
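As a sketch of the kind of dimensionality reduction described above, assuming a samples-by-metabolites intensity matrix, the following uses simulated data and scikit-learn rather than the authors' pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated intensity matrix: 24 samples (2 genotypes x 4 conditions x 3 replicates)
# by 777 metabolites, with a genotype effect added to a subset of metabolites.
n_samples, n_metabolites = 24, 777
X = rng.normal(size=(n_samples, n_metabolites))
X[:12, :100] += 2.0   # first 12 samples = "DS", shifted in 100 metabolites

# Autoscale each metabolite, then project onto the first two principal components.
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("PC1 group means:", scores[:12, 0].mean(), "vs", scores[12:, 0].mean())
```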
Differential Metabolites in Multiple Comparison Groups
To identify differential metabolites (DMs), orthogonal partial least squares discriminant analysis (OPLS−DA) was performed in 12 comparison groups and variable importance in projection (VIP) values were obtained. DMs were determined by fold changes (FC) ≥ 1.5 or ≤ 0.67 and VIP ≥ 1, and a total of 597 DMs were identified in all tested comparison groups, providing the core DM data for subsequent analysis (Figure 3a, Table S2). In order to understand the dynamics of DMs in different comparison groups, K−means cluster analysis was performed based on the accumulation patterns of different metabolites, and six sub−classes were identified (Figure 3b, Table S3). A total of 137, 73, 92, 96, 66, and 133 metabolites were clustered into sub−classes 1 to 6, respectively. Notably, DMs in sub−classes 4 and 5 show higher accumulation under severe drought conditions, suggesting that these metabolites might be top metabolic candidate biomarkers in the drought response of Chinese cabbage.
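A minimal sketch of the selection rule and clustering step described above, using simulated values: the thresholds (FC ≥ 1.5 or ≤ 0.67, VIP ≥ 1) follow the text, while the OPLS−DA VIP values and accumulation patterns are stand-ins rather than recomputed statistics:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_metabolites = 777

# Simulated per-metabolite statistics for one comparison group.
fold_change = rng.lognormal(mean=0.0, sigma=0.6, size=n_metabolites)
vip = rng.gamma(shape=2.0, scale=0.6, size=n_metabolites)  # stand-in for OPLS-DA VIP

# Differential metabolites: FC >= 1.5 or <= 0.67, and VIP >= 1.
is_dm = ((fold_change >= 1.5) | (fold_change <= 0.67)) & (vip >= 1.0)
print("differential metabolites:", int(is_dm.sum()))

# K-means on z-scored accumulation patterns across the 8 sample groups
# (2 genotypes x 4 conditions), clustering the DMs into six sub-classes.
patterns = rng.normal(size=(int(is_dm.sum()), 8))
patterns = (patterns - patterns.mean(axis=1, keepdims=True)) / patterns.std(axis=1, keepdims=True)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(patterns)
print("metabolites per sub-class:", np.bincount(labels))
```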
Exploration of Key Drought−Responsive Metabolites in Chinese Cabbage
To explore the metabolic response of Chinese cabbage under drought stress, we first examined the variation of DMs in four comparison groups (DS−3d−CK vs. DS−3d, DS−5d−CK vs. DS−5d, DT−3d−CK vs. DT−3d, DT−5d−CK vs. DT−5d), and a total of 291 DMs were detected in the above four comparison groups (Table S2), including 41.67% (5/12) of the glucosinolates and 9.09% (2/22) of the other metabolites; these showed differential accumulation patterns in response to drought stress (Figure 2a). Notably, amino acids and derivatives, alkaloids, organic acids, and nucleotides and derivatives were significantly affected by drought stress. KEGG pathway enrichment analysis indicated that DMs responding to drought stress were significantly enriched in 13 pathways including purine metabolism, 2−oxocarboxylic acid metabolism, metabolic pathways, glyoxylate and dicarboxylate metabolism, biosynthesis of amino acids, aminoacyl−tRNA biosynthesis, lysine biosynthesis, carbon metabolism, ABC transporters, lysine degradation, the citrate cycle (TCA cycle), arginine biosynthesis, and tryptophan metabolism (Figure 4). Venn diagrams were constructed to show the number of common DMs in both lines under mild drought and severe drought stress conditions. Five common upregulated and five common downregulated DMs were found in DS and DT under mild drought, while 55 common DMs were upregulated and 14 common DMs were downregulated in both DS and DT under severe drought (Figure 5a,b). Four common DMs in both genotypes and drought conditions, abscisic acid (ABA), serine, choline alfoscerate (GPC), and sphingosine, were proposed as potential biomarkers of drought stress in Chinese cabbage. Furthermore, 28 common upregulated and eight common downregulated DMs were specifically altered in DS−3d vs. DS−5d and DT−3d vs. DT−5d but not in DS−3d−CK vs. DS−5d−CK and DT−3d−CK vs. DT−5d−CK (Figure 5c,d, Table S2). These common DMs with similar response patterns to drought stress in both genotypes were considered to be the drought−responsive metabolites. By removing the duplicated metabolites, a total of 90 metabolites, with 68 upregulated and 22 downregulated ones, were selected and proposed as the drought−responsive metabolites in Chinese cabbage (Figure 6, Table S4). Among these drought−responsive metabolites, 24 metabolites were classified as amino acids and derivatives. Thus, we further investigated the levels of free amino acids under drought stress. Serine levels were increased under both drought treatments and in both genotypes. Proline, leucine, isoleucine, methionine, and tyrosine showed a similar response pattern, with higher accumulation under severe drought treatment in both genotypes. Aspartic acid levels were higher in DT regardless of drought treatment but were specifically decreased in DT upon severe drought. Other amino acids, including threonine, asparagine, cysteine and glutamic acid, were found to be unaffected by drought stress. In conclusion, we identified a large number of metabolites involved in the drought responses of Chinese cabbage, among which amino acids and derivatives played more important roles in the drought responses.
Differential Drought Responses between DS and DT Genotypes
In order to study the differential response of DS and DT to drought stress, KEGG pathway enrichment analysis was performed for DMs in four comparison groups (Figure 7). Fifty−nine DMs (36 upregulated and 23 downregulated) were found in DS−3d−CK vs. DS−3d, which were enriched in glycerophospholipid metabolism, sphingolipid metabolism, and plant hormone signal transduction, whereas 98 DMs (39 upregulated and 59 downregulated) were found in DT−3d−CK vs. DT−3d, enriched in carbon metabolism, carbon fixation in photosynthetic organisms, glyoxylate and dicarboxylate metabolism, glycerolipid metabolism, starch and sucrose metabolism, biosynthesis of secondary metabolites, metabolic pathways, galactose metabolism, and the TCA cycle (Figures 3a and 7a,b). Under mild drought stress, there were more DMs in DT, especially more downregulated DMs, indicating that mild drought stress had a stronger effect on DT.
Under the 5−day drought treatment, there were 151 DMs (111 upregulated and 40 downregulated) in DS, which were mainly involved in aminoacyl−tRNA biosynthesis, glucosinolate biosynthesis, glycerophospholipid metabolism, biosynthesis of amino acids, tropane, piperidine and pyridine alkaloid biosynthesis, ABC transporters, metabolic pathways, lysine degradation, and 2−oxocarboxylic acid metabolism. In comparison, 165 DMs (111 upregulated and 54 downregulated) were identified in DT, enriched in purine metabolism, the TCA cycle, metabolic pathways, carbon metabolism, pyruvate metabolism, carbon fixation in photosynthetic organisms, 2−oxocarboxylic acid metabolism, and glyoxylate and dicarboxylate metabolism (Figures 3a and 7c,d). Notably, the differential abundance (DA) scores of DT were mostly negative in the enriched pathways under both mild and severe drought conditions (Figure 7b,d), while the scores in DS were mostly positive in the enriched pathways (Figure 7a,c).
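The differential abundance (DA) score referred to above is commonly computed as the number of upregulated DMs minus the number of downregulated DMs, divided by the number of measured metabolites annotated to the pathway; a minimal sketch with hypothetical pathway annotations:

```python
# Hypothetical pathway annotations and DM calls, illustrating the commonly used
# DA-score definition: (up - down) / metabolites measured in the pathway.
pathways = {
    "TCA cycle":         ["citrate", "isocitrate", "a-ketoglutarate", "succinate", "malate"],
    "Purine metabolism": ["adenine", "guanine", "xanthine", "hypoxanthine"],
}
upregulated   = {"adenine", "xanthine", "hypoxanthine"}
downregulated = {"citrate", "isocitrate", "succinate", "malate"}

def da_score(members):
    up = sum(m in upregulated for m in members)
    down = sum(m in downregulated for m in members)
    return (up - down) / len(members)

for name, members in pathways.items():
    print(f"{name}: DA score {da_score(members):+.2f}")
```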
To further explore the different metabolic response patterns of the two selected contrasting Chinese cabbage genotypes, a comparative metabolic analysis between DS and DT was performed (Figure 8, Table S2). Fifty−nine metabolites accumulated more in DT, especially some of the key drought−responsive metabolites, such as ABA, GPC, and reduced glutathione, which may explain the increased drought tolerance observed in DT (Figure 8a). Furthermore, 84 common DMs showed higher accumulation in DS than in DT under both drought and control conditions, including 25 phenolic acids and 25 flavonoids, indicating that DS contains more phenolic acids and flavonoids (Figure 8b).
Discussion
As sessile organisms, plants need to respond to the great challenges of environmental stresses, among which drought is one of the most severe, especially in the large arid and semi−arid areas located primarily in the developing countries and regions of the world [2]. Understanding plants' drought responses can not only improve our understanding of how different plant species mitigate drought stress, but also provide insights for breeding crops with high water use efficiency and for crop security within economically vulnerable populations. Metabolites are key tools for plants to facilitate water use, especially in leafy vegetables like Chinese cabbage, which require large quantities of water but have a limited ability to extract water from soil. In the current study, we analyzed the metabolome of two Chinese cabbage genotypes with different drought tolerance abilities and found that amino acids and derivatives, alkaloids, organic acids, and nucleotides and derivatives in Chinese cabbage were significantly affected by drought stress. A total of 777 metabolites were detected by widely targeted metabolomics, and 90 metabolites were selected and proposed as the drought−responsive metabolites in Chinese cabbage, with ABA, serine, choline alfoscerate, and sphingosine as potential representative drought stress biomarkers. In addition, we also found that DS and DT have different metabolite response patterns and contrasting coping strategies in response to drought stress, which might explain their corresponding phenotypic differences under drought stress. Notably, constitutively high levels of ABA and glutathione were detected in the drought−tolerant genotype in all tested and control conditions, which might help explain the increased drought tolerance of DT. A simplified drought response model in Chinese cabbage was proposed based on our study.
The plant hormone ABA plays a central role in response to abiotic stress by modulating stomata, plant growth, and metabolic pathways [15,16]. In our study, ABA was shown to be the most important metabolite in Chinese cabbage for responding to drought stress. We found that the level of ABA increased rapidly under drought stress in both genotypes. More importantly, the level of ABA in DT was higher than the one in DS under both drought and control conditions. Thus, the differential levels of ABA may explain the contrasting phenotypic differences observed, though the mechanism of ABA accumulation in DT requires further investigation.
Some important intermediates in the TCA cycle, including citric acid, isocitric acid, α−ketoglutaric acid, succinic acid, and malic acid, were shown to have a lower accumulation in DT under drought stress, indicating that the TCA cycle was inhibited by water deficiency. This is consistent with downregulation of enriched metabolic pathways in DT and may be related to the drought acclimation ability of DT. Furthermore, it has been shown that citric acid can confer abiotic stress tolerance in plants, and exogenous citric acid application can also enhance drought tolerance in a variety of plant species [17][18][19][20].
Therefore, increasing endogenous citric acid level by exogenous spray may be a potentially useful approach for improving the drought tolerance of Chinese cabbage.
Proline has been reported in multiple studies as another important drought−responsive metabolite [13,[21][22][23]. Beyond acting as an osmolyte for osmotic adjustment, proline also stabilizes sub−cellular structures and contributes to reactive oxygen species (ROS) detoxification [24]. Although we did not observe changes in proline levels under mild drought stress, the concentration increased under severe drought conditions in both genotypes. Serine, which is required for cysteine and methionine biosynthesis, was proposed as one of the potential biomarkers of drought stress in Chinese cabbage. Methionine, the precursor of aliphatic glucosinolates, showed increased levels under severe drought conditions in both DS and DT, which may explain why the aliphatic glucosinolates increased under drought stress. Aspartic acid can be formed from oxaloacetate, a TCA cycle intermediate, in a reaction catalyzed by aspartate aminotransferase [25]. Thus, the decreased levels of aspartic acid in DT may result from the downregulation of other TCA cycle intermediates.
Glutathione (GSH) is a non−protein tripeptide involved in the detoxification of excess ROS, maintaining cellular redox homeostasis and regulating protein function in plants under abiotic and biotic stresses [26][27][28]. Reduced GSH is oxidized to glutathione disulfide (GSSG) during ROS scavenging, and GSSG is recycled to GSH by glutathione reductase [24]. Previous studies have shown that the levels of GSH increase in response to drought stress [29]. Furthermore, exogenously applied and endogenously increased GSH can improve drought stress tolerance in many plant species [27,[30][31][32]. While we found no significant changes in GSH content under mild drought stress, the plants accumulated more GSH under severe drought stress. Moreover, the level of GSH was higher in DT independently of the stress treatment, which may be related to its increased drought tolerance. Furthermore, S−(methyl) glutathione, the thioether of glutathione [33], was induced by drought stress, and its role in the drought response of Chinese cabbage warrants further investigation. Soluble sugars mainly include sucrose and its hydrolysis products glucose and fructose [34]. It was previously found that drought increases the amount of soluble sugar in Chinese cabbage [35]. In this study, we discovered that drought stress increased sucrose levels in Chinese cabbage but did not significantly affect the levels of glucose and fructose, which suggests that sucrose plays important roles in osmotic adaptation under drought stress.
Phenolic acids, a major class of polyphenols, are constitutively present in vegetables with strong antioxidant activity [36,37]. Drought stress enhanced the accumulation of phenolic acids [29,38]. In this study, we identified a total of 147 phenolic acids, and nine phenolic acids were considered to be involved in the response to drought stress of Chinese cabbage, including 4−methylphenol, 2−hydroxy−3−phenylpropanoic acid, 3−(4−hydroxyphenyl)−propionic acid, p−coumaric acid methyl ester, methyl sinapate, sinapoyl−4−O−glucoside, curculigine, sinapoylsinapoyltartaric acid, and 1−O−caffeoyl−(6−O−glucosyl)−β−D−glucose. Interestingly, we found that nearly half of the 39 upregulated metabolites in the DT−3d−CK vs. DT−3d comparison group were phenolic acids, which may improve the drought tolerance of DT by increasing antioxidant activity in the early stages of drought stress (Table S2).
Pipecolic acid is a lysine−derived non−protein amino acid that regulates plant systemic acquired resistance and basal immunity to bacterial pathogen infection [39]. A recent study has shown that pipecolic acid plays a negative regulatory role in the drought tolerance of tomato plants [40]. In this study, the level of DL−pipecolic acid in DT decreased under mild drought stress, which may be related to the drought resistance response in DT.
Quinic acid is a cyclic carboxylic acid involved in the shikimate pathway [41]. It was reported that quinic acid is the main contributor to the osmotic potential of Quercus suber leaves [42], and its concentration increased under drought stress in some species [23,43]. However, the level of quinic acid decreased in both genotypes under mild drought and further decreased with the aggravation of drought in DT. This result was in accordance with a report on broccoli [22], Salvia miltiorrhiza Bunge (Danshen) [44], peach [45] and maize [46]. Especially in broccoli, low levels of quinic acid were considered to be an important signature of drought tolerance [22]. Consequently, the effect of quinic acid in response to drought needs to be further studied.
γ−aminobutyric acid (GABA), a non−protein amino acid, functions as an intrinsic signaling molecule and accumulates quickly in response to a variety of abiotic stresses in plants [47][48][49]. Research has suggested that GABA can reduce stomatal opening and transpiration water loss by negatively regulating the activity of a stomatal guard cell tonoplast−localized anion transporter ALMT9, thus improving water use efficiency and drought tolerance [50]. Exogenous GABA application can increase drought tolerance in many plant species, e.g., white clover [51], creeping bentgrass [52], and snap bean [53]. However, little is known about the roles of GABA in Chinese cabbage. To our knowledge, this study is the first to report that drought stress can induce GABA accumulation in Chinese cabbage. In parallel with previous research, the roles and mechanisms of GABA in the drought response of Chinese cabbage must be further studied.
Based on our findings, we propose a simplified model on how the drought−tolerant genotype of Chinese cabbage responds to mild and severe stresses at the metabolic level (Figure 9).
Plant Materials and PEG Treatment
The seeds of 27 Chinese cabbage inbred lines, including 88S148 and 14S837, were provided by the Chinese cabbage research group, College of Horticulture, Northwest A&F University, Yangling, China.
The drought assay was conducted by screening seed germination under 20% PEG6000−induced osmotic stress. In brief, 30 plump, disease−free seeds of uniform size were selected and disinfected with 75% ethanol for 1 min, followed by 10% sodium hypochlorite solution (NaClO) for 15 min, and then rinsed repeatedly with sterilized water. Next, the disinfected seeds were spread in a 9 cm diameter petri dish containing three filter papers as the germination bed, soaked with 10 mL of 20% PEG6000 (10 mL of sterile water as the control), and germinated in a 25 °C light incubator (Dongnan Instrument Co., Ltd., Ningbo, China). An additional 1 mL of PEG solution (sterile water for the control) was added to each dish per day during germination. The germination rate was counted on the seventh day, and three independent biological replicates were performed. All reagents were purchased from Sangon Biotech Co., Ltd. (Shanghai, China).
Growth Conditions and Drought Treatment
The seeds were sown in 7 cm pots containing a soil matrix of equal weight. Seedlings were grown in the light incubator (Dongnan Instrument Co., Ltd., Ningbo, China) at 25 °C with 14 h light (light intensity of 150 µmol m−2 s−1)/10 h dark cycles and watered normally.
Drought treatment was simulated by withholding water. Briefly, the seedlings were grown under the conditions described above. The 3−week−old seedlings were watered until the soil was saturated, and excess water was removed from the tray. Subsequently, the seedlings undergoing drought treatment received no further watering.
Water Loss, Soil Water Content and Leaf Water Content Measurement
Water loss was measured according to a previous method [54]. Briefly, the detached shoots of 3−week−old seedlings were immediately weighed and recorded eight times at one−hour intervals. Water loss was expressed as the percentage of initial fresh weight. Three replicates were performed and each replicate contained six individual plants.
Soil water content was determined by the oven drying method. In brief, the fresh weight of the soil matrix in the pot was weighed after removing the roots, then the soil was oven dried at 105 °C for 12 h and weighed again. The soil water content was expressed by the percentage of weight lost out of the initial fresh weight and recorded at 24 h intervals. Three replicates were performed and each replicate contained three individual samples.
For the measurement of leaf water content, the second leaves were collected and immediately weighed to obtain the fresh weight (FW), then the leaves were oven dried at 80 °C for 24 h and dry weights (DW) were measured. Leaf water content (%) was calculated as [(FW − DW)/FW] × 100. Three replicates were performed and each replicate contained three individual plants.
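The three water-related measurements above reduce to simple percentage calculations. The short Python sketch below illustrates them; the function names and the example weights are illustrative placeholders, not data from this study.

def water_loss_pct(initial_fresh_weight: float, current_weight: float) -> float:
    """Water loss of detached shoots as % of the initial fresh weight."""
    return (initial_fresh_weight - current_weight) / initial_fresh_weight * 100

def soil_water_content_pct(fresh_soil_weight: float, dry_soil_weight: float) -> float:
    """Soil water content as % of weight lost relative to the fresh weight."""
    return (fresh_soil_weight - dry_soil_weight) / fresh_soil_weight * 100

def leaf_water_content_pct(fw: float, dw: float) -> float:
    """Leaf water content (%) = [(FW - DW) / FW] x 100."""
    return (fw - dw) / fw * 100

# Examples with hypothetical weights (grams).
print(water_loss_pct(5.0, 4.2))            # 16.0 % of fresh weight lost
print(soil_water_content_pct(80.0, 52.0))  # 35.0 % soil water content
print(leaf_water_content_pct(1.50, 0.12))  # 92.0 % leaf water content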
Sample Collection
The seedlings of drought−tolerant and susceptible genotypes were grown in a light incubator with the conditions described above. The second intact leaves counted from outside to inside were collected from five individual plants when the soil water content reached 50~55% (drought for 3 d) and 30~35% (drought for 5 d), respectively. The plants with normal irrigation were also sampled as controls (CK) at the same time. Therefore, a total of four group samples were collected, designated as DS−3d/DS−3d−CK, DS−5d/DS−5d−CK, DT−3d/DT−3d−CK, DT−5d/DT−5d−CK, and three independent biological replicates were performed. The collected leaves were snap−frozen in liquid nitrogen immediately and stored at −80 °C until extraction.
Metabolome Profiling
Widely targeted metabolomic profiling of the samples was carried out by Wuhan Metware Biotechnology Co., Ltd. (Wuhan, China). In brief, the sampled leaves were freeze−dried in a vacuum freeze−dryer (Scientz−100F, Ningbo, China) and crushed using a mixer mill (MM 400, Retsch, Germany) at 30 Hz for 90 s. Lyophilized powder (100 mg) was weighed and dissolved in 1.2 mL of 70% methanol solution (vortexed for 30 s every 30 min, repeated six times), followed by overnight extraction at 4 °C. Finally, the extract was centrifuged at 12,000 rpm for 10 min, and the supernatant was collected and filtered through a filter membrane (0.22 µm pore size, ANPEL, Shanghai, China) before UPLC−MS/MS analysis.
The sample extracts were analyzed using a UPLC−ESI−MS/MS system (UPLC, SHIMADZU Nexera X2, Kyoto, Japan; MS, Applied Biosystems 4500 Q TRAP, Waltham, MA, USA). Quality control (QC) samples, prepared by mixing all sample extracts, were inserted after every 10 samples to monitor the repeatability of the analysis. UPLC separation was performed with the same protocol as Yuan et al. [55]. Mass spectrometric analysis was performed on a triple quadrupole-linear ion trap mass spectrometer equipped with an ESI Turbo Ion−Spray interface (Applied Biosystems 4500 Q TRAP UPLC/MS/MS System), operated according to a previous study [56]. The mass spectrometry data were processed with Analyst 1.6.3 software (AB Sciex). Metabolites were qualitatively identified based on the MetWare database (Wuhan Metware Biotechnology Co., Ltd., Wuhan, China) and other public databases. Quantification of metabolites was carried out using the multiple reaction monitoring (MRM) method [56].
Data Analysis
Identified metabolites were annotated using the KEGG compound database (http://www.kegg.jp/kegg/compound/, accessed on 15 November 2021); annotated metabolites were then mapped to the KEGG pathway database (http://www.kegg.jp/kegg/pathway.html, accessed on 15 November 2021) [57]. KEGG pathway enrichment analysis was performed using the Metware Cloud, a free online platform for data analysis (https://cloud.metware.cn, accessed on 10 January 2022), and significance was determined by a hypergeometric test. The differential abundance (DA) score is a pathway−based measure of metabolic change that reflects the overall change of all differential metabolites annotated to a pathway. The DA score is calculated as (number of upregulated DMs annotated to a pathway − number of downregulated DMs annotated to the pathway)/(number of all metabolites annotated to the pathway).
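As a worked illustration of the DA score defined above, the following Python sketch computes the score for each pathway from a table of differential metabolites (DMs). The column names and the pathway_sizes mapping are hypothetical stand-ins for the Metware Cloud output, not part of the original pipeline.

import pandas as pd

def da_scores(dms: pd.DataFrame, pathway_sizes: dict) -> pd.Series:
    """Differential abundance (DA) score per pathway.

    dms: one row per differential metabolite with columns
         'pathway' (KEGG pathway name) and 'regulation' ('up' or 'down').
    pathway_sizes: total number of annotated metabolites per pathway.
    DA = (n_up - n_down) / n_annotated, bounded between -1 and 1.
    """
    scores = {}
    for pathway, group in dms.groupby("pathway"):
        n_up = (group["regulation"] == "up").sum()
        n_down = (group["regulation"] == "down").sum()
        scores[pathway] = (n_up - n_down) / pathway_sizes[pathway]
    return pd.Series(scores, name="DA_score")

# Made-up example: 3 up- and 1 down-regulated DMs out of 20 metabolites
# annotated to the TCA cycle gives a DA score of 0.1.
example = pd.DataFrame({
    "pathway": ["Citrate cycle (TCA cycle)"] * 4,
    "regulation": ["up", "up", "up", "down"],
})
print(da_scores(example, {"Citrate cycle (TCA cycle)": 20}))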
Multivariate statistical analysis can simplify the high−dimensional and complex data, and retain the original information to the greatest extent. PCA, a classic unsupervised pattern recognition multivariate statistical analysis method, was performed by the prcomp function within R (www.r-project.org, accessed on 15 November 2021). The data was unit variance scaled before PCA. OPLS−DA, a supervised pattern recognition method, was performed by R software using R package MetaboAnalystR [58]. The data was log transformed (log2) and mean centered before OPLS−DA. In order to avoid overfitting, a permutation test (200 permutations) was performed. VIP values were extracted from the OPLS−DA result to identify DMs.
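To make the preprocessing explicit, the sketch below reproduces, in Python, the unit variance scaling applied before PCA and the log2 transformation plus mean centering applied before OPLS−DA. It is a generic illustration rather than the R/MetaboAnalystR code used in the study, and the toy matrix dimensions are arbitrary.

import numpy as np

def unit_variance_scale(x: np.ndarray) -> np.ndarray:
    """Center each metabolite (column) and scale it to unit variance,
    mirroring prcomp(..., center = TRUE, scale. = TRUE)."""
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

def pca(x: np.ndarray, n_components: int = 2):
    """PCA via SVD on the scaled matrix; returns sample scores and the
    fraction of total variance explained by each retained component."""
    xs = unit_variance_scale(x)
    u, s, vt = np.linalg.svd(xs, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    explained = (s ** 2) / (s ** 2).sum()
    return scores, explained[:n_components]

def log2_mean_center(x: np.ndarray) -> np.ndarray:
    """Log2 transform and mean-center, as applied before OPLS-DA."""
    logged = np.log2(x)
    return logged - logged.mean(axis=0)

# Toy matrix: 12 samples (rows) x 777 metabolites (columns).
rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=10, sigma=1, size=(12, 777))
scores, var_explained = pca(intensities)
print(scores.shape, var_explained)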
Pearson correlation coefficients (PCC) and K−means cluster analysis were also performed with the Metware Cloud. For the K−Means analysis, the relative contents of DMs in all groups were standardized using the Z−score algorithm. A heatmap of metabolites was displayed by TBtools [59].
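The Z−score standardization used before the K−means clustering can be sketched as follows, with scikit-learn's KMeans standing in for the Metware Cloud implementation; the number of clusters and the toy data are arbitrary choices for illustration.

import numpy as np
from sklearn.cluster import KMeans

def zscore(x: np.ndarray) -> np.ndarray:
    """Standardize the relative content of each DM (row) across groups."""
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# Toy data: 90 drought-responsive metabolites x 8 sample groups.
rng = np.random.default_rng(1)
relative_content = rng.random((90, 8))

standardized = zscore(relative_content)
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(standardized)
print(np.bincount(clusters))  # number of metabolites per trend cluster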
|
v3-fos-license
|
2021-09-28T05:24:28.190Z
|
2021-09-01T00:00:00.000
|
237938944
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/22/18/9980/pdf",
"pdf_hash": "fcb41aeb2e14585427fd71386a98b9e268098876",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46153",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "fcb41aeb2e14585427fd71386a98b9e268098876",
"year": 2021
}
|
pes2o/s2orc
|
Star-PAP RNA Binding Landscape Reveals Novel Role of Star-PAP in mRNA Metabolism That Requires RBM10-RNA Association
Star-PAP is a non-canonical poly(A) polymerase that selects mRNA targets for polyadenylation. Yet, genome-wide direct Star-PAP targets or the mechanism of specific mRNA recognition is still vague. Here, we employ HITS-CLIP to map the cellular Star-PAP binding landscape and the mechanism of global Star-PAP mRNA association. We show a transcriptome-wide association of Star-PAP that is diminished on Star-PAP depletion. Consistent with its role in the 3′-UTR processing, we observed a high association of Star-PAP at the 3′-UTR region. Strikingly, there is an enrichment of Star-PAP at the coding region exons (CDS) in 42% of target mRNAs. We demonstrate that Star-PAP binding de-stabilises these mRNAs indicating a new role of Star-PAP in mRNA metabolism. Comparison with earlier microarray data reveals that while UTR-associated transcripts are down-regulated, CDS-associated mRNAs are largely up-regulated on Star-PAP depletion. Strikingly, the knockdown of a Star-PAP coregulator RBM10 resulted in a global loss of Star-PAP association on target mRNAs. Consistently, RBM10 depletion compromises 3′-end processing of a set of Star-PAP target mRNAs, while regulating stability/turnover of a different set of mRNAs. Our results establish a global profile of Star-PAP mRNA association and a novel role of Star-PAP in the mRNA metabolism that requires RBM10-mRNA association in the cell.
Introduction
Pre-mRNA 3 -end processing is an essential step in eukaryotic gene expression that involves two coupled steps-endonucleolytic cleavage followed by the addition of a poly(A) tail (polyadenylation) [1][2][3][4]. 3 -end processing is carried out by a cleavage and polyadenylation complex (CPA) that is associated with >85 protein components [3,5,6]. The CPA complex is comprised of subunits of cleavage and polyadenylation specificity factor (CPSF), cleavage stimulatory factor (CstF), cleavage factors Im (CFIm) and IIm (CFIIm), scaffolding protein symplekin, poly(A) polymerase (PAP), and poly(A) binding protein (PABPN1) as core components [3,4]. Cleavage and polyadenylation at the pre-mRNA 3 -end involve recognition of a poly(A) signal (PA-signal) by CPSF-30 and WDR33 subunits of CPSF complex [7][8][9][10]. CstF (CSTF2) interacts with the GU/U-rich downstream sequence (DSE) and cooperates with CPSF to assemble a stable CPA complex [11,12]. CPSF3 then cleaves pre-mRNA at the PA-site followed by PA-tail addition by a PAP on the upstream fragment, whereas the downstream fragment is rapidly degraded [13,14]. PABPN1 then binds and stabilises the PA-tail and controls PA-tail length [15][16][17]. Canonical PAPα/γ is the primary HITS-CLIP sequencing after pull-down of Star-PAP in the presence and absence of RBM10 depletion with IgG as a reference control. HITS-CLIP of Star-PAP was further confirmed after Star-PAP depletion in a similar HITS-CLIP experiment.
HITS-CLIP sequencing raw data of each Star-PAP sample had approximately 10 million reads. After filtering for quality reads, removing adapter sequences and identical reads from PCR amplification, we obtained~8 million sequencing reads that were used for aligning to the reference genome. Further, parsing the alignment to obtain uniquely mapped reads, and a minimal mapping size of 18 nucleotides, we generated about 4.2 million distinct sequencing reads that uniquely mapped to the human reference genome (hg19). The association of identified reads was observed in all chromosomes with a varying number of mapped reads (Figure 1a). The highest mapped reads were observed on chromosomes 1, 8 and 19 and the lowest reads were observed on Chromosome Y (Supplementary Figures S1a and S2a). After peak calling and subtraction for reference IgG HITS-CLIP, we identified 420,000 read clusters corresponding to~14,000 distinct transcripts in Star-PAP HITS-CLIP. To confirm the specificity of Star-PAP binding, we employed siRNA-mediated depletion of Star-PAP and a similar HITS-CLIP experiment was carried out. Interestingly, depletion of Star-PAP resulted in the loss of 65% of the mapped regions on the reference genome ( Figure 1a, Supplementary Figure S2a). The loss of read clusters was observed in almost all chromosomes with varying degrees (Supplementary Figure S2a). Reduced maps on Star-PAP knockdown in some of the chromosomes (chromosome 2, 4, 7, 17, 13, 19, and 20) and percent reductions in respective chromosomes are shown in Figure 1a, Supplementary Figures S1a and S2a. To further assess the difference in the Star-PAP associated clusters between control and Star-PAP depletion on specific mRNAs, we enlarged a region of 17-kb at CHGB mRNA and a 7-kb region around CST4 mRNA from chromosome 20 (Figure 1a). We also enlarged a 200-bp region around the peak cluster that shows Star-PAP associated nucleotide sequence on the mRNA (Supplementary Figure S2c). Together, these results reveal different binding regions of Star-PAP on different mRNAs that were lost on siStar-PAP depletion.
Mapping of the genetic regions of Star-PAP association showed Star-PAP binding sites were primarily associated with protein-coding RNAs (>70%), with less than 15% in non-coding RNAs (miRNA, snRNA, lncRNA and snoRNA) and ~10% in intronic RNAs (Figure 1b). For the analysis of Star-PAP specific target protein-coding mRNAs, we considered transcripts that had high read detection (>10-read tags per cluster) and those absent from the HITS-CLIP experiment after Star-PAP depletion. With this stringent condition, we obtained 4200 specific mRNAs directly associated with Star-PAP (Supplementary Table S1). Among the specific protein-coding mRNAs, significant Star-PAP association was detected at the CDS (exon) regions, the 3′-UTR and terminal exons, and the 5′-UTR regions (Figure 1c). Overall, Star-PAP detection was high at the 3′-UTR regions (46%), consistent with its primary role in 3′-UTR processing. Interestingly, the detection in the CDS region was equally high (42%) (Figure 1c), indicating a distinct role of Star-PAP in mRNA metabolism. Moreover, in many of the transcripts, Star-PAP was detected at both the CDS and 3′-UTR regions. Three mRNAs with Star-PAP association at the 3′-UTR and CDS regions are shown in Figure 1a and Supplementary Figure S2c. Yet, how Star-PAP binding at the CDS region alters gene expression is not understood (detailed in the following sections). Consistent with earlier studies of the Star-PAP target mRNAs, our in silico analysis of the nucleotide composition of these Star-PAP specific reads confirms a biased GC content over AU in the Star-PAP bound regions in both the 3′-UTR and CDS regions (Supplementary Figure S1b). Analysis of consensus sequence motifs of 12-mers using MEME-ChIP software confirmed an -AUA-containing consensus motif at the target mRNAs (Supplementary Figure S2b). An enlarged region of the Star-PAP read cluster at the 3′-UTR of a target mRNA also shows a similar motif with an -AUA-sequence (Supplementary Figure S2c). This is in line with our earlier in vitro footprinting data and in silico target analysis, reinforcing the earlier role of Star-PAP in the 3′-end processing of target mRNAs [20,25].
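The selection rule described above (more than 10 read tags per cluster in the Star-PAP HITS-CLIP and no detection after Star-PAP depletion) can be expressed as a small filtering step. The Python sketch below illustrates it on an invented table; the column names and numbers are hypothetical and do not come from the actual HITS-CLIIP output.

import pandas as pd

def select_specific_targets(clusters: pd.DataFrame, min_tags: int = 10) -> set:
    """Transcripts with >min_tags read tags per cluster in the control
    Star-PAP HITS-CLIP and no detectable cluster after Star-PAP depletion."""
    control_hits = set(
        clusters.loc[
            (clusters["condition"] == "control") & (clusters["read_tags"] > min_tags),
            "transcript",
        ]
    )
    knockdown_hits = set(
        clusters.loc[
            (clusters["condition"] == "siStarPAP") & (clusters["read_tags"] > 0),
            "transcript",
        ]
    )
    return control_hits - knockdown_hits

# Invented toy table: CHGB passes both criteria; CST4 is still detected
# after Star-PAP knockdown and is therefore excluded.
clusters = pd.DataFrame({
    "transcript": ["CHGB", "CHGB", "CST4", "CST4"],
    "condition": ["control", "siStarPAP", "control", "siStarPAP"],
    "read_tags": [35, 0, 28, 12],
})
print(select_specific_targets(clusters))  # {'CHGB'}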
Star-PAP Associated mRNA Targets Show Wide Roles of Star-PAP in Human Diseases and Signaling Pathways
To validate our HITS-CLIP experiments, we performed qualitative RNA immunoprecipitation (RIP) and quantitative RIP (qRIP) analysis of selected Star-PAP target mRNAs. We observed the association of Star-PAP with mRNAs including the earlier established targets (AGTR1, COL5A1, VEGF, PTEN, PNISR, IFRD1) by qualitative RIP analysis (Figure 1d). Star-PAP knockdown resulted in the loss of association. Control RNA Pol II was associated with all mRNAs tested (Figure 1d). Similarly, in the qRIP analysis, we observed the association of Star-PAP with mRNAs (KCNMA1, COL5A1, RRAS, VEGF, BIK, AGTR1, PTEN, ZEB1, NQO1, FEZ1, RAB26) that was lost on Star-PAP knockdown (Figure 1f). In addition, expression of the Star-PAP RNA binding mutant (S6A Star-PAP) [50], but not of wild type Star-PAP, also resulted in the loss of binding to target mRNAs (Figure 1f). Western analysis of siRNA Star-PAP depletion and rescue with wild type and S6A Star-PAP is shown in Figure 1e. To gain further insight into the cellular functions of Star-PAP target mRNAs, we analysed Star-PAP-bound mRNAs from HITS-CLIP sequencing for functional pathways in different human diseases and cellular signals (Supplementary Figure S1c,d). We observed enrichment of mRNAs involved in cancer, heart disease, metabolic diseases, immunity and infection among the Star-PAP-bound targets (Supplementary Figure S1c). Among the signalling pathways, the RTK-MAPK, PI3K-Akt, GPCR, interleukin and Wnt signalling pathways were enriched among the target mRNAs (Supplementary Figure S1d), indicating a wide function of Star-PAP in the cell. Star-PAP RNA binding mutation. There was no effect of Star-PAP knockdown on the control non-target GCLC mRNA (Figure 2a). To gain further mechanistic insight into the role of Star-PAP in RNA metabolism, we compared the mRNAs detected in our Star-PAP HITS-CLIP with those from an earlier microarray analysis that showed altered expression on Star-PAP depletion (~1500 genes up-regulated and ~2400 genes down-regulated) [49]. Around 55% of the mRNAs whose expression was significantly altered on Star-PAP knockdown in the microarray analysis were detected in our Star-PAP HITS-CLIP (Figure 2b), indicating that the expression of this set of mRNAs is directly regulated by Star-PAP-RNA association. Interestingly, there was a higher occurrence of down-regulated mRNAs than of up-regulated mRNAs in the Star-PAP HITS-CLIP. Around 60% of the genes down-regulated on Star-PAP depletion were detected in the Star-PAP HITS-CLIP (Figure 2c). Consistently, Star-PAP was primarily detected at the 3′-UTR and the terminal exon among these mRNAs (Figure 2d). Moreover, among the mRNAs from the HITS-CLIP data where Star-PAP was detected at the 3′-UTR region, the majority were down-regulated on Star-PAP depletion in the microarray data (Supplementary Figure S2e). The association of Star-PAP with a select mRNA at the 3′-UTR is shown in Supplementary Figure S2c. Together these results reveal that Star-PAP association at the 3′-UTR region regulates the 3′-end processing of target mRNAs.
Further, qRIP analysis of 6 select mRNAs that were down-regulated on Star-PAP depletion (COL5A1, KCNMA1, WIF1, NQO1, FEZ1, RRAS2) demonstrated a biased association of Star-PAP at the 3 -UTR compared to the CDS regions ( Figure 2e). Consistently, qRT-PCR analysis of selected mRNAs among those of UTR associated demonstrated a loss of expression (RAB26, ASCC3, CAMK2B, NQO1, IGF2, HMOX1, ALDH2, PTBP2, RGS4, STC1, STY1, RTN1) (Supplementary Figure S2d). Western analysis confirms down-regulation of corresponding protein expression of target mRNAs (HMOX1, NQO1 and CDH1) on Star-PAP depletion ( Figure 2g) consistent with the loss of Star-PAP association on the depletion. Further, 3 -RACE assay confirms the role of Star-PAP in the cleavage and polyadenylation of these mRNAs (IGF2, COL5A1, BIK, KCNMA1, HMOX1, NQO1) ( Figure 2f). There was a loss of 3 -RACE product on Star-PAP depletion as reported earlier (Figure 2f). These results were further corroborated with cleavage assay where we observed increased accumulation of uncleaved pre-mRNAs (KCNMA1, NQO1, COL5A1, WIF1, FEZ1) while the expression levels were reduced on Star-PAP depletion ( Figure 2h). Together, HITS-CLIP data confirm the global role of Star-PAP in the 3 -end processing of target mRNAs by the association at the 3 -UTR region. Interestingly, analysis of the polyadenylation site usage (PA-site choice) of these mRNAs revealed a higher distal PA-sites usage (~40%) consistent with earlier genome-wide Star-PAP APA analysis (Supplementary Figure S2f) [18]. We also observed proximal PA-site selection in around 30% of mRNAs whereas~20% of mRNAs have single PA-sites (Supplementary Figure S2f). Functional analysis of these UTR-associated mRNAs shows a higher prevalence in cellular functions including cell cycle, apoptosis, myocyte hypertrophy, cell invasion, metastasis and metabolic pathways (Supplementary Figure S2g).
Star-PAP mRNA Binding Regulates Stability and Turnover Rate of Target mRNAs
Among the genes up-regulated on Star-PAP depletion in our microarray, only around 50% were detected in the Star-PAP HITS-CLIP (Figure 3b). These mRNAs represent the set of mRNAs whose expression is negatively regulated by Star-PAP binding. The other set of up-regulated mRNAs in the microarray (not detected in our Star-PAP HITS-CLIP) is likely controlled indirectly. Interestingly, among the Star-PAP targets detected in HITS-CLIP that are up-regulated on Star-PAP depletion (708 mRNAs), Star-PAP was mapped primarily at the CDS exonic regions (~70%), while a minority (<20%) was mapped in the 3′-UTR region (Figure 3b), suggesting a novel function of Star-PAP independent of 3′-end processing. Consistently, there was an overall higher coverage of the CDS region compared to the 3′-UTR region among these mRNAs (Supplementary Figure S3a). Further, qRIP analysis of select mRNAs (PNISR, IFRD1, LHX9, TP73, RRAS2) demonstrated a primary association of Star-PAP at the CDS region over the 3′-UTR region on these mRNAs (Figure 3c). Consistently, 3′-RACE and cleavage assays showed no effect of Star-PAP knockdown on the cleavage and polyadenylation of this set of mRNAs (Figure 3d,e). To understand the mechanism of how Star-PAP binding negatively regulates the expression of this set of mRNAs, we carried out qRT-PCR analysis of select mRNAs (LHX9, TP73, RRAS2, ZEB1, CYB5A1, PNISR, GAD1, POLR3, SSX2, RGS4, AGTR2, BPNT1, CEP57, COQ2, CTSO, DNAJC and DPH5) (Figure 3f). We observed increased mRNA levels on Star-PAP depletion, consistent with our microarray analysis. Western blot also showed increased protein levels on Star-PAP depletion (Figure 3g), suggesting a role for Star-PAP as a negative regulator of mRNA stability (Figure 3f,g). To further understand the mechanism, we measured mRNA stability and turnover by measuring half-life (BPNT1, COQ2, IGF2, GAD1 and the control non-target GCLC) after inactivating transcription with actinomycin D in the presence and absence of Star-PAP knockdown (Figure 3h). Strikingly, there was an increase in the half-life (2- to 3-fold) of these mRNAs, with no effect on the non-target GCLC mRNA (Figure 3h). These results demonstrate that Star-PAP binding at the CDS region de-stabilises mRNAs; as a result, depletion of Star-PAP increases both mRNA and protein expression. A list of Star-PAP-associated mRNAs down-regulated on siStar-PAP is shown in Supplementary Table S2. Functional pathway analysis of these mRNAs showed enrichment of genes involved in diseases including cardiovascular and metabolic diseases, infection, and cancer (Figure 3i). A list of Star-PAP-associated mRNAs up-regulated on Star-PAP knockdown is shown in Supplementary Table S2.
Transcriptome-Wide Star-PAP Binding Analysis after RBM10 Depletion Indicates Global Role of RBM10 in Star-PAP Target mRNA Association
RBM10 is a Star-PAP-associated protein that is required for the regulation of mRNAs involved in cardiac hypertrophy [24]. Therefore, we investigated the genome-wide role of RBM10 in the Star-PAP recognition of target mRNAs. We carried out a similar HITS-CLIP experiment of Star-PAP after siRNA-mediated depletion of RBM10 in HEK 293 cells (Figure 4a). Strikingly, RBM10 depletion resulted in a loss of >60% of mapped read clusters associated with different chromosomes (Figure 4a, Supplementary Figure S3b). Relative reductions of Star-PAP association on six select chromosomes on RBM10 depletion are shown in Supplementary Figure S3b. Among the 4200 protein-coding genes bound by Star-PAP detected in our HITS-CLIP,~70% of the mRNAs were not detected after RBM10 knockdown (Figure 4b). Interestingly, among the down-regulated genes on Star-PAP knockdown (UTR regulated), the majority (around 950) of mRNAs were not detected after RBM10 depletion (Figure 4d) indicating that RBM10 is required for Star-PAP association on target PA-sites. Moreover, among the >700 up-regulated genes on siStar-PAP that are detected on HITS-CLIP (negative regulation by Star-PAP), the majority (~540 mRNAs) were not detected after the RBM10 depletion (Figure 4c). This indicates the role of RBM10 on an overall Star-PAP target mRNA association. A list of mRNAs in Star-PAP HITS-CLIP lost on RBM10 depletion is tabulated in Supplementary Table S3
RBM10-RNA Association Regulates Star-PAP-Mediated mRNA Metabolism
The genome-wide loss of Star-PAP association on RBM10 depletion was further tested by qRIP of 10 select mRNAs (COQ2, AGTR1, DPH5, GAD1, BPNT1, PAK1, LMNB1, NGEF, RAB26, BRCA1, NOS2) (Figure 4g). We selected both sets of mRNAs, those up-regulated and those down-regulated on Star-PAP depletion. We observed a clear loss of Star-PAP association for all mRNAs investigated upon RBM10 depletion (Figure 4g). There was no effect of RBM10 knockdown on RBM10-independent Star-PAP target mRNAs. Western analysis of the siRNA depletion of RBM10 is shown in Figure 4h. This confirms the requirement of RBM10 for Star-PAP target mRNA binding. Similarly, in a RIP analysis, we observed the association of both RBM10 and Star-PAP with Star-PAP target mRNAs (AGTR1, BPNT1, NOS2), and Star-PAP association was lost on RBM10 depletion (Figure 5a), indicating that RBM10 RNA binding is required for Star-PAP association with the target mRNA. To confirm this, we tested Star-PAP association with target RNAs upon deletion of the RBM10 RNA binding motif (which compromised RBM10 RNA binding) (Figure 5b). For this purpose, we ectopically expressed wild type and RNA binding motif-deleted RBM10 carrying silent mutations at the targeted siRNA sites. We observed a significant loss of Star-PAP association with target RNAs (BPNT1, PAK1, AGTR1) on Star-PAP knockdown as well as with the RRM motif-deleted RBM10, revealing that RBM10 binding is required for Star-PAP association with the target RNA (Figure 5b). RBM10 is a U-rich or G-rich sequence binding protein; therefore, we examined the nucleotide sequences of Star-PAP-mapped regions on mRNAs that were lost on RBM10 depletion. We observed a higher U-content and G-content of Star-PAP reads on mRNAs where Star-PAP was not detected after RBM10 depletion (Supplementary Figure S4a). Moreover, motif analysis of these reads with the CentriMo software indicated a potential association of a 6U binding motif with a frequency of 48%, 7U with 33% and 8U with 18%, but only a marginal possibility of G-motifs, with 10% for 6G, 5% for 7G and <3% for 8G (Supplementary Figure S4b). Together, these results indicate that the RBM10-RNA association regulates Star-PAP target mRNA binding.
Therefore, we tested mRNA metabolism from both sets of mRNAs (down-regulated and up-regulated on Star-PAP depletion from our microarray analysis). First, RBM10 knockdown resulted in a differential expression of Star-PAP target mRNAs-a loss of expression of a set of mRNAs (ANKRD1, NEGF, LMNB1, PAK1, RAB26, NOS2) (UTRassociated) whereas an increased expression for another set of RNAs (RRAS2, BPNT1, AGTR1, COQ2, GAD1) (CDS-associated) (Figure 5c). The altered expression on RBM10 depletion was not rescued by Star-PAP ectopic expression (Figure 5c). This indicates that RBM10 is involved in the Star-PAP-mediated mRNA metabolism. Since the UTRassociated mRNAs were regulated through 3 -end processing and the CDS-associated group through RNA turnover and stability, we tested both 3 -UTR RNA processing (by cleavage assay and 3 -RACE assay) and RNA half-life measurement. In a 3 -RACE assay, there was compromised mRNA maturation on RBM10 depletion on FEZ1 and COL5A1, whereas, there was no effect of RBM10 depletion on AGTR1 or RRAS (up-regulated on the knockdown) (Figure 5d). Similarly, in the cleavage assay, RBM10 knockdown affected specifically mRNAs that were compromised on Star-PAP knockdown with no effect on the RNAs that were up-regulated on Star-PAP depletion (Supplementary Figure S4c,d). Concomitantly, measurement of half-life after RBM10 knockdown indicated increased half-life of target mRNAs similar to Star-PAP depletion (Figure 3h). Consistently, there was an overall higher RBM10 association at the 3 -UTR of down-regulated mRNAs whereas CDS association was prominent for the up-regulated mRNAs (Figure 5e). Together, these results indicate that RBM10 is required for Star-PAP mediated mRNA metabolism in both 3 -end processing and RNA destabilisation. Interestingly, among the RBM10-independent mRNAs (mRNAs where Star-PAP association was not affected by RBM10 depletion), Star-PAP was largely associated with the CDS region (~66% of the mRNAs) compared to the UTR region (~24% of the mRNAs) (Supplementary Figure S4e). Similarly, these mRNAs also exhibited higher proximal PA-site (40%) usage than the distal PA-site (25%) usage as opposed to the RBM10 dependent mRNAs (Supplementary Figure S4f) suggesting a role of RBM10 in Star-PAP mediated APA. Functionally, these mRNAs show enrichment of signalling pathways including RTK-MAPK, PI3K-Akt, JAK-STAT, mTOR and TGF-β in the cell (Supplementary Figure S4g).
Discussion
Star-PAP is a variant PAP that plays a critical role in the 3 -end processing of select mRNAs [20,21,25]. Star-PAP follows a distinct processing mechanism that is dispensable of important canonical components including CstF-64. Star-PAP instead requires additional associated factors including kinases and RNA binding proteins [1,34]. Star-PAP binds to target mRNA UTR and helps recruit the cleavage and polyadenylation factors [20,34]. However, the role of Star-PAP-associated factors in the Star-PAP UTR/PA-site selection or in the processing reaction is unclear. From mass spectrometry analysis, we established RBM10 as a unique Star-PAP coregulator required for specific mRNA regulation involved in myocyte hypertrophy [24]. In this study, we showed that RBM10 regulates global Star-PAP association on target mRNAs. This is consistent with the ubiquitous expression patterns of both the proteins where RBM10 will be required for Star-PAP mediated 3 -end processing [21,24]. Nevertheless, our study strongly indicates the role of RBM10 in determining Star-PAP specificity. There are two aspects of Star-PAP specificity: first, the selection of a PA-site, and second, the exclusion of canonical PAP from the target PA-site to have an exclusive/specific control of targets by Star-PAP [1]. RBM10 can have roles in both these aspects of specificity. In the first aspect, the RBM10-RNA association would recruit Star-PAP in a sequence-specific manner to help assemble a stable Star-PAP cleavage complex. In line with this, a loss of RBM10 would compromise Star-PAP binding on the RNA as observed in our study. Second, RBM10 binding at the vicinity of the Star-PAP binding region could exclude canonical PAPα or other components of canonical machinery that are absent from the Star-PAP processing complex. This supports our earlier hypothesis that Star-PAP requires a co-regulator for the function and specificity of its cellular activities [1]. Such specificity driven by associated factors will have important ramifications in the regulation of Star-PAP mediated alternative polyadenylation [18,23]. Yet, the role of RBM10 in APA is yet to be defined.
We reported a GC-rich sequence with an -AUA-motif for Star-PAP recognition, and a suboptimal downstream region with a U-depleted sequence on Star-PAP targets [20]. We confirmed from our HITS-CLIP experiment that Star-PAP-associated regions have a biased GC over AU composition in addition to a motif containing AUA on global Star-PAP targets. While sequence specificity for Star-PAP is critical, earlier reports indicate the signalling regulations are critical for the Star-PAP specificity [23,25,50,51]. Such signalling influence on specificity may operate through associated proteins such as RBM10. At least three agonists-oxidative stress, hypertrophic signal, and the toxin dioxin are known to regulate Star-PAP target mRNA selection [21,23,24,50]. It is still unclear how these signals drive the Star-PAP functions. Our finding of the RBM10 requirement for Star-PAP association shows the potential involvement of RBM10 in transducing the signal-mediated specificity of Star-PAP targets. This is consistent with RBM10 s role in the regulation of Star-PAP target anti-hypertrophy regulators in the heart [24]. Similarly, kinases CKIα/ε and PKCδ are also shown to modulate Star-PAP mRNA selection [25,50,51]. This could occur through either direct Star-PAP phosphorylation or indirectly via RBM10 phosphorylation that affects the sequence-specific binding of Star-PAP on distinct mRNAs. One of the phosphorylations at the ZF region on Star-PAP (Serine 6) was shown to regulate the specificity of Star-PAP regulation of some mRNAs involved in stress response and cell invasion [49,51]. Yet, the overall sequence-specific changes for Star-PAP induced by signalling conditions or by different phosphorylation statuses are yet to be defined.
Star-PAP has an established role in the 3 -end RNA processing that controls the expression of a large number of mRNAs that regulate various cellular functions [23,[25][26][27]49,51]. In addition to its adenylation function, Star-PAP has a confirmed uridylation activity [28,52]. The substrate preference of Star-PAP (U vs. A) in the cell is likely driven by associated factors or co-regulators such as RBM10 [20,21,24]. Additionally, Star-PAP has also been shown to regulate the stability and processing of miRNAs [26,30,31,53]. The depletion of Star-PAP resulted in a decrease in the levels of a large number of miRNAs, yet how Star-PAP regulates miRNA expression is unclear [30]. Star-PAP can be immunoprecipitated with specific miRNAs and also along with the RISC complex proteins indicating a potential post-transcriptional role on miRNA biogenesis [26,31]. Consistent with this, we also detected a number of miRNA associations with Star-PAP in our HIT-CLIP experiment. Together, these findings show a diverse role of Star-PAP in different RNA processing events. In this study, we show a new function of Star-PAP in the mRNA metabolism that regulates mRNA stability and/or turnover. A model of how RBM10 regulates Star-PAP-RNA association and mRNA metabolism is shown in Figure 6. Here, Star-PAP acts as a negative regulator and its binding destabilises target mRNAs. This function is independent of Star-PAP polyadenylation of target mRNAs, uridylation of U6 snRNA, and miRNA regulations [21,30,52]. Nevertheless, this affects more than 1000 mRNA targets involved in multiple cellular functions and signaling pathways. RNA binding proteins are known to regulate the stability of the bound RNA (e.g., ARE binding proteins HNRNPD, ZFP36, TTP, KSRP or BRF5) that can promote mRNA turnover via recruiting decapping enzyme at the 5 -end or recruiting deadenylating enzyme at the 3 -end [54][55][56][57][58]. Alternatively, Star-PAP could promote mRNA silencing by binding near AGO2 sites and contributing to its loading with miRNAs as in the case of AUF1 protein [59][60][61]. Star-PAP is known to interact with AGO2 and also pull down miRNA [26,31]. Therefore, Star-PAP binding could also promote miRNA-mediated silencing on the Star-PAP-associated target mRNAs by recruiting targeting miRNAs.
Cell Culture, Transfections and Treatment
HEK 293 cells were obtained from American Type Cell Culture Collection. HEK 293 cells were maintained in Dulbecco's Modified Eagle's Medium (Himedia, Mumbai, India) with 10% Foetal Bovine Serum (Gibco Biosciences, Dublin, Ireland) and 50 U/mL Penicillin Streptomycin (Gibco) at 37 °C in 5% CO2. Transient knockdown experiments were carried out using custom-made siRNAs (Eurogenetec, Seraing, Belgium) by calcium phosphate method as described earlier [23]. Transient overexpression of Star PAP and RBM10 were performed using pCMV Tag2A constructs expressing FLAG-epitope tagged Star-PAP and RBM10 that has silent mutations rendering the siRNA used for the knockdown ineffective as described earlier [24,50]. Whenever required, cells were treated with actinomycin D (5 µg/mL in DMSO) and DMSO treatment was used as solvent control.
RNA Isolation
Cultured HEK 293 cells from 10 cm dishes (1 mL/1 × 10⁶ cells) were harvested in 2 mL microcentrifuge tubes. Harvested cells were then washed with PBS, and total RNA was isolated using the RNeasy Mini Kit (Qiagen, Germantown, MD, USA) as per the manufacturer's instructions.
Quantitative Real-Time PCR (qRT-PCR)
qRT-PCR was carried out in a CFX96 multi-colour system (Bio-Rad, Hercules, CA, USA) using iTaq SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) as described previously [23]. Briefly, 2 µg of total RNA was reverse transcribed using MMLV reverse transcriptase (Invitrogen, Waltham, MA, USA) with an oligo(dT) primer. Real-time primers were designed using the Primer3 software, and the difference in melting temperature between the corresponding forward and reverse primers was less than 1. Melt-curve analysis was used to check for single-product amplification, and primer efficiency was near 100% in all experiments. Quantifications were expressed in arbitrary units, and target mRNA abundance was normalised to the expression of GAPDH with the Pfaffl method [62]. All qRT-PCR results are representative of at least three independent experiments (n > 3).
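For concreteness, the Pfaffl normalisation cited above can be written as a one-line calculation of the efficiency-corrected expression ratio of a target mRNA relative to GAPDH. The following Python sketch uses invented Ct values and assumes near-100% amplification efficiencies; it illustrates the formula only and is not the authors' analysis script.

def pfaffl_ratio(e_target: float, ct_target_control: float, ct_target_sample: float,
                 e_ref: float, ct_ref_control: float, ct_ref_sample: float) -> float:
    """Pfaffl relative expression ratio.

    ratio = E_target^(dCt_target) / E_ref^(dCt_ref),
    where dCt = Ct(control) - Ct(sample) and E is the amplification
    efficiency expressed as fold-change per cycle (2.0 = 100%).
    """
    return (e_target ** (ct_target_control - ct_target_sample)) / \
           (e_ref ** (ct_ref_control - ct_ref_sample))

# Hypothetical example: the target Ct rises by 1 cycle on knockdown while
# the GAPDH reference is unchanged, giving ~0.5-fold relative expression.
print(pfaffl_ratio(e_target=2.0, ct_target_control=24.0, ct_target_sample=25.0,
                   e_ref=2.0, ct_ref_control=18.0, ct_ref_sample=18.0))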
Cleavage Assay
To determine the cleavage efficiency of mRNA, the accumulation of non-cleaved mRNA levels was measured by quantitative real-time PCR (qRT-PCR). Total RNA was reverse transcribed using random hexamers and qRT-PCR was carried out using a pair of primers across the cleavage site to amplify non-cleaved mRNAs as described earlier [21]. Non-cleaved messages were expressed as fold-change over the total spliced mRNA.
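A minimal sketch of this read-out is given below: the level of non-cleaved pre-mRNA relative to total mRNA is derived from the Ct difference between the two amplicons, assuming 100% amplification efficiency (2-fold per cycle). The Ct values are invented for illustration, and the calculation is an assumption about how such fold-changes are typically computed rather than the exact procedure of reference [21].

def uncleaved_fraction(ct_uncleaved: float, ct_total: float, efficiency: float = 2.0) -> float:
    """Relative level of non-cleaved pre-mRNA over total (spliced) mRNA,
    computed from the Ct difference of the two amplicons."""
    return efficiency ** (ct_total - ct_uncleaved)

# Hypothetical Ct values: on knockdown, the uncleaved amplicon comes up
# 2 cycles earlier relative to total mRNA, i.e. a 4-fold accumulation of
# non-cleaved message compared with the control.
control = uncleaved_fraction(ct_uncleaved=30.0, ct_total=24.0)    # 2**-6
knockdown = uncleaved_fraction(ct_uncleaved=28.0, ct_total=24.0)  # 2**-4
print(knockdown / control)  # 4.0-fold increase in uncleaved pre-mRNA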
HITS-CLIP Sequencing and Analysis
HITS-CLIP experiments were carried out as described earlier [63,64]. Briefly, HEK 293 cells grown on 15 cm plates were UV irradiated (400 mJ/cm²) three times for 15 min each before harvesting. Cells were then harvested and lysed in 1× PXL buffer (1× PBS, 0.1% SDS, 0.5% deoxycholate, 0.5% NP-40, and protease inhibitor cocktail) by sonication. The lysate was then treated with DNase I followed by partial RNase digestion. Debris was then separated by high-speed ultracentrifugation at 30,000 rpm for 40 min at 4 °C. Next, immunoprecipitation was carried out from the supernatant using anti-Star-PAP antibody [50] conjugated with pre-incubated Dynabeads Protein A (Invitrogen, Waltham, MA, USA) in the presence of a bridging anti-rabbit IgG antibody, overnight at 4 °C. IP samples were washed twice with 1× PXL, followed by 5× PXL high salt wash buffer (5× PBS, 0.1% SDS, 0.5% deoxycholate, 0.5% NP40) and three times with 1× PNK buffer (50 mM Tris-HCl, pH 7.4, 10 mM MgCl2, 0.5% NP40). Washed IP samples were further treated with PNK (80 µL reaction with 10× PNK buffer, 4 µL of PNK enzyme and 1 µL of ATP) in a thermomixer at 37 °C for 20 min. They were further washed with 1× PXL and 5× PXL and twice with 1× PNK buffer. The efficiency and specificity of the IP were confirmed by Western blot analysis and denaturing acrylamide gel. Protein-RNA complexes were subjected to Proteinase K (Sigma-Aldrich, St. Louis, MO, USA) digestion at 37 °C for 20 min in 1× PK/urea buffer, and the released RNA fragments were extracted with acid phenol:chloroform:isoamyl alcohol. This was followed by overnight precipitation using 3 M sodium acetate, 0.75 µL glycogen and 1:1 ethanol:isopropanol. CLIP RNA fragments were finally resuspended in 10 mM Tris-HCl pH 7.5. Library preparation and sequencing were carried out at the commercial facility of Genotypic Technologies (http://www.genotypic.co.in, accessed on 7 September 2017). The library was prepared using the NEBNext Ultra Directional RNA Library Prep kit as per the manufacturer's instructions and sequenced on an Illumina platform.
The raw data generated were checked for quality using FastQC (https://www.bioinformatics.babraham.ac.uk/projects/trim_galore/, accessed on 20 May 2019). Low-quality sequences, artifact sequences, contaminated sequences and low-quality reads were filtered using the CLIP Tool Kit (fastq_filter.pl, 20 May 2019) (mean score 20 from 0-24) [65]. Cutadapt was used to trim low-quality sequences from the ends before the adapters and to remove universal adapter sequences (DOI:10.14806/ej.17.1.200). Filtered and trimmed sequences were then subjected to duplicate removal using the CLIP Tool Kit (fastq2collapse.pl, 20 May 2019) [65]. Reads were then mapped to the human reference genome (hg19) using the Burrows-Wheeler Aligner (BWA) tool [66]. The MAPQ score and the minimal mapping size (in the parseAlignment.pl program of the CLIP Tool Kit) were set to 1 and 18 nt, respectively, so that a single read in the alignment file maps to a single locus in the genome [65]. Duplicate tags with the same start coordinates at the 5′ end of the RNA tags mapped on chromosomes were collapsed together using the CLIP Tool Kit (tag2collapse.pl, 20 May 2019) [65]. Peak calling was then performed using Model-based Analysis of ChIP-Seq (MACS2) with IgG as the control, using a high-confidence enrichment ratio against background with an mfold range of minimum 5 and maximum 50 and a fragment size of 500, and the peaks were filtered using a p-value of 0.05 [67]. After processing with these parameters and IgG subtraction, we identified more than 420,000 read clusters. Genomic annotations were further obtained by intersecting the coordinates of the read clusters with the reference human genome hg19 using Bedtools intersect [68]. Basic unix utilities (sort, uniq, awk, sed, etc.) were used for parsing and sorting based on genomic features. Next, the Integrative Genomics Viewer tool was used to visualize the genome-wide analysis, chromosome-wide peak distributions and specific binding peaks in various regions of a gene [69]. Nucleotide compositions were then extracted using Galaxy (https://usegalaxy.org/, accessed on 10 October 2020) and the % nucleotide content was plotted as box plots. The motif detection software MEME-ChIP was used to identify binding motifs from the read sequences with an E-value cut-off of 0.05 [70].
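The nucleotide-composition step can be illustrated with a short Python sketch that computes the percentage of each base (and the combined GC and AU content) of a read-cluster sequence; the sequences shown are invented placeholders, and the function merely stands in for the Galaxy workflow used in the study.

from collections import Counter

def nucleotide_composition(seq: str) -> dict:
    """Percentage of each base in an RNA read-cluster sequence,
    plus the combined GC and AU content."""
    seq = seq.upper().replace("T", "U")
    counts = Counter(seq)
    total = sum(counts[b] for b in "AUGC")
    pct = {b: 100 * counts[b] / total for b in "AUGC"}
    pct["GC"] = pct["G"] + pct["C"]
    pct["AU"] = pct["A"] + pct["U"]
    return pct

# Toy cluster sequences: the first is GC-biased, as reported for
# Star-PAP-bound regions; the second is AU-rich for comparison.
for cluster in ["GGCAUACGCCGGCAUAGC", "AUUAUAAUUGAUAUAAUU"]:
    comp = nucleotide_composition(cluster)
    print(f"{cluster}: GC={comp['GC']:.1f}% AU={comp['AU']:.1f}%")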
RNA Immunoprecipitation (RIP)
RIP analysis was carried out as described earlier [21,71]. Briefly, HEK 293 cells were cross-linked with 1% formaldehyde for 10 min, followed by the addition of 0.125 M glycine for 5 min to halt crosslinking. Washed cells were lysed by incubating for 10 min in 300 µL of cell lysis buffer (10 mM Tris-HCl pH 8.0, 10 mM NaCl, 0.2% NP-40, 1× EDTA-free proteinase inhibitor, 1000 U RNase I (Promega, Madison, WI, USA)). Crude nuclei were pelleted (5 min, 2500 rpm at 4 °C), and 400 µL of nuclei lysis buffer (50 mM Tris-HCl pH 8.0, 10 mM EDTA, 1% SDS, 1× EDTA-free proteinase inhibitor, 1000 U RNase I (Promega, Madison, WI, USA)) was added to the pellet and sonicated at 22% amplitude with 20 s pulses for 5 min. The nuclear lysate was centrifuged at 15,000 rpm for 10 min, and the supernatant was collected and treated with DNase I for 20 min at 37 °C, followed by the addition of EDTA to 20 mM to stop the digestion. The supernatant was incubated overnight at 4 °C with the respective antibodies against Star-PAP, RBM10, RNA Pol II, FLAG and rabbit IgG. The mixture was then incubated for 2 h at 4 °C with Protein G beads that had been equilibrated with IP dilution buffer (20 mM Tris-HCl pH 8, 150 mM NaCl, 2 mM EDTA, 1% Triton X-100, 0.01% SDS, 1× EDTA-free proteinase inhibitor, 1000 U RNase I (Promega, Madison, WI, USA)). The complex was then pelleted at 5000 rpm for 5 min at 4 °C and washed with IP dilution buffer (three times, 5000 rpm for 5 min at 4 °C). Immunoprecipitates were eluted with 300 µL of elution buffer (1% SDS, 100 mM sodium bicarbonate), NaCl was added to 200 mM, and the samples were treated with proteinase K for 2 h. Reverse crosslinking was then carried out by incubating the mixture at 67 °C for 4 h. RNA was isolated from the mixture using Trizol reagent (Invitrogen, Waltham, MA, USA) according to the manufacturer's protocol. cDNA was then synthesised using random hexamers (Invitrogen, Waltham, MA, USA) with MMLV RT (Invitrogen, Waltham, MA, USA). PCR amplification was carried out and the products were visualised on an agarose gel.
For quantitative RIP, the immunoprecipitated RNA samples were diluted at 1:10. These samples along with input RNA were quantified using CFX multi colour system (Bio-Rad, Hercules, CA, USA) as described above. Values from each sample were corrected using reactions lacking reverse transcriptase. Quantifications were expressed in arbitrary units, and IgG immunoprecipitation product levels were used as controls for normalisation of the abundance of the target messages. The quantitative association was then expressed relative to the input RNA signal as described [72,73] using the method of Pfaffl [62]. The primers used for qRIP are listed in the Supplementary Text.
For determining UTR and CDS association on different mRNAs, we carried out qRIP analysis using specific primers designed at the 3 -UTR and CDS regions as described above.
Half-Life (T1/2) Measurement
HEK 293 cells were transfected with RNAi specific to Star-PAP and RBM10. Post-transfection, cells were treated with actinomycin D (5 µg/mL in DMSO) and harvested at different time points (0, 2, 4, 6, 8, 12, 18, 24, 30, 36, 42 and 48 h). Total RNA was isolated from the cells at each time point. cDNA was synthesised using an oligo(dT) primer, followed by qRT-PCR as described above. mRNA half-life (T1/2) was measured as described earlier by following the decrease in % mRNA level over time, with the 0 h time point taken as 100% for each mRNA [74].
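A minimal sketch of how such a half-life can be estimated, assuming first-order decay, is shown below; the time course values are hypothetical and this is not the authors' analysis code.

```python
# Minimal sketch (assumes first-order decay): estimate mRNA half-life from the
# % mRNA remaining after actinomycin D treatment, via a log-linear fit.
import numpy as np

def half_life(time_h, percent_remaining):
    """Fit ln(% remaining) = intercept - k*t and return T1/2 = ln(2)/k."""
    t = np.asarray(time_h, dtype=float)
    y = np.log(np.asarray(percent_remaining, dtype=float))
    k = -np.polyfit(t, y, 1)[0]          # decay constant from the fitted slope
    return np.log(2) / k

# Hypothetical qRT-PCR time course (0 h defined as 100%):
times = [0, 2, 4, 6, 8, 12, 18, 24]
levels = [100, 88, 74, 66, 55, 42, 27, 18]
print(f"Estimated T1/2 = {half_life(times, levels):.1f} h")
```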
3′-RACE
The 3′-RACE assay was carried out as described previously [50]. Briefly, 2 µg of total RNA isolated from HEK 293 cells was used for cDNA synthesis using an engineered oligo(dT) primer with a unique sequence at the 5′-end (adapter primer) and MMLV-RT (Invitrogen, Waltham, MA, USA). This was followed by PCR amplification using a gene-specific forward primer and a universal adapter primer complementary to the unique sequence on the engineered oligo(dT) primer (AUAP primer). The RACE products were analysed on a 2% agarose gel and confirmed by sequencing.
Immunoblotting
Cell lysates were prepared in 1× SDS-PAGE loading buffer (0.06 M Tris, 25% glycerol, 2% SDS, 0.002% bromophenol blue, 1% β-mercaptoethanol). Denaturation was carried out by heating the mixture at 95 °C for 20 min. Proteins were separated on SDS-PAGE gels in 1× Tris-glycine buffer (25 mM Tris pH 8.0, 190 mM glycine, 0.1% SDS, pH 8.3). Transfer of proteins to the PVDF membrane was performed in transfer buffer (25 mM Tris pH 8.0, 190 mM glycine, 20% methanol). After transfer, the PVDF membrane was blocked in 5% skimmed milk in 1× TBST (20 mM Tris pH 7.4, 150 mM NaCl, 0.1% Tween-20) for 45 min at room temperature. Primary antibodies were diluted in TBST as per the manufacturer's instructions and incubated on a shaking platform overnight at 4 °C. The blots were washed in TBST 3 times for 10 min each, followed by incubation with an HRP-conjugated secondary antibody (Jackson ImmunoResearch Laboratory, West Grove, PA, USA). Imaging of the blots was carried out using a chemiluminescent substrate (Bio-Rad, Hercules, CA, USA) on an iBright FL1500 platform (Invitrogen, Waltham, MA, USA).
Statistics
All data were obtained from at least three independent experiments and are represented as mean ± standard error of the mean (SEM). The statistical significance of differences in the means was calculated using ANOVA, with statistical significance set at a p-value of less than 0.05. All Western blots are representative of at least three independent blotting experiments.
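For illustration only, a one-way ANOVA of this kind can be run in a few lines with SciPy; the replicate values below are placeholders, not data from the study.

```python
# Minimal sketch (placeholder values): one-way ANOVA across three groups of
# independent experiments, significant at p < 0.05.
from scipy import stats

control = [1.00, 0.95, 1.05]
si_starpap = [0.42, 0.51, 0.47]
si_rbm10 = [0.55, 0.60, 0.58]

f_stat, p_value = stats.f_oneway(control, si_starpap, si_rbm10)
print(f"F = {f_stat:.2f}, p = {p_value:.4f} (significant if p < 0.05)")
```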
Primers and Antibodies
A list of all the primers and antibodies employed in the study is shown in the Supplementary Data.
Data Availability Statement: The accession number for the raw and processed HITS-CLIP data reported in this paper, deposited at the NCBI Gene Expression Omnibus (GEO) (https://www.ncbi.nlm.nih.gov/geo/), is GSE182643.
|
v3-fos-license
|
2018-12-03T21:02:19.970Z
|
2016-01-25T00:00:00.000
|
155462317
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.5539/ies.v9n2p32",
"pdf_hash": "3b6611c751a6f141d6b2f10b39f77e7f6fbbc1c6",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46154",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "3b6611c751a6f141d6b2f10b39f77e7f6fbbc1c6",
"year": 2016
}
|
pes2o/s2orc
|
Clash inside the Academy: The Market and the Strife for the Democratic Values of the Western University
The growing popularity of the corporate university model raises the question of whether market principles are suitable for planning the policy of a key enterprise like the university without weakening its capacity to pursue critical knowledge and to teach for democracy. Does the inclusion of free-market imperatives in the functioning of the university improve its overall quality? Is there a clash between the values of the university and those of the market? What is really at stake for democratic societies? This paper addresses these questions through a conflict perspective on neoliberalism as the orthodoxy of state planning and the implications of this operational model for the core values of the Western university. The paper also takes a historical approach to state intervention to explain the political circumstances that accompanied this orientation to public policy and to offer perspective on the relevance of the state to liberal democratic society.
Introduction
When Trudie Kibbe Reed, the president of Philander Smith College in Arkansas, was asked about the dismissal of a tenured professor, she justified her decision by saying, "As a leader, just like all other CEOs, my authority cannot be challenged" (Finkin & Post, 2009, p. 231). On the face of it, the anecdote exemplifies how the corporate market wave is sweeping over the academy and replacing its language of education with the mechanistic discourse of corporate culture that sees college and university presidents as CEOs. Woodhouse (2008) observes that the trends toward marketization filter down into the day-to-day life of the academy in other ways, so today professors and subject-based disciplines are resource units, students are customers, curricula are program packages, and graduates are products. Wilson (2008) suggests that the best metaphor to describe this model of the university may be the "Wal-Mart university" (p. 197). This operational model signals a paradigm shift in the idea of the university as a seat of learning and the inheritor of a tradition of reason spanning over two thousand years.
However, there is more to the picture than meets the eye. The real dangers of bringing the free market to the university transcend expunging its terminology. The real dangers lie in hollowing out the academy of the values situated at the core of its mission, such as free enquiry and the impartial pursuit of knowledge, and stripping it down to a core of market principles and functions aimed at satisfying student and employer demands and maximizing the profit of private corporations (Deem, 2008). Using conflict theory as its foundation, this paper argues that the intrusion of market principles into the operation of the university undermines its central values of academic freedom and institutional autonomy and weakens its capacity to fulfil its mission of pursuing critical knowledge and educating for democracy. This argument emanates from a belief that the functioning of the capitalist market may not be compatible with the functioning of the university (Marginson, 2014). Nonetheless, the market model is often portrayed as the only antidote to the ills of higher education: a position encapsulated in the TINA syndrome, an acronym of the former British Prime Minister Margaret Thatcher's famous phrase 'There is No Alternative'. This determinist position coincided with the ascendancy of neoliberalism as the ideological font of public policy regulation, premised on unfettered market mechanisms as the basis of growth in realms, such as higher education, that traditionally remained beyond the pale of the market, on claims of less cumbersome bureaucracy and greater efficiency, cost-effectiveness, and accountability in the management of public assets and services. Today, as Harvey (2006) notes, neoliberalist principles are spreading emphatically in public life, whereas the state is withdrawing from it, which marks a setback for generations of social democratic class struggle in these arenas. But substantiation of this argument requires a discussion of the ideology that inspired this phenomenon.
Therefore, the paper begins by discussing conflict theory as a preamble to its main discussion of neoliberalism. A special focus will be given to the marketization of higher education that was introduced through the ideological shift to neoliberalism as the new orthodoxy of state planning in the late 1970s. The second section outlines the contradictions between the market values and the university values, with special focus on how market methods weaken the freedom and the independence of the university. The third section explores the intersection between the mission and values of the academy and democracy. The paper concludes that many of the problems with academic freedom stem from the encroachments of the market in higher education. It also concludes that democratic societies require free universities, and free universities require protection from the influence of extraneous powers like the market.
Conflict, Class Power, and the Neoliberal Project
It is argued that neoliberalism was from the very beginning a project to restore class power through the concentration of wealth in the hands of the richest strata of society (Duménil & Lévy, 2004). It is also argued that when the income of one per cent of the population started to soar suddenly in the mid-1980s and continued apace to reach fifteen per cent by the turn of the century in a country like the United States, a relatively advanced model of neoliberalization, one cannot help but explore the question of whether or not the shift to neoliberalist policies and the class forces assembled behind them were responsible for forging this quantitative leap (Harvey, 2006). This paper uses conflict theory to examine this argument.
Conflict theory emphasizes socioeconomic classes. It is based to some extent on the writings of Karl Marx, a full treatment of which is beyond the scope of this paper. Suffice it to say that Marxist theorists see the social and economic relationships in the production process as windows through which to understand political activities and the larger patterns of social organisation (Knutilla, 2003). According to Marx, capitalist society is fractured into two major classes: the proletariat (the ruled working class) and the bourgeoisie (the ruling capitalist class); the latter controls and exploits the former until the proletariat eventually revolt to redress injustice (Ball & Peters, 2005). For Marx, as can be discerned from the previous discussion, power has an economic texture grounded in the relationships between and among social classes.
Conflict theory begins with the assumption that society is rigged by "the power elite [who are] in positions to make decisions having major consequences" (Mills, 1956, pp. 3-4). That is, society is ruled by the power elite, who occupy pivotal positions enabling them to solidify their status by designing laws and policies aimed at sustaining their power. But this conceptualization should not be interpreted to mean that the power elite are limited solely to politicians who can design laws for their constituencies. Dyck (2003) notes that in this approach, the capitalist elite (bourgeoisie) controls the political elite, who in turn use the state to consolidate the bourgeoisie's domination. Dyck adds that conflict theorists think the state should be the sole provider of social capital such as education and health care; however, the state instead pursues the creation of a business climate in which capitalists can maximize their gains, in what is called the "accumulation function" (p. 14). This harks back to Marx, who also claims that "the modern state is but a committee for managing the common affairs of the whole bourgeoisie" (Marx & Engels, 2002, p. 221). Harvey (2005) subscribes to this claim, which, according to his analysis of neoliberalism, is a major aspect of neoliberal policies that aim at harnessing the state to ensure its laws and policies optimize the conditions for capital accumulation. We will discuss this argument further in the section about neoliberalism.
Neoliberalism: History and Implications
Neoliberalism was the brainchild of the Mont Pèlerin Society, a transnational meta-discourse community established by Friedrich Hayek, who sought to redefine capital liberalism by reverting to a more deregulated market (Olssen, 2010). The Mont Pèlerin Society included academic economists, historians, and philosophers who disapproved of the ideological revisions to liberal capitalism that resulted in the shift to Keynesianism as the economic orthodoxy of public policy and in the development of the welfare state structured in most of the Western hemisphere after the Second World War (Dieter & Neuhoffer, 2006; Thorsen & Lie, 2006). Hayek was deeply opposed to state interventionist theories, especially Keynesian economics, which, according to Hayek (1944), jeopardized values central to human civilization, such as freedom. Conversely, Hayek (1973) argued, the market can set free the creative and entrepreneurial spirit in individuals and thereby can lead to greater individual freedom and a more efficient allocation of resources. Hayek also argued that the battle of ideas against these theories was key, and that it would take more than a generation to win that battle (Hayek, 1960). This argument warrants a further discussion of neoliberalism to understand what is really at stake. Neoliberalism is a highly contested notion. Generally, neoliberalism is a theory of political economy that supposes "human well-being can be achieved through liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets and free trade" (Harvey, 2005, p. 2), and through minimizing state intervention in public life to the legislation and enforcement of laws and policies aimed at eliminating obstacles to capital's profitability, investment, and markets. These include: creating and preserving an institutional framework appropriate to such practices; ensuring the market exists in all areas including education, health, and security (Olssen, 2010); deregulating business, including privatising public enterprises; reducing taxes and budgets; and removing protectionist policies and hurdles to financial and foreign exchanges (Chorev, 2010; Kuhen, 2006).
The state, as can be noticed from this definition, is a central theme in neoliberalism. Dale and Robertson (2009) point out that for neoliberals the state is necessary to remove controls from the market. For Foucault (2008), this is antithetical to the laissez-faire maxim of classical liberalism, which was mainly aimed at "the rationalization of the exercise of government" (p. 3018). In the same vein, Gill (cited in Dale & Robertson, 2009, p. 112) extrapolates that unlike classical liberalism, neoliberalism was not driven by a comprehensive opposition to all state activity; rather it worked through the state to push the market into public life, in a process that Gill calls the "constitutionalization of the neoliberal". Similarly, Hyslop-Margison and Leonardo (2012) agree that "the most characteristic feature of neoliberalism is the systematic application of state authority, in a variety of antidemocratic policies and practices, to impose market imperatives on public policy development" (p. 3). Harvey (2006) thinks that this neoliberal trend is a huge contradiction of the promised freedom of the market, because it may force public institutions to use methods that might not be in the best interest of the people served by these institutions. Higher education can be a good barometer with which to examine this argument. A useful question arising from this discussion is how free and efficient this sphere can be when market mechanisms are pushed into its functioning.
When transposed to higher education, "the market can replace the democratic state as the primary producer of cultural logic and value" (Lynch, 2006, p. 4). The university must make innovative adjustments to diffuse obstacles that hinder the inclusion of market principles in its operations (Hyslop-Margison & Leonard, 2012). In this view, education is quintessentially a private good; tuition fees must be offloaded onto the consumer (students), and education becomes a "consumable service that can be shopped for measuring cost against quality … [which] can be improved the way business people attempt to improve business-by squeezing more productivity out of the labor force while imposing cost-cutting and efficiency measures" (Saltman, 2014, p. xxii).
Perhaps the diffusion of the obstacles placed before the market is particularly important for higher education. The question is who decides what these obstacles are, and what influence such a step can have on the capacity of this enterprise to serve the society that supports it. In the neoliberal doctrine, liberty and freedom are measured against how much hindrance institutional factors and regulations cause to the creation of a good business climate; constraints on the freedom of the market are deemed undemocratic (Thorsen & Lie, 2006). As Hyslop-Margison and Leonardo (2012) observe, the implication here is that "freedom and democracy are generally reduced by neoliberalism to libertarian discursive mechanisms that permit financiers and the ruling elite to operate in ways that undercut the general welfare of society" (p. 3). Harvey (2006) contends that the real problem here lies in the conflation of freedom and democracy as a political practice with the freedom of the market rather than the freedom of individuals. For higher education, the freedom to teach and research independently is acceptable within this context only to the extent that it does not interfere with market logic and corporate authority; otherwise, it is undemocratic, and thus must be removed or curtailed.
Deregulation of the marketplace (and of public institutions in general) is inherent to the discourse of neoliberalism and its precursor, classical liberalism. In this discourse, it is democratic; it is humane; it is crucial. However, there is skepticism about the adequacy of these views. It is also argued that despite the central role that Keynesian economics gave to the state in planning public policy, the social democratic state structured from this regime after the Second World War not only managed to rescue liberal capitalism from the Great Depression of 1929, but also achieved important gains (Dieter & Neuhoffer, 2006). Then, are all state controls bad?
The neoliberal hegemony in policy has portrayed erstwhile state regulations as the antithesis of democracy. In a rebuttal to this argument, Olssen (2009) contends that this is not a valid comparison, because the liberal notions of the 'self-regulating market' and 'laissez-faire' (let it be) assume the actors involved in market processes possess moral constraints that would make them correct themselves when things go wrong. The reality is, Olssen adds, "one could only 'let be' if nature determined a satisfactory and harmonious outcome" (p. 2). Hobhouse (1911), one of the earliest polemicists against laissez-faire, also makes the point that nature does not provide a natural tendency to self-regulation and/or self-correction mechanisms; hence, some control is required.
Hobhouse argues that the state-economy relationship should not be conceptualized as 'do nothing' versus 'interfere'; rather, it should be conceptualized as the controls that impede creativity and freedom versus the controls that facilitate them.
The neoliberal discourse has always portrayed state regulations and welfare state practices within a conspiracy-theory frame, as schemes to thwart individual liberty and human well-being. But this theory is of little avail; history decrees otherwise. Polanyi (1957) observes that the shifts from liberal solutions to state solutions that occurred in England, Germany, and France from the 1860s onward were not a result of ideological convictions on the part of those who engaged in the process. Instead, these collectivist solutions were a consequence of the failings of the market. Shonfield (1965) adds that interventionist policies provided guidelines for social services, education, and employment because the market proved to be "a poor guide to the best means of satisfying the real wishes of consumers" (pp. 226-227).
Neoliberal rhetoric appeals to individual liberty and freedom. This strategy has proved very effective, because it resonates with the Western tradition in which these values are prized. But what sort of freedom is envisioned here for the university? The cultural critic Matthew Arnold (cited in Harvey, 2005) long ago said, "Freedom is a very good horse to ride, but to ride somewhere" (p. 6). To what destination is the university expected to ride the horse of freedom given to it by the market?
The next section discusses the effects of marketizing the university on its values of academic freedom and autonomy.
Free University, Free Market: Clash of Values
Universities today face a stark choice. They can commit to the market imperatives and become engines for transmitting skills and technology to private corporations, or they can adhere to their independence and determine for themselves the values they wish to embody. Unfortunately, this eventuality seems inevitable. Woodhouse (2009) argues that the opposition between the market and the university emanates from their distinct logics of value. Let us get back to basics and think of the purpose of the university. Schafer (2008) sums it up: "universities are places where scholars pursue knowledge for its own sake" (p. 52). One way of interpreting this egalitarian conception is that the university functions of teaching and research are primarily driven by curiosity, regardless of the monetary value of the results produced in the process. Schafer adds that "the intellectual vitality of universities derives from the fact that scholars are … beholden to no one … [and] the knowledge gained by university research is then freely disseminated to colleagues, students, and the wider community" (p. 52). Woodhouse (2009) subscribes to this view and adds that this perspective is only possible "where the freedom to pursue knowledge critically is sustained by an institutional autonomy guaranteeing the university's independence from powerful social forces, including the governments and the market" (p. 37). Marginson (2014) explains that the logic of the university centres on the notion that knowledge is "none-rivalous and none-exclusible" (p. 59). In other words, knowledge sought freely and autonomously is a reward that does not lose its value or amount in circulation or in being held by many people; rather, its value increases when distributed freely across society.
On the other hand, the market logic of value is predominantly financial. The chief goal of education becomes producing relevant knowledge and research that can make money for private individuals and companies (McMurtry, 1998). Woodhouse (2009) notes that "the goals of pursuing knowledge and maximizing private profit contradict one another because sharing knowledge with others is incompatible with accumulating money for oneself" (p. 22). Therefore, it is a fundamental error to identify the goal of education with that of the market, where "private profit is acquired by a structure of acquisition that excludes others from its appropriation" (McMurtry, 1991, p. 38). In fact, the logic of the market is not only financial; it is also exclusionary. Those who seek knowledge but lack the money are not permitted to access that knowledge or to share in its accumulation.
Market principles can be useful for the planning of the state, but they may not be suitable for planning education. Newson (1992) explains this oppositional character between the two: "The principles that benefit the markets undermine the objectives of education and conversely, education that achieves its intended purposes cannot serve well as a marketable commodity" (p. 234).
But the clash of values is just like all other sorts of conflict: when two sets of values collide, one trumps the other. In today's neoliberal moment, with the market holding the high cards, it might seem inconceivable that the university and its values can emerge unscathed. Notwithstanding how demoralising this opinion can be, it is important to remember that attempts to subdue the university are not new. Each period carried a fresh wave of challenges to the academy. The studium that took shape in medieval Europe was no less threatened by the imperium and sacerdotium, the great powers of that age; nor was it less threatened later by the national state (Perkin, 2007). Scholars always strove to keep alive the freedom of inquiry and autonomy that helped the Western university continue to function as a bastion of enlightenment for centuries.
The next section discusses the aspects upon which academic freedom and autonomy rest, and how these aspects are attacked to remove barriers to the value system of the market.
Handling the Hurdle
Refusal to acknowledge the opposing value system of the university is explicit. In Canada, for example, the Canadian Corporate-Higher Education Forum (CHEF) reiterates the call for the diffusion of any barriers to strong corporate-university linkages. The CHEF was formed in 1983 with a membership comprising university presidents and corporate CEOs to discuss broad societal issues. An early example of the CHEF stance toward the freedom and self-governance of the university was clearly articulated by Judith Maxwell and Stephanie Currie, private sector economists who were commissioned by the forum to identify avenues of corporate-university cooperation to increase Canada's economic competitiveness internationally. Their 1984 book, Partnership for Growth, advised where exactly the forum should invest and steer resources. Maxwell and Currie (1984) saw great potential for the market in the university and recommended that, to achieve this potential, the university should be attuned to the needs of the market. Maxwell and Currie advised that this can achieve excellence and ensure that the technology and skills created in the university are accessible to Canadian industry, which needs these resources to stay competitive in the post-industrial era. According to Maxwell and Currie (1984), the university curriculum and research should be aimed at serving the market by generating technology ready for use by private corporations, and education should be limited to receiving training in the skills that add value through the application of scientific and market knowledge.
The most disturbing aspect of this assessment is the part about the cultural differences of the university. For Maxwell and Currie (1984), these cultural differences are hurdles that must be overcome for greater cooperation between universities and businesses. Moreover, it is the university that should adjust for the market, not vice versa. Maxwell and Currie contend that the freedoms to select instructional content, manage research, and communicate knowledge are at odds with the principles of profitability and efficiency that determine value in the market, and thus they must be discarded. This thesis has serious implications for universities and the societies supporting them.
The model described here marks a seismic shift in the life of the academy in two ways. First, it changes the egalitarian nature of the university. Second, it subordinates the university to the overriding market principle of monetary gain. Simply put, this regime leaves no place for values fundamental to academic life such as academic freedom, the safeguard of the autonomy to teach, research, articulate theories, and espouse views without restriction by prescribed doctrine or institutional censorship (Shils, 1991; Hogan & Trotter, 2013; Turk, 2008; Woodhouse, 2001).
Academic freedom derives its importance from the fact that it offers secure opportunities to base teaching and research on a critical pursuit of knowledge rather than on prejudice or dogma. By so doing, Woodhouse (2009) observes, professors challenge students' monolithic beliefs by exposing them to counter-argument and by opening debate about the adequacy of contesting ideas, and this gives students the opportunity to think of issues in transformative ways that can ultimately help them grow intellectually in self-understanding and in understanding of the world. Cohen (2008) believes that infringement of the freedom to contest and scrutinize various standpoints weakens the integrity of the university, as it can result in flawed research. In this sense, academic freedom is central to the purpose of the university, because it helps professors fulfil their scholarly obligation of creating and disseminating knowledge in an unbiased way that sustains public trust in the university. Simultaneously, it helps students establish the habit and capacity to pursue knowledge critically. This stands in stark contrast to the market principle of profitability, which (as discussed earlier) translates into the view that the knowledge and skills of most worth are the ones that should be taught. Monetized reasoning like this has serious ramifications for the academy. First is the marginalization of academics: it is outsiders who decide the course syllabi, reading lists, and sometimes even the points of view. This is particularly the case when research is funded by external bodies such as private corporations. The university deviates from its essential mission of pursuing the interests of the whole society to pursuing the interests of stockholders. Second, and of profound relevance to the purpose of the university, is the reduction of the academy into a training centre graduating human capital with technical skills that add value to the market. The real danger of this logic of value, Woodhouse (2009) notes, is that "[It ignores] understanding, … rather, regards all learning as a matter of acquiring skills in isolation from the academic discipline in which they are used. The goal of learning 'to think critically and act logically' … [becomes] 'to evaluate situations, solve problems and make decisions' in ways that are useful to future employers but that do little to enhance the critical thought of students." (p. 26) Barrow (1990) agrees that problem-solving of this kind equates to acquiring skills or exercises that can be mastered and improved by practice, rather than being constituents of a broader understanding based in various disciplines of thought. Woodhouse (2001) argues that "academic skills decoupled from any disciplinary base, are really nothing different from skills management" (p. 111). It is doubtful that the market model can accommodate the deep structural aspects of education when its logic derives from a belief that the goal of all human activity is to maximize profit.
Why Tenure?
University professors are in a position where they need to take intellectual risks and tackle controversial issues in ways that might differ from the dominant views of their discipline. Hogan and Trotter (2013) explain that tenure was necessary to protect scholars from societal and institutional retributions when their views defy accepted norms. Deem (2008) observes that tendencies to replace tenured faculty with contingent faculty (adjuncts, part-time, and non-tenure track) and with powerful, accountable administrators intensified in the 1980s with the rise of new managerialism, an ideological approach to management characterised by cuts in public expenditure and the introduction of quasi-markets to public services; this replacement of tenured faculty is a phenomenon usually referred to today as casualization (Seth, 2004). Aby (2007) believes that academic freedom rests on tenure, which makes the casualization of university teaching a great danger to academic freedom, because it replaces tenured faculty with vulnerable ones whose guarantee of academic freedom is tenuous. Aby (2007) adds, "Precarious appointments like these make academic freedom more a wish than a reality" (p. 12). However, there are other pernicious consequences for the profession. Wilson (2009) argues that replacing tenured faculty with adjuncts lowers labour costs at the expense of academic standards and intellectual quality, because these professors lack the job security of tenured faculty that enables them to take intellectual risks without putting their jobs at the mercy of administrative vagaries. The slipperiness of this particular attack on academic freedom is that it comes under the guise of appealing terms such as efficiency, accountability, and imbalanced freedoms, whereas in reality it is aimed at cutting budgets and suppressing internal dissent.
Tenure hence maintains the quality of education, preserving at the same time the freedom necessary for pursuing critical knowledge and for consolidating democracy. Freedom, argues Barber (2003), is what makes the university better equipped to perform its civic mission of turning out good citizens of free communities. Important questions arise from this discussion about the parallel between academic freedom and democracy and about the manner in which the university provides the environment for the two to thrive and to contribute to the betterment of democratic societies. Answering these questions is the theme of the next section.
Academic Freedom and Democracy
The university was always a haven for dissenting voices. In medieval Europe, university workers resisted external interference through the cessatio, a form of medieval strike (Hayhoe, 2001). How much the university contributed to engendering a tradition of democracy in Europe is quite debatable, but there is agreement that academia has always cherished a tradition of freedom and self-rule (Altbach, 1998). According to Wilson (2009), free universities are crucial for democratic societies; this derives from the fact that universities enjoy many protections of free speech.
It is unusual to find a mission statement of any institution of higher education that directly refers to any democratic mandate. However, from time to time, this mandate needs to be revisited, explicated, and declared to reconfirm the role of higher education, especially the university, as a democratic space. First, we need to know whether there is a democratic mandate for education in the first place. In Amy Gutmann's famous book "Democratic Education", the author's thoughts coalesce around the democratic purpose of primary and secondary education. Overall, Gutmann (1987) forcefully demonstrates in this book that the ideas of liberal education and democracy go hand in hand. That is, liberal education offers the opportunity to achieve the democratic purpose of education, which is the formation of what Gutmann calls "the democratic character" (p. 51). Gutmann adds that this democratic character involves "the development of capacities for criticism, rational argument, by being taught how to think logically, to argue coherently and fairly, … and to learn not only to behave in accordance with authority but to think critically about if … [children] are to live up to the democratic ideal of sharing political sovereignty." (pp. 50-51) The university, the chief pillar of higher education, has a related but different democratic purpose. Gutmann (1987) argues that the university is less about character formation, "although learning how to think carefully and critically about political problems, to articulate one's views and to defend them before people with whom one disagree is a form of moral education … for which universities are well suited" (p. 173). Fallis (2005) points out that the university continues the process of building democratic character, but the fundamental democratic purpose of the university is "the protection against the democratic tyranny of ideas. Control of the creation of ideas-whether by a majority or a minority-subverts democracy" (p. 19). The university hence serves democracy by being the space that gives the opportunity to grow intellectually and by being the sanctuary where views, even unorthodox ones, are judged by their intellectual merit without the fear of oppression, censure, or even subversion. Hyslop-Margison and Leonard (2012) argue that what distinguishes democratic societies is the existence of public discursive spaces like the university. Habermas (1996) calls these discursive places the lifeworld. For Habermas, the lifeworld refers to the human experiences, spaces, and interactions that create spaces for meaningful democratic discussions. Habermas suggests that neoliberalism has caused the destruction of the lifeworld. Hyslop-Margison and Leonard believe that neoliberalist, market-driven policies threaten the historically democratic core values of higher education through the marginalization of subjects that afford students-as-developing-citizens the knowledge of and the opportunity for exposure to the liberal arts and engagement in critical discussions. Hyslop-Margison and Leonard add that the slashing of the humanities from university curricula exemplifies how critical forums are being undermined so that no space remains in society where unjust economic arrangements can be discussed freely. In fact, no one can credibly ignore the incontrovertible evidence of the increasing attack on the humanities, a phenomenon that, according to Nussbaum (2010), is causing a democratic crisis in modern education.
Conclusion
This paper has outlined the threats associated with subordinating universities to the imperatives of the market. Special focus was given to the ascendance of neoliberalism as the orthodoxy of state administration and to the implications of the emphasis this ideology places on the application of market principles in the operation of a key institution like the university. The paper also discussed the fundamental differences between the value systems of the market on the one hand and the university on the other. Following on, the text shone a light on the association between the intrinsic values of the university and democracy.
In conclusion, the neoliberal moment poses tremendous intellectual challenges to the academy. There is an urgent need for the university to protect its independence from the market or any other forces that want their ideological or commercial views to supersede professional standards in academic policy-making. The university can only serve the public when it is open to the widest range of viewpoints and perspectives. Education that stifles counter-arguments is not worthy of the name 'knowledge'; it is tantamount to indoctrination. History has shown that indoctrinated societies lose sight of the structural flaws in their systems. The Soviet Union is an example of how blind belief in mythical justice and in the capacity to abolish all the problems of humanity led to an absolutist regime at a time when everyone thought absolutism had become a relic of the past. Similarly, well-intended claims by market advocates of a more efficient use of the academy for human well-being need to be exposed to rigorous examination to ensure their application does not lead to the opposite.
Universities offer a free space where people learn to think for themselves without fear of censure or coercion. They are also places where thoughts are tested. The suppression of the distinct freedom of academics undermines the freedom of the whole society, since it weakens the ability to think about reality in a critical manner. Free and autonomous universities are necessary to democracy. But as this paper shows, free universities require protection from external influences and a sincere commitment to accommodating opposing views. To infringe upon these values is to do so at our peril.
|
v3-fos-license
|
2019-04-04T13:03:00.019Z
|
2015-07-14T00:00:00.000
|
94507903
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://article.sciencepublishinggroup.com/pdf/10.11648.j.ejb.s.2015030301.12.pdf",
"pdf_hash": "c2e32b5740f05ce929534db4d81d75c6d8d32e1c",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46155",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"sha1": "1b37bdb5a488585ab7e9ab851228024b22f61d18",
"year": 2015
}
|
pes2o/s2orc
|
Synthesis, Characterization and Radiolabeling of Bortezomib with 99mTc
The development of a new specific diagnostic radiopharmaceutical is the need of the hour for the timely management of cancer patients. At present, available radiopharmaceuticals are not very specific for tumor imaging. The present study was conducted to radiolabel bortezomib with technetium-99m ([99m]Tc). Labelling was performed by both direct and indirect methods, and the developed radiotracer was subjected to quality control tests. The labelling efficiency of [99m]Tc-bortezomib was estimated to be more than 39% with the direct method. On the other hand, the indirect method using the protein albumin as a ligand resulted in a net binding of 41%. The present study resulted in the successful labelling of the target-specific anticancer drug bortezomib by both direct and indirect methods. This newly developed radiotracer has promising avenues for the early detection of the deadly disease of cancer. The radiotracer, however, needs further validation through animal experimentation and clinical studies.
Introduction
Cancer is a class of diseases which involve uncontrolled division of cells. These cells have the ability to invade nearby tissue by the process of invasion or to be disseminated to distant locations by the process of metastasis, which involves transport of cancerous cells to distant sites via the bloodstream/lymphatic system. Extensive research is going on worldwide to find various therapies and early diagnostic measures to cure this pathological state. However, to date, the success rate is not up to a satisfactory level. So, it is desirable to develop a target-specific screening agent which is able to detect the process of carcinogenesis right at the initiation stage. Targeted therapy is the focus of cancer research worldwide (1). It is a type of medication that blocks the growth of cancer cells by interfering with specific targeted molecules needed for carcinogenesis and tumor growth, rather than by simply interfering with rapidly dividing cells (e.g. as with traditional chemotherapy). Targeted cancer therapies are believed to be more effective than current treatments and also less harmful to normal cells.
In the present study, the drug of interest is bortezomib, which belongs to a group of target-specific anticancer drugs and is a highly selective, reversible inhibitor of the 26S proteasome in cancer cells (2). The proteasome is a ubiquitous enzyme complex that plays a critical role in the degradation of many proteins involved in cell cycle regulation, apoptosis and angiogenesis (3). In cancer cells, these pathways are excessively expressed and are elementary for cell survival as well as proliferation, so the inhibition of the proteasome is an attractive potential anticancer therapy (4,5). On the other hand, albumin is the biomolecule of interest in the present study; it is an endogenous nano-particle and is known for its binding properties to various endogenous metabolites, drugs and metal ions. Therefore, the prime aim of the present study is to radiolabel bortezomib with 99mTc so as to develop a new radiopharmaceutical which will act as a promising diagnostic agent to detect the process of carcinogenesis at an early stage. The present study is the first of its kind to label the target-specific anticancer drug bortezomib with 99mTc by both direct and indirect labeling.
Chemicals
Bortezomib was purchased from Cadila Pharmaceuticals Ltd. All other reagents were procured from Merck Chemicals and Loba Chemicals Pvt. Ltd.
Labelling Procedures
Direct labelling approach: This approach involved the use of the radioisotope, the drug of interest and an effective reducing agent. For the direct labelling of bortezomib with 99mTc, stannous chloride was used as the reducing agent. Fresh pertechnetate from a technetium column generator (Isorad, Israel) was used for the labelling procedures. Variable concentrations (20 µg, 50 µg, 100 µg, 200 µg, 400 µg, and 1000 µg) of stannous chloride (SnCl2·2H2O), different pH conditions (2-9), and variable incubation times (5 minutes, 10 minutes, 20 minutes, and 30 minutes) were tested.
[99m]Tc-doxorubicin was prepared by dissolving 2 mg of doxorubicin in 1 mL of distilled water, followed by the addition of a standardized concentration of 100 µg of stannous chloride dihydrate, and the pH was adjusted to 6.0. The contents were filtered through a 0.22 µm membrane filter (Millipore) into a sterile vial. About 40.0 MBq of pertechnetate radioactivity was added to the mixture and incubated for 15 minutes. The resultant radioligand [99m]Tc-doxorubicin was then subjected to various quality control tests.
Indirect labelling: This approach involved the use of an additional ligand to assist in the binding of both the drug of interest and the radioisotope. In the present study, we used albumin as the ligand, as it has the ability to bind both bortezomib and 99mTc.
In Vitro Quality Control Procedures
For quality control, two methods were used, viz. paper chromatography and instant thin layer chromatography (ITLC).
Paper Chromatography: Approximately 1 mL of acetone was placed into one 10-mL glass vial and 1 mL of 0.9% NaCl into an identical vial. A spot of the radiopharmaceutical was loaded at the bottom of a Whatman paper chromatography strip and its position was marked with a pencil line. The paper strip was developed in the acetone solution for free pertechnetate and in the 0.9% NaCl solvent for hydrolyzed Tc, until the solvent front migrated to the top. The strips were cut into sections, and all sections were counted for activity (per unit time) using a gamma counter.
Instant Thin Layer Chromatography: The radiochemical purity of the labeled complex was determined by instant thin layer chromatography (ITLC) using 100% acetone and 0.9% sodium chloride as solvents. Briefly, 20.0 µL of the radiocomplex was spotted onto the ITLC strip at the marked origin point and placed into the solvent chamber at room temperature. The percent labeling of [99m]Tc-bortezomib was calculated at 15 minutes, 1 hour, 4 hours, and 24 hours by the ITLC method. The percentages of free pertechnetate, hydrolyzed pertechnetate, and bound pertechnetate were calculated.
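As a hedged illustration of how the net percent labeling can be derived from the two strip systems (acetone resolving free pertechnetate, saline resolving hydrolyzed activity), the Python sketch below computes the bound fraction from gamma-counter counts; the count values are placeholders and this is not the authors' worksheet.

```python
# Minimal sketch (hypothetical counts): net labeling efficiency from paper/ITLC
# chromatography. In acetone, free pertechnetate migrates to the solvent front;
# in 0.9% NaCl, hydrolyzed/colloidal 99mTc remains at the origin.

def percent(part, total):
    return 100.0 * part / total if total else 0.0

def labeling_efficiency(acetone_front, acetone_origin, saline_front, saline_origin):
    """Return (% free, % hydrolyzed, % bound) from strip-section counts."""
    free_pct = percent(acetone_front, acetone_front + acetone_origin)
    hydrolyzed_pct = percent(saline_origin, saline_front + saline_origin)
    bound_pct = 100.0 - free_pct - hydrolyzed_pct
    return free_pct, hydrolyzed_pct, bound_pct

# Hypothetical counts per minute for the two strip systems:
free_pct, hyd_pct, bound_pct = labeling_efficiency(
    acetone_front=4200, acetone_origin=5800,   # acetone strip: front vs origin
    saline_front=8600, saline_origin=1400)     # saline strip: front vs origin
print(f"free {free_pct:.1f}%, hydrolyzed {hyd_pct:.1f}%, bound {bound_pct:.1f}%")
```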
Results
The results obtained from the various experiments conducted in this study are depicted in the Tables below.
Standardization of Stannous Chloride Concentration
In order to label the drug, we first standardized the concentration of the reducing agent, viz. stannous chloride, by keeping the drug concentration constant at 10 µg. Different concentrations of stannous chloride were incubated with the drug and 2 millicurie (mCi) of the radioisotope 99mTc at room temperature for 10 minutes, and paper/ITLC chromatography was then run. Two different solvents were used in order to evaluate the percentages of the 99mTc impurities, namely free pertechnetate and hydrolyzed 99mTc. Table a gives information about free pertechnetate using acetone as the solvent, and Table b gives information about hydrolyzed 99mTc. Table c gives the net percentage labelling, which is 38%, and the best standardized dose of stannous chloride is 10 µg.
Standardization of Bortezomib Drug Concentration
We then standardized the dose concentration of the bortezomib drug. Table f gives the net percentage labelling, which is 27.6%, and the best concentration observed in this experiment was 40 µg. However, we obtained better binding with 10 µg of drug in the earlier experiment, so the final standardized concentration for the drug is 10 µg.
Standardization of Albumin Concentration (Indirect Labeling)
To further improve the labelling efficiency, we used albumin as a ligand. We similarly standardized the concentration of albumin by keeping the concentrations of the drug as well as stannous chloride constant at 10 µg. The final standardized concentration of albumin is 10 µg (Table g).
Discussion
The present study aimed to develop a radiopharmaceutical able to detect the process of carcinogenesis at an early stage. We have utilized the drug delivery efficacy of nano systems (albumin in indirect labeling), the sensitivity of the radionuclide (99mTc) and the target-specific ability of bortezomib (6-7). The combined utilization, or in other words the fusion, of these promising areas resulted in the successful development of a unique radiopharmaceutical which will allow early detection of the process of carcinogenesis.
The direct labelling approach does not require any additional ligand or any other assistance from a foreign molecule. It involves the use of the radioisotope, the drug of interest and an effective reducing agent. In the present study, stannous chloride was used as the reducing agent for the direct labelling of bortezomib with 99mTc. We tested various concentrations of both stannous chloride and the drug in order to achieve labelling with high efficiency. In nuclear medicine, the process of labeling cells and molecules with technetium-99m almost always requires the use of a reducing agent, since the pertechnetate ion obtained as the generator eluate does not easily bind to other chemical species (8).
Indirect labelling, on the other hand, involved the utilization of an additional ligand to assist in the binding of both the drug of interest and the radioisotope. In the present study we used albumin as the ligand, as it has the ability to bind both bortezomib and 99mTc. With the indirect method, we were able to further enhance the labelling efficiency of bortezomib with 99mTc. Serum albumin is an endogenous nano-particle having a molecular mass of 66 kDa with molecular dimensions of 30 × 30 × 80 Å (3 × 3 × 8 nm) and is known for its binding properties to various endogenous metabolites, drugs and metal ions (9). Serum albumin is an efficient nano-sized drug delivery system which can be exploited for preferential and specific target-oriented drug delivery. The main mechanism of the above albumin transport is transcytosis, a molecular pathway that involves the interaction of albumin with its cell surface albumin receptor known as gp60 (albondin). Some studies have reported that the metabolism, body distribution and efficacy of various drugs are markedly affected after binding with albumin (10)(11). Also, there is a recent report which revealed the binding ability of the bortezomib drug to albumin (3). Binding for both methods was further confirmed by paper as well as instant thin layer chromatographic (ITLC) procedures, as these are the preferred choices for quick and efficient quality control of newly formed radiopharmaceuticals (12).
The present study therefore concludes that the target-specific anticancer drug bortezomib is labelled more efficiently with the indirect method of radiolabelling using albumin as a ligand. This newly developed radiotracer has promising avenues for the early detection of the deadly disease of cancer. Further investigations are needed for clinical validation via experimental as well as clinical studies.
|
v3-fos-license
|
2020-08-20T10:05:56.203Z
|
2020-01-01T00:00:00.000
|
226439229
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-45843-0_22.pdf",
"pdf_hash": "46b880335af44f7cec2c54532841e648662e703a",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46156",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "3e7cd89d4c073fa653c6bc418e09f7942853e58a",
"year": 2020
}
|
pes2o/s2orc
|
Ecosystem-Based Management to Support Conservation and Restoration Efforts in the Danube Basin
Biodiversity and the environmental integrity of river systems in the Danube catchment are threatened by multiple human alterations such as channelization, fragmentation or the disconnection of floodplains. Multiple human activities, including the construction of hydropower plants, expansion of agricultural use, and large-scale river regulation measures related to navigation and flood protection, are resulting in an ongoing loss of habitat, biodiversity and ecosystem service provision. Conservation and restoration of the system's biodiversity and ecosystem service provisioning is a key task for management but is challenging because the diversity of human activities and policy targets, the scarcity of data compared to the complexity of the systems, the heterogeneity of environmental problems and strong differences in socio-economic conditions along the Danube River hamper coordinated planning at the scale of the whole river basin and along the whole river from source to mouth. We evaluated three different implementations of an Ecosystem-Based Management (EBM) approach, which aims to support management efforts. This was done following the principles for EBM related to the resilience of ecosystems, the consideration of ecological and socio-economic concerns, and the inclusion of multi-disciplinary knowledge and data addressing the ecosystem scale independent of administrative or political boundaries. This approach has been developed in the H2020 project AQUACROSS.
Introduction
The core principle of Ecosystem-Based Management (EBM) is to concurrently consider biodiversity and human society as integral parts of the ecosystem and to manage the socio-ecological system as a whole (Langhans et al. 2019). Delacámara et al. (2020) review the many 'flavours' of EBM to identify six characteristics or principles which set EBM apart from other types of management:
1. It considers ecological integrity, biodiversity, resilience and ecosystem services
2. It is carried out at appropriate spatial scales
3. It develops and uses multi-disciplinary knowledge
4. It builds on social-ecological interactions, stakeholder participation and transparency
5. It supports policy coordination
6. It incorporates adaptive management.
While these EBM principles are not prescriptive, i.e. any particular EBM activity is not required to have all these characteristics, they may offer useful criteria by which EBM activities may be practically assessed.
The Danube River Basin (DRB) is the most international river basin in the world, shared by more than 80 million people across 19 countries (Fig. 1). The Danube River connects with 27 large and over 300 small tributaries on its way from the Black Forest to the Black Sea, covering a catchment of approximately 800,000 km2.
As a result, a huge variety of human activities and related pressures affect this area, and a number of major environmental issues threaten the ecosystems of the Danube. As Europe's second longest river, the Danube has long been a major transport corridor. Today, it connects Europe's largest port of Rotterdam with the Black Sea via the Rhine-Main-Danube canal. Physical modifications of the river morphology to accommodate transport and power production have altered flow regimes, with serious consequences for ecosystems including the disconnection of the river from its natural flood plains. Agricultural activities along the Danube have resulted in pollution by nutrients and pesticides. The combined effects of these and other pressures have resulted in the overall degradation of the freshwater ecosystems and severe declines in iconic species such as the different sturgeon species. The International Commission for the Protection of the Danube River (ICPDR) provides a formal international mechanism for environmental management collaboration across the Danube Basin; detailed information on the many environmental issues can be found on their website (https://www.icpdr.org/main/).
Despite conservation efforts, ongoing and partially conflicting demands within and among the different neighboring countries, inconsistencies in legislation, high administrative and socioeconomic complexity, as well as a partial lack of on-site expert knowledge, all hamper sustainable management (Hein et al. 2018; Habersack et al. 2016).
There are two major challenges for the management of the DRB. The multicultural setting makes transboundary issues extremely difficult and challenging. For example, the basin spans the historical political border between capitalist and communist countries, which greatly influenced the socio-economic situations, social behaviors, technical developments, as well as water uses and protections between the two former systems (Sommerwerk et al. 2010), resulting in varying priorities towards, and capacities for, environmental protection (O'Higgins et al. 2014). In the DRB, this historical background is well reflected in the structural differences between the Upper Danube (capitalist countries), where hydro-morphological alteration is high but pollution is low, and the Lower Danube (former communist countries), where pollution is still a highly relevant issue but the level of impact due to river engineering works is still relatively low (Sommerwerk et al. 2010). This phenomenon is also reflected in the ranking of stressors along the Danube River. Hein et al. (2018) found that for the Upper Danube, hydro-morphological alterations due to hydropower generation, navigation, and flood protection have the highest importance, followed by forestry, disturbance due to recreational activities, recreational fisheries and lastly pollution, whereas the Lower Danube is mostly impacted by land use, including forestry, agriculture and urbanization, which has both a direct and a pollution effect on the system, and lastly by hydro-morphological alterations of the river.
Fig. 1 The Danube River Basin and the corridor of the Danube River
The second major challenge in DRB management is to establish synergies among multiple competing interests and policy targets including e.g. navigation, hydropower production, flood protection and nature conservation (Sommerwerk et al. 2010). Human stressors interact with the management goals of the Water Framework Directive (EC 2000) or Nature Directives (EC 1992) and the Biodiversity Strategy to 2020 (EC 2011), resulting in potential synergies and conflicts between the various management goals. The implementation of sectoral policies on hydropower (renewable energy), navigation, and flood protection may show significant synergies and antagonisms, and the interaction of their implementation significantly influences the actual type and extent of pressures on rivers. Table 1 lists some of the interrelated directives, policies and initiatives with specific relevance to the management of the Danube River and its associated ecosystems.
For example, the Flood Risk Directive (EC 2007) aims at reducing the risk of flooding along water courses, including through natural water retention measures (e.g. dyke relocation to provide more space for rivers). Floodplains are therefore a key element of the EU Green Infrastructure Strategy (ICPDR 2016). Likewise, navigation projects might either have a synergistic effect on nature protection goals in already significantly altered river sections (if ecological restoration is supported within the project), or an antagonistic effect in intact river sections where every intervention may create a conflict with nature protection goals (DANUBEPARKS 2011). With a multitude of interacting environmental and other directives, management targets can have synergistic as well as antagonistic effects, which vary from place to place. Moreover, these interactions are complex and not sufficiently understood.
Table 1 (excerpt) Interrelated directives, policies and initiatives with specific relevance to the management of the Danube River:
- Renewable Energy Directive: a total of 20% of EU energy needs to be supplied by renewable sources (including hydro power).
- Trans-European Transport Network initiative: good navigability for important waterways, including the removal of obstacles.

In this context, modern management concepts can neither exclusively focus on the mitigation of single pressures or stressors nor can they limit their measures to single ecosystem components, species groups or other single targets. In contrast, they have to consider complex interactions and feedback loops between the ecosystems and the society. Thus, for the future, explicit and well-defined ecosystem-based targets need to be formulated, and adequate measures need to be defined to achieve more resilient ecosystems, guarantee the provision of a broad range of ecosystem services, and increase the resilience against emerging stressors like climate change or invasive species (Hein et al. 2018). Given the need for holistic catchment scale management approaches (Hein et al. 2018; Seliger et al. 2016), EBM offers the potential to incorporate multiple objectives related to biodiversity, ecosystem services and socio-economic benefits into a single, harmonized management approach for the DRB. The Danube River, as one of the largest river-floodplain systems in Europe, is a highly complex, threatened and challenging socio-ecological system, and therefore an ideal system to test and apply an EBM approach. To this end, within the frame of the AQUACROSS research project a number of tools and techniques were combined and tested for application in the Danube catchment. In this paper we describe and discuss three different approaches and provide a qualitative assessment of how these methods relate to the EBM principles identified above.
The Studies
Other authors in this volume (Lewis et al. 2020) have addressed the challenges of model design and selection and the potential for combining models to address particular situations. We evaluate three different quantitative and qualitative approaches that have been applied at the Danube catchment scale to describe and model the socio-ecological system. A linkage framework approach (Teixeira et al. 2019) was used to assess the relationships between different activities within the catchment and their relations to biodiversity and ecosystem services. The potential of EBM was also tested within two quantitative studies following an EBM planning framework based on a generic model-coupling approach proposed by Langhans et al. (2019). The workflow consists of three elements: a spatial (model-based) representation of (1) biodiversity and (2) ecosystem services (ESS), and (3) a combined spatial prioritization of biodiversity and ESS supply and demand. Finally, Domisch et al. (2019) combined the ARIES (Artificial Intelligence for Ecosystem Services) modelling framework (Villa et al. 2014) with the application of MARXAN (Ball et al. 2009) to identify a range of spatially explicit management zones and options. Funk et al. (2019) combined Bayesian Belief Network modelling with the ARIES model to identify river reaches that maintain multiple ecological functions and support multiple services, in order to prioritize individual areas for conservation incorporating multiple restoration criteria.
Linkage Frameworks
A Linkage Framework (LF) for the Danube Basin (Fig. 2) identified 53 specific human activities (or Drivers) occurring in the catchment. Furthermore, 35 different pressures in five different categories (biological, chemical, physical, energy, and exogenous/unmanaged) were identified, as well as 33 ecosystem components (27 habitats and 6 biotic groups). These components were linked to 27 ecosystem services (ESS) and abiotic outputs. Over 23,000 impact chains relating drivers, pressures and ecosystem components were identified and categorized. To investigate the impact chains, their connectance was calculated, and linkages were also weighted in terms of the extent, frequency, dispersal, severity and persistence of interactions to increase their explanatory power. Analysis of the impact risk of pressures on ecosystem components revealed that physical change poses the highest threat to freshwater systems and to fish. Physical pressures are highly linked to environmental engineering and hydropower but also to the direct effects of land claim or land conversion activities. Further along the impact chain, the ecosystem components within the Danube catchment were identified to have the capacity to supply 27 ESS (regulating and maintenance, provisioning, cultural and abiotic services). Floodplains with their riparian forests and wetlands were the most highly connected realms, providing the greatest variety of ecosystem services.

Fig. 2 Flow diagram of the linkage framework depicting impact chains from habitat type to ecosystem services
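To make the weighting and connectance calculations described above concrete, the sketch below scores a handful of invented impact chains in Python; the criterion names follow the text, but the chains, the 1-3 scoring scale and the aggregation rule are illustrative assumptions rather than the actual AQUACROSS linkage-framework implementation.

```python
from statistics import mean

# Illustrative impact chains (driver, pressure, ecosystem component); the real
# linkage framework holds >23,000 such chains across 53 drivers, 35 pressures
# and 33 components.
chains = [
    ("hydropower", "physical: flow alteration", "rivers"),
    ("navigation", "physical: channel modification", "rivers"),
    ("agriculture", "chemical: nutrient input", "floodplain wetlands"),
    ("urbanisation", "chemical: pollutant input", "rivers"),
]

# Invented expert weights per chain on an assumed 1-3 ordinal scale for the
# criteria named in the text: extent, frequency, dispersal, severity, persistence.
weights = {
    chains[0]: dict(extent=3, frequency=3, dispersal=2, severity=3, persistence=3),
    chains[1]: dict(extent=2, frequency=3, dispersal=2, severity=2, persistence=3),
    chains[2]: dict(extent=3, frequency=2, dispersal=3, severity=2, persistence=2),
    chains[3]: dict(extent=1, frequency=2, dispersal=2, severity=2, persistence=2),
}

def impact_risk(w):
    """One simple aggregation choice: the mean of the criterion scores."""
    return mean(w.values())

# Rank pressures by the summed risk of the chains they participate in.
pressure_risk = {}
for (driver, pressure, component), w in weights.items():
    pressure_risk[pressure] = pressure_risk.get(pressure, 0.0) + impact_risk(w)

for pressure, risk in sorted(pressure_risk.items(), key=lambda kv: -kv[1]):
    print(f"{pressure:35s} cumulative risk = {risk:.2f}")

# Connectance: realized links divided by all possible driver-component pairs.
drivers = {c[0] for c in chains}
components = {c[2] for c in chains}
print("connectance =", round(len(chains) / (len(drivers) * len(components)), 2))
```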
Coupled Models: ARIES and MARXAN
Domisch et al. (2019) tested the EBM approach within the whole DRB by combining species distribution modelling for 85 fish species, as a surrogate for biodiversity, with four estimated ESS layers (carbon storage, flood regulation, recreation and water use) using the modelling platform ARIES. In a final step, multiple management zones were defined using the spatial prioritization tool Marxan with Zones to derive different spatially explicit management options for the whole region. In order to explore the transboundary challenges of Danube catchment management, the costs of establishing management zones were compared across nations using purchasing power parity (PPP) adjusted gross domestic product (GDP) per capita and the relative share of each country's area of the DRB. This approach therefore accounted for countries having limited financial resources (i.e. a proxy for social equity in the EBM approach) and less land area in the DRB, as those might face additional challenges in financing EBM. Finally, they compared the spatial plan derived from an assumption where each country contributes equally to the EBM with one where the PPP-adjusted GDP and the percent area of each country in the basin were used as additional costs. The two analyses led to clear differences in the spatial configuration of management zones: in the GDP and percent area approach, more conservation and critical management zones (with a medium level of ecosystem service use) were allocated to the (wealthier) Upper Danube region. Domisch et al. (2019) used Marxan with Zones to minimize the overall costs of a zoning plan while ensuring that the predefined feature targets were met. Four zones were therefore characterized by different objectives and constraints: (1) a "focal conservation zone", (2) a "critical management zone" serving as a buffer zone, (3) a "catchment management zone" allowing for higher levels of ESS use potentially less compatible with protecting biodiversity (i.e., recreation), and (4) a "production zone" with high use of ecosystem services (i.e., water use).
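The cost treatment described above can be illustrated with a small sketch. The country figures and the scaling rule below are placeholders, not the data or the exact cost formulation used by Domisch et al. (2019); the sketch only shows how PPP-adjusted GDP and basin-area share could be turned into planning-unit costs for a Marxan-style zoning run.

```python
# Hypothetical per-country attributes (placeholder values, not real data):
# PPP-adjusted GDP per capita and share of the DRB area.
countries = {
    "A": {"gdp_ppp_pc": 55_000, "basin_share": 0.10},
    "B": {"gdp_ppp_pc": 30_000, "basin_share": 0.30},
    "C": {"gdp_ppp_pc": 15_000, "basin_share": 0.35},
    "D": {"gdp_ppp_pc": 12_000, "basin_share": 0.25},
}

def zone_cost(base_cost: float, country: str, equal_burden: bool) -> float:
    """Cost of assigning a planning unit to a management zone.

    equal_burden=True  -> every country contributes equally (cost = base_cost).
    equal_burden=False -> cost is inflated where GDP is low and the basin share
                          is small, mimicking the additional financing challenge
                          described in the text (one possible scaling choice).
    """
    if equal_burden:
        return base_cost
    c = countries[country]
    max_gdp = max(v["gdp_ppp_pc"] for v in countries.values())
    gdp_factor = max_gdp / c["gdp_ppp_pc"]   # poorer country -> higher factor
    area_factor = 1.0 / c["basin_share"]     # smaller share  -> higher factor
    return base_cost * gdp_factor * area_factor

for name in countries:
    print(name,
          round(zone_cost(1.0, name, equal_burden=True), 2),
          round(zone_cost(1.0, name, equal_burden=False), 2))
```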
Coupled Models: Bayesian Belief Networks and ARIES

Funk et al. (2019) employed a coupled modelling approach at the scale of the Danube River to prioritize river-floodplain stretches of the navigable Danube for restoration and conservation, focusing on the river and its adjacent floodplains and riparian area (rather than the entire catchment). Bayesian Belief Networks (BBN: graphical models which represent the probabilistic relationships between different components of a system) were used to integrate different sources of information on Drivers and Pressures and their effects on environmental State (Elliott & O'Higgins 2020). Open access GIS datasets for Drivers and Pressures included land use data, the potential riparian zone, transport and navigation, and hydro-morphological pressures. This information was then used to inform the weighting of the relationships within the BBNs.
Based on spatial information on conservation status from Habitats Directive reporting, BBNs were generated to spatially model likely species distribution in relation to the combinations of drivers and pressures for each of eleven indicator species representative of different habitat types (Table 2). The predictive power of these BBN models was tested statistically using the R statistical computing package (see Funk et al. 2019 for full details). Spatial mapping of ESS was conducted using the ARIES pollination, recreation and flood submodels.
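A minimal illustration of the probabilistic reasoning behind such BBNs is given below; the node states, the conditional probability table and the independence assumption are invented for illustration, whereas in the study the relationships were informed by the pressure datasets and Habitats Directive reporting.

```python
# Conditional probability table: P(species present | hydromorphology class,
# riparian land use).  States and probabilities are invented placeholders.
cpt = {
    ("near-natural", "semi-natural"): 0.85,
    ("near-natural", "agricultural"): 0.55,
    ("modified",     "semi-natural"): 0.40,
    ("modified",     "agricultural"): 0.15,
}

def presence_probability(p_hymo: dict, p_landuse: dict) -> float:
    """Marginalize the CPT over the (assumed independent) parent distributions
    describing one river reach."""
    return sum(
        p_hymo[h] * p_landuse[l] * cpt[(h, l)]
        for h in p_hymo for l in p_landuse
    )

# Example reach: mostly modified channel, mixed riparian land use.
p = presence_probability(
    p_hymo={"near-natural": 0.3, "modified": 0.7},
    p_landuse={"semi-natural": 0.4, "agricultural": 0.6},
)
print(f"P(indicator species present) = {p:.2f}")
```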
A spatial database combining the ARIES outputs with the outputs of the probabilistic species modelling was interrogated using clustering to identify multifunctional river and flood plain reaches supporting biodiversity and ESS supply. These multi-functional clusters were then mapped.
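The clustering step can be sketched as follows; the feature columns and the rule for flagging the "multifunctional" cluster are assumptions made for illustration, not the exact procedure of Funk et al. (2019).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for the spatial database: one row per river/floodplain
# reach, columns = modelled probability of indicator species presence plus
# ARIES ESS indicators (column meanings are assumptions for illustration).
n_reaches = 200
X = np.column_stack([
    rng.beta(2, 2, n_reaches),   # mean species presence probability
    rng.beta(2, 5, n_reaches),   # pollination indicator
    rng.beta(2, 5, n_reaches),   # recreation indicator
    rng.beta(5, 2, n_reaches),   # flood regulation indicator
])

# Cluster reaches into groups with similar biodiversity/ESS profiles and flag
# the cluster with the highest mean score as "multifunctional".
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
cluster_means = np.array([X[labels == k].mean() for k in range(4)])
multifunctional = int(cluster_means.argmax())
print("multifunctional cluster:", multifunctional,
      "| reaches in it:", int((labels == multifunctional).sum()))
```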
The model used a multi-objective optimization tool (e.g. Sacchelli et al. 2013), which enabled systematic optimization for different management objectives. One objective was to prioritize sections for conservation or restoration with a high remaining multi-functionality, to reduce effort and costs; a second objective was to prefer sites with high reversibility (i.e. a low level of human use), to increase the probability of success; and a third was to prefer semi-natural areas, to reduce costs and the loss of agricultural yield. Different weightings of the three objectives represent different possible management plans and therefore can be used as a basis for a more integrated and targeted planning. This process resulted in the development of a suite of potential target areas for restoration, conservation or mitigation efforts.
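The study used a dedicated multi-objective optimization tool; as a simplified stand-in, the sketch below scores reaches with a weighted sum of the three criteria named above, with the weights representing alternative management plans.

```python
import numpy as np

# Per-reach criterion scores in [0, 1] (synthetic placeholders): remaining
# multi-functionality, reversibility (low human use) and share of
# semi-natural area, mirroring the three objectives described above.
rng = np.random.default_rng(1)
scores = rng.random((10, 3))          # 10 reaches x 3 criteria

def prioritize(scores: np.ndarray, weights: tuple) -> np.ndarray:
    """Weighted-sum scalarization; returns reach indices, best first."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.argsort(scores @ w)[::-1]

# Different weightings stand in for different management plans.
print("multi-functionality emphasis:", prioritize(scores, (0.6, 0.2, 0.2))[:3])
print("reversibility emphasis:      ", prioritize(scores, (0.2, 0.6, 0.2))[:3])
print("semi-natural-area emphasis:  ", prioritize(scores, (0.2, 0.2, 0.6))[:3])
```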
Consistent with other studies (Egoh et al. 2011; Maes et al. 2012), Funk et al. (2019) recorded a high overlap between areas important for biodiversity and areas important for ESS supply, pointing to a close interrelationship between biodiversity and ESS that is often greater in natural systems (Chan et al. 2011; Schneiders et al. 2012). Specifically, the multi-functionality approach tested by Funk et al. (2019) showed that in the study area only natural and near-natural river-floodplain systems provided habitat for various aquatic species as well as multiple ESS.
In the study, sites with greater probability of restoration success, indicated by low level of driver intensity related to navigation, hydropower and flood protection constraints as well as sites with high level of remaining semi-natural area (compared to agricultural area) were prioritized. In this way the study addressed potential opportunity costs of restoration efforts across the entire Danube River. This approach afforded the ability to provide better cost-effectiveness in achieving large scale conservation and ESS targets at the catchment scale (Bladt et al. 2009;Egoh et al. 2014), and to potentially avoid conflicts with drivers.
EBM Principles
Overall, the application of the LF to the Danube Basin illustrated the complexity of interactions between human activities, ecosystem components and the ESS they provide, and is useful in identifying the most important ecosystem components with respect to ESS supply as well as the types of activities that most likely affect these components through pressures. With respect to the EBM principles, the LF can support the first principle in terms of communicating the links between ecological integrity, biodiversity (expressed at the habitat level) and ESS. The LF is not spatially explicit and can be transferred and adapted for use in any similar system and applied to any spatial scale of interest, thereby supporting the second principle (appropriate spatial scales) of EBM. The LFs are developed by 'experts' on a given location, who assess the activities and pressures based on their knowledge. While LFs require a holistic view of a system, they do not necessarily integrate insights from a range of disciplines (principle 3); rather, they characterize a suite of social-ecological interactions (principle 4). In its capacity to foster an understanding of the complexity of these links and to promote understanding of policy synergies, the LF may also be used to facilitate and support policy coordination (principle 5). However, because the LF is a semi-quantitative and expert-judgement-based approach, it is unlikely to carry sufficient confidence to justify any particular policy decision. Since the LF does not identify particular management options, its current role in adaptive management (principle 6) is limited. Nevertheless, with its basis in the causal chain analysis of the DPSIR (see Elliott, this volume), the linkages could potentially be extended to incorporate response options. For fully detailed accounts of the development and analysis of the LF and comparison across regions and aquatic ecosystem types, the reader is directed to Borgwardt et al. (2019); for a general description and discussion of the approach see Robinson and Culhane (2020).
The two integrated modelling studies (Domisch et al. 2019; Funk et al. 2019) exemplify how different holistic approaches can be used to identify management options which consider ecological integrity, biodiversity, resilience and ESS (principle 1). Both implementations of the quantitative model-coupling framework for EBM (Langhans et al. 2019) confirm how biodiversity and ESS estimates can be jointly simulated within the DRB given the availability of requisite data and models. This demonstrates that the method is very flexible, that the criteria and models used are broadly applicable, and that the approach is transferable to other aquatic systems (Funk et al. 2019). Both approaches were spatially explicit and developed specifically to work at the appropriate spatial scales (principle 2). In the first study (Domisch et al. 2019) this included the entire catchment, while the second study (Funk et al. 2019) had a more restricted focus specifically on rivers and the flood plain; nevertheless, both studies worked across international borders, which is a prerequisite for the work in the Danube.
Both models used a range of data sources; in particular, Domisch et al. (2019) used truly multi-disciplinary economic and environmental data (principle 3) to account for economic disparity within the social part of the social-ecological system. This approach accounts for countries having limited financial resources (i.e. a proxy for social equity in the EBM approach) and land area in the Danube River Basin, as those might face additional challenges in financing EBM in the basin.
In contrast, Funk et al. (2019) selected a method indirectly accounting for costs independent of country-level financial limitations, prioritizing sites with a greater probability of restoration success at lower cost (i.e. indicated as a lower loss of agricultural area). The multi-functionality approach therefore accounts for the emerging view that ecological restoration requires restoring ecosystems for the sustainable and simultaneous provision of multiple goods and services, such as water, flood protection, recreation and biodiversity, among others, to increase cost-effectiveness (Paschke et al. 2019).
One potential pitfall with both approaches concerns stakeholder participation and transparency (principle 4). Neither study directly used stakeholder input to inform the model-building process; rather, the choices were made at the technical level by the modelling teams. To make the approach operational, participatory processes involving stakeholders across the catchment, member state and local levels would be a further important step. BBNs in particular are one promising technique which can be easily adapted to incorporate stakeholder input. It is possible to construct BBN models based on stakeholder perceptions, allowing co-design of modelling activities (see O'Higgins et al. 2020 for an example). In addition, the use of the AI approach included in the ARIES model may lack the transparency of more traditional deterministic environmental models, which may reduce the acceptability of model results. Elsewhere in this volume, Fulford et al. (2020) discuss practical trade-offs inherent in model complexity.
Both the policy coordination potential (principle 5) and the adaptive management aspects (principle 6) are strong in both studies described above. Outputs from both models produced a suite of policy-relevant options enabling joint efforts to conserve the Danube. Funk et al. 2019 accounted for this principle by using data and knowledge derived and used in the framework of different policies, directives and initiatives e.g. navigation and hydropower sector (e.g. TEN-T regulation), water management sector (Water Framework Directive), local data from protected areas (Birds and Habitats Directive) and spatial land use information. This includes a continuous hydro-morphological assessment for the navigable Danube River compliant with CEN standards (Schwarz 2014;ICPDR 2015), Land cover/Land use (developed to support e.g. EU Biodiversity Strategy to 2020) or sectoral data collected on the status of the waterway, critical locations for navigation and navigation class (Fairway 2016). Cause-effect relations within the network of interactions between driver, pressure and state variables along the Driver-Pressure-State chain were then analysed within a quantitative Bayesian Network approach. Therefore, the approach selected by Funk et al. 2019 provides the first large scale statistical proof of multiple relationships of biodiversity and human uses and pressures along the navigable stretch of the Danube River. Therefore, it has the potential to increase knowledge on the socio-ecological system across sectors and policies and is serving as a basis for a strategic and more integrated management approach.
The Domisch et al. (2019) study explicitly included consideration of regional inequalities and economic capacity and generated a more in-depth picture of the feasibility of particular conservation efforts, thus enabling the adaptation of plans to meet these real-world social constraints.
Conclusions
We developed and tested different qualitative and quantitative implementations of an EBM approach for a complex socio-ecological system, the DRB. The LF approach helped to understand the complex interactions within the social-ecological system and to describe the main human activities and pressures affecting the aquatic ecosystem components. The modelling approaches summarized in this paper have increased the consideration of ecological integrity and biodiversity, accounting for multiple species and different relevant ESS. These studies illustrate approaches considering cumulative impacts by multiple human activities, including land use, navigation and hydropower, and integrate this multidisciplinary data and knowledge. The prioritization approaches taken foster integrated management planning across multiple policies by creating the opportunity to pursue different policy objectives simultaneously.
All three selected EBM applications for the DRB were implemented at the ecosystem scale, i.e. including the whole catchment or river independent of jurisdictional, administrative or political boundaries (Domisch et al. 2019; Funk et al. 2019), and therefore have the potential to foster transboundary cooperation for an EBM of the DRB.
Both implementations of the quantitative model coupling framework for EBM (Langhans et al. 2019), showed how biodiversity and ESS estimates can be jointly simulated within the Danube River Basin given the availability of requisite data and models. This demonstrates that the method is flexible, the criteria and models used are broadly applicable, and the approach is transferable to other aquatic systems (Funk et al. 2019). The EBM principles used for qualitative assessment of the modelling approaches may serve as a useful generic basis for the design of further EBM studies.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
|
v3-fos-license
|
2018-04-03T03:45:03.789Z
|
2014-03-10T00:00:00.000
|
21915683
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1364/oe.22.00a268",
"pdf_hash": "e4cf24111fd17a680057417c3782a45e28302d75",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46157",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"sha1": "e4cf24111fd17a680057417c3782a45e28302d75",
"year": 2014
}
|
pes2o/s2orc
|
Spatio-temporal dynamics behind the shock front from compacted metal nanopowders
Laser ablated shock waves from compacted metal nanoenergetic powders of Aluminum (Al), Nickel coated Aluminum (Ni-Al) was characterized using shadowgraphy technique and compared with that from Boron Potassium Nitrate (BKN), Ammonium Perchlorate (AP) and Potassium Bromide (KBr) powders. Ablation is created by focused second harmonic (532 nm, 7 ns) of Nd:YAG laser. Time resolved shadowgraphs of propagating shock front and contact front revealed dynamics and the precise time of energy release of materials under extreme ablative pressures. Among the different compacted materials studied, Al nanopowders have maximum shock velocity and pressure behind the shock front compared to others. ©2014 Optical Society of America OCIS codes: (100.0118) Imaging ultrafast phenomena; (160.0160) Materials; (110.2970) Image detection systems. References and links 1. A. Ulas, G. A. Risha, and K. K. Kuo, “An investigation of the performance of a Boron/Potassium nitrate based pyrotechnic igniter,” Propellants Explosives Pyrotech. 31(4), 311–317 (2006). 2. Y. S. Kwon, A. A. Gromov, and J. I. Strokova, “Passivation of the surface of Aluminum nanopowders by protective coatings of the different chemical origin,” Appl. Surf. Sci. 253(12), 5558–5564 (2007). 3. M. A. Zamkov, R. W. Conner, and D. D. Dlott, “Ultrafast chemistry of nanoenergetic materials studied by timeresolved infrared spectroscopy: Aluminum nanoparticles in teflon,” J. Phys. Chem. C 111(28), 10278–10284 (2007). 4. D. E. Eakins and N. N. Thadhani, “The shock-densifiction behavior of three distinct Ni+Al powder mixtures,” Appl. Phys. Lett. 92(11), 111903 (2008). 5. S. Roy, N. Jiang, H. U. Stauffer, J. B. Schmidt, W. D. Kulatilaka, T. R. Meyer, C. E. Bunker, and J. R. Gord, “Spatially and temporally resolved temperature and shock-speed measurements behind a laser-induced blast wave of energetic nanoparticles,” J. Appl. Phys. 113(18), 184310 (2013). 6. N. K. Bourne, “Akrology: materials: physics in extremes,” AIP Conf. Proc. 1426, 1331–1334 (2012). 7. N. K. Bourne, J. C. F. Millett, and G. T. Gray III, “On the shock compression of polycrystalline metals,” J. Mater. Sci. 44(13), 3319–3343 (2009). 8. A. N. Ali, S. F. Son, B. W. Asay, and R. K. Sander, “Importance of the gas phase role to the prediction of energetic material behavior: an experimental study,” J. Appl. Phys. 97(6), 063505 (2005). 9. R. E. Russo, X. Mao, H. Liu, J. Gonzalez, and S. S. Mao, “Laser ablation in analytical chemistry-a review,” Talanta 57(3), 425–451 (2002). 10. D. Yarmolich, V. Vekselman, and Y. E. Krasik, “A concept of ferroelectric microparticle propulsion thruster,” Appl. Phys. Lett. 92(8), 081504 (2008). 11. J. E. Sinko and C. R. Phipps, “Modeling CO2 laser ablation impulse of polymers in vapor and plasma regimes,” Appl. Phys. Lett. 95(13), 131105 (2009). 12. C. Phipps, M. Birkan, W. Bohn, H. A. Eckel, H. Horisawa, T. Lippert, M. Michaelis, Y. Rezunkov, A. Sasoh, W. Schall, S. Scharring, and J. Sinko, “Review: laser-ablation propulsion,” J. Propul. Power 26(4), 609–637 (2010). 13. S. L. Vummidi, Y. Aly, M. Schoenitz, and E. L. Dreizin, “Characerization of fine Nickel-coated Aluminum powder as potential fuel additive,” J. Propul. Power 26(3), 454–460 (2010). 14. S. Siano, G. Pacini, R. Pini, and R. Salimbeni, “Reliability of refractive fringe diagnostics to control plasmamediated laser ablation,” Opt. Commun. 154(5–6), 319–324 (1998). 15. Ch. Leela, S. Bagchi, V. R. Kumar, S. P. Tewari, and P. P. 
Kiran, "Dynamics of laser induced micro-shock waves and hot core plasma in quiescent air," Laser Particle Beams 31(02), 263–272 (2013). 16. L. I. Sedov, Similarity and Dimensional Methods in Mechanics (CRC, 1993). 17. S. H. Jeong, R. Greif, and R. E. Russo, "Propagation of the shock wave generated from Excimer laser heating of Aluminum targets in comparison with ideal blast wave theory," Appl. Surf. Sci. 127–129, 1029–1034 (1998). 18. Ya. B. Zel′dovich and Yu. P. Raizer, Physics of Shockwaves and High-Temperature Hydrodynamic Phenomena (Dover, 2002). 19. P. Verma and R. V. Singh, HEMRL (Personal communication, 2012). 20. N. Zhang, X. N. Zhu, J. J. Yang, X. L. Wang, and M. W. Wang, "Time-resolved shadowgraphs of material ejection in intense femtosecond laser ablation of Aluminum," Phys. Rev. Lett. 99(16), 167602 (2007). 21. H. L. Brode, "Numerical solutions of spherical blast waves," J. Appl. Phys. 26(6), 766–775 (1955). 22. R. A. Freeman, "Variable-energy blast waves," J. Phys. D Appl. Phys. 1(12), 1697–1710 (1968). 23. X. Chen, B. M. Bian, Z. H. Shen, J. Lu, and X. W. Ni, "Equations of laser-induced plasma shock wave motion in air," Microw. Opt. Technol. Lett. 38(1), 75–79 (2003). 24. B. Wang, K. Komurasaki, T. Yamaguchi, K. Shimamura, and Y. Arakawa, "Energy conversion on a glass-laser-induced blast wave in air," J. Appl. Phys. 108(12), 124911 (2010). 25. C. Porneala and D. A. Willis, "Time-resolved dynamics of nanosecond laser-induced phase explosion," J. Phys. D Appl. Phys. 42(15), 155503 (2009). 26. D. Batani, H. Stabile, A. Ravasio, G. Lucchini, F. Strati, T. Desai, J. Ullschmied, E. Krousky, J. Skala, L. Juha, B. Kralikova, M. Pfeifer, Ch. Kadlec, T. Mocek, A. Präg, H. Nishimura, and Y. Ochi, "Ablation pressure scaling at short laser wavelength," Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 68(6), 067403 (2003). 27. S. Bagchi, P. P. Kiran, K. Yang, A. M. Rao, M. K. Bhuyan, M. Krishnamurthy, and G. R. Kumar, "Bright, low debris, ultrashort hard X-ray table top source using carbon nanotubes," Phys. Plasmas 18(1), 014502 (2011). 28. S. Bagchi, P. P. Kiran, M. K. Bhuyan, S. Bose, P. Ayyub, M. Krishnamurthy, and G. R. Kumar, "Hot ion generation from nanostructured surfaces under intense femtosecond irradiation," Appl. Phys. Lett. 90(14),
Introduction
Novel energetic materials have a variety of applications in propellants, igniters, initiators in airbag gas generators, pyrotechnics, etc., to name a few. For example, Aluminum (Al) is used in rocket propellant formulations and also in combustion mechanisms, as it acts as an effective catalyst. Boron Potassium Nitrate (BKN) based pyrotechnic igniters are also used as initiators in airbag gas generators or propulsion. Due to the special properties of compacted powders, they are used in propellants, pyrotechnics, optical, biomedical and environmental engineering [1]. However, the major challenge is to reduce the hazardous emissions while the energetic materials undergo reactions converting the internal energy into kinetic energy for applications. Though a variety of mechanisms have been proposed to channel the energy released, precise control of energy release from energetic materials has been a challenging task. Nanoenergetic materials have started replacing the best known molecular explosives owing to the control of energy release obtained by modifying the shape and size of the nanostructures [2][3][4][5]. Energetic materials create temperatures and pressures of the order of 3000-10000 K and >100 MPa, respectively, generally generated under controlled laboratory conditions using diamond anvil cells, shock tubes and gas guns [6,7], and studies rely heavily on modeling and simulation to obtain the overall reaction mechanism of the energy release. The development of high power table top laser systems associated with fast imaging techniques has given a unique opportunity to study the kinetics of the material under extreme conditions, such as phase transitions during the energy release process of energetic materials with/without allowing propagation of chemical reaction, fragmentation, and temperature during the reactions [3,5,8,9]. In addition, this approach allows us to control the energy extraction during the reactions. When the laser intensity is larger than the breakdown threshold of a specific material, a small portion of the material melts, evaporates and forms a material plume. This plume expands and drives the background gas (in our case air) to a supersonic velocity, forming a shock wave (SW). Understanding the evolution of the SW has many applications, out of which laser ablation propulsion (LAP) is a new electric propulsion concept that gives precise control over environmentally hazardous emissions compared to chemical propulsion schemes and is being used in micro thruster applications [10][11][12][13]. A variety of materials such as metals (Al, Cu, etc.), polymers (PVC, triazene, polyvinyl alcohol, nitrocellulose, PVN, CH, CN, Nylon, etc.), polymeric CHO propellants, and liquid layers on metal sheets excited with laser light of varying wavelength from the UV to long IR regions of the spectrum with varied pulse durations and modes have been used for LAP [10][11][12][13]. In this paper, we present the evolution of laser ablative shockwaves (SWs) from compacted nanopowders of Al and Nickel coated Al to understand the challenging aspects of laser-nanopowder interactions and to explore their application potential for LAP [10,11]. To ensure the suitability of the compacted nanopowders for specific applications, the results were compared with micron sized powders of Boron Potassium Nitrate (BKN), Ammonium Perchlorate (AP) and Potassium Bromide (KBr) compacted and studied under the same experimental conditions.
Experimental details
The evolution of laser ablated SWs from the compacted samples into surrounding air is studied using defocused shadowgraphy (SHW) imaging technique at different time delays after the shock inducing laser pulse.Defocused shadowgraphy gives not only the information about the expansion of plasma, propagation of shock front and ionization of vaporized material but also gives the information about the evolution of the dynamics of ejected mass from the ablated material getting converted into plasma that launches a shockwave in to the ambient medium [14,15].The experimental schematic used in our study is depicted in Fig. 1(a).Second harmonic of Nd:YAG laser (INNOLAS Spitlight-1200) (532 nm, 7 ns, 10 Hz) is focused using a plano-convex lens of 80 mm focal length in f/#10 geometry.The beam diameter at the focal plane is measured to be 140 ± 10 µm.The input laser energy is kept at 75 mJ per pulse leading to an intensity of 7 × 10 10 W/cm 2 on the surface of the sample.He-Ne laser (632.8 nm, CW, 25 mW, Thorlabs) was used as probe beam to capture the evolution of shock front (SF) and contact front (CF) in to the ambient atmosphere.The probe beam expanded to 15 mm captures the laser ablated SF and CF.As the probe beam passes through the plume, it gets refracted by a region with high density gradients causing dark and bright areas in the shadowgraphs.The variations created by laser ablated materials in ambient air were captured by probe beam using an ICCD camera (ANDOR DH-734 with a minimum gate width or temporal resolution of 1.5 ns and spatial resolution of 13µm over 1024 × 1024 pixel array) was synchronized with the laser by triggering delay generator (SRS DG535) with Pockel's cell (PC) sync pulse from Nd: YAG laser.This allowed us to overcome the inherent insertion delay of the ICCD camera.The delay between laser pulse and ICCD gate width was adjusted by using delay generator.The beginning of the laser pulse is taken as t = 0.The output from delay generator was used to trigger ICCD camera to ensure capturing evolution of plasma created by every laser pulse and starts acquiring images, allowing shadowgraphs to be taken at any desired time delay.PC pulse (C1), gate width of the ICCD (C2), laser pulse (C3) and the delay of the ICCD gate width from t = 0, were monitored using an oscilloscope (YOKOGAWA DL9240L, 1.5 GHz, 10 GS/s) (Fig. 
1(b)).A band-pass filter transparent only to probe beam is placed in front of the ICCD camera to eliminate background illumination due to ns laser pulses.The images were captured at various time delays with an initial time delay of 400 ns.Time-resolved shadowgraphs are used to understand the SW evolution revealing position of SF [16][17][18] and expansion of hot gas (CF) respectively.The Al nanoparticles of 70-110 nm dimension were procured form Advanced Powder Technologies LLC, ALEX TM .Nickel coating was done by an in house developed method [19] ensuring that a coating of 12 ± 3 nm was made on the nanoparticles.For the specific application of LAP and to compare with the regular energetic materials, the nanopowders were compacted under a load of 6 Tons to form pellets with dimensions of 1 inch in diameter and 1 mm thickness.Upon compaction the nanoparticles were observed to agglomerate giving a particle size in the range of 2-20 µm as shown in in Figs.1(c) and 1(d) for Al and Ni-Al nanopowders, respectively, with uniform distribution of the particles along the material layer.The ablative shock waves from the compacted nano energetic powders were compared with that of BKN, AP and spectroscopic grade KBr powders compacted under the same conditions.The pellets were mounted on an electronically controlled XY translation stages (M-443, LTA-HA controlled by ESP-300, M/s.Newport) to ensure that fresh surface of the sample interacts with the laser pulse.
Results and discussion
The evolution of laser ablative shockwaves from the compacted powder pellets is compared.Figures 2(a Each image has a spatial extent of 15.4 mm × 15.4 mm with a spatial resolution of ~15 µm (calibrated by imaging the output from a single mode optical fiber and comparing with a high resolution beam profiler with SP620U, Ophir Spiricon).The laser propagation direction (z) is from left to right.At each time delay, ten images were captured and averaged to obtain shock propagation distance (R SW ).Each of the images captured during the SW process was imported in to MATLAB® software and analyzed to extract the position of SF.After the calibration of the captured image was performed, the radius of the shock front (R SW ) was measured for different pellets (Fig. 3(f)).The laser ablated material is observed to evolve with varying density gradients (as shown by a series of bright and dark fringes in Figs.2(a)-2(f)) till 3-6 µs.From the images, SW (outer discontinuous dark layer due to the changes in the refractive index caused mainly by the high density gradients) and the Contact Front (CF) (a white thick layer) which separates the ambient gas from material vapor generated by ablation can be seen clearly.Around 4 µs, a dark band representing compressed air detaches from the evolving ablated plasma and is launched in to quiescent air as SF.Two sets of fringes were located internally and externally with respect to the SW were observed.The interference between undisturbed probe rays passing out of the SW and those deflected by the shock rear produces the internal fine fringes, while the external ones are due to the interference between the slightly perturbed rays and those deflected by the shock leading front [14].The emergence of internal fringes were observed until a time delay of 3-6 µs after which the fringes disappear once the SF (dark layer) gets detached from contact front of Fig. 
2(d), leading to the oscillations of the CF. A similar diffraction pattern of light and dark stripes at the earlier time scales of the ablation has been reported earlier [5,20]. These were attributed to the conversion of material in the solid phase into the gaseous phase [5] and to intermittent material ejection at high temperatures [20]. At around 7 µs, a secondary shock layer is observed to get detached from the contact front and propagate into the ambient atmosphere. The time dependent evolution of the shock front is observed to be different for different targets. Figures 3(a)-3(e) show shadowgraphs at 7.6 µs delay from t = 0 for Aluminum (Al), Nickel coated Aluminum (Ni-Al), Boron Potassium Nitrate (BKN), Ammonium Perchlorate (AP) and Potassium Bromide (KBr) targets, respectively. The time dependent evolution of the shock front has been explained using a variety of models, beginning from Sedov-Taylor's (S-T) classical Point Strong Explosion Theory (PSET) explaining the propagation in planar, cylindrical and spherical geometry [16][17][18], to the ones considering gas motion in Lagrangian form for spherical [21] and cylindrical [22] variable energy blast waves, laser induced plasma shock wave motion in air [23,24], laser induced phase explosions of solid targets [20,25] and gas phase effects in energetic materials [5]. PSET assumes that the energy deposited at a point source propagates through the medium as a shockwave where the energy is released at a distance which is extremely large. From the temporal evolution of R SW, the energy released in the explosion that drives the SW (E s) and the nature of the SF expansion are estimated using the relation R SW = φ o [E s t²/ρ o ]^(1/(n + 2)), where t is the time elapsed since the origin of the disturbance that generated the SW, ρ o is the density of the ambient medium (1.184 kg/m³) and φ o is a constant dependent upon the specific heat ratio, γ (1.4), of the ambient medium [8][9][10] (SW nature: n = 1 planar, n = 2 cylindrical and n = 3 spherical). In our experimental configuration, as we have used pellets, hemispherical SWs are observed for all the targets. Hence, the energy driving the hemispherical SW is E h = 0.5 E s. The evolution of R SW from the compacted powders is observed to follow two different slopes, indicating two specific stages of evolution before and after the detachment of the SF from the CF (Fig. 3(f)). At the earlier times, up to 2-3 μs, the variation of R SW is minimal (within 200-800 μm) for all the samples studied and is observed to follow a planar nature. After the detachment from the CF, the SF is observed to accelerate faster into the ambient medium, following the spherical Sedov-Taylor solution. The faster acceleration of the SF indicates the time scales around which phase explosions of the material from the nascent phase to the vapor phase occur [25]. The lines in Fig.
3(f) show the fit to the experimental data obtained using the CPC-PSET model for the hemispherical shockwave evolution from the targets.At earlier time scales S-T planar expansion (dash-dot lines) and at latter time scales S-T spherical expansion (solid dash) is observed [17,25].At all the time scales, the acceleration of SF from compacted metal nanopowders (Al, Ni-Al) is observed to be faster compared to the other powders.The energy driving the hemispherical shock waves, E h (in mJ) estimated by the fits is observed to be higher for Al followed by Ni-Al, BKN, AP and KBr.To understand the observed anomalous shock front behavior, the temporal evolution of contact front (R CF ) that gives information about the expansion of the ejected mass [17] and believed to be the source of energy released was also studied for time delays up to 12 μs from t = 0 (Fig. 4).The CF gives insight into the phase changes occurring during the laser-material interaction, i.e., conversion of material from the nascent solid state to liquid state and then to vapor phase compressing the ambient atmosphere [15,17,25], hence releasing the SF.The SF is observed to get detached from CF at 3.2 μs and 3.6 μs for Al and Ni-Al pellets, respectively.While for other compacted powders the energy released was observed later than 5 µs.The oscillations of R CF at longer time scales indicate the ablation dynamics of the compacted powders giving an insight into the precise time of energy release.The faster launching of SF into ambient air indicates quicker release of kinetic energy from the Al, Ni-Al compacted nanopowders compared to other powders studied in this work.The R CF is observed to be higher for Al with a maximum radius of 3.1 mm compared to the other materials.Though Ni-Al has higher atomic weight compared to the Al nanopowder presumably leading to a higher ablative pressure [26], Ni-Al has a low ignition temperature compared to Al due to the intermetallic reactions between Al and Ni atoms [2,13].Moreover, with ns pulses in the visible region, the skin depth of the radiation ensures that Ni coating (~15 nm) gets completely ablated shielding the Al particles to the incident laser beam, hence reducing the coupling of laser energy and the resulting ablation rate.
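The energy estimate described above can be reproduced in outline with a non-linear least-squares fit of the PSET relation R SW = φ o [E s t²/ρ o ]^(1/(n + 2)) to the measured radius-time data; the radii below are synthetic and φ o is set to unity for illustration, so the numbers are not those of the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

rho0 = 1.184          # kg m^-3, ambient air density (value quoted in the text)
phi0 = 1.0            # constant depending on gamma; exact value assumed here
n = 3                 # spherical expansion stage after the SF detaches from the CF

def r_sw(t, E_s):
    """Point-strong-explosion scaling R = phi0 * (E_s * t**2 / rho0)**(1/(n+2))."""
    return phi0 * (E_s * t**2 / rho0) ** (1.0 / (n + 2))

# Synthetic (t, R) pairs standing in for measured shock-front radii (s, m).
t = np.linspace(4e-6, 11e-6, 8)
R = r_sw(t, 20e-3) * (1 + 0.02 * np.random.default_rng(2).standard_normal(t.size))

E_fit, _ = curve_fit(r_sw, t, R, p0=[1e-3], bounds=(0, np.inf))
E_h = 0.5 * E_fit[0]                 # hemispherical shock: E_h = 0.5 * E_s
print(f"E_s = {E_fit[0] * 1e3:.1f} mJ, E_h = {E_h * 1e3:.1f} mJ")
```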
The time dependent evolution of shock front (R SW ) allowed us to directly measure the shock velocity (V SW ).The pressure behind the SF (P SW ) is estimated by using Counter Pressure Corrected Point Strong Explosion Theory (CPC-PSET) taking into account the pressure exerted by the ambient atmosphere on the propagation of shockwave [15][16][17][18].Among the five different compacted pellets, the SW properties are observed to be higher for Aluminum (Al) with the maximum V SW and P SW of 6.2 km/sec and 38 MPa respectively and lower for KBr with the maximum V SW and P SW of 3.7 km/sec and 13 MPa respectively (Figs. 5(a) and 5(b)).Both velocity and pressure behind the SF are observed to decay due to the rapid expansion of the SW with time delay from the laser pulse.At 11.2 µs delay from the laser pulse, SW reaches the acoustic limit (346 m/sec) where the shock pressure reaches to atmospheric pressure (0.1 MPa).The specific impulse, I SP taken as V SW /g, where g is the acceleration due to gravity for the compacted powders is in the range of 350 -650 sec (Table 1).The I SP values of the compacted metal nanopowders are observed to be much higher compared to regular bulk metals [10][11][12][13].This is due to the larger surface area offered by nanopowders that enhance coupling of laser radiation to the material layer leading to increased plasma temperature [27,28] that result in higher ablation and the associated shock emissions.
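A corresponding sketch of the velocity, pressure and specific impulse estimates is given below; the radius-time values are invented, and the strong-shock pressure relation is used as a simple stand-in for the counter-pressure-corrected theory applied in the paper.

```python
import numpy as np

# Shock-front radius versus delay (synthetic illustrative values; m and s).
t = np.array([4, 5, 6, 7, 8, 9, 10, 11]) * 1e-6
R = np.array([2.0, 2.9, 3.6, 4.2, 4.7, 5.1, 5.5, 5.8]) * 1e-3

V_sw = np.gradient(R, t)             # shock velocity from dR/dt
g = 9.81
I_sp = V_sw.max() / g                # specific impulse taken as V_SW / g

# Strong-shock estimate of the pressure behind the front (one common
# approximation; the paper instead uses CPC-PSET).
gamma, rho0, c0 = 1.4, 1.184, 346.0
P_sw = 2.0 * rho0 * V_sw**2 / (gamma + 1.0)

print(f"max V_SW = {V_sw.max():.0f} m/s, max P_SW = {P_sw.max() / 1e6:.2f} MPa")
print(f"I_sp ~ {I_sp:.0f} s, acoustic limit reached: {bool(V_sw[-1] <= c0)}")
```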
Summary and future scope
Shadowgraphic imaging is used to study the laser ablative shockwave and contact front dynamics from different compacted powders in pellet form, such as nano Aluminum (Al), Nickel coated nano Aluminum (Ni-Al), Boron Potassium Nitrate (BKN), Ammonium Perchlorate (AP) and Potassium Bromide (KBr) targets, with ns temporal resolution. Among the five different materials, the SW properties are observed to be higher for nano Al, followed by Ni-Al, BKN, AP and KBr. From the time dependent evolution of the shock front and the contact front, the precise time of energy release from the materials and an insight into the dynamics of the ejected material launching a shock into the ambient medium are obtained. This offers an option to control the energy release requirements, as the shock front can either be accelerated or decelerated by choosing appropriate materials and compositions. The I SP values for Al and Ni-Al, greater than 500 sec, indicate the potential of these compacted metal nanopowders for microthruster applications that lead to environmentally friendly applications of energetic materials.
Fig. 3 .
Fig. 3. Shadowgraphs showing Shock Front (SF) and Contact Front (CF) at 7.6 µs delay from the laser pulse for (a) Aluminum (Al) (b) Nickel coated Aluminum (Ni-Al) (c) Boron Potassium Nitrate (BKN) (d) Ammonium perchlorate (AP) (e) potassium Bromide (KBr) and (f) Radius of curvature of shock front (SF) (R SW ) at 75 mJ input laser energy.Lines are fit to the data using the CPC-PSET model for the hemispherical shockwave evolution from the targets.
Fig. 4 .
Fig. 4. Evolution of radius of contact front (R CF ) from compacted nanopowders at 75 mJ of incident laser energy. Lines are guide to the eye.
Fig. 5 .
Fig. 5. (a) Velocity (V SW ) and (b) Pressure (P SW ) behind the shock front (SF) for the compacted materials. Lines are guide to the eye. The horizontal lines in figures (a) and (b) represent the speed of sound and the pressure of ambient atmospheric air, respectively.
|
v3-fos-license
|
2022-06-19T15:20:07.308Z
|
2022-06-01T00:00:00.000
|
249836610
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cureus.com/articles/101117-a-giant-left-coronary-button-aneurysm-after-aortic-root-remodeling-procedure-in-a-patient-with-marfan-syndrome-a-case-report.pdf",
"pdf_hash": "fd24ebbcf765f570f76733c139b885c318b25913",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46159",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "17ba5e74bb5a4f049a66910f1bcdc4dd24b8213f",
"year": 2022
}
|
pes2o/s2orc
|
A Giant Left Coronary Button Aneurysm After Aortic Root Remodeling Procedure in a Patient With Marfan Syndrome: A Case Report
Coronary button aneurysm is a well-known complication of aortic root surgery, especially in patients with Marfan syndrome. We present a case of a giant left coronary button aneurysm that occurred 20 years after an aortic root remodeling procedure was performed. A 32-year-old man with Marfan syndrome underwent the aortic root remodeling procedure for annuloaortic ectasia. Thirteen years later, an aortic aneurysm with chronic aortic dissection was diagnosed, and partial aortic arch replacement was performed. Twenty years after the first procedure, a 73-mm left coronary button aneurysm was observed. Due to dense adhesions from repeated surgeries, we approached the aneurysm through the artificial graft lumen, and the coronary artery was successfully reconstructed using Piehler's technique. When performing aortic root surgery for Marfan syndrome, the risk of coronary artery button aneurysm formation should be considered. Once an aneurysm is formed, a surgical strategy that assumes dense adhesions is essential.
Introduction
Aneurysmal change of the residual aortic wall is a severe complication of aortic root surgery. Since the development of surgical techniques, such as various coronary artery reconstruction methods (Carrel patch, Cabrol, and Piehler's methods), complications at the coronary artery reconstruction site have decreased [1]. However, coronary button aneurysm formation has been reported in patients undergoing coronary artery reconstruction using the Carrel patch technique, especially in patients with Marfan syndrome (MFS). Some residual aortic walls are considered to form an aneurysm owing to their connective tissue abnormalities, such as medial cystic necrosis [2,3]. Herein, we report a case of a rapidly growing giant coronary button aneurysm in a patient with MFS. We successfully treated it despite the presence of solid adhesions due to repeated surgeries.
Case Presentation
A 32-year-old man with MFS underwent valve-sparing aortic root replacement (Yacoub procedure) for annuloaortic ectasia in the US. At the age of 45 years, DeBakey type-II chronic aortic dissection was identified, and he underwent partial arch replacement at our institute. The patient was prescribed losartan to prevent aneurysm formation. However, he had stopped going to the outpatient clinic when he was 49 years old and had not taken losartan since then. Bilateral coronary button aneurysms had already been pointed out at that time. The left coronary button aneurysm was oval-shaped with a diameter of 25 mm, while the right aneurysm was 15 mm, with no change in size from the CT four years before, and both showed no tendency to increase. Three years later, the patient presented to the emergency room with complaints of chest pain. Computed tomography revealed that the left coronary button aneurysm had grown to 73 mm in diameter ( Figure 1) and that there was no stenosis in the coronary artery. Since there were no signs of myocardial ischemia on blood tests or electrocardiogram, the cause of the chest pain was considered to be the enlarged aneurysm. In addition, echocardiography revealed moderate aortic regurgitation due to aortic annulus dilatation. We believed it would be difficult to repair the aortic regurgitation and that it would worsen in the future if untreated. We scheduled a left coronary artery reconstruction and aortic valve replacement. 1 1 2 1 1
FIGURE 1: Chest computed tomography before the third surgery.
Chest computed tomography before the third surgery. The left coronary button aneurysm enlarged up to 73 mm in diameter, while the right coronary button aneurysm was slightly enlarged.
After the third median sternotomy was performed, cardiopulmonary bypass (CPB) was established with an arterial cannula in the right femoral artery and a venous drainage cannula in the right femoral vein. Under mild hypothermia (28°C), the ascending aortic graft was clamped, and cold crystalloid cardioplegia was administered. It was challenging to identify the coronary aneurysms from the outside because of tight adhesions around the graft. Therefore, we decided to approach the aneurysm from the ascending aortic graft. The anterior surface of the graft was incised to identify the anastomosis of the coronary button. The anastomosis site of the coronary button, i.e., the neck of the aneurysm, was about 25 mm in diameter. It was incised transversely and enlarged, and a thrombus in the aneurysm was removed to identify the ostium of the left main trunk (LMT). Four mattress sutures were placed on the rim of the LMT using 4-0 Prolene with a felt pledget, and distal anastomosis was performed with an 8-mm Dacron graft. The proximal side of the graft was anastomosed to the buttonhole of the aortic graft with 4-0 Prolene running suture, stretching the graft sufficiently forward to avoid graft kinking. Subsequently, aortic valve replacement was performed using a mechanical valve. The right coronary button was not tending to enlarge but was forming an aneurysm, so to reduce the possibility of future enlargement, it was sutured using a 4-0 Prolene mattress suture with felt pledgets. Weaning from the CPB was successful without any complications. Postoperative recovery was uneventful. The postoperative contrast-enhanced computer tomography (CT) showed no kinks in the graft and no other significant problems (Figure 2). The patient was discharged on postoperative day 19.
FIGURE 2: A postoperative contrast-enhanced computed tomography
The coronary artery was reconstructed with the artificial graft (arrow) without any complications of the anastomosis site.
Discussion
Compared to the original Bentall procedure, surgical techniques have improved, and complications such as coronary button aneurysms have decreased [1]. However, coronary button aneurysms are frequently reported in patients with MFS [4]. Kazui et al. suggested that the remaining aortic wall may be involved in the formation of coronary button aneurysms in many cases of MFS. The incidence of coronary artery button aneurysms is lower with a smaller-diameter coronary artery button [3]. In this case, a felt strip was used to reinforce the anastomosis of the coronary button. Nevertheless, the size of the coronary button was relatively large (25 mm in diameter), which could have caused the aneurysm.
Therefore, the coronary button should be as small as possible to reduce the remaining aortic wall. In addition, to prevent pressure from being applied to the remaining aortic wall, the suture should be applied to the rim of the coronary artery ostium rather than the remaining aortic wall when anastomosing the coronary button.
Several surgical procedures for coronary aneurysms have been reported, such as direct suture of the coronary artery and aortic wall, coronary artery replacement, and coronary artery bypass surgery; however, none of these procedures have been standardized [5,6]. Since surgery for coronary artery button aneurysms is always a redo open heart surgery, it is vital to perform a surgical strategy that considers adhesions. In this case, since it was the third open-heart surgery and dislodging the adhesions around the coronary aneurysm was expected to be difficult, we chose to approach the aneurysm from the graft lumen by cutting through the graft. Moreover, the coronary artery was reconstructed using a modified interposing method [5]. There are three advantages to this approach.
1. It does not require excessive detachment of adhesions and avoids intraoperative injuries.
2. It allows reconstruction with the original blood flow direction.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2013-01-19T00:00:00.000
|
10643005
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12032-013-0456-4.pdf",
"pdf_hash": "5619c436286a029c5387be84d965b768fd559a6b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46161",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "5619c436286a029c5387be84d965b768fd559a6b",
"year": 2013
}
|
pes2o/s2orc
|
VEGF and bFGF gene polymorphisms in Polish patients with B-CLL
Among a variety of angiogenic factors involved in the B cell chronic lymphocytic leukemia (B-CLL), vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (bFGF) were identified. Their levels have been regarded as prognostic markers of the progression of disease. The objective of the present study was to assess whether polymorphisms located within the genes coding for these key angiogenic activators contribute to disease susceptibility and/or progression in patients with B-CLL. For this purpose, 180 individuals were investigated, including 68 B-CLL patients and 112 healthy controls. All individuals were typed for the VEGF (936 C > T) and bFGF (−921 C > G) alleles using PCR–RFLP technique. Only a slight prevalence of the VEGF T variant was observed among patients as compared to healthy individuals (p = 0.095) with a significant difference when high risk (stage III/IV) patients were considered (OR = 3.81, p = 0.045). No other significant association was observed between the VEGF polymorphism and progression of the disease. The VEGF alleles and genotypes segregated similarly in patients with different stage of the disease according to Rai classification. No significant relationships were also observed for the bFGF polymorphism with either susceptibility to B-CLL (when compared to control group) or progression of the disease. These results suggest the possible association of the VEGF polymorphism with high risk B-CLL.
Introduction
Dysregulation of angiogenesis occurs in various pathologies and is one of the hallmarks for cancer. The importance of this biological process in normal hematopoietic cell development and the pathophysiology of several malignancies, including B cell chronic lymphocytic leukemia (B-CLL), has been recently reported [1][2][3]. Patients with CLL have been demonstrated to have detectable levels of both plasma and cellular pro-and anti-angiogenic cytokines, as well as abnormal neovascularization in the marrow and lymph nodes [4][5][6]. Recent evidence suggests that vascular endothelial growth factor (VEGF)-based autocrine pathway promotes the survival of CLL B cells in part through upregulation of anti-apoptotic proteins [7]. Moreover, interactions between CLL B cells and their microenvironment generate alterations in the secretion of angiogenic factors that result in enhanced leukemic B cell resistance to apoptotic cell death [8]. Among a variety of angiogenic factors involved in the CLL, vascular endothelial growth factor and basic fibroblast growth factor (bFGF) were identified [9]. Their levels have been regarded as prognostic markers of the progression of the disease [10][11][12][13][14] in patients with B-CLL, including those of Polish origin. The data on the role of the VEGF and bFGF in CLL are summarized in Table 1.
The objective of the present study was to assess whether polymorphisms located within the genes coding for these key angiogenic activators (VEGF and bFGF), contribute to disease susceptibility and/or progression in patients with B-CLL.
Patients and controls
Sixty-eight patients (F/M = 27/41), aged 39-85 (median 69) years, with B-CLL were investigated. B-CLL was diagnosed according to defined clinical, morphological and immunological criteria. All patients gave their informed consent prior to their inclusion in the study. The study has been approved by the appropriate ethics committee.
Patients were treated at the Department of Hematology, Wroclaw Medical University. According to the modified Rai classification [15], there were 17, 28 and 12 patients in stage 0, I and II of the disease, respectively. The other 11 patients presented with more advanced disease: 6 and 5 with stage III and IV, respectively. In addition, 112 healthy individuals of both sexes (F/M = 57/55) served as a control group.
VEGF and bFGF genotyping

DNA was isolated from whole blood taken on EDTA with the use of the Qiagen DNA Isolation Kit (Qiagen GmbH, Hilden, Germany).
The VEGF and bFGF alleles were detected using a polymerase chain reaction restriction fragment length polymorphism (PCR-RFLP) assay.
In brief, DNA was extracted from peripheral blood taken on EDTA using silica membranes (QIAamp Blood Kit, Qiagen, Hilden, Germany) following the recommendations of the manufacturer. A 208-bp-long fragment of the 3′-untranslated region (UTR) of the VEGF gene was amplified using the following primers: forward, 5′-GAG TGT CCC TGA CAA CAC TGG CA-3′; reverse, 5′-AGC AGC AGA TAA GGG ACT GGG GA-3′, as previously described [16]. The following primer pair was used for amplification of a 437-bp-long fragment of the bFGF gene promoter region: 5′-TGA GTT ATC CGA TGT CTG AAA TG-3′ and 5′-TAAC

P values below 0.05 were considered statistically significant, and those between 0.05 and 0.1 as indicative of a trend.

Table 1 Summary of the data on the role of VEGF and bFGF in CLL:
- VEGF mRNA and protein are produced in CLL cells [4]
- VEGF is expressed on B-CLL granulocytes and lymphocytes; the VEGF receptors VEGFR-1 and VEGFR-2 are expressed on B-CLL cells [6]
- VEGF levels do not differ between plasma of CLL patients and healthy controls [5]
- Bone marrow stromal cells (BMSC) treated with CLL microvesicles produce VEGF at a higher level than untreated healthy BMSC, but not as high as CLL-BMSC; VEGF165 is the main and overexpressed isoform in CLL-BMSC compared to healthy cells; VEGF121 is poorly expressed, and the isoforms VEGF189 and VEGF206 are not detected [3]
- VEGF serum level is higher in CLL patients than in healthy individuals; VEGF and VEGFR-2 levels are significantly higher in serum of patients in Rai stage III or IV than in those in stage 0-II; VEGF and VEGFR-2 serum levels correlate in CLL patients [13]
- VEGF supports the antiapoptotic and cytoproliferative effect of CD154 in CLL cells; inhibition of VEGF and its receptor decreases CLL cell survival [7]
- VEGF produced by bone marrow stromal cells, but not by CLL cells, decreases CLL cell apoptosis [8]
- Increased expression of VEGF receptors correlates with clinical stage [24]
- High serum levels of VEGF correlate with increased risk of disease progression in early B-CLL [11]
- Low levels of VEGF correlate with worse outcome in B-CLL patients with low levels of β2-microglobulin (a good prognosis indicator), which may lead to decreased survival of patients in stage 0-II of the disease [12]
- Targeting VEGF receptors effectively induces apoptosis in primary CLL cells and reduces tumor growth in a VEGF-positive CLL-like xenograft mouse model [25]
- Plasma levels of bFGF in CLL patients are significantly higher compared to levels of other proangiogenic molecules (i.e., VEGF) in plasma of these patients [1]
- Plasma levels of bFGF are significantly higher in patients with B-CLL compared to healthy controls [5]
- Serum levels of bFGF are statistically higher in patients with B-CLL than in healthy controls; the serum bFGF level was significantly higher in patients with progressive than in those with stable disease [14]
- bFGF upregulates BCL-2 expression in B-CLL [9]
- Increased expression of bFGF correlates with clinical stage [10]
Results and discussion
The former studies on the role of the VEGF and bFGF in B-CLL regarded their serum levels or cellular expression as prognostic markers of the progression of the disease [10][11][12][13][14].
As for the Polish patients with CLL, our former study documented the significantly higher bFGF levels in B-CLL patients than in controls [14], while Gora-Tybor et al. [13] described the differences in VEGF serum levels between patients and controls, and patients in Rai stage III and IV versus those in Rai stage 0-II (summarized in Table 1).
As VEGF production appeared to have a significant effect on the susceptibility to CLL and the course of the disease, in our present study we wanted to determine whether a functionally relevant polymorphism within the VEGF-encoding gene (936 C > T in the 3′-UTR) could contribute to the risk of this malignant disease. Previous reports documented significantly lower VEGF plasma levels in carriers of the 936 T allele, which could be attributed to the 936 C/T exchange leading to the loss of a potential binding site for transcription factor AP-4 (activating enhancer-binding protein 4) [18].

The bFGF (-921 C > G) promoter polymorphism was also studied. Polymorphisms within the promoter region of the bFGF gene may interfere with existing transcription factor binding sites or produce new binding sites and therefore influence bFGF gene expression [17].

In the present study, the C to T substitution at position 936 within the 3′-UTR of the VEGF gene and the C to G substitution at position -921 within the promoter region of the bFGF gene were analyzed in order to determine whether the presence of these allelic variants is associated with susceptibility to and progression of the disease in CLL patients.
As mentioned before, unfavorable CLL progression was reported to be associated with high VEGF levels, and increased VEGF expression with a lack of the VEGF 936 T variant [18]. That is why a less frequent representation of the VEGF T allele among patients, especially those presenting with more advanced disease, was expected. However, a slightly higher prevalence of the VEGF T variant was in fact observed among patients as compared to healthy individuals (20/68 vs. 20/112, OR = 1.91, p = 0.095, Table 2). This relationship reached statistical significance when the group of high-risk patients was considered: among the 11 patients in stage III or IV of the disease, 5 (45%) carried the T allele, as compared to 20 out of 112 (18%) controls (OR = 3.81, p = 0.045, Table 2).
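The reported odds ratios can be reproduced directly from the carrier counts given above. The sketch below assumes Fisher's exact test as the significance test; the paper does not state which test was actually used, so the p-values may differ slightly from the published ones:

```python
import numpy as np
from scipy.stats import fisher_exact

# 2x2 tables: rows = [patients, controls], columns = [T-allele carriers, non-carriers]
all_patients = np.array([[20, 48], [20, 92]])   # 20/68 B-CLL patients vs. 20/112 controls
high_risk    = np.array([[5, 6], [20, 92]])     # 5/11 stage III/IV patients vs. controls

for name, table in [("all B-CLL patients", all_patients),
                    ("stage III/IV patients", high_risk)]:
    odds_ratio, p = fisher_exact(table)         # two-sided by default
    print(f"{name}: OR = {odds_ratio:.2f}, p = {p:.3f}")
```

The odds ratios from these tables (about 1.92 and 3.83) match the published values of 1.91 and 3.81 up to rounding.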
No other significant association was observed between the VEGF polymorphism and progression of the disease. The VEGF alleles and genotypes segregated similarly in patients with different stage of the disease according to Rai classification, beta2 microglobulin serum level and survival.
No other significant relationships were also observed for the bFGF polymorphism with either susceptibility to B-CLL (when compared to control group, Table 2) or progression of the disease.
Thus, no differences in VEGF and/or bFGF allele and/or genotype distribution were noted between the subgroups with stage 0-II versus III-IV disease according to modified Rai staging, or between males and females (individual data not shown).
Intensive literature search was performed in order to compare the VEGF and bFGF alleles and genotypes distribution in Caucasian B-CLL patients and controls of other studies with our present results.
To our knowledge, there are no published data on the role of VEGF and bFGF polymorphisms in B-CLL (thus, our report presents novel observations not previously described).

Several studies have investigated the distribution of the VEGF alleles, with comparable results. The allele frequencies of the present study are in agreement with those previously published for healthy Caucasians [21][22][23].
These data suggest that while the bFGF (-921 C > G) polymorphism does not significantly contribute to susceptibility and progression of the disease in Polish patients with B-CLL, the VEGF (936 C > T) polymorphism may be associated with high-risk disease.

Obviously, these results should be regarded as preliminary and confirmed in a more extended study, including patients from other centers.
|
v3-fos-license
|
2021-05-01T05:14:42.824Z
|
2021-04-10T00:00:00.000
|
233456012
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/jtm/2021/5585272.pdf",
"pdf_hash": "7308d844a3b485e3c7d843e242df3a63f4ad3626",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46162",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7308d844a3b485e3c7d843e242df3a63f4ad3626",
"year": 2021
}
|
pes2o/s2orc
|
Thrombocytopenia as a Diagnostic Marker for Malaria in Patients with Acute Febrile Illness
Background Thrombocytopenia is the most common hematological abnormality in patients with acute malaria. This study aimed to determine the role of thrombocytopenia as a diagnostic marker for malaria in patients with acute febrile illness. Method A cross-sectional health facility-based study was conducted on 423 consecutively selected acute febrile patients at Ataye District Hospital from February to May 2019 GC. A complete blood count and malaria microscopy were performed for each acute febrile patient. ROC curve analysis was performed to calculate sensitivity, specificity, positive predictive value, and negative predictive value of platelet count in predicting malaria. A P ≤ 0.05 was considered statistically significant. Result Out of the 423 acute febrile patients, 73 (17.3%) were microscopically confirmed malaria cases and the rest 350 (82.7%) patients had negative blood film results. Of the microscopically confirmed malaria cases, 55 (75.34%) were P. vivax and 18 (24.66%) were P. falciparum. The prevalence of thrombocytopenia among malaria patients (79.5%) was significantly higher than those in malaria negative acute febrile patients (13.7%), P < 0.001. About 67% malaria-infected patients had mild to moderate thrombocytopenia and 12.3% had severe thrombocytopenia. The ROC analysis demonstrated platelet counts <150,000/μl as an optimal cutoff value with 0.893 area under the curve, 79.5% sensitivity, 86.3% specificity, 95.3% negative predictive value, and 54.7% positive predictive value to predict malaria. Conclusion Malaria is still among the major public health problems in the country. Thrombocytopenia is a very good discriminatory test for the presence or absence of malaria with 79.5% sensitivity and 86.3% specificity. Therefore, this may be used in addition to the clinical and microscopic parameters to heighten the suspicion of malaria.
Background
Malaria is a life-threatening infectious disease caused by protozoan parasites of the Plasmodium species. Globally, there were an estimated 228 million malaria cases and 405,000 deaths in 2018, and 93% of the malaria cases and 94% of the deaths were in the African Region [1,2]. The sub-Saharan Africa region was the most affected area, contributing the largest burden of malaria morbidity and mortality. In Ethiopia, malaria is a serious public health problem, with around 68% of the population at risk of malaria. Approximately 2.9 million malaria cases and 4,782 deaths were reported in 2016. The most dominant Plasmodium species are Plasmodium falciparum and Plasmodium vivax [3][4][5].
Those hematological alterations may vary with level of malaria endemicity, background hemoglobinopathy, nutritional status, demographic factors, and malaria immunity [10]. Thrombocytopenia is the most common hematological abnormality in patients with acute malaria [9,11]. Severe thrombocytopenia is associated with increased risk of mortality in both child and adult patients with P. falciparum and P. vivax infections [12]. The mechanism of thrombocytopenia in malaria is not clear, but the speculated mechanisms are an increase in the consumption or destruction of platelets, or suppression of thrombopoiesis, or a combination of both. Increased consumption or destruction of platelets during malarial infection occurs due to antibody-mediated platelet destruction, disseminated intravascular coagulation (DIC), pooling within the reticuloendothelial system, sequestration in the microcirculation, oxidative stress, and malaria-mediated apoptosis [13][14][15][16].
There are some studies reporting that the presence of thrombocytopenia is a predictor of acute malaria in patients with acute febrile illness in endemic areas [17][18][19][20]. However, there is scarce information on the usefulness of thrombocytopenia in predicting acute malaria in Ethiopia. Therefore, this study aimed to determine the role of thrombocytopenia as a diagnostic marker for malaria in patients with acute febrile illness in Ethiopia.
Study Design, Area, and Population.
A cross-sectional health facility-based study was conducted at Ataye District Hospital from February to May 2019 GC. The hospital is located in Ataye Town, Ataye District, North Shewa Zone of the Amhara Region, Ethiopia. Ataye Town is located at an elevation of 1468 meters above sea level and about 270 km away from the capital city, Addis Ababa. Ataye District is one of the hot and malaria-endemic areas in the Amhara Region, Ethiopia, and malaria is one of the major health problems of the district. P. vivax and P. falciparum are the main Plasmodium species responsible for malaria infection in the district.
A total of 423 consecutively selected acute febrile patients were enrolled from Ataye District Hospital. Acute febrile patients with pregnancy, HIV, known hematological malignancies, bleeding disorders, and antimalarial therapy were excluded from the study.
Data Collection
Procedure. Semistructured questionnaires were used to collect sociodemographic and clinical data of the study participants by trained clinical nurses. A complete blood count (CBC) and malaria microscopy were performed for each acute febrile patient. A Sysmex KX-21N (Sysmex Corporation, Kobe, Japan) automated hematological analyzer was used to determine the complete blood count (total white blood cell count, red blood cell count, and platelet count). Giemsa-stained thin and thick blood smears were used to diagnose malaria. Thin smears were considered positive for malaria if one or more malarial parasites were seen and negative if no asexual form of Plasmodium was detected in a minimum of 200 oil immersion fields. On thick blood smears, detection of any level of asexual forms of malarial parasites was considered malaria-positive, and a smear was labeled negative if no parasites were seen after examining 1000 white blood cells. The blood smear examination and complete blood count were performed by two experienced medical laboratory scientists. These laboratory professionals had been trained in malaria microscopy. Standard operating procedures (SOPs) were strictly followed in each step to maintain the quality of the laboratory results.
Data Analysis.
The data were entered and analyzed using the Statistical Package for the Social Sciences (SPSS) version 20 statistical software. We used the t-test for continuous variables and the chi-square test for categorical variables to compare means and proportions. ROC curve analysis was performed to calculate the sensitivity, specificity, positive predictive value, and negative predictive value of the platelet count in predicting malaria. In all cases, a P value less than 0.05 was considered statistically significant.
Ethical Considerations.
Ethical approval was obtained from the Research and Ethics Review Committee of College of Medicine and Health Sciences, Wollo University. Permission to conduct the study was also obtained from Ataye District Hospital. Written informed consent from adult participants and assent for children under 18 years of age were obtained before enrolment into the study. To ensure confidentiality, participants' data were linked to a code number. Any abnormal test results of participants were communicated to the concerned body in the hospital.
Sociodemographic and Clinical Characteristics of Participants.
A total of 423 subjects with acute febrile illness were enrolled in this study.
Malaria and Thrombocytopenia.
Out of the 423 acute febrile patients, 73 (17.3%) were microscopically confirmed malaria cases and 350 (82.7%) had negative blood films (Table 1). Of the microscopically confirmed malaria cases, 55 (75.34%) were P. vivax positive and 18 (24.66%) were P. falciparum. There was no statistically significant difference in malaria infection between male and female acute febrile patients (14.7% and 19.4%, respectively). Malaria infection was more prevalent in rural residents (22.8%) than urban residents (11.1%) (P = 0.002). The mean platelet count was significantly lower in patients with malaria infection than in patients without malaria infection (P < 0.001). The prevalence of thrombocytopenia among malaria patients (79.5%) was significantly higher than that among malaria-negative acute febrile patients (13.7%), P < 0.001. Regarding the severity of thrombocytopenia, about 67% of malaria-infected patients had mild to moderate thrombocytopenia and 12.3% had severe thrombocytopenia (Table 2). The prevalence of severe thrombocytopenia was higher in P. falciparum (38.9%) than in P. vivax (3.6%) infected study participants. Most of the malaria-negative acute febrile patients (86.3%) had a normal platelet count (Figure 1).
Diagnostic Values of Platelet Count.
In the ROC analysis, the area under the curve (AUC) of the platelet count was 0.893 (95% CI: 0.847-0.938; P < 0.001). The optimum cutoff point of the platelet count to differentiate malaria-infected from malaria-negative acute febrile patients was less than 150 × 10³/µl, with a sensitivity of 79.5% and a specificity of 86.3% (Figure 2 and Table 3).
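The diagnostic indices reported for this cutoff follow directly from the underlying 2×2 table. A minimal sketch, assuming counts reconstructed from the reported percentages (about 58 of 73 malaria cases and 48 of 350 malaria-negative patients thrombocytopenic), which may differ by one or two from the actual study counts:

```python
# Reconstructed 2x2 table at the <150 x 10^3/µl cutoff (assumed counts)
TP, FN = 58, 15       # malaria cases with / without thrombocytopenia
FP, TN = 48, 302      # malaria-negative febrile patients with / without thrombocytopenia

sensitivity = TP / (TP + FN)   # ~0.795
specificity = TN / (TN + FP)   # ~0.863
ppv = TP / (TP + FP)           # ~0.547
npv = TN / (TN + FN)           # ~0.953
print(f"Se={sensitivity:.3f}  Sp={specificity:.3f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

These values reproduce the 79.5% sensitivity, 86.3% specificity, 54.7% positive predictive value, and 95.3% negative predictive value reported in the abstract.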
Discussion
Malaria is one of the high-burden diseases in developing countries. Ethiopia is one of the sub-Saharan countries highly endemic for malaria, where an estimated 68% of the population lives in malarious areas [4,5]. In the present study, the prevalence of microscopically confirmed malaria cases was 17.3%, which is similar to studies conducted in the Zeway Health Center, Ethiopia (17%) [22], Gurage Zone, Southern Ethiopia (18.3%) [23], the East Nile locality of Khartoum State (18.5%) [24], and Kenya (15.5%) [25]. However, the finding of this study is lower than studies conducted in Arba Minch, Southern Ethiopia (27.6%) [20], Kersa Woreda, Ethiopia (43.8%) [26], Zaria, Nigeria (45.4%) [27], and New Delhi (24%) [18], and higher than studies conducted in North Shoa, Ethiopia (8.4%), and District Dir Lower, Pakistan (12.2%) [28,29]. The observed variation might be due to differences in seasonal climatic conditions and altitude, which influence the breeding of the malaria vector, as well as differences in community awareness about malaria transmission and control. Among the confirmed malaria cases, the predominant Plasmodium species was P. vivax (75%), followed by P. falciparum (25%).
This was in agreement with other studies [23,29,30], although other studies reported that the most prevalent species was P. falciparum [26,28].
In this study, mean platelet counts were significantly reduced in malaria-infected patients compared with non-malaria patients. Thrombocytopenia occurred in 79.5% of malaria-infected patients. These findings imply that thrombocytopenia may be a marker of Plasmodium infection. The association of thrombocytopenia and malaria infection was in agreement with previous studies [9,20,[31][32][33]. No statistically significant difference in the prevalence of thrombocytopenia was observed between P. falciparum (83.3%) and P. vivax (78.2%) infected patients (P = 0.7482), which is similar to a study done by Kassa et al. [34]. In contrast to our study, Patel et al. showed a significantly higher incidence of thrombocytopenia in P. falciparum than P. vivax [11], and Shaikh et al. showed a significantly higher incidence of thrombocytopenia in P. vivax infected patients [33]. Thrombocytopenia in malaria infection might occur due to an increase in the consumption or destruction of platelets, or suppression of thrombopoiesis, or a combination of both. The suggested mechanisms of accelerated clearance or consumption of platelets during malarial infection include disseminated intravascular coagulation (DIC), immune-mediated destruction, pooling within the reticuloendothelial system, sequestration in the microcirculation, and malaria-mediated apoptosis [13][14][15][16]. In contrast to this study, one study reported no association between the mean platelet count and irritable bowel syndrome [35]. Another study reported no difference in mean platelet count and TSH levels between a malignant thyroid nodule group and a benign nodule group [36]. There are many possible reasons for an increase or decrease of the peripheral platelet count. Among the different variables that can determine the mean peripheral platelet count, the platelet production rate, the mean platelet survival/life span, and the size of the exchangeable splenic platelet pool can be mentioned [37]. In contrast to the relationship between malaria and thrombocytopenia in the present study, a study reported by Sincer et al. showed an inverse relationship between the blood eosinophil count and the severity of acute coronary syndrome (ACS) in elderly patients [38].
In this study, the ROC analysis demonstrated a platelet count <150,000/μl as an optimal cutoff value, with 79.5% sensitivity and 86.3% specificity for predicting malaria.
Conclusion
Malaria is still among the major public health problems in the country. The prevalence of thrombocytopenia was significantly higher among malaria patients than malaria-negative acute febrile patients in Ataye Hospital. Thrombocytopenia is a good discriminatory test for the presence or absence of malaria with 79.5% sensitivity and 86.3% specificity. Therefore, this may be used in addition to the clinical and microscopic parameters to heighten the suspicion of malaria.
Data Availability

The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are within the manuscript.
Ethical Approval
Ethical approval was obtained from the Research and Ethics Review Committee of College of Medicine and Health Sciences, Wollo University. Permission to conduct the study was also obtained from Ataye District Hospital.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
AG and YE were involved in the conception, design, analysis, interpretation, report writing, and manuscript writing. AG, DGF, GH, and TF were involved in the design, analysis, and critical review of the manuscript. All the authors read and approved the final manuscript.
|
v3-fos-license
|
2023-10-22T15:12:19.206Z
|
2023-10-01T00:00:00.000
|
264387689
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/23/20/8593/pdf?version=1697786273",
"pdf_hash": "977953c243127a0e01b8e5da3a9048a0fbe9190c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46163",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "0bafaad853f3793cd47c4b02a6227bfa28a7c40f",
"year": 2023
}
|
pes2o/s2orc
|
Shadow-Imaging-Based Triangulation Approach for Tool Deflection Measurement
As incrementally formed sheets show large geometric deviations resulting from the deflection of the forming tool, an in-process measurement of the tool tip position is required. In order to cover a measuring volume of 2.0 m × 1.0 m × 0.2 m and to achieve measuring uncertainties of less than 50 µm, a multi-sensor system based on triangulation is realized. Each shadow imaging sensor in the multi-sensor system evaluates the direction vector to an LED attached to the tool, and the three-dimensional position of the LED is then determined from the combination of two sensors. Experimental results show that the angle of view from the sensor to the LED limits both the measurement range and the measurement uncertainty. The measurement uncertainty is dominated by systematic deviations, but these can be compensated, so that the measurement uncertainty required for measuring the tool tip position in the ISF is achieved.
Introduction

Motivation
In comparison with conventional forming processes, incremental sheet forming (ISF) is an economical alternative for forming large sheet metals in small lot sizes [1]. Since a universally applicable forming stylus forms the sheet metal over a counter die with an arbitrary shape [2], the machine tool costs are significantly reduced. However, one disadvantage of ISF is that major geometrical deviations occur due to spring back [3] and tool deflection [4]. To enable the compensation of the tool deflection, it must be determined. For this purpose, a prediction of the tool deflection using mechanical calculations is performed [5]. However, these predictions are based on model assumptions and do not consider the machine tool error or deformations of the machine tool. Therefore, a tool deflection measurement is preferred instead.

The required tool deflection measurement system must be capable of measuring the three-dimensional tool tip position in the ISF process. Thus, the measurement system has to work contactlessly and fast, capturing the tool position close to the tool tip in a single shot. Additionally, the measurement system should be independent of the machine tool kinematics.
State of the Art
To meet the requirements for an in-process deflection measurement of the moving tool, optical measurement systems are reasonable approaches. In the intended application, a machining volume of 2.0 m × 1.0 m × 0.2 m is covered by the measurement system. To enable the detection of typically occurring tool deflection of 150-450 µm, a measurement uncertainty of ≤50 µm is targeted. Consequently, a challenging dynamic range (i.e., measurement range divided by measurement uncertainty) of 4 × 10⁴ is required. Covering the entire measuring volume with a global measurement approach, e.g., full-field photogrammetry, the dynamic range is not achievable [6]. On the contrary, a local measurement approach, e.g., tracking laser interferometry, is capable of achieving the required dynamic range. Therefore, laser trackers are usually applied to measure the machine tool error [7]. Even in robotic ISF, a laser tracker is applied to measure the tool center position in order to control the forming process in real time [8]. However, to determine the tool deflection and the machine deformation, a reflector must be attached close to the tool tip. The tracked reflector at the tool tip might move out of the system's field of view during a loss of view and then the tracking system fails. As a compromise between the local and the global measurement approach without scanning, a multi-sensor approach is proposed, where several sensors are arranged around the measuring volume and each sensor covers a small sub-region of the full volume [9]. Applying the multi-sensor approach, a setup that is robust to a loss of view is realizable and an enhanced dynamic range is achievable.
Since the sensors must be located outside the machining volume, an applicable sensor technology has to cover a sub-region of an axial measuring range of 500 mm, which is half the width of the machining volume. Additionally, a lateral measuring range in the horizontal and vertical direction of 200 mm is aimed for. The sensor implementation at the machine tool could be realized, e.g., on the edge of the worktable or on a separate frame around the machine tool, whereby the sensors might need to be oriented at a certain angle so that the machine tool or the clamping does not cover the machining area. Considering the time resolution, a measurement duration below 1 ms is required so that the LED moves only 50 µm during the measurement and motion blur is kept sufficiently low when operating at a common feed rate of 50 mm/s [10]. For this purpose, the position has to be captured in a single shot. To provide a new position measurement at each 1 mm of tool movement, a measuring rate of 50 Hz is necessary. Resulting from these requirements, camera-based methods determining a position via triangulation with an angle-of-view measurement are suitable, because they provide an appropriate field of view, and exposure times below 1 ms and frame rates above 50 Hz are feasible.
Photogrammetry is a particularly suitable approach for measuring the 3D positions of multiple points with reflector targets [11]. Industrial applications include deformation measurements, i.e., displacement field measurements of the object's surface, of a model in a wind tunnel [12] or of a wind turbine blade in static and fatigue tests [13]; alignment of raw parts before machining [14]; or the tracking of robot end-effectors [15]. Although tracking robot end-effectors is a similar application to tool tip measurement, transferring the measurement principle is neither practical, due to the use of reflector targets that are too large to be placed close to the tool tip, nor does it reach a sufficient dynamic range [6]. For photogrammetric shape measurement, artificial patterns are pasted on the surface of the measuring object [11]. Here, the 3D shape is obtained using stereoscopic digital image correlation (DIC). Three-dimensional DIC was applied, for example, for the analysis of three-dimensional displacement fields in fracture experiments [16] or in ISF for measuring the shape of the formed part to iteratively control the forming process [17]. Siebert et al. [18] have shown that 3D DIC enables a sufficient dynamic range in the lateral but not in the axial direction with respect to the intended application in tool deflection measurement. Another approach to measure 3D displacement fields using only a single camera is based on laser speckles. Using speckle photography, Tausendfreund et al. [19] measured 3D displacement fields during the deep rolling process. To achieve a high spatial resolution, the field of view of the camera is less than 10 mm wide, which is a too-small measurement range to cover a sufficient part of the machining volume in ISF. Therefore, due to the larger field of view, photogrammetric approaches seem more suitable for measuring the tool tip position in ISF. However, photogrammetric measurement is based on tracking features, which can be, e.g., a surface texture, the edges of an object or markers attached to or painted on the object surface, i.e., the tool tip surface. As a result, the information is only contained in a few of more than a million pixels in the image.
In order to maximize the image information content and to use the full image frame of the camera, Grenet et al. introduced a shadow imaging concept to measure the position of a light source [20]. In shadow imaging, the light source casts a shadow through a mask in front of a camera chip and the light source position is calculated from the image of the shadow. Thereby, the lateral position is obtained from the shadow position and the axial position can either be calculated from the magnification of the shadow pattern or from triangulating the shadow positions of at least two sensors. Although the shadow of a moving light source is recorded in in-process measurement, the shadow position shift, i.e., the motion blur, during a single shot measurement can be kept sufficiently low, at less than 1 pixel, by using an appropriate sensor design. To enable an absolute three-dimensional light source position measurement, i.e., an absolute two-dimensional shadow position evaluation, a checkerboard pattern with absolute coding or a center feature is proposed. Another pattern with absolute and two-dimensional features was created by André et al. [21], which contains periodically arranged squares and binary absolute coding. The pattern is applied as a micro-encoded target and the target's in-plane position is measured.
In summary, it stands out that stereo- and multi-camera systems, which are robust to failure, have not been used for measuring the tool tip position in ISF yet. Since the measurement uncertainty decreases with an increasing feature content in the image [22], the shadow imaging principle is pursued here for application in ISF. Previous work has shown that shadow imaging is capable of achieving the required tool tip position measurement uncertainty of the lateral position components, but also that the required dynamic range of the axial position component is not achievable using a single shadow imaging sensor [23]. It was shown that the random error of the axial position measurement, which deteriorates as the measurement distance increases, exceeds 250 µm at a distance of 500 mm, whereas the random error of the lateral position is below 1.5 µm centered in front of the sensor. To increase the aperture and thus reduce the measurement uncertainty of the axial position component, the concept of using two shadow imaging sensors with overlapping measurement regions for the tool tip position measurement is proposed. However, the capability of a triangulation approach using shadow imaging sensors for 3D position measurement in the ISF machining volume is not clear yet. To solve this issue, the question arises which uncertainty is achieved in which measuring volume when the measuring regions of two sensors overlap. Additionally, the limits of the measurement range that one sensor can cover and the different contributions to the measurement uncertainty budget, including the sensor calibration, must be explored.
Aim and Outline
The aim of the present article is to propose a triangulation approach based on shadow imaging sensors for measuring 3D tool deflection in incremental sheet forming. Hence, the measurement regions of two sensors overlap and the 3D tool tip position is measured using triangulation. On the one hand, the 3D measuring volume that two sensors are capable of covering is identified. On the other hand, the achievable measurement uncertainty of the three-dimensional tool tip position is assessed. The measurement uncertainty results from optical and geometric influences, which also affect the calibration. To reveal further optimization potential, the effects of these influences are investigated.

In the following, the 3D tool deflection measuring principle by means of a light-emitting diode as the point light source and two or more shadow imaging sensors is introduced in Section 2. Section 3 presents the experimental setup that is subsequently used to investigate the dynamic range of a two-sensor system. Studying the achievable measurement range and the measurement uncertainty, respective experimental results are shown and discussed in Section 4. Finally, Section 5 gives a conclusion and an outlook.
Principle of Measurement
To apply the shadow imaging principle for tool deflection measurement in ISF, a point light source is attached to the tool tip. For the determination of the light source position l = (x_L, y_L, z_L)^T, two or more shadow imaging sensors are used, each of which consists of a mask and a camera chip. The light source casts a shadow through the mask onto the camera chip. By evaluating the shadow position on the camera chip in the image, each sensor measures the direction to the light source. Note that a real light source is not punctual, but during the shadow position evaluation, the average shadow positions of mask features are obtained so that the resulting direction points to the center of the light source. Based on the shadow position evaluation, each sensor n = 1, ..., N provides possible light source positions l_n = (x_{L,n}, y_{L,n}, z_{L,n})^T that are arranged in a line

$$\mathbf{l}_n(t) = \mathbf{s}_n + t \, \mathbf{r}_{m,n}. \qquad (1)$$

This line is defined by the sensor's position s_n = (x_{s,n}, y_{s,n}, z_{s,n})^T, which is known from a calibration, and the measured direction vector r_{m,n} = (r_{x,n}, r_{y,n}, r_{z,n})^T in the (x, y, z) machine coordinate system. The scalar parameter t leads to a certain point on the line. In practical 3D measurements, the lines measured with N sensors generally do not intersect at one point, which is shown in Figure 1 for a combination of three sensors. Note that the experimental investigations in this paper focus on the combination of two sensors per sub-region of the measurement range. For every number of sensors, the best estimate l of the sought light source position finally follows from the point with the closest squared distances d_n to all lines, i.e., by calculating

$$\mathbf{l} = \operatorname*{arg\,min}_{\mathbf{p}} \sum_{n=1}^{N} d_n^2(\mathbf{p}) \qquad (2)$$

with

$$d_n(\mathbf{p}) = \min_{t} \left\| \mathbf{p} - \left( \mathbf{s}_n + t \, \mathbf{r}_{m,n} \right) \right\|. \qquad (3)$$

As a result, for measuring the 3D light source position, it is necessary to determine the sensor positions s_n via calibration and to extract the direction vectors r_{m,n} from two or more sensors to the light source.

Figure 1. Principle of measuring the light source position using the combination of two or more (here exemplarily three) sensors. The best estimate of the light source position l is the point with the closest squared distance d_n to the red dashed lines, each of which is given by one sensor and contains possible light source positions. The distance d_2 between sensor n = 2 and the estimated light source position is exemplarily shown. Each sensor n = 1, ..., N provides one line that is determined by the sensor's position s_n, marked by a black cross, and the evaluated direction vector r_{m,n} in the (x, y, z) machine coordinate system.
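The least-squares intersection described by Equations (2) and (3) has a closed-form solution via orthogonal projectors. The following sketch is an illustration, not the authors' code; function and variable names are chosen here:

```python
import numpy as np

def closest_point_to_lines(sensor_positions, directions):
    """Point with minimal sum of squared distances to the lines
    l_n(t) = s_n + t * r_n (cf. Equations (2) and (3))."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, r in zip(sensor_positions, directions):
        r = np.asarray(r, dtype=float)
        r = r / np.linalg.norm(r)
        P = np.eye(3) - np.outer(r, r)   # projects onto the plane orthogonal to the line
        A += P
        b += P @ np.asarray(s, dtype=float)
    return np.linalg.solve(A, b)

# Example with two sensors (positions and directions in machine coordinates, in mm)
l_hat = closest_point_to_lines(
    sensor_positions=[(0.0, 0.0, 0.0), (400.0, 400.0, 0.0)],
    directions=[(0.1, 1.0, 0.05), (-1.0, 0.1, 0.05)],
)
print(l_hat)
```

For two or more non-parallel lines the matrix A is invertible, so the estimate is unique.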
For a detailed explanation of how the direction vectors are obtained, only one sensor is considered, and the index n specifying the sensor number is omitted in the following. The direction vector r_m in machine coordinates is obtained via a coordinate transformation of the direction vector r_s = (r_ξ, r_η, r_ζ)^T that is detected in the (ξ, η, ζ) sensor coordinate system. The coordinate transformation is a rotation by the angle γ around the z-axis, then by the angle β around the y-axis and lastly by the angle α around the x-axis:

$$\mathbf{r}_m = R_\alpha R_\beta R_\gamma \, \mathbf{r}_s, \qquad (4)$$

i.e., the elementary rotation matrices R_α, R_β and R_γ, based on the respective rotation angles α, β and γ, which are obtained from the sensor calibration, are applied. A possible misalignment between the mask and the camera is neglected, as the tilt is minimized by grooves that place the mask and the mask rotation is corrected based on the camera image.
The direction vector in sensor coordinates results from the shadow position (ξ_i, ζ_i) detected in the camera image and calibrated intrinsic sensor parameters, namely the shadow position (ξ_{i,0}, ζ_{i,0}) when the light source is centered in front of the sensor and the distance h between the mask and the sensor. The relation between the shadow position (ξ_i, ζ_i) and the direction vector r_s = (r_ξ, r_η, r_ζ)^T in sensor coordinates, including the sensor calibration parameters, is visualized in Figure 2. As a result of Equations (4) and (5), each shadow imaging sensor finally provides the direction to the light source in machine coordinates. The position of the light source attached to the tool tip is then determined with the sensors' output and the calibrated sensors' positions by applying Equations (2) and (3).
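As a rough illustration of Equations (4) and (5), the sketch below derives a direction vector from a shadow position and rotates it into machine coordinates. Equation (5) itself is not reproduced in the text above, so the sign and axis convention used for the sensor coordinates here is an assumption; the paper's Figure 2 fixes the actual convention:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """R = R_x(alpha) @ R_y(beta) @ R_z(gamma): rotate by gamma about z,
    then beta about y, then alpha about x (Equation (4))."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def direction_to_led(xi_i, zeta_i, xi_0, zeta_0, h, alpha, beta, gamma):
    # Assumed convention: lateral components from the shadow offset relative to the
    # centred shadow position, axial component given by the mask-chip distance h.
    # The lateral signs are flipped because the shadow moves opposite to the LED.
    r_s = np.array([-(xi_i - xi_0), h, -(zeta_i - zeta_0)])
    r_m = rotation_matrix(alpha, beta, gamma) @ r_s
    return r_m / np.linalg.norm(r_m)
```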
Shadow Imaging Sensor
For the experimental investigation of the 3D position measurement capability, a minimal setup with a light source and three shadow imaging sensors is used, see Figure 3. The required measuring volume per sensor is investigated for sensor 1, and the measuring volume is divided into two sub-regions where sensor 2 or sensor 3, respectively, provides the second sensor for triangulation. The light source, whose position is to be measured, is a surface-mounted device LED of type 0805 from the brand WINGER with a peak wavelength of 520 nm, a maximum luminous intensity of 1300 mcd and a beam angle of 140°. The LED is significantly smaller than, e.g., reflectors for laser trackers or targets for photogrammetry; on the one hand, it is therefore more susceptible to being covered by other objects in the process environment, but on the other hand it can be attached closer to the tool tip, and the measured position is averaged over a smaller area. Each sensor consists of a 30 mm × 40 mm large mask with transparent and opaque parts, which is manufactured by laser exposure of a polyester film, and a DMM 37UX273-ML monochrome board camera from the company The Imaging Source. The camera has a resolution of 1440 px × 1080 px with a pixel size of 3.45 µm. The resolution is lower than typically used for photogrammetry, which is affordable because in shadow imaging a higher share of the pixels contains information on the tool position. The distance h between mask and camera is 20 mm. With this sensor design, a lateral LED shift of 50 µm at a measuring distance of at least 300 mm leads to a shadow position shift, i.e., a motion blur, of less than 1 pixel. The LED is positioned using a coordinate measuring machine (CMM), which also serves as the reference system.
Mask
For measuring the absolute 3D position of the tool tip, i.e., the LED, a mask is required that contains features in horizontal and vertical direction and absolute features. A section of the used mask is shown in Figure 4. The mask contains alternately arranged grids with vertical and horizontal stripes. Vertical stripes enable one to determine the horizontal shadow position ξ_i and horizontal stripes allow for the evaluation of the vertical shadow position ζ_i, respectively. In contrast to circular markers or random patterns, grids allow for averaging in one direction over a large area and thus decrease time consumption for image processing, which increases the potential for real-time measurement. In order to ensure that at least one full grid is always visible in the image while the LED is moved through the entire measurement volume, each grid has a size of 2.0 mm × 1.5 mm. Each stripe in a grid is 100 µm wide. The absolute feature is realized by 8-bit binary codes in each first transparent stripe of a grid. Eight adjacent squares are either transparent and provide a '0' or opaque and provide a '1' and so form an index of the grid. In the mask, each index is used twice, once for a vertical grid and once for a horizontal grid. The index defines where each grid is located with respect to the mask center. Therefore, the coded grid mask enables the determination of the absolute shadow position of the mask center in horizontal and vertical direction so that the absolute 3D LED position can be measured by two or more sensors.
Image Processing
For the investigated shadow imaging sensors, cameras with a relatively low resolution, i.e., a low amount of data per image, are chosen, which offers the potential for real-time image processing and thus enables the active control of the forming process in future. To determine the position where the shadow of the mask center occurs in the image plane, the grids must be segmented first. In a second step, the stripes in each grid are localized, and then the index is read in the binary coded stripe. The position of the shadow of the mask center is then obtained by evaluating the location of the shadows of the stripes visible in the image, the location of these stripes in the mask with respect to the mask center and the magnification of the stripe spacing in the shadow image with respect to the stripe spacing in the mask.
To separate the grids, a threshold method is applied that detects the horizontal and vertical borders. For visualization, an example image with evaluated intensity profiles is shown in Figure 5. Horizontal borders are located as the drop of the intensity after a bright vertical stripe. A vertical stripe is detected as a peak in the column-wise averaged intensity. A previously performed low-pass filtering ensures robustness of the image processing against noise. Then, the horizontal borders are located in the row where the filtered column intensity first passes through a threshold intensity after a plateau on a higher level. Here, the threshold intensity is the average intensity of the entire image and the intensity plateau indicates a stripe of a vertical grid. A respective intensity profile is given by the orange profile in Figure 5. Similarly, vertical borders are detected. A right border of a horizontal grid is where the filtered intensity passes through a threshold on the right side of the high-level plateau, i.e., a horizontal bright stripe in the image, see the blue intensity profile in Figure 5. Accordingly, the left border of a horizontal grid is where the intensity passes through the threshold on the left side of a low-level plateau, i.e., a dark horizontal stripe, as shown by the intensity profile in Figure 5. In each grid, the stripes are localized separately by approximating a model function, because preliminary investigations have shown that this method provides more accurate results than a phase evaluation based on a fast Fourier transform or a correlation [24]. Before the approximation, the image section is averaged in the direction of the stripes and a low-pass filter is applied to smooth the interferences due to noise and diffraction. Then, the location of each stripe is determined in the (ξ, η, ζ) sensor coordinate system, which is aligned to the plane of the camera chip. For this purpose, the intensity profile of a bright stripe in the region between adjacent intensity minima is approximated by the model function: for each vertical bright stripe a, the model function is defined over the horizontal image coordinate ξ, and for each horizontal bright stripe b, over the vertical image coordinate ζ. The model function is a limited Gaussian function with an offset I_0, an amplitude A, a width w, a peak position µ and an intensity limit I_max; the index v refers to a vertical stripe and the index h to a horizontal stripe. For the approximation of the model function between adjacent minima, each pixel with its intensity provides one data point, to which the model function is fitted using a non-linear least squares approach. With the approximation, the parameters of the model function are determined. The resulting position µ_{v,a} serves as the ξ-stripe location for a vertical stripe a and the determined peak position µ_{h,b} is the ζ-stripe location of a horizontal stripe b. Applying this approach, the stripe locations are obtained with subpixel resolution.
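A minimal sketch of such a stripe localization is given below. The exact parametrization of the limited Gaussian used in the paper is not reproduced in the text above, so the clipped-Gaussian form and the initial-guess heuristics here are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def limited_gaussian(x, I0, A, w, mu, Imax):
    # Gaussian with offset, clipped at the intensity limit Imax (assumed form)
    return np.minimum(I0 + A * np.exp(-(x - mu) ** 2 / (2.0 * w ** 2)), Imax)

def locate_stripe(x, intensity):
    """Fit the limited-Gaussian model to one bright-stripe profile (averaged
    along the stripe direction) and return the sub-pixel peak position mu."""
    p0 = [float(intensity.min()),           # offset I0
          float(np.ptp(intensity)),         # amplitude A
          (x[-1] - x[0]) / 4.0,             # width w
          float(x[np.argmax(intensity)]),   # peak position mu
          float(intensity.max())]           # intensity limit Imax
    popt, _ = curve_fit(limited_gaussian, x, intensity, p0=p0, maxfev=10000)
    return popt[3]
```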
To calculate the absolute shadow position, the index of one grid in the image is needed. The locations of stripes in adjacent grids are used to determine the borders of each code bit. The intensity averaged in the quadratic range of each bit of the coded line is compared with an empirical threshold, which adapts to the image intensity. Mean bit intensities higher than the threshold are associated with a '0' and lower intensities provide a '1', and thus, the index is composed of the code bits.
The determined index enables the calculation of the shadow position of the mask center. Indeed, using the index, each stripe in the image can be associated with a stripe in the mask whose absolute position with respect to the mask center is known. To transfer the mask stripe position with respect to the mask center to the image plane, the magnification k of the stripe spacing l_S in the shadow on the camera chip with respect to the stripe spacing l_M in the mask is applied. Using the stripe position d in the mask, the magnification k and the location µ of the stripe shadow, each stripe provides an estimate of the mask center shadow position. As a result, the mask center shadow position is calculated by averaging these estimates. For the horizontal and vertical coordinates, this means that the horizontal mask center shadow position and the vertical mask center shadow position are calculated from the stripe shadow positions µ_{v,a} of each vertical stripe a or µ_{h,b} of each horizontal stripe b visible in the image, the positions d_{v,a} in the horizontal direction or d_{h,b} in the vertical direction of each stripe in the mask, and the magnification k. Here, s_{v,0} is the first and s_{v,1} the last index of the vertical stripes in the image, and s_{h,0} is the first and s_{h,1} the last index of the horizontal stripes. This way, the absolute shadow position (ξ_i, ζ_i) is evaluated for each image. The shadow position is then inserted into Equations (4) and (5) to obtain the direction vector r_{m,n} pointing from the sensor to the LED, and the measured directions to the LED from several sensors finally provide the sought LED position according to Equations (2) and (3).
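In a simplified form, the averaging described above can be sketched as follows; this is illustrative only, since the paper's exact equations, including the stripe index bounds s_{v,0} to s_{v,1}, are not reproduced in the text above:

```python
import numpy as np

def mask_center_shadow(mu, d, l_S, l_M):
    """Estimate the shadow position of the mask centre from the stripe shadow
    locations mu (on the camera chip), the corresponding stripe positions d in
    the mask relative to the mask centre, and the magnification k = l_S / l_M."""
    k = l_S / l_M
    estimates = np.asarray(mu, dtype=float) - k * np.asarray(d, dtype=float)
    return estimates.mean()   # one estimate per visible stripe, then averaged
```

Applied once to the vertical stripes and once to the horizontal stripes, this yields the absolute shadow position (ξ_i, ζ_i).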
Experimental Setup with Three Sensors
In the experiments, sensor 1 is investigated in an axial measurement range of 500 mm beginning at a minimum measuring distance of 300 mm. The investigated lateral measurement range is 300 mm in the horizontal direction and 200 mm in the vertical direction, each centered in front of the sensor. Sensor 2 and sensor 3 are oriented perpendicular to sensor 1, and sensor 2 serves as the second sensor for triangulation in the closer half of the axial measurement range of sensor 1, whereas sensor 3 covers the farther half. The investigated measurement range is located at a distance of 400 mm in front of sensor 2 and sensor 3. The perpendicular sensor arrangement is chosen because it is expected that a lower sum of the squares of the position component uncertainties is achieved if the angle between two measured direction vectors is close to 90° and if the angle of view from the sensor to the LED is close to 0° [22]. The LED is oriented at an angle of 45° towards the negative x- and y-axis so that the LED illuminates all sensors. For future applications in ISF, a clamping might conceal the tool and thus the LED. A solution for this is a tilted sensor setup, which is feasible due to the sensor's three-dimensional position measurement capability.
During the experiment, the LED is moved step-wise by the coordinate measuring machine Leitz PMM-F 30.20.7, which simulates the forming tool in ISF and simultaneously serves as reference. At each position, ten images are recorded with an exposure time of 25 ms. Note that a flashing high-power LED can be used in future to meet the required measurement duration of 1 ms. In the first step, the LED is moved to defined positions to calibrate the sensor parameters. By recording ten images, the random error is reduced by averaging and thus the accuracy of the calibration is improved. In the second step, the LED is moved to a set of positions to investigate the measurement uncertainty. Here, ten images per position are recorded to study systematic and random errors.
In the uncertainty investigation, the LED is moved along the paths shown in Figure 6, which are arranged parallel to the global x-, y- and z-axis. Images are captured every 10 mm where the LED movement stops. Therefore, the uncertainty of the 3D position measurement by means of triangulation of two shadow imaging sensors can be evaluated in dependence of the LED location in a measurement volume of 500 mm × 300 mm × 200 mm, which is sufficient with respect to the application in a multi-sensor system in ISF.
Calibration
According to the sensing principle explained in Section 2, the LED position is calculated from the shadow positions of two or more sensors. For this purpose, the relation between the shadow position and the line of possible LED positions must be calibrated for each sensor. One calibration option is to record a full calibration map, and the other option is to conduct a model-based calibration in which the geometrical quantities are determined. Since a calibration map requires the evaluation of shadow positions assigned to multiple LED positions that are arranged in a fine grid, this method is time-consuming, especially in a three-dimensional measuring volume. In addition, the interpolation between the LED positions might lead to deviations because of the non-linear relations and their dependence on unknown geometrical parameters. Instead, the model-based calibration is preferred here due to the lower number of LED positions in the calibration process.
For the model-based calibration, each sensor is calibrated separately, whereby a grid of LED positions is recorded. The positions are arranged in planes approximately parallel to the image plane of the sensor. The distance between adjacent positions in the horizontal and vertical lateral direction is 20 mm, and the axial distance between the planes is 33.3 mm. For the future implementation of a calibration procedure in the ISF machine tool, a calibration target could be realized on which LEDs are arranged in a two-dimensional grid. The target can be moved in defined steps whereby the LEDs blink in sequence. In contrast to the calibration map, the distance between adjacent points is larger, which significantly reduces the number of LED positions required. The axial and lateral range of the calibration volume is adjusted to the intended measurement range of the sensor. In the first step, the sensor position s_n is evaluated as the intersection of lines fitted to LED positions that provide the same shadow positions. For this purpose, the LED position of each plane belonging to a certain shadow position is obtained via a regression in the calibration plane. In the second step, the geometrical model according to Equations (4) and (5) is fitted to the direction vectors pointing from the extracted sensor position to the defined LED positions. As a result, the remaining sensor parameters are obtained, i.e., the distance h between mask and sensor, the shadow position (ξ_{i,0}, ζ_{i,0}) that belongs to LED positions centered in front of the sensor, and the sensor orientation (α, β, γ) in the machine coordinate system.
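The first calibration step can be sketched as fitting a 3D line through the LED positions that share a shadow position and then intersecting all such lines, for example with the closest_point_to_lines() sketch from Section 2. This is again an illustration with assumed helper names, not the authors' implementation:

```python
import numpy as np

def fit_line(points):
    """Fit a 3D line (centroid plus principal direction) through the LED
    positions that produced the same shadow position in the calibration grid."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]   # point on the line, unit direction vector

# The sensor position then follows as the point closest to all fitted lines:
# lines = [fit_line(p) for p in grouped_led_positions]   # hypothetical grouping
# s_n = closest_point_to_lines([c for c, _ in lines], [r for _, r in lines])
```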
Measurement Range
The point grid of LED positions captured during the calibration of sensor 1 is also used to evaluate the measurement range of each shadow imaging sensor. To evaluate the limits of the measurement range, two criteria are considered: the contrast-to-noise ratio (CNR) and the detectability of all stripes in the image using the algorithm described in Section 3.1.2. The CNR is the main limiting factor of the measurement range, which here is defined as

$$\mathrm{CNR} = \frac{I_{95} - I_{5}}{s}$$

to characterize the quality of the images. The CNR is evaluated based on all ten images captured at the same LED position. Therefore, the intensity is averaged over all images, the contrast is measured by the difference between the 95% percentile I_95 and the 5% percentile I_5 of the average intensity, and the noise s is the averaged standard deviation per pixel. The percentile ensures that rare pixels with very high or low intensities are excluded, so that the contrast represents the main characteristic of each image. However, at large angles of view, the stripes might not be detected despite a high CNR, because the stripe intensity profile changes due to diffraction effects. The resulting CNR and the identified boundaries of the lateral measurement range are given in Figure 7 for different axial distances to sensor 1 from 300 mm to 800 mm. The largest CNR occurs centered in front of the sensor at the closest axial distance and decreases sharply in the lateral direction. Here, the valid lateral measurement range is the smallest, with 280 mm in the horizontal direction and 220 mm in the vertical direction, but the outer corners are outside the measurement range. This means that the measurement range of 200 mm in the lateral direction is fully covered. Although the CNR decreases with an increasing axial distance, the lateral measurement range increases because the CNR drops less sharply in the lateral direction. At y_{L,ref} = 400 mm, the outer corners are still not covered, but at y_{L,ref} ≥ 600 mm, the measurement range is mainly shaped rectangularly and a lateral extent of 300 mm is achieved. Additionally, the maximum CNR is not centered in front of the sensor but is slightly shifted in the positive x-direction, which corresponds to an according shift of the achieved measurement range. In summary, the measurement range of each sensor is primarily limited by the CNR of the images. To further increase the CNR, a brighter LED or a longer exposure could be used. Note that ambient light, which is relevant in ISF applications, indeed decreases the CNR, but this effect can be reduced by applying a bandpass filter. The angle of incidence is the dominant effect on the CNR, which means that a larger lateral measurement range is covered at larger axial distances. Another significant influence is the axial distance, but even at the largest axial distance, the CNR is sufficient for the evaluation of the shadow position. As a result, the aimed lateral measurement range of 200 mm is reached and an enlargement of the measurement range in the axial direction is possible. Finally, each point in the machining volume must be covered by at least two sensors to measure the tool tip position in the ISF. This is evidenced by the proven axial measurement range of at least 500 mm.
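A minimal sketch of this image-quality criterion, assuming a stack of ten images captured at one LED position (the use of the sample standard deviation with ddof=1 is an assumption):

```python
import numpy as np

def contrast_to_noise_ratio(images):
    """CNR of a stack of images taken at one LED position:
    (95th - 5th percentile of the mean image) / mean per-pixel standard deviation."""
    images = np.asarray(images, dtype=float)      # shape (n_images, height, width)
    mean_img = images.mean(axis=0)
    contrast = np.percentile(mean_img, 95) - np.percentile(mean_img, 5)
    noise = images.std(axis=0, ddof=1).mean()     # averaged per-pixel standard deviation
    return contrast / noise
```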
Random Error
The paths shown in Figure 6 are used to assess the random and systematic measurement error of the three-dimensional LED position. The random measurement error is given by the standard deviation of the measured LED positions and is subsequently considered for each position component separately.
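A minimal sketch of how both error measures used in this section can be computed from repeated measurements at one reference LED position is given below; the estimator details (e.g., the sample standard deviation with ddof=1) are assumptions for illustration.

```python
import numpy as np

def position_errors(measured, reference):
    """Random and systematic error per position component.

    measured:  array of shape (n_repetitions, 3) with measured (x_L, y_L, z_L)
    reference: length-3 reference LED position from the CMM
    Returns (sigma, delta): the standard deviation (random error) and the
    mean deviation from the reference (systematic error) per component.
    """
    measured = np.asarray(measured, dtype=float)
    sigma = measured.std(axis=0, ddof=1)
    delta = measured.mean(axis=0) - np.asarray(reference, dtype=float)
    return sigma, delta
```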
The position component x L is directed horizontally lateral to sensor 1. Its random error σ(x L ) as a function of the position component x L,ref horizontally lateral to sensor 1 is shown in Figure 8a, wherein the included paths are located at z L,ref = 0 mm, i.e., vertically centered in front of sensor 1, and at various y L,ref coordinates. The missing random errors at y L,ref = 300 mm at small x L,ref result from the limitation of the measurement range.
Additionally, at several positions on the path at y L,ref = 425 mm, invalid indexes were evaluated, which led to invalid shadow positions so that the LED position calculation is not possible. Larger coded bits would increase the robustness of the algorithm and prevent invalid results in the future. Nevertheless, most of the LED positions provide valid results that contribute to the error evaluation. Among the evaluated random errors σ(x L ) on paths along the x-axis at z L,ref = 0 mm, 80% are below 4 µm. However, a significant tendential increase in the random error σ(x L ) towards small x L,ref coordinates is prominent. In addition, a few randomly occurring higher random errors are present.
To reveal the causes of the principal course of the random error, the uncertainty budget is discussed in detail in Appendix A. The results of the theoretical model for the position component x L at y L,ref = 800 mm are also included in Figure 8a. The shadow imaging sensors mainly determine the respective lateral position component, and according to Equation (A2), the uncertainty of that position component depends on the uncertainty of the evaluated shadow position and the axial distance to the LED. Thereby, the shadow position uncertainty is dominated by the angle of view, which linearly affects the propagation of the magnification uncertainty (see Equation (A4)). The contribution of the shadow position uncertainty to the lateral position uncertainty increases with the axial distance. However, the effect of the axial distance is smaller than the effect of the angle of view.
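Equation (A2) itself is not reproduced in this excerpt, but the scaling described here (and in the Appendix A fragment at the end of this paper) can be illustrated roughly: the lateral position uncertainty grows with the ratio of the axial distance y L to the mask-to-chip distance h. The snippet below is only an order-of-magnitude illustration, and the input value assumed for the shadow position uncertainty is hypothetical.

```python
def lateral_position_uncertainty(u_shadow_mm, y_axial_mm, h_mm=20.0):
    """Rough scaling of the lateral LED position uncertainty u(x_L):
    the shadow position uncertainty is magnified by y_L / h."""
    return (y_axial_mm / h_mm) * u_shadow_mm

# Hypothetical shadow position uncertainty of 0.1 µm at y_L,ref = 800 mm
print(lateral_position_uncertainty(1e-4, 800.0))  # 0.004 mm, i.e., about 4 µm
```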
The experimentally evaluated random error σ(x L ) in Figure 8a validates that the angle-dependent increase in the error due to the propagation of the magnification uncertainty is the dominant effect. The slight increase in the random error σ(x L ) with an increasing y L,ref at the inner lateral positions proves the dependence on the axial distance. Since the axial distance dependency is a minor effect, it will not be investigated further. Additionally, the magnitude of the experimentally evaluated error corresponds to the theoretically propagated uncertainty. The remaining deviations between the propagated uncertainty and the evaluated random error result from averaging the uncertainty of the stripe location, the assumptions and simplifications concerning the calibration parameters, and the deviations of sensors 2 and 3 affecting the axial position component. An investigation of outlying high random errors revealed that the reason for the outliers is stripes at the outside of an image that are detected in some but not all images captured at the same LED position and strongly affect the calculation of the magnification.
The same angle-dependent increase is expected for all position components. The random error σ(y L ) along the x-axis for various y L,ref coordinates on the vertically centered plane at z L,ref = 0 mm is shown in Figure 8b. At the positions laterally outside from the perspective of sensors 2 and 3, at y L,ref = 300 mm, y L,ref = 550 mm and y L,ref = 800 mm, the random error σ(y L ) is higher than centered in front of the sensors at y L,ref = 425 mm and y L,ref = 675 mm. Moreover, the random error σ(y L ) decreases at the outer lateral positions with an increasing axial distance, which highlights the dominant angle-dependent increase in the random error. However, the random error σ(y L ) is on a higher level than the random error σ(x L ), and despite additional outliers, 80% of the random errors σ(y L ) are below 5 µm.
Also, the random error σ(z L ) of the position component z L provided in Figure 8c shows a slight angle-dependent increase. Furthermore, the random error σ(z L ) is smaller than the random error σ(x L ), which is caused by the smaller measurement range, by the more centered minimum of the error and by the fact that all three sensors are significantly sensitive to the position component z L , because the position component z L is vertically lateral to all sensors. In summary, the dominant influence on the random error is the angle of view to the laterally measuring sensor, because the contribution of the uncertainty of the magnification k increases with the angle. Nevertheless, the random error is lower than 5 µm at most of the tested reference LED positions (x L,ref , y L,ref , z L,ref ), which is one order of magnitude better than the required position measurement uncertainty of 50 µm. Therefore, the achieved random error proves the potential of shadow imaging sensors for application in a multi-sensor system for measuring the three-dimensional tool tip position in ISF.
Systematic Error
The systematic position measurement error is also evaluated for the (x L , y L , z L ) position components separately and is presented in Figure 8. The systematic error ∆(x L ) of the position component x L along the x-axis, i.e., lateral to sensor 1, shown in Figure 8d for various axial distances y L,ref and at z L,ref = 0 mm, ranges from −160 µm to 92 µm, and its course is similar to an inverted parabola. The maximum is shifted from the middle in front of sensor 1 to higher x L,ref values, which corresponds to smaller horizontal shadow position components ξ i . Furthermore, it stands out that the range between the minimum and maximum systematic error is higher the closer the LED is to sensor 1. A probable reason is that the systematic error depends on the angle to the LED, which is larger at shorter distances for constant lateral positions. In addition to the tendential course, the systematic error ∆(x L ) scatters at small lateral positions x L,ref .
For the position component y L , the systematic error ∆(y L ) is shown over z L,ref coordinates in Figure 8e for various y L,ref , all at the same axial distance to sensors 2 and 3 at x L,ref = 0 mm. On these paths, the minimal systematic error ∆(y L ) is −103 µm and the maximal is 145 µm. A strong scatter of the systematic error ∆(y L ) occurs on the outer paths at y L,ref = 300 mm and y L,ref = 800 mm, with a standard deviation of 54 µm and 46 µm, respectively. Centered in front of sensor 2 at y L,ref = 425 mm and in front of sensor 3 at y L,ref = 675 mm, the systematic error ∆(y L ) barely scatters but, due to a slope in the systematic error over the z L,ref position, the standard deviation of the systematic error ∆(y L ) is between 15 µm and 20 µm. The tendential slope depending on the z L,ref coordinate shows a cross-sensitivity between the position components vertically and horizontally lateral to the sensors. The systematic error ∆(z L ) of the z L component along the z-axis, given in Figure 8f for various y L,ref coordinates centered in front of sensor 1 at x L,ref = 0 mm, is, in total, more constant than the systematic error of the other position components and ranges from −43 µm to 60 µm.

Systematic errors occur in all position components. It is assumed that the model-based calibration does not cover all main influences on the position component. For example, the orientation between the mask and camera chip is not considered yet. In addition, cross-sensitivities are explained by the orientation angles of the sensor alignment in the machine coordinate system and also by the neglected orientation between the mask and camera chip. The tendential course, which is the largest contribution to the systematic error, can be compensated by the extension of the geometrical model or the application of empirically obtained polynomial correction functions; a minimal sketch of such a correction is given below. The correction can reduce the standard deviation of the systematic error below 10 µm on paths centered in front of the respective sensor in the respective axis. However, this correction does not reduce the detected scatter, which results from the angle-of-view-dependent propagation of variations in the evaluated magnification k according to Equation (A4). Probable reasons for the scatter in the magnification k are manufacturing deviations in the mask, which can either be calibrated by an individual characterization or reduced through a more precise manufacturing process.
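The sketch below illustrates an empirical polynomial correction of the tendential (parabola-like) course of the systematic error. The polynomial order and the use of a single lateral coordinate as regressor are assumptions made for illustration; they are not taken from the paper.

```python
import numpy as np

def fit_correction(x_ref_mm, systematic_error_um, order=2):
    """Fit a correction polynomial to systematic errors observed on a path."""
    return np.polynomial.Polynomial.fit(x_ref_mm, systematic_error_um, order)

def correct(position_um, x_mm, correction):
    """Subtract the predicted systematic error from a measured position."""
    return position_um - correction(x_mm)

# Hypothetical calibration path with an inverted-parabola-shaped error course
x_ref = np.linspace(-150, 150, 31)
delta = -0.005 * (x_ref - 30.0) ** 2 + 90.0     # µm, illustrative only
corr = fit_correction(x_ref, delta)
print(round(float(corr(0.0)), 1))               # predicted error at x = 0 mm
```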
In summary, the evaluated systematic errors in the range between −150 µm and 150 µm prove that valid three-dimensional LED positions are measured by combining two perpendicular shadow imaging sensors. By compensating the parabolic course of the systematic error, the targeted measurement uncertainty of 50 µm is achieved in most of the measuring range. However, the angle of view strongly affects the propagation of magnification deviations and therefore limits the lateral measurement range in which a sufficient measurement uncertainty is reached. Although the experimental results show a larger systematic error than other optical measurement approaches, such as photogrammetry, laser interferometry and laser triangulation, shadow imaging sensors benefit from being able to measure the position close to the tool tip from a single shot and do not require tracking the region of interest. Concurrently, the random error proves the great potential of shadow imaging sensors.
Conclusions and Outlook
In order to measure the three-dimensional tool tip position in ISF in a measuring volume of 2.0 m × 1.0 m × 0.2 m with a measurement uncertainty below 50 µm, a multi-sensor system is proposed. The multi-sensor system consists of shadow imaging sensors, each of which provides the direction vector to an LED attached to the tool tip, and the LED position is obtained by combining the sensor data (a minimal sketch of this combination is given below). Therefore, the measuring volume is split into sub-regions, each of which is covered by at least two sensors. A minimal configuration of three shadow imaging sensors is experimentally investigated to reveal the system's three-dimensional position measuring capability for a sub-region of 300 mm × 500 mm × 200 mm.
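The combination of the sensor data can be sketched with the same least-squares line intersection used in the calibration sketch above; the direction vectors are assumed to be already transformed into machine coordinates via the calibrated sensor orientation (α, β, γ). This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def led_position(rays):
    """Least-squares intersection of the measurement rays of two or more sensors.

    Each ray is (s, r): the calibrated sensor position s and the direction
    vector r towards the LED, both expressed in machine coordinates.
    """
    A, b = np.zeros((3, 3)), np.zeros(3)
    for s, r in rays:
        r = r / np.linalg.norm(r)
        P = np.eye(3) - np.outer(r, r)    # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(s, float)
    return np.linalg.solve(A, b)

# Hypothetical example: two perpendicular sensors observing the same LED
led = np.array([0.2, 0.5, 0.1])
rays = [
    (np.array([0.0, 0.0, 0.0]), led - np.array([0.0, 0.0, 0.0])),
    (np.array([1.0, 0.5, 0.0]), led - np.array([1.0, 0.5, 0.0])),
]
print(np.round(led_position(rays), 3))   # -> [0.2 0.5 0.1]
```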
The conducted experiments show that the combination of two perpendicular shadow imaging sensors is capable of measuring the three-dimensional tool tip position. Hence, for one sensor, an axial measurement range of at least 500 mm is proven, whereas the lateral measurement range is about 300 mm but depends on the angle of view and thus increases with the axial distance. The measurement uncertainty achieved by combining two sensors is dominated by the systematic error, which can be compensated. However, the main contribution to the systematic and the random error is the magnification evaluated in the images, which propagates to higher position uncertainties the larger the angle of view is. As a result, the angle of view limits the achievable measurement uncertainty, but with a compensation of the tendential course of the systematic error, the measurement uncertainty is sufficient for tool tip position measurement in ISF.
The presented work revealed the limits of the lateral measurement range of the sensors but not yet the limits of the axial measurement range, which will be the subject of future work. A further study will include the extension of the geometrical model to cover previously neglected quantities affecting the systematic error. Additionally, the potential to reduce the measurement uncertainty by integrating additional sensors per sub-region will be explored in the future. After further developments and characterizations of the sensor in laboratory environments, the next step will be the transfer of the measurement system to the manufacturing environment. Under manufacturing conditions, it is essential to validate an adapted calibration procedure and to overcome specific challenges such as machine vibrations or thermal fluctuations.

… shifted from the center of the camera chip to higher x L,ref coordinates, which depends on the location of the mask with respect to the sensor. Then, the uncertainty of the lateral LED position is scaled by the axial position y L = y L,ref and the approximate distance h = 20 mm between the mask and camera chip. So, the principal course of the position uncertainty u(x L ), included in Figure 8a for y L,ref = 800 mm, corresponds to the course of the shadow position uncertainty u(ξ i ). Due to the impact of the axial position coordinate y L,ref , the minimum lateral LED position uncertainty u(x L ) is larger at farther axial distances, but the angle-dependent increase dominates the uncertainty u(x L ).
Figure 2. Principle of a single shadow imaging sensor. The light source at position l projects the shadow of a mask onto a camera chip. The shadow position (ξ i , ζ i ) is the position in the (ξ, ζ) image plane where the shadow of the mask center, i.e., the sensor position s highlighted by a black cross, appears. The direction vector r s in sensor coordinates, shown by its components r ξ , r η and r ζ , points to the light source.
Figure 3. Experimental setup to investigate the 3D position measurement capability of a measurement system of several shadow imaging sensors in a measurement volume of 500 mm × 300 mm × 200 mm. The LED is positioned using a coordinate measuring machine (CMM), which also serves as the reference system.
Figure 4. Section of the mask used in the shadow imaging sensors. The mask is the black and white structure, wherein black areas represent opaque contents and white areas transparent contents. The orange lines highlight the borders between the horizontal and vertical grids. The red squares visualize the bits used to build the binary index of each grid. The center of the entire mask is marked by the green cross. The axes ξ and ζ are projected from the sensor coordinate system to the mask plane. The distances d v,a and d h,b in the mask plane between the mask center and the stripes with the indexes a and b, respectively, are known. As an example, the distances d v,a and d h,b for the stripes a = 15 and b = 9 are visualized.
Figure 5. Camera image with intensity profiles used for grid segmentation. The lines in the image are the columns or rows where the filtered intensity profiles shown in the same color are taken. The red line in each intensity graph presents the threshold intensity. The intersections of the orange intensity profile and the threshold next to high-level plateaus are horizontal borders. The left vertical border is located where the green intensity profile crosses the threshold on the left of the low-level plateau, and the right vertical border is located where the blue intensity profile crosses the threshold on the right of the high-level plateau.
Figure 6. Paths on which the LED is moved in order to evaluate the 3D position measurement uncertainty. The lines are oriented parallel to the machine coordinate axes x, y and z.
Figure 7. Contrast-to-noise ratios (CNRs) calculated in the images during the calibration of sensor 1, depending on the LED position. The red lines show the boundaries within which valid shadow positions are evaluated from the images. The graphs contain the results for the planes at (a) y L,ref = 300 mm, (b) y L,ref = 400 mm, (c) y L,ref = 600 mm and (d) y L,ref = 800 mm.
Figure 8. Random errors σ and systematic errors ∆ of the measured LED position components (x L , y L , z L ) evaluated in the test data set for various y L,ref : (a) random error σ(x L ) over x L,ref coordinates at z L,ref = 0 mm, including the theoretical course calculated for y L,ref = 800 mm based on an uncertainty propagation; (b) random error σ(y L ) over x L,ref coordinates at z L,ref = 0 mm; (c) random error σ(z L ) over z L,ref coordinates at x L,ref = 0 mm. In the random errors, outliers occur due to the random detection or non-detection of stripes at the image edges. (d) Systematic error ∆(x L ) over x L,ref coordinates at z L,ref = 0 mm; (e) systematic error ∆(y L ) over z L,ref coordinates at x L,ref = 0 mm; (f) systematic error ∆(z L ) over z L,ref coordinates at x L,ref = 0 mm.
|
v3-fos-license
|
2023-01-12T16:44:21.576Z
|
2023-01-01T00:00:00.000
|
255745722
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/28/2/644/pdf?version=1673227422",
"pdf_hash": "eac0d2c296bb425b2cabb6afebf4b3d56033cf0c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46164",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "d798e40ddcd6ac4b04c03e74f9346c14af27172e",
"year": 2023
}
|
pes2o/s2orc
|
Enzymatic Synthesis of Ascorbyl Palmitate in a Rotating Bed Reactor
Ascorbyl palmitate, an ascorbic acid ester, is an important amphipathic antioxidant that has several applications in foods, pharmaceuticals, and cosmetics. The enzymatic synthesis of ascorbyl palmitate is very attractive, but few efforts have been made to address its process scale-up and implementation. This study aimed at evaluating the enzymatic synthesis of ascorbyl palmitate in a rotating basket reactor operated in sequential batches. Different commercial immobilized lipases were tested, and the most suitable reaction conditions were established. Among those lipases studied were Amano Lipase PS, Lipozyme® TL IM, Lipozyme® Novo 40086, Lipozyme® RM IM and Lipozyme® 435. Initially, the enzymes were screened based on previously defined synthesis conditions, showing clear differences in behavior. Lipozyme® 435 proved to be the best catalyst, reaching the highest values of initial reaction rate and yield. Therefore, it was selected for the following studies. Among the solvents assayed, 2-methyl-2-butanol and acetone showed the highest yields, but the operational stability of the catalyst was better in 2-methyl-2-butanol. The tests in a basket reactor showed great potential for large-scale application. Yields remained over 80% after four sequential batches, and the basket allowed for easy catalyst recycling. The results obtained in basket reactor are certainly a contribution to the enzymatic synthesis of ascorbyl palmitate as a competitive alternative to chemical synthesis. This may inspire future cost-effectiveness studies of the process to assess its potential as a viable alternative to be implemented.
Introduction
Ascorbyl palmitate (AsPa) is an antioxidant derived from ascorbic acid (AA) that represents a feasible substitute which can be used in fatty matrices with applications in the food, pharmaceutical and cosmetic industries, providing the same essential properties as ascorbic acid [1][2][3][4][5]. The production of ascorbyl palmitate is accomplished mainly by chemical synthesis. Nevertheless, by-product formation may cause serious environmental risk (also complicating purification), which in turn compromises product quality and reduces process efficiency. Enzymatic synthesis, on the contrary, prevents by-product formation thus facilitating purification and reducing negative environmental impact, being a sustainable approach with great potential as a process alternative to chemical synthesis at industrial level [3]. Although enzymatic synthesis is very attractive, only a handful of studies address reaction conditions in reactor aiming at process scale-up and implementation [6].
The enzymatic route to ascorbyl palmitate synthesis is catalysed by a lipase. Lipases are quite versatile catalysts with numerous advantages that have secured their application in a broad range of reactions of commercial interest [7,8]. These enzymes have been immobilized on a wide variety of supports using different strategies, allowing their stabilization and reuse under operating conditions. The numerous published and patented works on this topic provide testimony of the great potential of these catalysts in applications such as esterification and hydrolysis of oils, with many interesting results that make use of ultrasound, supercritical fluids, and ionic liquids as novel strategies to maximize reaction performance [7]. Not many studies in the literature treat the behaviour of immobilized catalysts in reactors or the factors relevant for process scale-up. Among the latter, it is necessary to consider the mode of operation (batch, fed batch or continuous), characteristics of the substrate, characteristics of the biocatalyst such as resistance and the need for organic solvents. The most used reactors in processes involving immobilized lipases are batch stirred tanks (STR), followed by packed bed reactors (PBR) operated in continuous mode. PBRs stand out for their scale-up ease and low shear stress that prevents enzymatic desorption [9].
In the case of immobilized enzymes, it is very important to assess the mechanical stability of the support. For example, agitation in a STR could damage the support due to shear stress, which may lead to enzyme leakage and activity loss [10]. The search for good strategies to protect the enzymes and facilitate catalyst recovery in STRs have been advanced using basket reactors with very good results [11,12]. In addition, Hajar et al. [13] evaluated the effects of mass transfer, a key factor in heterogeneous catalysis, on the synthesis of an n-butyl oleate ester using Novozym 435 in a laboratory scale basket reactor (SBR). They carried out a statistical study for the determination of parameters considering a bi-bi ping-pong reaction mechanism, yielding values for the Thiele modulus and the Damköhler number that showed the absence of internal and external diffusion limitations over the reaction.
The aim of this work was to study the enzymatic synthesis of ascorbyl palmitate in a basket reactor operated in consecutive batches in order to evaluate the potential of this enzymatic system for large-scale operation. To this end, we evaluated different commercial immobilized lipases and tested different conditions to maximize reaction performance. The lipases studied were Amano Lipase PS, Lipozyme ® TL IM, Lipozyme ® Novo 40086, Lipozyme ® RM IM and Lipozyme ® 435. Once the appropriate enzyme was selected, the best reaction conditions were defined, allowing thereafter for the study of the behaviour of the biocatalyst in a sequential batch operation.
Biocatalyst Screening
A study of five immobilized commercial lipases was conducted by measuring the hydrolysis activity towards the artificial substrate pNPB and the synthesis activity for AsPa under the conditions described in Sections 3.2 and 3.3. The results obtained are shown in Table 1, where Lipozyme® 435 [14] stood out. This catalyst has shown great stability in organic solvents and has been applied in a broad range of reactions [4,15–17].
Comparison of Different Biocatalysts on Ascorbyl Palmitate Synthesis
The above five commercial immobilized lipases were then tested on AsPa synthesis. Previously, sieve concentration and AA adding strategy were studied using the conditions proposed by Tufiño et al. [3] as a starting point. Due to the formation of water during the reaction and its negative effect on the performance of lipases, different concentrations of molecular sieve were evaluated. Preliminary results showed that adding 180 mg of activated molecular sieve (3.5%) at the beginning of the reaction was the most adequate strategy to reach maximum yield (see Section 3.7). As a comparison, in the work of Costa et al. [18], reactions for the enzymatic synthesis of ascorbyl oleate were carried out using 20% molecular sieve.
Regarding the strategy of AA addition, it was observed that adding the entire amount of substrate at the beginning caused the medium to turn dark, a sign of the presence of dehydroascorbic acid due to AA oxidation [18,19]. Consequently, adding AA in two steps was evaluated, which favored reaction conversion and prevented AA oxidation. Fed-batch operations have been reported in several studies as a solution for cases in which substrates cause enzyme inhibition. A clear example is the synthesis of biodiesel, where methanol exerts inhibition [20]. Another case is the synthesis of benzyl butyrate by direct esterification using Novozym® 435. In the latter, the feeding strategy of propionic acid improved the performance of the immobilized lipase, facilitating ester conversion and also allowing the biocatalyst to be recycled throughout many esterification cycles [21].
Once the amount of sieve and the AA addition strategy were established, AsPa synthesis was carried out using the five commercial immobilized lipases. Figure 1 shows the AsPa production kinetics with the different enzymes under the previously defined conditions. As shown in Figure 1, after 120 h of reaction the best results were obtained with Lipozyme 435 (77.7% yield) and Amano PS (58.3% yield). The other immobilized catalysts showed poor performance, reaching less than 30% final yield. Additionally, initial synthesis reaction rates are presented in Table 2. As expected, Lipozyme 435 showed significantly higher activity than the other immobilized catalysts (about 8- to 18-fold). This correlates well with the initial assessment of synthesis activity (Table 1), showing the predictive value of this simple and fast catalyst screening approach. Results of the AsPa synthesis are consistent with studies of Candida antarctica lipase in similar reactions and conditions [18]. For example, the commercial lipase NS 88011, a Candida antarctica lipase immobilized on a hydrophobic polymer resin, was used for the synthesis of ascorbyl oleate, reaching a maximum conversion of 50% under optimized conditions of 70 °C, 30% enzyme loading, and an ascorbic acid to oleic acid molar ratio of 1:9 [18]. These represent much more drastic conditions than those assayed in this work, risking enzyme inactivation.
Considering the low initial reaction rate of four of the five catalysts assayed (Amano PS, Lipozyme TL IM, Novo 40086 and RM IM), we studied the synthesis of ascorbyl palmitate under the same reaction conditions but increasing the amount of these biocatalysts threefold, expecting to witness an improvement in conversion. The results are presented in Figure 2. As can be seen in Figure 2, even though yields improved in almost all cases, they did not increase as expected. This is consistent with the initial reaction rates shown in Table 2. These lipases display good hydrolysis behavior, but the AsPa synthesis conditions seem to be detrimental to their activity (see Table 1). Based on these results, and with the large-scale production of the antioxidant in mind, Lipozyme® 435 lipase was selected for the following study. Tufiño et al. [3] also reported Candida antarctica lipase as a preferable biocatalyst for AsPa synthesis as compared to the lipase of P. stutzeri immobilized in silica.
It is not always the case that Candida antarctica lipase yields the best results as compared to other enzymes. For instance, in a study reported by Zhu et al. [22], Novozym® 435 and Lipozyme TL-IM were compared on the interesterification of a palm stearin and vegetable oil blend in order to enhance its physicochemical characteristics, with Lipozyme TL-IM delivering the best results. In another work, Lipozyme RM-IM showed the best performance in the synthesis of capric acids via batch acidolysis in a solvent-free medium [23].
The Effect of Solvent and Temperature on Ascorbyl Palmitate Synthesis by Lipozyme 435
Further studies of the reaction conditions were carried out by assessing the effect of solvent and temperature. Three different solvents were tested: 2-methyl-2-butanol (2M2B), acetone and tert-butanol. The synthesis temperature was evaluated between 45 and 55 °C. One important criterion for the selection of a solvent is how easily it can be recovered and reused. In this case, recovery of all of the solvents tested was feasible and easy. Figure 3 shows the yields obtained under the conditions evaluated. As can be seen in Figure 3, the highest yield was achieved in acetone at 55 °C, obtaining over 80% (22.6 g/L). Acetone has been shown to be a good solvent for esterification, in particular using immobilized Candida antarctica lipase, as in the synthesis of ascorbyl acetate [24] and alkyl esters of prunin [25].
High yields were also obtained in 2M2B, accentuating the effect of temperature, as was also the case for acetone above, meaning that the yield increases as the temperature rises. These results confirm what we showed in a previous work, where 2M2B proved to be an excellent candidate for ascorbyl palmitate esterification using commercial or made-in-house immobilized lipases [3].
Tert-butanol was dismissed due to the low conversion achieved. The results in tert-butanol were unexpected. In contrast to this work, Balen et al. [26] obtained conversions of linoleic acid into AsPa of 90% in tert-butanol using Novozym® 435. In another example, Nehdi et al. [27] achieved yields as high as 97.8% in the production of biodiesel with Novozym® 435 in tert-butanol medium. Also, Yadav et al. [17] reported 50% conversion to ascorbyl palmitate with Novozym® 435 in tert-butanol. Furthermore, the stability of this biocatalyst has also been reported to be favorable in tert-butanol, something that is key for the economic viability of the process. In studies with Novozym® 435 for the synthesis of ascorbyl oleate, enzyme activity dropped to 50% only after 14 cycles in tert-butanol. The authors associated this loss with the effect of the solvent on the hydration layer of the enzyme [18].
As already mentioned, temperature exerts a significant effect in both 2M2B and acetone media, with the highest yield reached at 55 °C. These conditions were selected for more in-depth studies of the stability of the catalyst.
Solvent Effect on the Operational Stability of the Biocatalyst
In order to study the effect of the type of solvent on biocatalyst stability, an operational stability test was carried out consisting of three sequential batches of antioxidant synthesis in 2M2B and acetone at 55 °C. The results are shown in Figure 4.
Each batch was conducted for 72 h, a period much longer than the time needed to reach the maximum yield. Therefore, more batches could have been carried out during the same time. Figure 4 shows that although a higher yield is achieved with acetone in the first batch, stability decreased substantially, and after the third batch using the enzyme is no longer feasible (approximately 30% yield). Conversely, the reaction yield remained at about 70% after three batches in 2M2B, showing a much higher stability. Consequently, 2M2B was selected as the most adequate solvent for the next assays in the rotating bed reactor.
Synthesis of Ascorbyl Palmitate in a Rotating Bed Reactor in a Sequential Batch Operation
After establishing the biocatalyst and the reaction conditions, the synthesis of the antioxidant was carried out in a rotating bed reactor. Rotation provides optimal mass transfer during the reaction, which translates into higher yields. The reactor basket allows for catalyst recycling and prevents catalyst breakup caused by stirring. The basket has a mesh cut of 100 µm, suitable for the retention of Lipozyme ® 435, which has an average particle size of 315-1000 µm [14].
Each batch duration was set at 30 h, using the biocatalyst 4 times under the conditions indicated in Section 3.9. No activity or yield loss was detected in the evaluated time. It is noteworthy that the yields achieved are the highest among those obtained in this study and those reported in the literature. This could be the outcome of the mixing provided by the reactor agitator, which enhances mass transfer as mentioned.
Overall, the second batch achieved higher yields than the first batch at comparable times, which could be attributed to the interfacial activation experienced by lipases. Rodrigues et al. [28] argue that the apparent activation of Novozym® 435 lipase is the result of catalyst rupture due to mechanical agitation. This may be occurring to a lesser extent in the basket reactor, explaining the performance boost witnessed in the second batch. The reaction in each batch was tracked for 30 h (Figure 5). As can be seen in Table 3, the yields of ascorbyl palmitate synthesis remained high during the four batches. The space-time yield (STY) showed excellent values in each batch, averaging 0.84 (g AsPa g −1 L −1 ). This represents a 1.8- to 4.2-fold increase as compared to previous studies on AsPa synthesis [3,29]. Also remarkable was the biocatalyst yield of 8.4 (g AsPa g catalyst −1 ). This value can be substantially improved by recycling the catalyst into more batches, considering that no significant loss of catalyst activity is observed. To our knowledge, this is the first study of biocatalyst reuse for the synthesis of AsPa.

Table 3. Summary of the main metrics obtained in the synthesis of ascorbyl palmitate in a rotating bed reactor operated in sequential batches.

The results achieved in this work represent a contribution to enzymatic synthesis studies mediated by lipases. Higher yields and greater productivity were accomplished than those achieved in previous works [3,29,30].
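For readers who want to reproduce metrics of this kind, the snippet below computes a space-time yield and a biocatalyst yield from batch data. The definitions used (product mass per reactor volume and batch time, and total product mass per mass of catalyst) are standard textbook ones and may differ in detail from those applied by the authors for Table 3, so the numbers are only indicative; the input values are illustrative figures taken from this work.

```python
def space_time_yield(product_g_per_L, batch_time_h):
    """Space-time yield in g L^-1 h^-1 (a common definition; assumed here)."""
    return product_g_per_L / batch_time_h

def biocatalyst_yield(product_g_per_L, volume_L, catalyst_g, n_batches):
    """Grams of product formed per gram of catalyst over all batches."""
    return product_g_per_L * volume_L * n_batches / catalyst_g

# Illustrative values: ~22.6 g/L AsPa, 150 mL reaction volume, 1.8 g of
# Lipozyme 435, 30 h batches, four batches.
print(round(space_time_yield(22.6, 30), 2))               # ≈ 0.75
print(round(biocatalyst_yield(22.6, 0.150, 1.8, 4), 1))   # ≈ 7.5
```

Both results are of the same order as the values reported in Table 3, which suggests the definitions are at least close to those used in the paper.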
The biocatalysts used in this research are shown in Table 4; all were generously donated by Novozyme Spain.

Table 4. Description and optimal operation conditions of the enzymes used [31,32].
Hydrolysis Activity Assay
The hydrolytic activity of the commercial lipases was determined by measuring the pNP resulting from the hydrolysis of pNPB. Briefly, 19.8 mL of a phosphate buffer solution (25 mM, pH 7.0) containing 5 mg/mL of enzyme was incubated in a bath at 30 °C. The reaction was started by adding 0.2 mL of pNPB (50 mM in acetonitrile). Samples of 1 mL were taken and filtered every 30 s for 3 min and then subjected to absorbance measurements at 348 nm in a spectrophotometer. Activity was determined considering a molar extinction coefficient ε = 5.15 mM −1 cm −1 . One lipase hydrolytic activity unit (IU H ) was defined as the amount of enzyme producing 1 µmol of pNP per minute at pH 7.0 and 30 °C. Assays were conducted in triplicate.
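As a worked illustration of the activity calculation, the sketch below converts the slope of the absorbance-versus-time readings into IU using the Beer-Lambert law and the stated extinction coefficient. The 1 cm path length and the scaling to the total 20 mL reaction volume are assumptions, and the readings in the example are hypothetical.

```python
import numpy as np

EPSILON_PNP = 5.15   # mM^-1 cm^-1, extinction coefficient given in the text
PATH_CM = 1.0        # cuvette path length, assumed
VOLUME_ML = 20.0     # total reaction volume (19.8 mL buffer + 0.2 mL pNPB)

def hydrolytic_activity(time_min, absorbance_348nm):
    """Hydrolytic activity in IU (µmol pNP released per minute)."""
    slope = np.polyfit(time_min, absorbance_348nm, 1)[0]   # ΔA per minute
    rate_mM_per_min = slope / (EPSILON_PNP * PATH_CM)      # µmol mL^-1 min^-1
    return rate_mM_per_min * VOLUME_ML                     # µmol min^-1

# Hypothetical readings taken every 30 s for 3 min
t = np.arange(0, 3.5, 0.5)
A = 0.05 + 0.12 * t
print(round(hydrolytic_activity(t, A), 2))  # ≈ 0.47 IU
```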
Synthesis Activity Assay
The initial synthesis reaction rate was measured by following the reaction kinetics for 24 h. Samples were taken and filtered every 30 min for the first 3 h and then at 5 and 24 h of reaction. The assay was conducted at 45 °C and 150 rpm in 5 mL of 2M2B at a substrate molar ratio of 1:5 (60 mg AA, 436 mg PA), adding 60 mg of biocatalyst and 70 mg of activated molecular sieve. Before the reaction, the substrate solution was prepared by solubilizing AA in 2M2B for 1 h in a shaker at 45 °C and 150 rpm. The samples were analysed by HPLC, measuring AsPa and AA. Activity was determined considering only those data below 20% conversion (approximately the first 4 h of reaction). One synthesis activity unit (IU S ) is defined as the amount of enzyme producing 1 µmol of AsPa per minute at pH 7.0 and 45 °C. Assays were conducted in triplicate.
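The restriction to data below 20% conversion can be implemented as a simple linear fit over the early kinetic points, as sketched below. The function and variable names are hypothetical, and conversion is taken as AsPa formed relative to the initial AA concentration, in line with the yield definition used later in this section.

```python
import numpy as np

def initial_rate(time_h, aspa_mM, aa0_mM):
    """Initial synthesis rate (mM/h) from the early, quasi-linear kinetics.

    Only data points below 20% conversion of ascorbic acid are used,
    as described in the text; the rate is the slope of a linear fit.
    """
    time_h = np.asarray(time_h, dtype=float)
    aspa_mM = np.asarray(aspa_mM, dtype=float)
    mask = aspa_mM / aa0_mM < 0.20
    return np.polyfit(time_h[mask], aspa_mM[mask], 1)[0]
```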
Molecular Sieve Activation and Solvent Drying
Molecular sieves of 3Å were used for the drying of solvents and the synthesis of AsPa. The sieves were activated by vacuum drying in a centrifugal concentrator (SpeedVac SPD111 VP2, Thermo Scientific, Waltham, MA, USA) for 2 h at 35 °C and 1 h in vacuum.
The solvents used for the synthesis of AsPa were dried by contacting them with 3Å molecular sieves at 10% weight per solvent volume for 48 h.
HPLC Analysis of Reagents and Product
The quantification of AsPa and the substrates AA and PA was carried out by HPLC with a C-18 column (Kromasil C18, 5 µm, 4.6 mm × 150 mm; Analisis Vinicos S.L., Ciudad Real, Spain) and a UV-vis spectrophotometer (JASCO model AS-2089). Analyte separation was accomplished by using an acetonitrile:water mobile phase at 1 mL/min following a gradient schedule where the mobile phase was 60:40 v/v for the first 6 min and 95:5 v/v for the next 22 min. AA and AsPa retention times were 1.2 and 13.5 min, respectively. Concentrations of AsPa, PA and AA were determined using standard curves previously elaborated in the concentration range from 0 to 4 mM.
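The standard-curve quantification amounts to a linear calibration of peak area against concentration; a minimal sketch is shown below. The peak-area values in the example are hypothetical and the linear model is an assumption (the paper does not state the curve type).

```python
import numpy as np

def make_standard_curve(conc_mM, peak_area):
    """Linear standard curve (area = a*conc + b) fitted to calibration
    standards in the 0-4 mM range; returns a function mapping area to mM."""
    a, b = np.polyfit(conc_mM, peak_area, 1)
    return lambda area: (np.asarray(area, dtype=float) - b) / a

# Hypothetical calibration and sample quantification
curve = make_standard_curve([0, 1, 2, 3, 4], [2, 105, 201, 297, 402])
print(np.round(curve([150, 250]), 2))   # concentrations in mM
```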
Synthesis of Ascorbyl Palmitate with Different Commercial Immobilized Lipases
The synthesis of AsPa was conducted in a shaker at 55 °C and 160 rpm based on the conditions reported by Tufiño et al. [3]. The reaction mixture contained 5 mL of 2M2B, 180 mg of activated molecular sieve, 698 mg of PA, and 60 mg of AA added in two steps (30 mg initially and 30 mg after 4 h of reaction). The reaction started once 60 mg of commercial biocatalyst were added to the medium. Samples were taken during 120 h and subjected to HPLC analysis. Assays were conducted in triplicate. The yield (Y, %) was defined based on the content of AsPa as follows:

Y (%) = (AsPas / AsPat) × 100

where AsPas is the concentration of synthesized ascorbyl palmitate (mM) and AsPat is the theoretical concentration of ascorbyl palmitate at full ascorbic acid conversion (mM). The initial synthesis reaction rate corresponded to the slope of the initial AsPa concentration vs. time readings (approximately the first 4 h of reaction).
Evaluation of Reaction Conditions
This stage was carried out only with Lipozyme® 435, which showed the highest conversion yield under the reaction conditions reported by Tufiño et al. [3]. Three solvents were tested: 2M2B, acetone, and tert-butyl alcohol, all dried using activated molecular sieve. In addition, temperatures of 45 °C, 50 °C and 55 °C were assayed. The reaction mixture and other conditions were as described above (Section 3.6).
Operational Stability of Lipozyme 435
The operational stability assay of Lipozyme 435 was carried out in 5 mL of solvent with 60 mg of enzyme per batch, using either 2M2B or acetone at 55 °C. Other conditions were those reported by Tufiño et al. [3]. Samples were taken for 73 h and analysed by HPLC. After each batch, the biocatalyst was filtered, recovered, and rinsed three times with the solvent used in the reaction. When the biocatalyst was not reused immediately, it was stored at 4 °C. Assays were conducted in triplicate.
Synthesis of Ascorbyl Palmitate in a Basket Reactor Operated in Sequential Batches
A 250 mL rotating bed reactor (RBR S2, Spinchem, Umeå, Sweden) was used [33]. Rotation in this reactor favours mass transfer and the basket allows for biocatalyst recovery. The synthesis was conducted at 290 rpm and 55 °C in 150 mL of 2M2B with 20.9 g of PA, 5.4 g of molecular sieve, and 1.8 g of AA added in two steps (900 mg at the initial time and 900 mg after 4 h of reaction). The reaction started once 1.8 g of Lipozyme® 435 was added. The AsPa concentration in the medium was followed for 30 h. Once each batch ended, the reaction medium was removed and the solvent and product were recovered using a rotary evaporator (Rotavapor R-100, Buchi). The biocatalyst was rinsed with 2M2B without removing it from the basket. Finally, four batches were carried out in total under the aforementioned conditions.
Conclusions
The results in this study reveal the prime importance of adequate screening of the catalyst and the operating conditions for the synthesis of ascorbyl palmitate. Among the commercial immobilized catalysts assessed, Lipozyme® 435 was the enzyme reaching the highest initial reaction rate, yield and productivity, securing its selection for the assays to follow. The study of the operating conditions showed that the solvent, the sieve, and the substrate addition in two steps were the key process intensification factors for the success of the synthesis tests in the reactor. The reactor basket allowed for enzyme recycling, and four repeated batches were performed, obtaining over 80% yield in each batch. Such satisfactory results are arguably the outcome of the ideal mixing conditions provided by rotation, which reduce mass transport limitations both inside and outside of the basket containing the biocatalyst particles. The results obtained in this study in terms of reaction yield and catalyst recycling are an important contribution to lipase-catalyzed synthesis processes. Future studies will be necessary to elucidate how profitable and viable this technology can be for industrial application, considering not only catalyst cost, but also solvent recovery and product purification. The latter may involve considerable savings in purification equipment, operation costs (reactants, time, energy, labor, etc.), and stream treatment to comply with environmental regulations.

Data Availability Statement: All data are reported in the paper; any specific query may be addressed to lorena.wilson@pucv.cl.
|
v3-fos-license
|
2023-07-11T00:39:25.680Z
|
2023-06-18T00:00:00.000
|
259441014
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://curriculumstudies.org/index.php/CS/article/download/127/79",
"pdf_hash": "4c27ea3fdafadde4e1656d3e8dd19523d796f136",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46165",
"s2fieldsofstudy": [
"Education"
],
"sha1": "35bd83260eeb5edf542d40a84a774e9e47902df0",
"year": 2023
}
|
pes2o/s2orc
|
Fictional Stories: The Learning Strategy to Mitigate the Challenges of Reading Comprehension for University Students
Reading comprehension is the ability to understand a text and to decode and infer its meaning according to the reader's level of comprehension. Similarly, reading is the ability to deduce, critique and construct the attributes of a text. Hence, reading and reading comprehension are intertwined and embedded as a skill to analyse the meaning of a text in general and to synthesise one's own interpretation of a particular text. Despite this prerequisite skill, there are challenges that impede reading comprehension; as a result, this paper examines these challenges of reading comprehension using critical theory as a conceptual framework. It further employs participatory action research as a technique whereby co-researchers were purposively sampled and interviewed in a free attitudinal interview. Equally, the results are analysed using critical discourse analysis, where it is established that lack of collaborative learning, exposure to informational text, students' prior knowledge and punctuation marks are the core attributes of the challenges of reading comprehension. In brief, the paper contends that the use of fictional stories as a learning strategy can enhance the reading comprehension of first-year students.
BACKGROUND AND INTRODUCTION
Countries pride themselves on their high rates of literacy and numeracy, and the steady growth of these skills grants every nation the opportunity to grow economically. Therefore, the significance of these skills cannot be negated or underemphasized, because a population with strong literacy and numeracy has a greater propensity to solve socio-economic challenges than a less literate one. This contention is endorsed by Zua (2021), who argued that the most literate nations, like Finland, Norway, Sweden and Denmark, have the greatest and most impeccable skills to solve their domestic challenges and affairs. In addition, highly literate nations can be measured against the degree and extent to which their Human Development Index grows (Max, 2014). Notwithstanding these benefits, Africa, in general, is confronted with the challenges of literacy (UNESCO, 2019), and South Africa is not immune to the devastating ripple effects of illiteracy.
As a result, Wilfred (2017) depicted oblique margins of reading literacy in the South African context, whereby the reports inferred that, on average, a grade four (4) learner cannot read for meaning. This implies that our learners do not have the requisite skills of reading comprehension and cannot read for meaning; the former is the ability to discern, deduce and interpret the meaning of the text (Graves et al., 1998), while the latter is the ability to understand the text and infer its main idea (Klapwijk, 2016). Now, the primary objective of having these abilities is to demonstrate the levels of literacy in our nation and to be able to aptly solve socio-economic challenges, as there is a direct proportion between high levels of literacy and socio-economic factors. This notion is supported by McGarvey (2007), who argued that the propensity of people who are not educated or literate to be unemployed is higher than that of those who are educated and literate. Hence, literacy and education directly impact our economic trajectory and development.
Equally, Gruenbaum (2012) asserted that the poor state of literacy is largely attributed to insufficient reading skills. This implies that the failure of a nation to address reading skills and abilities shall have a negative ripple effect on its economic development and thus perpetuate entrenched socio-economic challenges such as poverty, crime and unemployment. Lind (2011) conceded that poverty and illiteracy are intertwined and irreversibly dire in our political-economic dimension, and this impact shall delay the attainment of the National Development Plan, forecasted to be completed and achieved in 2030 with sustainable development. However, reading skills cannot be expeditiously attained if reading comprehension challenges are not addressed. Hence, this paper aims to identify the challenges of reading comprehension by using fictional stories (FS) as a learning strategy to enhance first-year university students' reading comprehension (RC).
Research Questions
The paper aims to identify the challenges of reading comprehension by proposing the use of fictional stories as a learning strategy. This aim is achieved under the auspices of the following research question and objective.
• How can fictional stories be used to minimise the challenges of reading comprehension?
• Its objective is to determine the challenges of reading comprehension for first-year students using academic and fictional texts.
LITERATURE REVIEW
In line with the aim and objective of the paper, this section critically discusses the relevant literature on the use of FS to enhance RC. To begin with, the literature review is the general view of the research title and/or topic and seeks to provide relevant, significant and empirical answers to the research title and/or topic (Alexiades, 1996). The significance of a literature review is to establish the organisation, evaluation and synthesis of the study in order to provide critical thinking to the reader (Antshel et al., 2014; Johnson, 2020). Moreover, the literature review provides an executive summary of scientific facts that depict the authors' knowledge and understanding of phenomena, constructs and theories (Alvin, 2016); hence this section focuses on the aforementioned objective.
The Use of BICS and CALP in the Context of Reading Comprehension
The dutiful task of information disseminators (lecturers, facilitators and teachers) is to ascertain that collaborative learning (CL) is not only effectively implemented but is incorporated and intertwined with Basic Interpersonal Communication Skills (BICS) and Cognitive Academic Language Proficiency (CALP) in order to fuel RC skills. Cummins (1981, 1991, as cited in Taboada and Rutherford, 2011) distinguishes between BICS as the basic interpersonal communication skills used in our social interaction and CALP as the cognitive academic language proficiency, which is a literacy-related skill for academic writing, vocabulary, presenting and speaking. As such, the requisite for merging CALP, BICS and CL into FS constitutes a crucial aspect of accomplishing RC. According to Howie et al. (2017), universities' solutions and accomplishments rely on using CALP, the literacy-related skill for academic writing, RC, presenting and speaking. This was endorsed by Bailey and Heritage (2014), who noted that academic language (CALP) is distinct from the social dialect used in universities because CALP contains the skills of synchronisation of ideas, syntax, semantics, comprehension, deduction and decoding. Secondly, this premise supports the requisite to implement collaborative learning (CL) to the letter, because CL improves students' communicative skills as they will have to converse in the medium of instruction (Molotja & Themane, 2018).
This implementation improves students' RC skills, since the medium of instruction relies on the same principles of fluency in English, which can increase students' RC skills and serves as a critical vehicle for academic success. Furthermore, CL, as conceded by Molotja and Themane (2018), has the advantages of students exchanging ideas, completing tasks in time, establishing sustainable friendships and sharing knowledge. This implies that mutual understanding is attained as a result of CL, which thus creates significant opportunities for students to acquire RC skills. Initially, it was depicted that ineffective implementation of CL can deter RC. The crux of FS bears pertinently on CL, as one would recall that FS present diverse fictional content comprising fantasy and generalisation, which will require students to transfer such fantasy and generalisation into RC and real-world knowledge.
As a result, a direct correlation is established between CL and FS as the former emotionally and socially challenges students, ultimately enabling them to establish a conceptual framework of ideas and textual meaning (Molotja & Themane, 2018). In contrast, the latter challenges students cognitively and abstractly to formulate ideas from fiction into RC and real-world knowledge (Walker et al., 2015). So, the implications of espousing CL into FS as a learning strategy to enhance RC is that students will be afforded the opportunity to simultaneously (1) learn and have comprehension, (2) establish friendships primarily to exchange ideas and knowledge while learning, and RC takes place cumulatively, (3) to instil conceptualisation presented by FS for them to debate, establish and distinguish fiction from facts and lastly, (4) to have a common understanding on how fictional content influences factual content and thus, ultimately having a holistic universal view of the reading.
Reading Engagement and Motivation
It was initially alluded to that one of the deterrents of RC can be attributed to the lack of reading engagement and motivation that students are anticipated to possess. Hence, according to Alexiades (1996), an ample output of scholarly research regards intrinsic motivation as a dominant influence on RC, especially when students read for internal reasons such as to address a certain level of inquisitiveness, curiosity, interest and fun. This premise emphasises the necessity to equip our students with reading engagement and motivation, which, according to the bulk of research (Alvin, 2016; Antshel et al., 2014; Rosenbaum, 2016), can be solicited from the use of FS, as fiction has the capacity to provide students with a significant level of motivation, interest, pleasure and engagement. Although fictional materials contain enormous fallacy and pretence, students have the potential to distinguish between fantasy and reality (Barness & Bloom, 2014), which can be attributed to the propensity to acquire and attain reading engagement and motivation, because these two concepts cannot be achieved unless RC is achieved.
Furthermore, reading engagement is defined as the combination of motivation and cognitive processes that occur during the reading process (Williams, 2013). This definition clearly suggests that RC can be achieved and is readily available during the reading process. In contrast, readers need to master ninety-eight per cent of a fictional text in order to achieve effective RC (Biesta, 2010), meaning that an individual student needs to have a combination of motivation and cognitive processes incessantly sustained if one is to master this percentage. However, fictional materials hinder and adversely impact a reader's interests if that interest is directed at insignificant content material which cannot add insightful value to the students' knowledge (Campbell, 2014). In addition, the use of fictional materials does not necessarily mean that students will definitely have an interest, since these materials differ in terms of the quality of content, degree of complication and logic (Bronner & Kellner, 1989). However, these submissions do not disregard or dismiss the use of FS for enhancing RC, since FS can be resourceful in assisting students to be critical viewers and readers. In a nutshell, the use of FS can really enhance RC because the catalyst to this process is the reading engagement and motivation that can be found in using FS.
Students' Achievement
It is evident that the majority of students no longer have a keen interest in reading fiction (Garro, 2016), and this has an adverse impact on RC, as there is a distinction between reading for fun and reading for understanding. According to Cohen et al. (2011), reading fiction assists students in enhancing their vocabulary, fluency and moral choices. This influence from fiction results from characters' roles, actions, and abilities to make decisions. Therefore, this implies that students' achievement in their academic work will be positively impacted as they gain a range of skills in terms of reading, decision making, problem solving and expanding their vocabulary. Furthermore, students' achievements are ignited and encouraged by the development of eReaders or eBooks to motivate fictional reading (Guthrie, 2003). This inculcates the culture of reading for fun and reading with comprehension as far as fiction is concerned. It is also indicative that reading fiction enables students to build a significant vocabulary that will enable them to consider their actions, ponder the consequences, compare the pros and cons before taking hasty decisions, and eliminate or elude sources of problems.
Fiction reading has the propensity to release people from the complexities and challenges of life and to let them navigate the fictional world with fascinating imaginations that trigger fun, interest and empathy (Heuman, 2014). The relaxed and subdued mind can forecast the future and weigh the dynamics of life without any interferences or distractions; hence fiction reading significantly benefits students' achievement. However, some challenges prevent people from reading fiction, such as a lack of interest, finding it difficult to read, outdoor activities, social media (Twitter, Facebook, Instagram and WhatsApp) and television. Despite these obstacles, eReading can change students' perspectives and mentality, as there are many programmes and online applications, such as M-Reader and Cahoot, among others, which seem to function effectively. Therefore, these programmes can greatly benefit students' achievement, as their critical thinking and writing skills could be enhanced.
In short, the solutions mentioned above can bring stability and significantly enhance RC if the implementation of reading fiction is executed to the letter. This implementation means considering students' achievement, ensuring that reading engagement and motivation are sustained, and imparting the skills of BICS and CALP. If these can be done accurately, the challenges of RC can become history, and whether they do depends on the implementation of these factors.
THEORETICAL FRAMEWORK
This paper aims to identify the challenges of reading comprehension by proposing the use of fictional stories as a learning strategy. This aim is achieved through the pertinent use of a theoretical framework. A theoretical framework can be defined as the designed plan or guide that serves as the basis for the inquiry of a study, and its main purpose is to review and interpret the conceptual elements which originate from existing theory (Adom et al., 2018). Hence, the designed plan provides significant direction for the study as to what it entails in terms of its aim, objectives and goals.
Howie et al. (2017) compare this designed plan to the map a traveller uses to reach a final destination. This comparison is adopted in this paper because the theoretical framework provides the necessary direction in terms of understanding the phenomenon, the significance of the study, the research question and its aims. In addition, Kidd and Castano (2013) state that a theoretical framework provides a significant structure to cement the theories together; this clearly indicates the importance of a theoretical framework, as it directs the study towards interpreting, predicting, criticising and synthesising the existing theories to generate new knowledge (Creswel, 2008).
Thus, critical theory (CT) is utilised as the conceptual stanza in an attempt to understand, analyse and interpret the social phenomenon that informs our learning in general. In this context, the use of FS as a learning strategy to enhance RC of first-year university students possesses salient features similar to that of the society whereby the inherent status quo of inequality, oppression and lack of liberty and freedom are depicted in the way of narration from these stories. As a result, the institutions of learning such as schools and universities have not escaped the jaws of oppression and inequality; hence, the impediments presented by these factors contribute to the challenges of reading, especially RC. Therefore, the notion of emancipation plays a critical role in the context of reading since the primary goal of reading is to be informed, entertained and educated (Biesta, 2010). This notion premises the necessity to liberate students from various forms of oppression and powers manifested by the traditional approaches and practices of society. The independence and freedom of students as human capital to function optimally come from CR.
This inference endorses the fact that students' abilities, skills and expertise to interpret, view and conceptualise phenomena are fundamentally embedded in the impetus of emancipation. Hence, CT seeks to make sense of the world and insists that thoughts should respond to new challenges and solutions (Hannel & Bradly, 2009). This inference was endorsed by Bronner (2011) when stating that CT responds, in a way to cognitive process, to the problems arising and possibilities around the pre-existing circumstances. Furthermore, it asserts Vygotsky's view (1978) that CT's characteristics do not invariably concern how things were but rather seek to conceptualise and comprehend how things might and should be. Issues of emancipation play a vital role in equipping students to be independent and critical thinkers on societal phenomena. This view is further propelled by Carrington and Selva (2010) when they cited that in quality education, students are infused with the competencies and abilities to unlearn and relearn new concepts in the quest for emancipation. Indeed, to conclusively test the veracity of quality education, students ought to showcase these abilities without interference from power relations that are at loggerheads with the independence and freedom of students.
In brief, it is patent that competencies and reading skills cannot be achieved if intellectual emancipation is not addressed and attained. It suffices to appreciate the extent of the impact that effective reading skills have on our learning abilities and reading comprehension; the challenges which manifest themselves during RC are evident, and there is limited scholarly work with a primary focus on the use of FS to enhance RC (Bal & Veltkamp, 2013; Baldwin, 2015; Molotja & Themane, 2018; Taboada & Rutherford, 2011). Therefore, this paper's primary aim is to discover how the use of FS can enhance the RC of first-year university students. In addition, it was deemed necessary to undertake this research amid the challenges of RC indicated by a number of studies, such as the International Reading Literacy Study Report (2016), Walker et al. (2015) and Klapwijk (2016), which warn that the challenges of RC, if not expediently addressed, shall be perpetuated if there are no sustainable modern solutions to reading comprehension. Therefore, this paper has the potential to contribute immensely to providing the solutions needed in the current era of education insofar as using FS to improve the RC of first-year university students is concerned.
METHODOLOGY
The study relied on the principles of the qualitative method espoused by participatory action research as the research design. This research design (PAR from now on) relies on the qualitative method and falls under the umbrella of interpretive and critical emancipatory inquiry. Gilbert et al. (2018) argue that PAR is a long-term investment, that is, both intervention and research; hence it develops through the research cycle. As a result, PAR is implemented with the participation of first-year university students during an intervention, usually with their help and with the aim of emancipation for the co-researchers. Data generation is conducted with co-researchers who are purposively sampled from the first-year university students at the University of the Free State (Qwaqwa Campus), which is geographically located in the eastern rural part of the Free State. A purposive sample of thirty (30) African/Black first-year students between the ages of 18 and 22 years, on a 50/40 pro rata basis of females to males across all faculties, is drawn from the student population, as these students tend to experience the most common challenges of reading comprehension. Furthermore, the procedure requires that co-researchers be first-year students doing an English Academic Literacy module. Similarly, five (5) module facilitators are sampled, as the study seeks to propose fictional stories as a learning strategy to improve reading comprehension. Their insights and expertise are deemed necessary given the facilitators' inherent experience in teaching English as Academic Literacy.
The study aimed to propose the use of fictional stories as a learning strategy; therefore, the reading skills of first-year students are rigorously subjected to tests using the M-reader online system, which is a designed quiz of extensive reading geared to assess the reading comprehension of the students. Data generation is critical in research, as data enhance comprehension of the phenomenon towards the theoretical framework (Bronner & Kellner, 1989). As a result, the choice of purposive sampling is based on the fact that this technique is based on the qualities of informants or co-researchers, and is a non-random technique that does not require underlying theories or a set number of informants. Free attitudinal interview (FAI) is utilised to generate data wherein co-researchers are divided into fictional and textual fans; each group of fans read either the fictional or comprehension texts and respond to contextual questions. The first fifteen cohorts of co-researchers collected and read the graded books from the university library and after reading, co-researchers had to take a quiz from Mreader where contextual questions were structured. In contrast, the other fifteen cohorts of co-researchers were assigned reading comprehension and academic texts (textual fans) to read and respond to contextual questions. Thereafter, co-researchers and principal researchers discuss the responses based on each text in conjunction with the FAI questions. FAI is an instrument of data generation embedded in the principles of equality, mutual respect and social justice, which reciprocates the personality traits of co-researchers in social inquiry (Tshelane, 2013). Hence the choice of this instrument is that it reinforces the qualities and values of PAR which are proponents of empowerment, self-liberation and social emancipation. Critical discourse analysis (CDA) is used to interpret and analyse verbal and tacit words. According to van Dijk (2014), CDA is the basic study of the methods of the alteration, rebirth, promulgation and defiance of specific descriptions within social and political settings of social power and inequality. It seeks to understand, interpret and explicitly challenge social inequality and herein, the use of CDA to analyse the generated data is to reflect explicitly on the effect of poor reading comprehension, which leads to social inequality in a sense that knowledge empowers individuals and produces critical thinkers (Mogashoa, 2014). Hence CDA is utilised to analyse the spoken and written words whereby the co-researchers' responses are analysed verbatim in order to infer and denote the meanings. In addition, co-researchers' responses are presented verbatim to decipher and interpret such meanings in the context of reading comprehension to propose fictional stories as a learning strategy. Furthermore, the use of pseudonyms is adopted to conceal the identity of co-researchers, and reserve and respect the rights of anonymity.
RESULTS AND DISCUSSION
It is initially indicated that RC demonstrates the reader's competencies and abilities to understand the world and its reasons for existence (Kozan et al., 2015). As a result, the opportunity to demonstrate these abilities must be presented during the process of reading, where students can discern meanings and draw inferences. However, the challenge that emerges from disengagement from the text, which in turn constitutes miscomprehension, arises from insufficient exposure to the text (Pennington et al., 2014). Therefore, to overcome this challenge, it was crucial to expose co-researchers extensively to informational text in order to determine their RC, bearing in mind that the aim is to propose a strategy to enhance RC using fictional text. Co-researchers had to be exposed to different informational texts, such as academic and fictional texts (materials/books).
Furthermore, the exposure aimed to stimulate collaboration, trigger existing knowledge, encourage and maintain reading engagement and sustain the concentration of co-researchers. The reading process occurred in a serene environment where distracting variables such as social networks (Facebook, WhatsApp and Instagram) were limited in order to solicit reading comprehension. Thus, the following findings promote the use of fictional stories as a learning strategy to enhance the reading comprehension of first-year university students:
Informational Text
This is one of the challenges discovered after the co-researchers were subjected to the process of reading both academic and fictional text. The empirical findings attest that fictional texts are more relevant and suitable for attaining RC, as co-researchers can maintain engagement with the text. This is noted from the verbatim response of one of the co-researchers. According to the excerpt, the analysis of the graded book Love of money indicates that the co-researcher elicited the meaning or main idea of the book. The co-researcher can read into the events and apply them in reality by evaluating and comparing prevailing socio-political issues with the events from the book. This is because FS provides details about the setting, characters and main event, which are the fundamental aspects of themes related to the main topic/title. However, informational text requires critical thinking and reasoning, which a reader must exercise while reading. As a result, the academic text presented no opportunity for co-researchers to reflect, think, stop, connect and ask questions, as reflected in this excerpt, which is written verbatim:
Mr. Lekwala: During the reading of the academic text, were you able to identify hidden meaning?
Mr. Lekwala: So do you think, based on the academic text you read, that its main ideas couldn't be analysed and related to the context of reality?
Female student: I could say, Yes and No, sir! Yes, because the ideas were quite congested and, as such, difficult to comprehend what is the main idea. And no, in the sense that these were authentic events of an idea, one needed not to think out of the box.
Mr. Lekwala: How will you say this goal of identifying hidden meaning was achieved?
Female Student: Jrrrr! It was a challenge to achieve the goal of identifying the hidden meaning because the text was boring and full of rhetorical meanings.
Even though academic texts are predominantly used as a learning strategy, fictional texts manage to present an effective opportunity for the reader to stop, think, ask, connect and reflect. This results in co-researchers' ability to remain engaged, entertained and captivated throughout the fictional text. In sum, fictional texts succeeded in attaining reading comprehension because co-researchers were holistically involved, and the strategy described above assisted in challenging their critical thinking and reasoning capacity. This proves that FS is a suitable learning strategy to enhance RC amid the engaged reading attained.
Collaborative Learning
Co-researchers, as per the empirical findings, are able to establish synergy in order to achieve RC. During data generation, in one instance, a female co-researcher sought to solicit the intervention or assistance of a fellow co-researcher. However, because of significant restriction, RC was deterred as a result of this particular lack of collaborative learning, as is evident from this excerpt:
Mr. Lekwala: During the process of reading academic text, were you able to understand some of the words while reading?
Female student: Absolutely not, iyoh! I wanted to ask the next person sitting apart from me, but it was difficult for me because I was not allowed to interact with anyone.
Mr. Lekwala: How will you describe reading in isolation?
Female Student: It is very difficult because silent reading is boring and overwhelming. So it would be better if we had to read in groups and consult each other in terms of pronunciation of words and stuff.
Therefore, collaborative learning is essential to the reading activity if RC is to be achieved purposively. In comparison, FS depicts a sense of synergy where characters collaborate to achieve a specific goal. As a result, students learn these skills from fictional texts and are enticed to adopt and implement them in their daily life roles. Once this form of ability manifests itself, there is assurance that RC was certainly achieved, because the reader acquired a particular set of skills that is requisite for effective learning. Furthermore, collaborative learning enhances communication skills, emotional intelligence and tenacity. These skills and traits manifest across the plots of a fictional story, wherein a particular character must communicate unequivocally and without contradictions in an attempt to fulfil a mission. Similarly, the character must maintain composure, demeanour and perseverance to realise the set goal. Consequently, the prospects of a fictional story reader acquiring these traits and skills are high, as the reader's critical thinking is not only challenged but also expected to visualise how these tenets can be applied in a particular situation. In brief, fictional stories propel independent thinking in a manner that requires engaging with initiatives or situations holistically, that is, intellectually, emotionally and physically, in rapport with the dynamics of the environment and of society.
Prior Learning or Existing Knowledge
When reading, prior knowledge is activated and interest invigorated so that the leverage to sustain the reader through the text is dominantly maintained. However, this was a challenge when using academic text, as RC is not optimally achieved because prior learning or knowledge is not triggered in readers. It is discernible from this excerpt:
Mr. Lejoi: Were your preconceived ideas changed after reading fictional text?
Male student: Yes, initially, I viewed women's dignity as drained by abuse while I didn't consider that values can play an important role...to change a woman's confidence.
Mr. Lejoi: Based on your prior knowledge, between fictional text and academic text, which one is difficult and why?
Male student: Huh….! I thought that fictional texts were difficult to understand because fictional materials are imaginative, but as I read, I realised that imaginative things happen in real life, so reality is based on imagination… you know!
Mr. Lejoi: Have your knowledge been tested by fictional things?
Male student: Mmme ja! Because, for example, I thought crime could be an alternative for surviving but, eish! After reading fictional material, I realised that it is not an alternative.
Relatedly, the fictional text contains features that potently challenge the reader to apply existing knowledge to what is being read. This was identified during empirical data generation, where Mr Lejoi asserted that his preconceived ideas about women were drastically changed after reading fictional text. Amid this assertion, the findings conclude that fictional texts activate prior learning or knowledge, which is critical and essential for one to achieve RC. This prior knowledge is activated when certain events from the fictional text test the reader's existing knowledge, such as the use of criminal acts as a means of survival. This tested the prior knowledge of the co-researcher, who thought crime was an alternative means of surviving. However, after reading about crime and its effects in the fictional text, he learned that crime is not an alternative means of surviving, as its consequences are death and prison. In other words, the co-researcher's preconceived ideas changed during the reading of the fictional text, indicating that his prior knowledge was activated, linked to the text and changed after reading, thus showing comprehension.
In contrast, it is established that prior learning exposes the reader to a learning curve where an assessment of what is known and not known is made; based on the empirical data, the concept of transition into reality check mode (RCM) is coined. This RCM is the mode that enables the reader to juxtapose the fact and fiction extracted from fictional materials through critical thinking, which in turn reflects the level of comprehension that one has optimally attained. Invariably, RC might be blurred once prior learning is not brought into the flow of thoughts and reasoning. However, the credence of the empirical data attests that FS has the potency to bring thoughts and critical reasoning into line with prior learning; thus, it affirms the use of FS as an alternative learning strategy to enhance RC.
Obstacles to Reading
Social networks and related media are the distractors that impede RC. Therefore, the findings conjured the concept of reciprocal reading (RR) as the process whereby reading fictional texts is simultaneously intertwined with the thoughts and feelings of the reader. FS produces this to avert the situation whereby the reader incessantly solicits entertainment, interest and attention from social media because there is a significant lack of RC. Once there is a lack of RC, it is a vivid sign that interest and attention are not captured in the reading process, which propels one to detour his/her cognitive processes into social networks and related media.
Moreover, to curb this challenge, FS must be used as the learning strategy for RC, wherein readers' challenges of concentration bred by lack of attention and interest will be overcome by RR espoused by the potency of FS. Insofar as FS is concerned, the lack of attention and interest during the reading process is sealed by the use of FS as they present flexibility, induced entertainment, adaptation and relaxation demonstrated by characterisation and conflating of a sequence of events from fictional materials. This surmises that FS is used as the learning strategy for RC which is significantly imperative to overcome the challenges of RC.
Other Challenges
The aim is to propose using FS as the learning strategy to enhance RC, and the inquiry conducted managed to generate empirical data that revealed other challenges as some of the barriers to RC. These challenges include, among other things, the use and application of the comma (,) and the semicolon (;) in the sentences of the texts. Although related studies educate us about the use and application of punctuation marks, there have not been significant studies that focus on how their use and application enhance RC. Therefore, this necessitates future studies that focus on how the use of punctuation marks enhances RC. Amid these challenges identified by the empirical data, it is conceded that even during the reading of FS, the use and application of punctuation marks perpetuated the need for future studies, as they constantly kept appearing as one of the barriers to RC. During the reading of FS for the purpose of RC, these challenges were attributed to the readers' lack of exposure to these punctuation marks, which left them without what the researcher, influenced by the empirical data, coined linguistic dexterity. Linguistic dexterity is defined as the mental grasp and understanding of the use and application of perceived punctuation marks in the text during the reading process; it is attained when the reader can clearly notice a punctuation mark's position in the text and relate the meaning of the entire text without deviating from the main idea. Therefore, this poses a challenge and an invitation to future studies and scholars to focus on how punctuation marks enhance RC, thus constituting a research gap in the body of knowledge. Decisively, this concludes that punctuation marks are still a perpetual barrier that requires further study in relation to RC, and their use and application delayed the acquisition of the quality the researcher coined as linguistic dexterity.
CONCLUSION
One of the major social skills required is a sense of collegiality. In relation to proposing the learning strategy of using FS to enhance RC, the empirical findings confirmed the assumption from the literature review that challenges such as collaborative learning, illogical informational text and prior learning can be overcome by using FS. Collaborative learning is very significant in reading because it not only instils a sense of belonging but also develops communication skills that are paramount to learning. In addition, informational text and prior learning are challenges that are mitigated during the reading of FS, wherein the co-researchers' prior experiences were ignited by fictional events in relation to certain themes that they had previously understood in a particular way. However, after reading the fictional text, the readers (in this instance, the co-researchers) were able to adopt a new sphere of knowledge by synthesising and interpreting the fictional events in a manner that reflects reality.
In short, these reasons are the fundamental aspects that qualify the proposed learning strategy as the best alternative strategy for enhancing RC. Therefore, this paper, based on its empirical findings, analysis and presentation of results, recommends FS as the learning strategy to enhance RC. This means that scholars, students and stakeholders in academia can use this learning strategy to address the relative challenges that confront RC in various spheres of education. This is because there is a patent indication that FS provide content that succinctly conveys informational text, as readers can maintain concentration throughout it. They also activate readers' previous learning experiences, which can be linked to what has been read, rather than simply changing the existing perception during the reading of FS.
Moreover, FS mitigate the challenges of collaborative learning as, by their nature, FS enable a conducive reading environment in which collective engagement and discussion are encouraged and optimally used among students. FS also defeated the obstacles to reading, such as interruptions from social networks and other related media, which were empirically evident given that these are incessant challenges that remain endemic and keep prevailing among readers. Hence, FS are structured to maintain and preserve interest, attention and concentration. However, the paper restricted its scope to the challenges of reading comprehension, and there is a need to examine the methods used in both primary and secondary schools to teach reading skills, particularly in relation to the use of fictional stories, whether short or long. In addition, further studies are required to determine the use of literary texts, such as poems, to teach reading skills for reading comprehension. Although the paper recommends fictional stories as the learning strategy to enhance the reading comprehension of first-year university students, it is essential to review some of the literary texts used in primary and secondary school to understand the significant impact these texts have on the comprehension levels of learners.
|
v3-fos-license
|
2020-02-06T09:13:51.061Z
|
2018-12-30T00:00:00.000
|
216751264
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://revmaterialeplastice.ro/pdf/ROMANEC%204%2018.pdf",
"pdf_hash": "63a05af47b16cb93f8d1fcd846381e9717bfabf3",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46166",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3b53cdc80b1e773196f0aa066b2eda5db1cf7af0",
"year": 2018
}
|
pes2o/s2orc
|
Morphofunctional Features in Angle Second Class Malocclusion on Dental Gypsum Models
The large diversity of clinical forms in Angle second class malocclusion explains the interest of researchers and clinicians in identifying changes in the dental arch in subdivisions II/1 and II/2. The purpose of the study is to identify the characteristics of the dental alveolar arch in order to determine the differences between class II/1 and II/2 malocclusions. The study was conducted on dental gypsum models of 62 orthodontic untreated patients diagnosed with class II/1 Angle malocclusion, respectively class II/2. The results obtained by us reveals a statistically significant differentiation in the dental arcade, a narrowed maxillary arch at the molar level and elongated at premolar and molar level, in subdivision II/1. Our data are consistent with the results of literature. The knowledge of dental arch features serves to develop a correct and complete diagnosis and also to reach the therapeutic goals and to evaluate post-treatment response in short, medium and long term.
Keywords: dental arcade, malocclusion class II Angle, gypsum model
The dental anomaly has now become a public health problem due to its special features: wide spread in the population, with an increasing general trend, aesthetic disturbances that may lead to difficulties in the social integration of individuals, a complex etiopathogeny, and disturbances in the general state of the organism [1-4]. To establish a proper treatment it is required, first of all, to know the degree of spreading of the disease, the quantitative dimension of the phenomenon, but also the qualitative aspect expressed in the gravity index of the malocclusion.
The need for treatment is correlated with the development of the dentition. In Finland it is found that at the age of 7, 23% of children have a malocclusion requiring immediate treatment, and 34% require repeated controls to observe the evolution of anomalies [5]. In Iceland, there is a prevalence of dentomaxillary anomalies of 11% in temporary dentition and 52% in permanent dentition [6].
Establishing an orthodontic diagnosis and treatment strategy involves knowing the characteristics of a dentomaxillary anomaly and also the identifying and quantifying changes in the dental and muscular skeleton [7][8][9][10]. The dental arch is defined by: size and shape. The interest in knowing this sector of the stomatognate system is determined by: the relations established between the dental arcade and the cranio-facial structures, the fact that the dental arcade often reacts, compensating for the disequilibrium at the skeletal level, and, importantly, that the dental intra-arch harmony has consequences on dental occlusion [11][12][13][14][15].
Researchers focused on the study of the relationship between the cranio-facial structures and the size of the dental arch in the subjects with malocclusions [16][17][18], finding that the maxillary dental arcade in class II/1 malocclusion is narrower in the dolicocephalus and wider at brahicephalus, while the size and shape of the mandible arch is similar to all three facial types (mesocephalic, brahicephalic, dolicocephalic). Other authors followed the characteristics of the dental arch by comparison between class II/1 and class II/2 malocclusion, in subjects who did not perform orthodontic treatments [19][20][21]. While some researchers [19] find intercanine distances in the maxilla and mandible higher than in the class II/2 witness group and lower in class II/1, other researchers [20,21] find in their studies a lesser intercanine distance compared to the average. Other researches refer to the characteristics of the dental arch in class II malocclusion as compared to the dental arch of children without abnormalities, revealing almost insignificant differences [22]. In contrast, Staley [23] finds larger intermolar and canine distances in children normally developed than those with Angle second class.
The large diversity of clinical forms in Angle class II malocclusion explains the interest of researchers and clinicians in identifying changes in the dental arch in subdivisions II/1 and II / 2 as well as the differences that may exist between them.
The purpose of the study is to identify the characteristics of the dento-alveolar arch in order to determine the differences between class II/l, II/2 malocclusions.
Experimental part
Materials and methods
The study was conducted on gypsum dento-alveolar models of 62 orthodontic untreated patients diagnosed with class II/1 Angle malocclusion, respectively class II/2 Angle, 40 girls (64.5%) and 22 boys (35.5%).
Regarding the frequency, according to the two subdivisions of the 2nd Angle class, the distribution was: 35 subjects with class II / 1 Angle (56.5%) and 27 subjects with class II / 2 Angle, (43.5 %) with an average age of 10.76 class II / 1 and 10.167 class II/2 ( fig. 1).
The dental-alveolar arcades were made by the same doctor, and the molding and processing of the dental model by the same dental technician.
The measurements were made by two independent examiners, the differences being identified by a third examiner who also determined the average error.
The ideal values for the width and length parameters of the dental arch were calculated and the differences between measured and calculated values were made.
The database was computerized. Statistical processing was done using the SPSS 16.0 program (Statistical Package for the Social Sciences).
We used descriptive statistical analysis methods for presenting the two clinical forms, including analysis of the central trend of distribution and variant or dispersion indicators.
In relation to the descriptive statistical analysis of the obtained results, we have previously verified the nature of the distribution of the values of the tested parameters.
If the values of the tested parameters followed the normal law, we used the t test to analyze the differences between the two subdivisions, and when the measured parameter values did not follow the normal law, we used the nonparametric Mann-Whitney test. In conclusion, the maxillary arcade is narrowed at the premolar level more in II/1 than in II/2, but the difference is statistically insignificant. The difference from the ideal norm is 2.7805, with a standard deviation of 2.7134. In the subdivisions, the average was 3.5671 in II/1 and 1.663 in II/2, with standard deviations of 2.7399 and 2.2945.
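The original analysis was carried out in SPSS; purely as an illustration of the test-selection rule described above, a minimal sketch in Python is given below. The file name and column names are hypothetical, not taken from the study.
```python
# Illustrative sketch only: normality check, then t test or Mann-Whitney test,
# mirroring the test-selection rule described in the text (SPSS was used originally).
import pandas as pd
from scipy import stats

df = pd.read_csv("arch_measurements.csv")          # hypothetical file
group1 = df.loc[df["subdivision"] == "II/1", "premolar_width"]
group2 = df.loc[df["subdivision"] == "II/2", "premolar_width"]

# Shapiro-Wilk test of normality for each subdivision
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group1, group2))

if normal:
    # parametric comparison when both samples look normally distributed
    stat, p = stats.ttest_ind(group1, group2)
    test_name = "t test"
else:
    # nonparametric alternative otherwise
    stat, p = stats.mannwhitneyu(group1, group2)
    test_name = "Mann-Whitney test"

print(f"{test_name}: statistic = {stat:.3f}, p = {p:.3f}")
```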
Results and discussions
There are statistically significant differences between Class II/1 and Class II/2 Angles in the molar width (p = 0.034); the arcade is narrowed in class II/1.
There are statistically significant differences between subdivisions of class II/1 and class II/2 Angles in arcade length at both premolar (p = 0.005) and molar (p = 0.000): in class II / l the arcade is longer. The global average of the batch at the lower premolar level was 37.77, with a standard deviation of 3.3318 ( fig.9). The average in subdivisions were 38.92 in II/1 and 35.9688 in II/2, with standard deviations of 2.8419 and 3.3189 ( fig.9).
The difference that ensures the equilibrium of the mandibular arch at the global level in premolar area is 39.16, and the equilibrium values in subdivisions II/1, II/2 are 39.475 and 38. 847, respectively.
In the class II/l subdivision was -0.955, in class II/2 being -2.342, with standard deviations of 1.6699 and 2.4871.
The arcade width at the mandibular premolar level is statistically significant p = 0.004, lower in class II/2.
The width at the molar level
The difference from the ideal value indicates an average of 1.9186, with a standard deviation of 4.4781. In class II/1 the average difference is -2.2562 and in Class II / 2 it is -1.4290 with standard deviations of 4.6772 and 4.2423.
b) The length of the mandibular arch at the premolar level
The global average of the batch at the lower premolar level was 16.90, with a standard deviation of 1.655 ( fig.10). In subdivisions the average was 17.12 in II / l and 16.562 in II/2, with standard deviations of 1.6411 and 1.6720 (fig.10).
The difference that ensures the equilibrium of the maxillary arcade in the lower premolar area on a global level is 17.77, with a standard deviation of 1.477, and the equilibrium values in subdivisions II /1, II/2 are 17.73 and 17.79, respectively.
The difference from ideal values is -0.7751 on the global lot, (fig. 41). In the class II/l the difference is -0.697 and -0.897 in II/2, with standard deviations of 1.7884 and 2.1888.
The differences are smaller between II/1 and II/2, with a discreet shortening in II/2 at the premolar level.
The length of the mandibular arch at the molar level
The difference from the ideal value is 2.0841 on the whole lot, with a standard deviation of 2.6358. In subdivision II/1 the difference was 1.6172 and 2.6995 in subdivision II/2, with standard deviations of 2.6559 and 2.5374. The depth of the palatine veil indicates an average of 8.1441 in the whole study group, with a standard deviation of 3.2043. In subdivision II/1, the average of palatine veil depth was 8.7941 and 7.2600 in II/2, with standard deviations of 3.3555 and 2.8141 respectively.
From the point of view of the depth of the palatine veil, no significant statistic differences exist between Class II/ 1, II/2.
Results and discussions
The results obtained by us reveal a statistically significant differentiation in the dental arcade: the group we investigated shows a maxillary arch narrowed at the molar level and elongated at the premolar and molar levels in subdivision II/1. Our data are consistent with the results of the literature [19][20][21][22]. At the same time, they confirm McNamara's opinion that in class II/1 malocclusion there is a transversal component, which will also influence the treatment algorithm [24].
From the therapeutical point of view, the conclusion regarding the narrowing of the maxillary arch in class II/1 agree with the relation of jaw expansion/disjunction, in order to harmonize the dental springs for obtaining an eugnate occlusion. As far as the mandible arch is concerned, it shows more stability compared to the maxilla, which is highlighted in the specialized literature [16].
There is a decrease in the width and, significantly, in the premolar length, as evidenced by Pancherz's studies [25]. The shortening of the mandibular arcade in the caninepremolar region is considered a consequence of the high degree of overcoat, which produces the inferior retroalveolodention, in class II/2 malocclusion [26].
Dento-maxillary anomaly can have a major impact on the population, due to the damages of the dento-alveolar apparatus, which reflects on the general health status of the population. On the other hand, it is necessary to know the index of addressability of the population towards the dental care services, in general, and towards the orthodontics and dental-facial orthopedics, in particular. Treatment complexity index and treatment priorities can be established taking into account important data, like: the identification of the clinical manifestations of the anomaly, the etiological factors and the treatment needs [27][28][29].
Conclusions
Changes in class II malocclusion demonstrate alterations at both the dental and the alveolar level. The maxillary dental arch is narrowed and elongated in subdivision II/1. The mandibular dental arch is narrowed and shortened in the anterior section of the premolar region. The knowledge of dental arch features serves to develop a correct and complete diagnosis, to reach the therapeutic goals and to evaluate the post-treatment response in the short, medium and long term.
|
v3-fos-license
|
2022-03-17T15:22:02.468Z
|
2022-03-14T00:00:00.000
|
247481634
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/cin/2022/5233845.pdf",
"pdf_hash": "884cbf7fa438af74fa660445193e85ff5206ed5b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46171",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Computer Science"
],
"sha1": "cf013d5b452ce916a11193883f7f0b887a8fa6b6",
"year": 2022
}
|
pes2o/s2orc
|
Coal Mine Safety Evaluation Based on Machine Learning: A BP Neural Network Model
As the core of artificial intelligence, machine learning has strong application advantages in multi-criteria intelligent evaluation and decision-making. The level of sustainable development is of great significance to the safety evaluation of coal mining enterprises. BP neural network is a classical algorithm model in machine learning. In this paper, the BP neural network is applied to the sustainable development level decision-making and safety evaluation of coal mining enterprises. Based on the analysis of the evaluation method for sustainable development of coal enterprises, the evaluation index system of sustainable development of coal enterprises is established, and a multi-layer forward neural network model based on error backpropagation algorithm is constructed. Based on the system theory of man, machine, environment, and management, and taking the four single elements and the whole system in a coal mine as the research object, this paper systematically analyzes and studies the evaluation and continuous improvement of coal mine intrinsic safety. The BP neural network evaluation model is used to analyze and study the intrinsic safety of coal mines, the shortcomings of the intrinsic safety construction of coal mines are found, and then improvement measures are put forward to effectively promote the safe production of coal mines and finally realize the intrinsic safety goal of the coal mine.
Introduction
Coal will still be the main energy source for a long time. At present, the rapid growth of the economy puts forward higher requirements for the development of the coal industry [1][2][3][4][5][6]. Therefore, we must strengthen safety production and ensure the sustainable, stable, and healthy development of the coal industry. However, the coal industry is a high-risk industry. High gas and gas outburst coal mines account for about half of China's coal mines. Coal mine safety is the top priority of the whole industrial safety production work. Coal mining enterprises have the characteristics of many personnel, scattered operations, many equipment and facilities, wide distribution, bad natural conditions, many unsafe factors, complex working environments, and difficult management. The workplace is constantly changing [7]. The risk factors of natural disasters and production accidents always affect and restrict the safe production of coal mines. On the other hand, in recent years, with the continuous changes in the internal and external environment faced by coal enterprises and the continuous deepening of the reform of large and medium-sized state-owned enterprises, the operating conditions of coal enterprises have fluctuated. On the whole, they are in the process of continuous adaptation and re-adaptation, organization and reorganization, and innovation and re-innovation [8][9][10][11]. Coal resources are nonrenewable resources. Coal mining is bound to be restricted by the remaining reserves in the mining area, and coal enterprises will face resource depletion sooner or later. Therefore, the problem of sustainable development of coal enterprises is becoming increasingly prominent. Therefore, it is very necessary to construct a coal mine safety evaluation model based on the research on the evaluation of the sustainable development level of coal enterprises. Academic circles and decision-making departments at home and abroad have made a lot of exploration, especially in the evaluation of the sustainable development level of coal enterprises. Machine learning can be regarded as a task. The goal of this task is to let machines (computers in a broad sense) acquire human-like intelligence through learning. A neural network is a method to realize machine learning tasks. Talking about neural networks in the field of machine learning generally refers to "neural network learning." It is a network structure composed of many simple units. This network structure is similar to the biological nervous system, which is used to simulate the interaction between organisms and the natural environment. An artificial neural network (ANN) is an information processing system imitating the human brain model [12,13]. It has good abilities of self-learning, self-adaptation, associative memory, parallel processing, and nonlinear transformation.
How to effectively curb the occurrence of major mining accidents is the biggest problem to be solved in China's coal mine production. Coal mine safety theory is put forward in this environment. The coal mine underground is a complex and changeable man-machine-environment system. This paper attempts to evaluate the sustainable development level of coal enterprises by establishing a multi-layer forward neural network model based on the error back propagation algorithm (BP algorithm). It can avoid complex mathematical derivation and ensure stable results in the case of sample defects and parameter drift. It can also avoid a shortcoming of the classical sustainable development evaluation methods, such as the analytic hierarchy process [14][15][16], fuzzy mathematics [17][18][19][20][21][22], and principal component analysis [23,24], which cannot avoid the role of people's experience and knowledge and the personal subjective intention of decision-makers; this is of great benefit in solving the overall decision-making planning of coal enterprises. This paper will use system theory to take the coal mine man-machine-environment-management system as the research object, establish the coal mine intrinsic safety evaluation system and evaluation model, comprehensively construct the coal mine intrinsic safety system through specific and in-depth analysis of the various factors of man-machine-environment-management, provide a basis for coal mine safety production and management, and improve the safety production level of the coal mine industry.
Model Building.
According to the meaning of sustainable development of coal enterprises and the principle of index system design, combined with the existing achievements and the research on the specific situation of coal enterprises, an index system including 5 criteria layers and 17 specific indicators is constructed, as shown in Figure 1.
According to the problems to be evaluated, and combined with the multi-layer forward neural network model based on the error backpropagation algorithm (BP algorithm), the neural network model for the sustainable development of coal enterprises is established, as shown in Figure 2. The model is divided into two modules: the former is the normalization module, and the latter is the BP neural network (BPNN) module [25][26][27][28]. The BPNN module in the above model adopts a three-layer BP neural network, including an input layer, a hidden layer, and an output layer. The input of a neural network is required to be in [0, 1], so the original data of each evaluation index shall be normalized before network learning and training. The specific normalization rules are shown in Table 1. In this way, the network input value corresponding to each evaluation index in the sample can be determined by normalization.
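The exact normalization rules are those of Table 1 (not reproduced here); as a rough illustration of how raw indicator values can be mapped into [0, 1], a generic min-max scaling sketch in Python might look as follows, where the function name, bounds and sample values are illustrative assumptions only.
```python
# Illustrative min-max normalization of the 17 evaluation indicators into [0, 1].
# The paper's actual rules are given in Table 1; this is only a generic sketch.
import numpy as np

def normalize(raw: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Scale each indicator linearly so that lo maps to 0 and hi maps to 1."""
    scaled = (raw - lo) / (hi - lo)
    return np.clip(scaled, 0.0, 1.0)   # keep values inside [0, 1]

# Example with made-up bounds for 17 indicators
lo = np.zeros(17)
hi = np.ones(17) * 100.0
raw_sample = np.random.uniform(0, 100, size=17)
x = normalize(raw_sample, lo, hi)      # network input vector x1..x17
```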
Network Training and Learning.
The original data are sent to the normalization module after preprocessing. The normalization module will normalize the input data according to the rules in Table 1 to obtain 17 normalized values, and then input the normalized values into the BPNN module. According to the above analysis, the number of neurons in the input layer of the BPNN module is 17; that is, the input signals x_1, x_2, ..., x_17 correspond to the 17 normalized values; the number of output neurons is 1, i.e., the output o, which corresponds to the sustainable development level of coal enterprises. The number k of neurons in the hidden layer was adjusted by the learning process to 35. The learning process of the BP neural network is also the process of network parameter correction. The network learning system adopts supervised (teacher-based) learning, and the correction of network parameters adopts the gradient method. It is assumed that there are n system sample data (O_a, Ô_a), a = 1, 2, ..., n. Here, the subscript a represents the sample serial number, O_a is the sample output, and Ô_a is the actual output. x_ia is the input variable, i = 1, 2, ..., 17. The input variable is assigned to the m-th neuron of the hidden layer as its input according to equation (1), where w_im is the weight between input layer neuron i and hidden layer neuron m. The most commonly used transfer function of a BP neuron is the sigmoid function, equation (2). According to the sigmoid function, the output O'_m of hidden layer neuron m, as a function of its input x'_m, is given by equation (3). Similarly, the input and output of each unit of the output layer can also be obtained, which will not be described in detail here.
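Assuming the standard BP formulation implied by the definitions above (the exact published forms may differ slightly), the relations referred to as equations (1)–(3) can be written as:

x'_m = \sum_{i=1}^{17} w_{im}\, x_{ia} \qquad (1)

f(x) = \frac{1}{1 + e^{-x}} \qquad (2)

O'_m = f(x'_m) = \frac{1}{1 + e^{-x'_m}} \qquad (3)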
Through a certain number of network training processes, the network parameters are modified to determine the most appropriate weights, so as to minimize the residual error between the actual output Ô_a, obtained by forward operation according to equations (1) and (3), and the sample output O_a for all n sample inputs. The residual error is as follows:
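Assuming the usual sum-of-squared-errors convention for a BP network (an assumption, since only the variable definitions are given above), the residual error takes the form:

E = \frac{1}{2} \sum_{a=1}^{n} \left( O_a - \hat{O}_a \right)^2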
The correction of the weights and thresholds is realized by the gradient method of the back propagation algorithm. If t represents the time of iterative correction, and b_k and b_o represent the neuron thresholds of the hidden layer and output layer, respectively, then the parameter correction rules of the BP neural network are: (1) the connection weight from the input layer to the hidden layer is updated by a gradient step, where i = 1, 2, ..., 17; k = 1, 2, ..., 35; w_ki is the connection weight from the input node x_i to the hidden layer node R_k; and η is the learning rate. (2) The hidden layer neuron threshold is updated likewise, where k = 1, 2, ..., 35 and η′ is the learning rate. (3) The connection weight from the hidden layer to the output layer is updated, where k = 1, 2, ..., 35; c_k is the weight from the hidden layer node R_k to the output layer node O; and η″ is the learning rate. (4) The output layer neuron threshold is updated, where η‴ is the learning rate.
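As a concrete illustration of the training procedure just described, the sketch below implements a 17-35-1 sigmoid BP network with gradient corrections of the weights and thresholds. It is only a minimal NumPy sketch: the learning rate, initialization, epoch count and toy data are illustrative assumptions, not values from the paper.
```python
# Minimal sketch of the 17-35-1 BP network described in the text.
# Learning rate, initialization and epoch count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hidden, n_out = 17, 35, 1
W = rng.normal(scale=0.1, size=(n_in, n_hidden))   # input -> hidden weights w_ik
b_h = np.zeros(n_hidden)                           # hidden thresholds b_k
C = rng.normal(scale=0.1, size=(n_hidden, n_out))  # hidden -> output weights c_k
b_o = np.zeros(n_out)                              # output threshold

def forward(x):
    h = sigmoid(x @ W + b_h)          # hidden layer outputs
    o = sigmoid(h @ C + b_o)          # network output in [0, 1]
    return h, o

def train_step(x, target, eta=0.1):
    """One gradient-descent correction of all weights and thresholds."""
    global W, b_h, C, b_o
    h, o = forward(x)
    err = o - target                           # from 1/2 * (o - target)^2
    delta_o = err * o * (1.0 - o)              # output-layer local gradient
    delta_h = (delta_o @ C.T) * h * (1.0 - h)  # back-propagated hidden gradient
    C -= eta * np.outer(h, delta_o)
    b_o -= eta * delta_o
    W -= eta * np.outer(x, delta_h)
    b_h -= eta * delta_h
    return 0.5 * float(err @ err)              # residual error for this sample

# Toy training loop over fabricated samples (8 enterprises x 17 indicators)
X = rng.uniform(0.0, 1.0, size=(8, n_in))
y = rng.uniform(0.0, 1.0, size=(8, n_out))
for epoch in range(2000):
    for a in range(len(X)):
        train_step(X[a], y[a])
```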
After training and learning, the evaluation network can output an evaluation value measuring the level of sustainable development, which lies in [0, 1]. In order to clarify the sustainable development level of coal enterprises, the sustainable development status is divided into four levels: the first level is sustainable development, with a score range of 0.85 < β ≤ 1.0; the second level is primary sustainable development, with a score range of 0.70 < β ≤ 0.85; the third level is the transition from traditional development to sustainable development, with a score range of 0.50 < β ≤ 0.70; and the fourth level is traditional development, with a score range of 0 < β ≤ 0.50. In this way, the sustainable development level of the enterprise can be clearly obtained from the network output value. In each evaluation, no matter whether the evaluation result is recognized by experts or not, it can be used as a new learning sample to make the BP neural network evaluation system learn and improve continuously, so as to make its evaluations more accurate.
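For example, the mapping from the network output β to the four levels defined above can be written directly in code (level names are abbreviated for brevity):
```python
def development_level(beta: float) -> str:
    """Map the network output beta in (0, 1] to the four levels defined in the text."""
    if beta > 0.85:
        return "Level I: sustainable development"
    if beta > 0.70:
        return "Level II: primary sustainable development"
    if beta > 0.50:
        return "Level III: transition to sustainable development"
    return "Level IV: traditional development"

print(development_level(0.9))   # Level I: sustainable development
```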
Comprehensive Evaluation Model of Coal Mine Safety
This study establishes the safety evaluation index system of each element from the four elements of man, machine, environment, and management [29,30]. The construction of a coal mine safety evaluation model needs to organically combine the four elements of man, machine, environment, and management. Therefore, man, machine, environment, and management can be regarded as four primary indicators. Among the secondary indicators, human intrinsic safety indicators are divided into physical status, psychological status, safety education status, and safety technology status. Equipment safety indicators are divided into equipment reliability and production system factors. Environmental essential indicators can be divided into two categories. Combining the four elements organically, a coal mine safety evaluation classification model is constructed, which can be divided into four primary evaluation indexes and 14 secondary evaluation indexes, as shown in Figure 3. The scoring standard for coal mine safety evaluation in Table 2 is established with reference to the national guiding principles of intrinsic safety.
Fuzzy Evaluation of Coal Mine Safety Based on BP Neural Network
A fuzzy neural network (FNN) is a new and better system combining neural networks and fuzzy logic systems [31][32][33]. The system not only has the advantages of a neural network, that is, the functions of self-organizing and adaptive learning, but also makes up for a deficiency of the neural network in that it can directly deal with structured knowledge. The weights, which have no clear meaning in a traditional neural network, are given the physical meaning of rule parameters in the fuzzy system, which makes it convenient to study the system through these rule parameters.
Fuzzy Neural Network Learning Algorithm.
It is assumed that n and m are the numbers of input units and hidden units, respectively. X = (x_1, x_2, ..., x_n) is the input layer input of the fuzzy system. After fuzzy processing by the membership function, R = (r_1, r_2, ..., r_n) is obtained, which is the input vector of the neural network. Z = (z_1, z_2, ..., z_n) is the hidden layer output vector and Y = (y_1, y_2, ..., y_n) is the system output vector. W_j = (w_j1, w_j2, ..., w_jn) is the weight vector between the j-th neuron of the hidden layer and the neurons of the input layer. The weight vectors between all neurons of the hidden layer and all neurons of the input layer form a weight matrix W. V_j = (v_j1, v_j2, ..., v_jn) is the weight vector between the j-th neuron of the hidden layer and the neurons of the output layer.
The weight vectors between all the neurons of the hidden layer and all neurons of the output layer likewise form a weight matrix V. It is assumed that net_j^k represents the net input of the j-th neuron in layer k and n̄et_j^k represents the net output of the j-th neuron in layer k. When the BP algorithm is adopted, the input-output mapping relationship of the network is given below for the hidden layer and the output layer.
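Assuming sigmoid transfer functions f(·), as in the BP module described earlier (an assumption about the exact form), the hidden layer and output layer mappings can be sketched as:

z_j = f\!\left( \sum_{i=1}^{n} w_{ji}\, r_i \right) \qquad \text{(hidden layer)}

y_l = f\!\left( \sum_{j=1}^{m} v_{jl}\, z_j \right) \qquad \text{(output layer)}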
Application Steps.
After determining the basic structure of the training sample and model, the network training and model application are carried out according to the following steps shown in Figure 4.
Case Study
The 17 evaluation indexes of 8 enterprises reflecting the state of sustainable development are selected as learning samples. All samples have been normalized according to the rules in Table 1, as shown in Table 3. The above samples are trained through the network, and the network evaluation results are obtained, as shown in Table 4. It can be seen that the network output values of enterprise 1 and enterprise 2 are between 0.85 and 1.00, corresponding to the first level, sustainable development. Based on the evaluation results of the sustainable development level, the safety evaluation of enterprise 1 is now carried out. According to the intrinsic safety evaluation index system established above, data of the coal mine site are collected as shown in Table 5. Among them, the first 10 rows are the known safety assessment data of the first 10 months, and the corresponding actual safety assessment values are used as the expected outputs. Therefore, it is established that the intrinsic safety degree of coal mine enterprise 1 is level I. At the same time, it also shows the effectiveness of the neural network model applied to intrinsic safety evaluation. Limited by space, only the system intrinsic safety data are listed here as an example.
Conclusions
This paper establishes a three-layer BP neural network evaluation model to evaluate the sustainable development level of coal enterprises and obtains the sustainable development status of each enterprise. From the output layer neurons to the input layer neurons, the connection weights are corrected layer by layer, and the error back propagation correction is continuously implemented in the process of network training and learning, so as to reduce the error between the desired output and the actual output and improve the accuracy of the network response to the input mode. The evaluation results are completely consistent with the actual situation. The advantage of this method is that it avoids the subjectivity and complex mathematical derivation in the traditional evaluation methods and can still get stable and correct results in the case of missing samples and parameter drift. It will provide scientific and theoretical guidance for the scientific decision-making of sustainable development of coal enterprises and has certain research value.
Building on the sustainable development evaluation of coal mines, the establishment of intrinsically safe coal mines develops and extends the existing safety management model and coal mine safety quality standardization. It systematizes the new concept of coal mine intrinsic safety, and the established intrinsic safety evaluation index system and evaluation model are applied to the coal mine site. This can provide a theoretical basis and technical support for the safety management of coal mining enterprises, effectively improve the level of coal mine safety production, eliminate hidden dangers, prevent and control accidents, standardize and improve safety management systems, and improve the overall safety situation of coal mines.
Future research will focus on two aspects: (1) optimizing the processing procedure of the algorithm proposed in this paper to further improve its accuracy and efficiency; and (2) using big data technology to analyze the text data recorded during coal mine production, so as to analyze coal mine text data comprehensively and systematically and improve the risk pre-control ability of coal mine safety production.
Data Availability
The dataset can be accessed upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
v3-fos-license
|
2024-02-16T06:17:11.735Z
|
2024-02-15T00:00:00.000
|
267680551
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12471-023-01849-1.pdf",
"pdf_hash": "f9fda732fca8a378649a7d9b8d1eaa423505513c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46173",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d4d0aa504d93a2b603af284ca79ff2735bc3cdc9",
"year": 2024
}
|
pes2o/s2orc
|
Interventions to prevent postoperative atrial fibrillation in Dutch cardiothoracic centres: a survey study
Introduction Postoperative atrial fibrillation (POAF) is a common phenomenon following cardiac surgery. In this study, we assessed current preventive strategies used by Dutch cardiothoracic centres, identified common views on this matter and related these to international guidelines. Methods We developed an online questionnaire and sent it to all cardiothoracic surgery centres in the Netherlands. The questionnaire concerned the management of POAF and the use of pharmaceutical therapies (beta-blockers and calcium antagonists) and non-pharmaceutical methods (posterior left pericardiotomy, pericardial flushing and epicardial botulinum toxin type A injections). The use of electrical cardioversion, anticoagulants and left atrial appendage closure was also enquired about. Results Of the 15 centres, 14 (93%) responded to the survey and 13 reported a POAF incidence, ranging from 20 to 30%. Of these 14 centres, 6 prescribed preoperative AF prophylaxis to their patients, of which non-sotalol beta-blockers were prescribed most commonly (57%). Postoperative medication was administered by all centres and included non-sotalol beta-blockers (38%), sotalol (24%), digoxin (14%), calcium antagonists (13%) and amiodarone (10%). Only 2 centres used posterior left pericardiotomy or pericardial flushing as surgical manoeuvres to prevent POAF. Moreover, respondents expressed the need for guidance on anticoagulant use. Conclusion Despite the use of various preventive strategies, the reported incidence of POAF was similar in Dutch cardiothoracic centres. This study highlights limited use of prophylactic amiodarone and colchicine, despite recommendations by numerous guidelines, and restricted implementation of surgical strategies to prevent POAF. Supplementary Information The online version of this article (10.1007/s12471-023-01849-1) contains supplementary material, which is available to authorized users.
Introduction
Postoperative atrial fibrillation (POAF) is frequently seen after cardiac surgery and is associated with heart failure, longer hospitalisation, stroke and increased mortality [1,2]. In this respect, POAF could serve as a relevant discriminative marker for future cardiovascular risks [3]. Early POAF is defined as new-onset AF that occurs within the 30-day postoperative period and usually has a self-limiting course over 5-7 days.
Local management protocols consist of treatment strategies once patients present with POAF. However, there is an increasing interest in ways to prevent this complication altogether instead of merely treating it. Still, the debate remains whether one should aim to eliminate POAF completely, considering POAF could serve as a potential marker for atrial myocardiopathy rather than a cause of detrimental outcomes.
According to the European Society of Cardiology (ESC) in collaboration with the European Association for Cardio-Thoracic Surgery (EACTS), the incidence of AF after cardiac surgery is 15-45% [4]. To prevent POAF, the ESC/EACTS Guidelines favour the use of perioperative beta-blockers and/or amiodarone and correction of electrolyte imbalances. Posterior left pericardiotomy and bi-atrial pacing are also recommended. The Canadian Cardiovascular Society recommends a ventricular response rate- or rhythm-control strategy [5]. Prophylactic sotalol or amiodarone were suggested in case other beta-blockers are contra-indicated [6]. Colchicine is the only anti-inflammatory agent recommended to prevent POAF. Other recommended preventive agents are ranolazine and digoxin [7][8][9].
Non-pharmacological preventative measures include posterior left pericardiotomy, which entails cutting the posterior pericardium. This allows drainage of excess blood and fluid, prevents inflammation and reduces the tendency of AF to occur [10][11][12]. Alternative methods are active tube clearance to minimise common chest tube occlusion [13], and perioperative pericardial flushing by continuously rinsing the pericardial space with irrigation solution [14]. Botulinum toxin type A (BoNT/A), injected into the atrial or epicardial fat pad, causes temporary neuromodulation and is associated with fewer occurrences of POAF [15]. Left atrial appendage closure (LAAC) is a procedure that does not prevent AF but could reduce thromboembolic events [16]. Although studies have shown LAAC may actually increase the risk of POAF [17,18], the LAAOS III trial demonstrated it prevented stroke among patients with AF, both in the presence and absence of anticoagulation [19].
The aim of this study was to examine how preventive strategies for POAF are implemented in Dutch cardiothoracic centres and to evaluate their effectiveness. The use of predictive risk scores was examined, e.g. the POAF risk score, which was developed to predict the probability of POAF [20]. The application of LAAC was also enquired about.
Methods
An online survey was developed for all 15 Dutch cardiothoracic centres. The questionnaire comprised 27 open-ended and closed questions about preventive strategies for POAF in patients undergoing on-pump sternotomy.
We assessed the POAF incidence, anticoagulant strategy, perceived complications and usage of the POAF risk score and different tools for POAF prophylaxis among centres. These tools included pharmaceutical and non-pharmaceutical interventions (posterior left pericardiotomy, perioperative pericardial flushing and BoNT/A injections). Respondents were asked about LAAC and electrical cardioversion (ECV) use and whether they had suggestions for qualitative improvement.
The surveys were sent via e-mail to the local Cardiothoracic Surgery Registration Committee members of all Dutch heart centres. Non-responders were sent reminders. Data were stored in a password-protected environment.
Incidence of postoperative atrial fibrillation
Of the 15 Dutch cardiothoracic centres, 14 (93%) responded to the survey, but not all centres provided an answer to each question. Completion rates per question are outlined in Table S1 in the Electronic Supplementary Material. Thirteen centres reported a POAF incidence, ranging from 20 to 30% (median 27%). The reported incidences per centre are presented in Fig. 1.
Preoperative prophylaxis
Nine of the 14 centres (64%) used local protocols for POAF prophylaxis and treatment, consisting of administration of prophylactic beta-blockers and an anticoagulant regimen once POAF occurred. Two centres specified the prescribed dose of beta-blockers (i.e. 25 mg twice daily or 80 mg once daily). One centre also described perioperative electrolyte regulation in its protocol. One centre followed European guidelines instead of a protocol. Six centres (43%) prescribed pharmaceutical prophylaxis preoperatively. This included non-sotalol beta-blockers (4/6; 67%), calcium antagonists (1/6; 17%), sotalol (1/6; 17%) and other current medication (1/6; 17%). Three centres prescribed prophylaxis to all patients, 2 centres prescribed prophylaxis only to patients who were already on beta-blockers or calcium antagonists, and 1 centre prescribed an anti-arrhythmic drug to patients already being treated with the specific drug. Colchicine was not used at all. The reported prescription of preoperative medications for AF prophylaxis in 6 centres is outlined in Fig. 2.
Non-pharmaceutical prevention
Of the respondents, 12 (86%) believed posterior left pericardiotomy can prevent POAF, but only 1 centre used this intervention. Two centres (14%) were positive about this technique. The major reason for non-use was a lack of positive evidence (11/14; 79%). Similarly, most centres (11/14; 79%) believed perioperative pericardial flushing potentially reduces POAF, but only 1 centre used it in a research setting and none of them in a clinical context. All 11 centres found the evidence to be insufficient, and 2 centres (14%) stated there were no perceived benefits. One centre was positive about its effectiveness. Eight centres (57%) were familiar with BoNT/A injections in the atrial fat pad, but none used it to prevent POAF due to a lack of scientific evidence.
Left atrial appendage closure
Most centres (10/14; 71%) routinely performed concomitant LAAC for patients with pre-existent AF. Of these, 30% were aware that this procedure may increase the risk of POAF. Of the remaining 4 centres not routinely performing LAAC, 3 were not aware of this risk and 1 centre was. Indications to perform LAAC varied. Eight of the 14 centres (57%) expected to alter current indications in response to the LAAOS III trial results [19]. Two centres (14%) performed LAAC exclusively in surgical ablation, but one of them seldom performed LAAC for patients with chronic AF, 3 centres reserved LAAC for patients with pre-existent AF undergoing open-heart surgery, and 1 centre did not state an indication.
Postoperative prophylaxis and treatment
All 14 centres administered postoperative medication. Their first line of medication was standard AF prophylaxis, while the second and subsequent lines of medication were administered in the clinical phase. The majority (12/14; 86%) prescribed non-sotalol beta-blockers first, and 8 of them (57%) specified the use of metoprolol. Only 2 centres (14%) prescribed sotalol as standard prophylaxis.
Digoxin, amiodarone and calcium antagonists were only used therapeutically when patients presented with POAF. All centres had a second line of medication, which comprised sotalol (43%), digoxin (21%), amiodarone (14%), non-sotalol beta-blockers (14%) and calcium antagonists (7%). Eight centres had a third line of treatment, which mostly consisted of a calcium antagonist (50%). One centre administered either amiodarone or digoxin as third-line medication. Three centres had a fourth line of treatment. One centre prescribed either amiodarone or sotalol as its fourth line. Furthermore, 2 centres prescribed combination therapy of non-sotalol beta-blockers and digoxin as their third- or fourth-line treatment. The reported first and subsequent lines of medication for the prevention and treatment of POAF are outlined in Fig. 3.
Overall, non-sotalol beta-blockers and sotalol took up 38 and 24%, respectively, of all medication prescribed postoperatively for POAF. Digoxin, calcium antagonists and amiodarone comprised 14, 13 and 10% of the prescriptions, respectively.
Anticoagulant usage
All centres prescribed anticoagulants whilst treating POAF, but the indication differed. Six centres (43%) based their indication, once POAF occurred, on national guidelines, 2 centres (14%) used the CHA2DS2-VASc score and national guidelines, another 2 centres only used the CHA2DS2-VASc score, 3 centres (21%) followed local protocols, and 1 centre used both its local protocol and the CHA2DS2-VASc score.
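For reference, the sketch below illustrates how a CHA2DS2-VASc total is conventionally computed from patient characteristics. It is a hypothetical helper for illustration, not a tool reported by any of the surveyed centres, and clinical decisions should follow the applicable guidelines.

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes,
                 prior_stroke_tia_te, vascular_disease, female):
    """Illustrative CHA2DS2-VASc calculation: one point per risk factor,
    two points for age >= 75 and for prior stroke/TIA/thromboembolism."""
    score = 0
    score += 1 if chf else 0                              # C: congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age bands
    score += 1 if diabetes else 0                         # D: diabetes mellitus
    score += 2 if prior_stroke_tia_te else 0              # S2: prior stroke, TIA or thromboembolism
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category (female)
    return score

# Example: a 70-year-old woman with hypertension and diabetes scores 4.
print(cha2ds2_vasc(chf=False, hypertension=True, age=70, diabetes=True,
                   prior_stroke_tia_te=False, vascular_disease=False, female=True))
```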
The question about the annual number of patients discharged with vitamin K antagonists (VKAs) was answered by 13 centres (93%): 4 (31%) replied this number was unknown and they could not provide a description of the trend, 7 (54%) reported an annual incidence of 0-56% (median 21%), and 2 (15%) could not provide an exact number but either stated a decrease in VKA prescriptions or an increase in direct oral anticoagulant (DOAC) prescriptions. The centre that did not prescribe VKAs to patients with POAF administered DOACs.
With regard to prescription duration, 1 centre was not aware how long its patients took VKAs. The majority (12/14; 86%) included the duration of VKA prescription in the referral letter to the Dutch Thrombosis Service. One centre prescribed anticoagulants indefinitely. None of the centres arranged a follow-up for their patients pertaining to VKA usage.
Electrical cardioversion
Only 3 of the 14 centres (21%) did not perform standard ECV if chemical conversion failed, although 2 of them performed ECV on indication or in hemodynamically impaired patients. Twelve centres answered the question about the annual number of ECVs: 3 (25%) did not know the exact number and 9 (75%) provided an estimate, which ranged from < 10 to 200. Figure 4 shows the reported annual number of ECVs performed per centre.
Perceived complications
Thirteen respondents (93%) noticed longer hospitalisation for patients with POAF. Additionally, 6 centres (43%) replied these patients generally had other complications. Anaemia and pleural effusion each comprised 21% of the perceived complications, pneumonia made up 14%, and the incidence of sepsis, excessive pericardial fluid, hypoxia, overfilling, renal damage and neurological complications was 7% each.
The perceived complications associated with POAF are outlined in Fig. 5.
Five respondents (36%) did not notice POAF occurred more often after specific procedures, but 9 (64%) stated they did. Of those that did, 5 centres reported observing POAF mainly after mitral or any valvular surgery, 1 centre noticed POAF after coronary bypass, and the remaining 3 saw POAF more frequently after complex procedures, longer perfusion and clamping duration, and impaired left ventricle function.
Postoperative atrial fibrillation risk score
None of the centres used the POAF risk score [20]. However, when asked about their willingness to implement it, 7 (50%) were positive, 5 (36%) were negative, and 2 (14%) were indifferent. Reasons for unwillingness were insubstantial support, no effective prevention, and no added benefits either due to low POAF rates at their centre or because the centre administered prophylaxis to all patients. Of the 13 centres that further elaborated on their answer, 12 were willing to use a similar risk score if proven sufficiently effective. One centre was indifferent, as it considered all patients undergoing surgery as high-risk.
Recommendation on improvements
Half of the respondents (7/14) answered the question on what could be an improvement for Dutch centres regarding POAF management. They mainly wanted better national agreements or a consensus on preferred treatment for more uniform management. Others expressed wanting better designed guidelines for anticoagulant prescription and increased guidance on duration, especially after referral to the Thrombosis Service.
Discussion
In this survey study, the incidence of POAF reported by Dutch cardiothoracic centres was 20-30%. Despite the use of different preventive strategies, the reported incidences were similar. Respondents highlighted the need for more uniform treatment. The use of non-pharmaceutical preventive interventions was limited as respondents were awaiting results of upcoming trials. The majority observed longer in-hospital stay for patients with POAF and noticed POAF was accompanied by other complications, primarily anaemia, pleural effusion and pneumonia. Furthermore, centres embraced concomitant LAAC as a relevant stroke prevention measure in patients with pre-existing AF. Moreover, the discharge of patients taking VKAs may need better follow-up to prevent unnecessary prolongation of medication usage. A remarkable aspect of our study was the varying number of ECVs, suggesting that additional guidelines that stipulate the exact indications are needed. Considering the lack of evidence on optimal anticoagulant strategies for incident and transient AF, it is important to acknowledge the need for further research, which should provide clear guidance on the optimal use and duration of anticoagulants in these specific patient populations.
Preoperative prophylaxis was implemented in 6 centres, of which 3 solely prescribed preoperative prophylaxis for patients already on therapy. The other 3 centres prescribed preoperative prophylaxis to all patients. All 14 centres administered postoperative prophylaxis. Respondents noticed POAF occurred more often after valvular surgery. Valvular AF is a common indication for valve surgery [1]. Although a link cannot yet be established, POAF following valvular surgery may be more common, possibly due to pre-existent advanced atrial cardiomyopathy, and this requires further study.
Colchicine was not prescribed pre-emptively, which is remarkable given the potential benefits [9]. However, there was also mention of gastro-intestinal side effects [9], and use of this drug will therefore require fine-tuning of the specific dose needed. Amiodarone administration was limited among medical centres, comprising 10% of the postoperative medication prescriptions. Furthermore, amiodarone was only used for treatment purposes and not as a preventive drug. Notably, amiodarone is only recommended when beta-blockers are contra-indicated, according to several guidelines [5,21,22]. Therefore, beta-blockers may often not be contra-indicated in the clinical setting. The 2020 ESC/EACTS Guidelines suggest combining amiodarone and beta-blockers, as this has an increased effect on reducing POAF [4]. In light of the newly found results, amiodarone should be considered more as a means to prevent POAF. Amiodarone does carry risks that, again, necessitate fine-tuning [7]. A lower cumulative dose (< 3000 mg) may still be effective while avoiding adverse effects, according to the ESC/EACTS Guidelines [4].
Reasons for not using the POAF risk score were lack of perceived benefits and effective preventive measures. The latter implies that, even if the risk score is reliable, there are currently no consistent prevention methods, which essentially undermines the prominence of predicting POAF.
Study limitations
Several limitations need consideration. Firstly, we may not have detected all cases of POAF due to, for example, different durations of telemetry monitoring per centre or transfer of patients before the occurrence of POAF, resulting in underdiagnosis.
Additionally, 1 centre did not answer the questionnaire, and not all centres that responded answered each question. Continuous reminders aimed at maximising response and completion rates could have resolved this issue. Moreover, 1 centre may have had registration bias, given they did not score preoperative AF until 2019. Respondents were not asked if electrolyte imbalances were routinely regulated; however, we can assume Dutch centres view this as a routine intervention that is proven effective [11,23,24].
Recommendations
As aforementioned, although POAF is a short-term incident, it is still clinically relevant due to its association with long-term complications [1,2,25]. Prophylactic administration of amiodarone and colchicine is suggested for POAF prevention, as well as the use of posterior left pericardiotomy. Many other strategies can be assessed, such as active tube clearance, (bi)atrial pacing, and mapping and ablation of autonomic ganglia [7,13,26,27]. Even though usage of such methods may be low in Dutch centres, it may prove valuable to obtain opinions on the matter via a future questionnaire. Another suggestion could be a nationwide/multicentre prospective implementation trial for non-pharmaceutical interventions, as this will facilitate the use of non-pharmacological prophylaxis and provide research results and direct experience with these interventions amongst practitioners.
It is unknown whether it is necessary to completely eliminate POAF, given its potential role as an indicator of underlying cardiovascular risks [3], and it is uncertain whether preventing POAF reduces the risk of long-term cardiovascular events. Rather than solely targeting the elimination of POAF itself, it may be more clinically relevant to focus on preventing long-term cardiovascular complications in patients with POAF. This could involve emphasising preventive measures and management of risk factors for future cardiovascular events. Furthermore, patients scheduled for elective cardiac surgery could be screened using speckle-tracking echocardiography, a diagnostic tool for disclosing atrial cardiomyopathy. Patients with this condition are at risk for developing cardiovascular incidents, which are now paradoxically associated with POAF [28].
Conclusion
This study demonstrated limited use of preoperative POAF prophylaxis and non-pharmaceutical measures in Dutch cardiothoracic centres. POAF was associated with other complications, mainly pneumonia, pleural effusion and anaemia, and was perceived to occur more often after valvular surgery. There was consensus on the need for better national guidelines to come to a uniform approach to prevent POAF and the need for regulated management of anticoagulant use and increased guidance on prescription duration for patients. Adjustment of current protocols via an implementation trial was considered necessary to facilitate, and implement, all aspects needing change highlighted in this study.
Furthermore, it is essential to highlight the insight that a proportion of patients scheduled for cardiac surgery may already have a preclinical state of atrial cardiomyopathy. Its diagnosis could act as a marker for adverse outcomes and may be used to enhance preventive measures and risk management in these patients.
Fig. 3 Reported prescription of postoperative medication for prevention and treatment of postoperative atrial fibrillation in Dutch cardiothoracic centres. Prophylaxis is first line of medication only

Fig. 4 Reported number of electrical cardioversions performed yearly in Dutch cardiothoracic centres
|
v3-fos-license
|
2024-03-29T05:11:27.506Z
|
2024-03-01T00:00:00.000
|
268728805
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1099-4300/26/3/238/pdf?version=1709881075",
"pdf_hash": "84ddc96bfcfd88f84d884fa49b827fb4ae236114",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46174",
"s2fieldsofstudy": [
"Physics",
"Philosophy",
"History"
],
"sha1": "84ddc96bfcfd88f84d884fa49b827fb4ae236114",
"year": 2024
}
|
pes2o/s2orc
|
It Ain’t Necessarily So: Ludwig Boltzmann’s Darwinian Notion of Entropy
Ludwig Boltzmann’s move in his seminal paper of 1877, introducing a statistical understanding of entropy, was a watershed moment in the history of physics. The work not only introduced quantization and provided a new understanding of entropy, it challenged the understanding of what a law of nature could be. Traditionally, nomological necessity, that is, specifying the way in which a system must develop, was considered an essential element of proposed physical laws. Yet, here was a new understanding of the Second Law of Thermodynamics that no longer possessed this property. While it was a new direction in physics, in other important scientific discourses of that time—specifically Huttonian geology and Darwinian evolution, similar approaches were taken in which a system’s development followed principles, but did so in a way that both provided a direction of time and allowed for non-deterministic, though rule-based, time evolution. Boltzmann referred to both of these theories, especially the work of Darwin, frequently. The possibility that Darwin influenced Boltzmann’s thought in physics can be seen as being supported by Boltzmann’s later writings.
Introduction
The things that you are liable To read in the Bible, It ain't necessarily so.
-Ira Gershwin
There is no 10,000 kg sphere of pure gold anywhere in the universe. But that fact's universality does not make it into a law of nature because while there does not happen to be such a sphere, there could be. Physical laws seem to require a special sort of necessity, what philosophers of science term "nomological necessity", that tells us what must or cannot happen [1]. Ludwig Boltzmann's formulation of the Second Law of Thermodynamics [2] was contentious in part because it challenged the traditional understanding of the nature of that necessity in physical law.
Contemporaries like James Clerk Maxwell, Josef Loschmidt, and Ernst Zermelo objected to Boltzmann's approach in order to save the traditional account of natural law. If Boltzmann's proposal clashes with the commitment to traditional necessity, they held, then the proposal should be jettisoned. Boltzmann, on the other hand, opted to revise our understanding of what should be expected from physical laws.
What accounts for Boltzmann's willingness to be so philosophically radical? The reason may lie in a combination of two factors: (1) Boltzmann's brand of realism: he was an "entity realist", believing that atoms do exist, but not a "nomological realist", holding that our best scientific theories merely provide a Bild, a useful picture that should not be seen as literally true; and (2) Boltzmann may have employed Darwinian evolution to provide a Bild to understand the microscopic world, in which random changes are responsible for the time evolution of a system that is time-irreversible in practice, but not in principle.
Thermodynamic entropy famously emerges from Sadi Carnot's work on steam engines, the machines that powered the industrial revolution.They ran on coal, which had to be mined.Those coal mines not only provided fuel, but also unearthed the geological strata that led to modern geology.Within the layers of rock were fossils, often of animals different from their modern counterparts and in locations where the animals did not seem to belong.This set the stage for new biological theories of speciation, and ultimately, Darwinian natural selection.
Huttonian geology and Darwinian evolution are both scientific theories that differentiate temporal directions without the necessity traditionally required in physics, and both were big news in the scientific community when Boltzmann was working on entropy.Geology and biology thus may have provided models for Boltzmann's thinking about his statistical picture of thermodynamics.While Boltzmann's overture is no smoking gun, that is, there is no direct reference to Darwin in Boltzmann's works in which he develops his understanding of entropy, there are plenty of places both during and after his seminal pieces on statistical mechanics where Boltzmann does not only refer to, but actively employs, Darwin's thought for a range of purposes.Based upon them, a circumstantial case can be made that Boltzmann used biological evolution as a Bild through which to understand the time evolution of thermodynamic systems.
Maxwell's Models and Boltzmann's Bilder
Maxwell and Boltzmann held complicated relations to scientific realism, the view that our current best theories reflect reality.Maxwell was famous for his mechanical models, physical analogies that were useful heuristics, but never intended to describe an underlying reality [3].Playing a similar role in Boltzmann's understanding of the scientific method was his notion of Bild, that is, pictures or models used to make sense of systems, but not necessarily to provide accurate accounts.Indeed, he held that scientists should pursue a range of these models, as each may be fruitful in a different way.
Yet, while both employed explicitly anti-realist approaches in doing science, both also harbored realist ambitions.If a notion employed in the models developed by scientists, say, of molecules, was sufficiently successful in accounting for a wide enough swath of phenomena without glaring anomalies, then we would have warrant for considering that notion to refer to an actual component of the universe.In both holding most scientific work to be of mere instrumental value, while still allowing that sufficient predictive and explanatory success provided metaphysical license, Henk de Regt [4] calls Maxwell and Boltzmann "flexible realists".
For Maxwell, absolute truth was restricted to the Divine [5]; only God could know the truths of the universe with certainty.Humans could, at best, develop a rough intuitive sense of the way the world worked, and this was aided by our ability to construct mental metaphors, models that frame the system under investigation in terms of an analogy to another system we understood better."By a physical analogy I mean that partial similarity between the laws of one science and those of another which makes each of them illustrate the other (quoted in [6] (p. 208))".Maxwell (and many who followed him) dedicated great effort to constructing cognitively and actually building mechanical models of abstract entities like the magnetic field and the luminferous aether.
As those models became able to account for increasingly greater numbers of observable phenomena, and to predict new ones that had not been previously suspected, the question of the realistic interpretation of the models naturally emerged.
"The question of the reality of analogies in nature derives most of its interest from its application to the opinion, that all phenomena of nature, being varieties of motion, can only differ in complexity, and therefore the only way of studying nature, is to master the fundamental laws of motion first, and then examine what kind of complication of these laws must be studied in order to obtain true views of the universe.If this theory be true, we must look for indications of these fundamental laws throughout the whole range of science, and not least among those remarkable products of organic life, the results of cerebration (commonly called 'thinking').In this case, of course, the resemblances between the laws of different classes of phenomena should hardly be called analogies, as they are only transformed identities (quoted in [5] (p.76))".
Maxwell thus argues that when a model is sufficiently successful, we cannot but begin to see it as being actually descriptive of the underlying reality.We should never forget that it is a model and not take the model to be a full and complete description, but we must make some limited inference in our grasping of things as they really are.
Boltzmann was deeply influenced by Maxwell's physics, but also his epistemology.Enticed by the success of Maxwell's method of theorizing, he, too, hews to a scientific methodology that employs a sort of model at its heart.But where those models were almost exclusively mechanical for Maxwell, Boltzmann moves to a notion that was reflective of what was happening in Austrian culture at the time.
The notion of "Bild" was very much in the air around Boltzmann [7].On the one hand, it is the term that was used for a photograph, a new technology that led to a wave of philosophical conversation in the culture around reality and representation.Are the properties of the photograph, for example, absolute truths of the world?Is it perspectival or absolutely objective?Could it be used as evidence in a court?In science?What can be inferred about the subject of the photograph from the image itself and with what degree of certainty?
At the same time, it was also the root of the term "bildung", which referred to the process of self-creation through education and culture.In a class-conscious society, as the urban centers of the late Austro-Hungarian Dynasty were, there was a deep connection of the notion of "Bild" to what one should believe as a result of a process of discovery.It connoted a relation of the proper orientation of self to the social world.
These senses are embedded in Boltzmann's Austrian twist on Maxwell's notion of a mechanical model, turning it into more abstract notion of a scientific Bild.While Boltzmann uses the term in a multiplicity of ways [7], it is central to his approach to scientific methodology.A Bild is a model that includes elements beyond the observable, mentally developed cognitive constructs.These additional theoretical elements from the mind of the theorist create an explanatory construct which can be used not only to visualize what a system might look like that gives rise to the observed phenomena in the way a Maxwellian mechanical model does, but also to suggest future phenomena that might be accounted for as well.
In "On the Development of the Methods of Theoretical Physics in Recent Times", Boltzmann points to Wilhelm Weber's electro-magnetic theory which, although later discredited by Maxwell's theory, nonetheless suggested the hitherto undiscovered Hall effect [8].Because even a Bild like Weber's that turned out not to work beyond the phenomenon it was designed to account for and yet could have such progressive elements, and because a Bild should be thought of as a heuristic tool and not an accurate description of the underlying reality, there is an advantage in having theoreticians create a multiplicity of different Bilder.
In "On the Fundamental Principles and Equations of Mechanics" [9], Boltzmann writes, "If in this way we have grasped the task of thought in general and of science in particular, we obtain conclusions that are at first sight striking.We shall call an idea about nature false if it misrepresents certain facts or if there are obviously simpler ideas that represent these facts more clearly and especially if the idea contradicts generally confirmed laws of thought; however, it is still possible to have theories that correctly represent a large number of facts but are incorrect in other aspects, so that they have a certain relative truth.Indeed, it is even possible that we can construct a system of pictures of experience [Bildern der Erscheinungen] in several different ways (pp.105-106)".
This multiplicity of distinct models from the same initial set of observations should be seen metaphorically along the lines of mutations in a Darwinian context.Different alterations will have potentially different advantages in terms of the "selection pressures" that are generated by additional experimental discoveries.Indeed, Rosa et al. [10] try to develop a Darwinian epistemology along this line.As such, we see in Boltzmann, as with Maxwell, that a sufficiently successful model should be taken to provide us with some sense that we are developing a picture that is in some limited way representative of the underlying reality.
Maxwell and Boltzmann both allow inferences from sufficiently successful models.The nature of that inference is what is called "entity realism".If there is an ineliminable element of a model that shows itself to be explanatory and predictive in a wide enough range of situations without significant failure, then there is reason to think that there is a correlate in the real system to that part of the model.In other words, models that are widely applicable without anomaly can give us reason to believe in something we cannot directly observe.
However, this realism does not extend to the model as a whole.While successful models can give us warrant for belief with respect to the furniture of the world, we must always remember that this is a model.Hence, the theory itself will always be at best an approximation, a mere analogy.The parts may point to real things, but the laws are not actual universal truths.While Maxwell and Boltzmann accepted a sort of entity-realism, they both rejected nomological realism, the view that our best current scientific theories tell us how the underlying reality actually operates.
The Reality of Molecules
This shared methodological approach led to a temporary disagreement between Maxwell and Boltzmann concerning an inference to the reality of molecules.
Michael Faraday was famously untrained in mathematics and thereby utilized mental pictures as analogies in working out his advances in electricity and magnetism.Maxwell, who was mathematically masterful, translated Faraday's insights into equations, much to Faraday's delight [5].Maxwell continued to use Faraday's approach of mechanical models prolifically, developing a range of intricate physical analogies.He treated electricity as an incompressible fluid and the ether as a set of cogs.
But perhaps most importantly, in a series of papers beginning in 1860 with "Illustrations of the Dynamical Theory of Gases" [11], Maxwell constructed mechanical models of gases, treating them as collections of particles that began as impenetrable spheres that interacted only by contact but which became decreasingly idealized as he went on.With each new element added to the mechanical model, more empirical thermodynamic phenomena could be derived.
Maxwell was under no illusion that he was proving anything, but rather saw himself as providing evidence in favor of the kinetic theory of heat [6].This evidence was the result of our ability to account for an increasing number of observable phenomena and regularities and suggested that this progress would likely continue with the further development of the molecular model.Maxwell famously wrote [11] "If the properties of such a system of bodies [as he assumed] are found to correspond to those of gases, an important physical analogy will be established, which may lead to more accurate knowledge of the properties of matter (p.377)".
The success of Maxwell's research program impressed Boltzmann, who himself began to contribute to it.While both were thoroughly committed to the mechanical theory of heat, in 1871 Maxwell was more reticent to attribute reality to the molecules as a result of an anomaly neither could account for: specific heat.
The ratio of specific heat at constant volume to specific heat at constant pressure could be experimentally determined.Maxwell's initial attempt with his simplified picture of molecules failed.This was unsurprising, but when he made the model more realistic, accounting for rotational and translational energies, even while considering the molecules to be polyatomic, Maxwell still could not solve the problem.Boltzmann followed with his attempt, but to no avail [4].
Their inability to account for the correct measured values of specific heats led Maxwell to withhold a realist understanding of the work, referring to it as "the greatest difficulty the molecular theory has yet encountered (quoted in [4] (p. 212))".Boltzmann contended that the overall progress of the research program justified a realistic interpretation of atoms, where the problem made Maxwell pull back.
Ultimately, Boltzmann did solve the problem by creating the "dumbbell model" of diatomic molecules, which allowed for rotational and translational motion but, because of the bond connecting the atoms, eliminated one degree of freedom. This changed the theoretical predictions in a way that allowed them to match the measured values. At that point, Maxwell, Boltzmann, and most others (with notable exceptions like Ernst Mach and Henri Poincaré) accepted the existence of atoms as having been demonstrated. His atomic Bild gave warrant for an assertion of the reality of its central entity. As Boltzmann would later write, "[C]ontemporary atomism provides a perfectly accurate picture [vollkommen zutreffendes Bild] of all mechanical phenomena and, because of the self-contained nature of this field, there is little expectation that we will discover phenomena that do not fit in the frame of this picture [Rahmen des Bildes]" [12] (p. 150).
How the H Did Boltzmann Decide to Move to a Statistical Law?
Boltzmann accepted the reality of atoms as he worked on the problem of specific heats, also focusing on making sense of other aspects of macroscopic thermodynamics in terms of the mechanical model, specifically, the concept of entropy. How could the macroscopic quantity governing the conversion of energy to work in engines be understood in the microscopic context? There was a macroscopic principle, the Second Law of Thermodynamics, that needed accounting for in terms of Boltzmann's Bild.
Starting in 1872 with his paper "Further Studies on the Thermal Equilibrium of Gas Molecules" [13], Boltzmann sought to model entropy on molecular systems governed by Newtonian mechanics and thereby develop a notion of microscopic entropy that would mirror the macroscopic notion by increasing in systems not in equilibrium and remaining constant for those that were. The 1872 paper proposes a theorem involving the quantity that he would term H, which is the negation of entropy.
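To see the statistical character of H in a concrete setting, the toy simulation below (an illustrative sketch, not Boltzmann's derivation) lets randomly chosen pairs of particles repartition their energy and tracks a discrete analogue of H computed from the binned energy distribution. On average H drifts downward toward its equilibrium value as the distribution relaxes, although individual steps can fluctuate.

```python
import numpy as np

def discrete_H(energies, bins=40, e_max=8.0):
    """Discrete analogue of Boltzmann's H: sum_i p_i * ln(p_i) over fixed energy bins."""
    counts, _ = np.histogram(energies, bins=bins, range=(0.0, e_max))
    p = counts[counts > 0] / counts.sum()
    return float(np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
N = 20000
energies = np.full(N, 1.0)          # start far from equilibrium: every particle has the same energy

H_values = [discrete_H(energies)]
for step in range(200):
    # Random pairwise "collisions": each pair repartitions its total energy at random.
    idx = rng.permutation(N)
    a, b = idx[:N // 2], idx[N // 2:]
    total = energies[a] + energies[b]
    frac = rng.random(N // 2)
    energies[a], energies[b] = frac * total, (1 - frac) * total
    H_values.append(discrete_H(energies))

print("initial H:", H_values[0])
print("final   H:", H_values[-1])   # lower: the energy distribution has spread toward an exponential
```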
Boltzmann followed with a series of papers working on this problem, leading up to the 1877 paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium" [2], in which he derives what is known as the Boltzmann distribution. In fact, he derives it twice in the paper: once using discrete mathematical means and again employing the continuous means of differential equations. The former, found in Section I of the paper, begins with the "chunking" that Boltzmann included, dividing the molecules into velocity classes and assuming every molecule in that class to have the same velocity, allowing him to deal with a finite number of velocity classes. (This move, most famously, influenced Max Planck with its idea of quantization, which led to his solution of the blackbody radiation problem.) As Kim Sharp and Franz Matschinsky note in the introduction to their translation of [2], "This assumption does not correspond to any realistic model, but it is easier to handle mathematically (p. 1976)". In Section II of the paper, Boltzmann then repeats the process, but with continuous energy distributions.
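The spirit of the discrete (Section I) argument can be conveyed with a small computation: fix the number of molecules and the number of energy quanta they share, count the permutations W = N!/(n_0! n_1! ...) associated with each occupation of the discrete energy levels, and note that the most probable occupation already falls off roughly exponentially. The sketch below is an illustration in this spirit, not a reproduction of Boltzmann's 1877 calculation; the particle and quantum numbers are chosen for illustration.

```python
from math import lgamma

def log_W(occupation):
    """Natural log of the number of permutations W = N! / (n_0! n_1! ... n_p!)."""
    N = sum(occupation)
    return lgamma(N + 1) - sum(lgamma(n + 1) for n in occupation)

def occupations(N, E, levels):
    """Yield all occupation tuples (n_0, ..., n_{levels-1}) with sum(n_i) = N
    and sum(i * n_i) = E, where level i carries i energy quanta."""
    if levels == 1:
        if E == 0:
            yield (N,)
        return
    top = levels - 1                            # energy of the highest remaining level
    for n_top in range(min(N, E // top) + 1):
        for rest in occupations(N - n_top, E - top * n_top, levels - 1):
            yield rest + (n_top,)

# Small illustrative case: 7 molecules sharing 7 energy quanta over levels 0..7.
N, QUANTA, LEVELS = 7, 7, 8
best = max(occupations(N, QUANTA, LEVELS), key=log_W)
print("most probable occupation (n_0, n_1, ...):", best)   # decays roughly exponentially
print("ln W of that occupation:", round(log_W(best), 3))
```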
This redundancy might seem curious, but as Nadine de Courtenay [14] argues, "Boltzmann was one of the first physicists to recognize that mathematical language was not an inconsequential means of expressing physical processes (p.50)".
Ernst Mach's insistence on differential equations in physics, for example, was not an innocent choice, Boltzmann argues, but begs the question in favor of his anti-atomism.Mach's positivism, his epistemological view that we should only believe that which is observable, led him to reject the existence of atoms as well as other in principle unobservable concepts like Isaac Newton's absolute space.Mach insisted that we replace the sorts of explanations in science that smuggle in unobservable entities and simply see differential equations as the last word, a method that is metaphysically empty, but scientifically full.But this embrace of differential equations as the language of all physics was not philosophically neutral.Mach may have claimed to be trying to rid physics of metaphysical baggage, but he was actually stacking the philosophical deck by choosing to work only in smooth mathematical universes in making his models of the world [14].Constraining physical laws to the form of differential equations invisibly imported an anti-atomistic metaphysic.Mathematical language is not mere metaphysically neutral formalism, according to Boltzmann, and Mach's use made him the metaphysician he railed against.
In choosing to work in both discrete and continuous languages, Boltzmann in [2] was doing two things. First, he was playing it metaphysically honestly, showing that his result was not dependent on a particular picture of the underlying world. This is not to say that he was not committed to atomism.
He was, but in showing that the result was not dependent upon discrete or continuous foundations, he had a second goal: to show that the physics was capable of being a bridge to connect the macroscopic world, in which the microscopic could be treated as if it were smooth, and the microscopic world, which could not.As such, the behavior of the resulting notion of entropy, and the statistical tools he would build around it, would provide a picture that holds true in both frames of reference.It worked for the macroscopic world that was seen as if it was continuous and for the microscopic world that had to be treated as discrete.By having a single treatment that is invariant under the change from discrete to smooth, the work had to be seen as bridging the intellectual chasm between the macroscopic thermodynamic and the microscopic atomistic.
But there was an aspect of his result that undermined this bridge.The resulting view of entropy was statistical.What began in the 1872 paper as an epistemic probability, a statistical generalization resulting from our inability to account for the multitude of atoms in a small amount of a gas, turned into a fully stochastic approach by 1877 in which the entropy was no longer tied to properties of the individual atoms in the gas, but rather now became a measure of the accessibility of abstract ensemble states."Boltzmann's explanation was to consider the increase in entropy as a result of a statistical process, involving the calculation of probabilities, and not as a result of a dynamic process described only by mechanics [10]".
Boltzmann did not give up his commitment to the atomic hypothesis, but moved his thinking from the Maxwellian approach of deriving the macroscopic directly from mechanical aspects of the microscopic to a higher-level picture.In shifting the object of the law from standardly physical quantities like duration and velocity to properties of the constructed phase space, Boltzmann was radically revising how to think of the system and how to understand the macroscopic law governing entropy.The Second Law of Thermodynamics was no longer deterministic, but a statistical generalization that made what we see not necessary, but highly likely.That move, of course, generated serious objections.
Defending Necessity
So, according to Boltzmann's approach, entropy tends to increase, thereby distinguishing future from past, but does not always necessarily increase, and that probability is intrinsic to the system, not a mere result of our ignorance. Critics, notably Maxwell, Josef Loschmidt, and Ernst Zermelo, objected to different parts of the position: Maxwell to the nature of the probability, Loschmidt to the lack of time-reversibility, and Zermelo to the lack of necessity.
Maxwell did not object to the statistical move per se, but tried to show that the system is not stochastic. The uncertainty involved remains of the epistemological sort. His eponymous demon is a tiny intelligence in control of a valve. The demon is capable of determining the velocity of all molecules of a gas and capable of opening or shutting the valve at will. By choosing to only open the valve for high-velocity molecules, the demon thus becomes capable of sorting the molecules and thereby creating increased order, that is, of decreasing entropy. Entropy thereby does not necessarily increase, but our statistical sense that it generally does is a result of our not being demonic, that is, of our having less cognitive capacity than the fanciful being. Because its ability to alter the amount of entropy in the system is accomplished through the intelligent processing of information, then, the appeal to probability is epistemic and not metaphysical. In other words, the statistical nature of the Second Law would be a result of our ignorance, not a feature of the world itself [7] (p. 1278).
Josef Loschmidt was a dear and close friend of Boltzmann. They both accepted the existence of atoms and the supposition that they were governed by Newtonian mechanics. But if this is true, Loschmidt contended, then the properties of the laws governing the small would have to be the same as those governing the large, since the ensemble is nothing more than a collection of necessarily determinable states. The rules governing the parts, which are deterministic and time-reversible, should not differ from those governing the whole.
Yet, Boltzmann's statistical approach creates an asymmetry between future and past. This, Loschmidt argues, is problematic: "[I]n any system, the entire course of events becomes retrograde if at a certain moment the velocities of all its elements is reversed (quoted in [15] (p. 200))". Given that Newtonian mechanics is time-reversible, reflecting the velocities of all molecules would follow Newton's laws and decrease entropy, but this reversed system is as much a model of Newton's laws as the non-reversed one. As Flamm points out of the reflection, "This procedure is equivalent to time reversal"; that is, the backward-running film of the universe decreases entropy, but still fits the underlying mechanics Boltzmann uses.
It should be noted that this was also a line that was used against Darwin's theory of evolution.Critic William Hopkins [16] wrote, "a phenomenon is properly said to be explained, more or less perfectly, when it can be proved to be the necessary consequent of preceding phenomena, or more especially, when it can be clearly referred to some recognized cause; and any theory which enables us to be to do this may be said in a precise and logical sense, to explain the phenomenon in question.But Mr. Darwin's theory can explain nothing in this sense, because it cannot possibly assign any necessary relation between the phenomena and the causes to which it refers them (p.267)".
Zermelo's objection took a different route. If the entropy of a system is based upon the distribution of positions and velocities, and these obey Newtonian principles, then given enough time, the system will eventually find itself back in the original orientation. But, since there was an increase in entropy moving away from the initial state, there would have to have been a decrease in order for the original state to recur.
The Second Law of Thermodynamics is a purported law of nature.Laws of nature are more than mere rules of thumb.They say what must happen.Making a law of nature into a statistical generalization is to undermine it as a law of nature.In physics, during that period, Hermann Bondi [17] contends, "it was widely thought that the perfect predictability of Newton's solar system 'clockwork' was what any physical science, nay what any human endeavor should aim to achieve (p.159)".Boltzmann's understanding of entropy denied it and therefore, to many, it seemed not to be good science.As Bondi reported Ernest Rutherford as having said, "If you get a statistical answer, you asked the wrong question".
Boltzmann wrestled with all three of these concerns, acknowledging that they were, indeed, concerns that needed to be taken seriously. And yet, he did not waver from his approach. Formulating different versions of responses to them, he always stayed true to his approach. He was willing, in the move from the micro- to the macroscopic, to accept a deeply stochastic picture of nature which differentiated future from past, but which did not possess the sort of necessity that had been considered an essential aspect of physical law. What accounts for this digging in of his intellectual heels in the face of strong objections to the different aspects of his view from people he respected?
One possibility is another successful Bild, another cognitive framework that allowed him to make sense of this unusual world in a way that made sense to him. That heuristic could have been provided by Darwin's theory of natural selection.
Darwin in Germany and Austria around Boltzmann
Darwin's Origin of Species first appeared in German translation in April 1860, translated by Germany's most prominent paleontologist, Heinrich Georg Bronn, whose own research had been proceeding parallel to Darwin's [18].However, Bronn passed away two years after the translation's publication.As a result, Darwinism in Germany required someone else to act as its spokesperson.Stepping into this role was the young, handsome, and charismatic Ernst Haeckel who famously spoke to the German Society of Naturalists and Physicians at Stettin in September of 1863, and with an address of historical significance, the twenty-nine year old zoologist launched the era of Darwinism in Germany."With the translator Bronn", he told the assembled crowd, "I see in Darwin's direction the only possible way to come close to understanding the great law of development that controls the whole organic world [18] (p.157)".
The approach that Haeckel develops, and which becomes greatly influential in the German-speaking world during Boltzmann's time, however, is less Darwinian and more in line with that of Jean-Baptiste Lamarck. We see this sort of Lamarckian understanding of Darwin in the writings of Hermann von Helmholtz [19]. Boltzmann studied with Helmholtz in Berlin in 1871, just before Boltzmann's initial work on the H theorem, and we find in Helmholtz's personal notebook from that period (notes which he used to give his lectures) passages like the following: "Recapitulation of Darwin's hypotheses: The law of heredity demonstrates a range of variations in all classes of organisms although the fewest differences arise among species in the wild. What remains to be determined is the limits of this change. This is where the following comes into consideration: that a much greater influence is to be expected from natural breeding than from artificial selection. The hypothesis of an independent origin of each species becomes incredibly unlikely as a result of (a) the homology among different species, (b) Metamerism within the same species, (c) Paleontological developments, and (d) the geographical affinity of like organisms to live together" [20]. The reference here to "metamerism" shows the ways in which Helmholtz saw Darwin as connected to his own research program on color perception, and thereby a topic Helmholtz would likely have been thinking about during the period when Boltzmann was in contact with him.
Boltzmann left Berlin for a position back in Vienna which he occupied until 1876, just before the publication of his statistical approach to entropy.At that time, his colleague was Ernst Mach, who had been installed in the newly created chair in "the history and philosophy of the inductive sciences".In this position, which Boltzmann himself would later occupy, Mach became a public spokesperson for his philosophical position of positivism.While the scientific and epistemological disagreements between Mach and Boltzmann are extremely well-trod ground, one point of agreement between them was their mutual enthusiasm for Darwinism (see [21]).Indeed, Mach asserts that it must have its imprint on all of human thought.For example, in his inaugural address "On Transformation and Adaptation in Scientific Thought" [22] for his position at Prague (the post he held before moving to Vienna), Mach writes, that "[K]nowledge, too, is a product of organic nature.And although ideas, as such, do not comport themselves in all respects like independent organic individuals, and although violent comparisons should be avoided, still, if Darwin reasoned rightly, the general imprint of evolution and transformation must be noticeable in ideas also (quoted in (pp.217-218))".
While we can trace Boltzmann's public and published references to Darwin at least as far back as 1886, nine years after his transformative approach to entropy, it is clear that Boltzmann would have been surrounded by discussion of Darwin's theory from the wider culture and from important intellectual figures with whom he was interacting at the time he was working on this new understanding of the Second Law.
Darwin in Boltzmann
There is no smoking gun.We do not have Boltzmann in a correspondence or a footnote citing Darwin's influence on his thought in developing his statistical understanding of the Second Law of Thermodynamics.However, what we do have is a range of testimony from Boltzmann about a wide range of other ways that Darwin influenced his thinking at the time and afterward, some closer and others farther from this topic.We can also find analogues in the treatment and evidence of natural selection that mirror the arguments raised against Boltzmann's approach to the Second Law of Thermodynamics, and in the analogous situations, the concerns cease to be serious concerns.Putting this together, we may not be able to demonstrate that Boltzmann was thinking of the time evolution of ensembles of atoms along the lines of the adaptations of species in ecosystems, but it would fit comfortably within the larger confines of how we do know that Boltzmann dealt with other issues in epistemology and science.
Indeed, Darwin occupied a privileged place in the science of this time.In 1886, nine years after the statistical turn and while Boltzmann is working on the argument addressing Loschmidt's objection in print in his article "Neuer Beweis zweier Sätze über das Wärmegleichgewicht unter mehratomigen Gasmolekülen", he is called to give a talk to the Austro-Hungarian Imperial Academy on the Second Law of Thermodynamics [23]; he began by reflecting on the centrality of science in general on human progress."If we regard the apparatus of experimental natural science as tools for obtaining practical gain, we can certainly not deny its success.Unimagined results have been achieved, things that the fancy of our forebears dreamt in their fairy tales, outdone by the marvels that science in concert with technology has realised before our astonished eyes.By facilitating the traffic of men, things and ideas, it helped to raise and spread civilization in a way that in earlier centuries is paralleled most nearly by the invention of the art of printing.And who is to set a term to the forward stride of the human spirit!The invention of a dirigible airship is hardly more than a question of time.Nevertheless I think that it is not these achievements that will put their stamp on our century: if you ask me for my innermost conviction whether it will one day be called the century of iron, or steam, or electricity, I answer without qualms that it will be named the century of the mechanical view of nature, of Darwin [23] (p.15)".
Boltzmann thought Darwin more important than air travel.It should not be lost that this is in a paper discussing the notion of entropy and that instead of calling Darwin's work "the theory of evolution" or "speciation by natural selection", instead Boltzmann chooses a name that explicitly parallels the phrase he used for the kinetic theory of gases-the mechanical view of heat.Indeed, Boltzmann's discussion seeks to erase the distinction between the explanatory sciences like physics and the merely descriptive historical sciences."Since the mighty upswing of geology, physiology and so on, but above all since the general acceptance of the ideas of Darwin, these sciences boldly undertake to explain the forms of minerals and of organic life [23] (p.16)".With the "mighty upswing" of geology and the great success of Darwin, there were examples of a mechanical approach to the world in which the development of a scientific system is the result of accidental, not deterministic factors, a fact that was not lost on Boltzmann.
Boltzmann not only admired the theory of evolution; he was clearly reading Darwin closely.Boltzmann reflects on the most profound questions of humanity "whence do we come, whither shall we go" and asserts that "essential and undeniable progress" has been made in "the present century, thanks to most careful studies and comparative experiments on the breeding of pigeons and other domestic animals, on the coloring of flying and swimming animals, by means of researches into the striking similarity of harmless to poisonous animals, through arduous comparisons of the shape of flowers with that of the insects that fertilize them (p.14)".This is a list, almost chapter by chapter, of the evidence that Darwin sets out for his theory in The Origin of Species.Clearly, Boltzmann not only read the book, but knew its structure and argumentation intimately.
The influence of Darwin can be seen in multiple ways in the thought of Boltzmann: two epistemological, one metaphysical, and two scientific.
Darwin as an Answer to Kantianism
The structure of the human mind was a lively question among physicists of Boltzmann's time.The advancements of science required a foundation in epistemology.We had to know what knowledge was in order to continue to increase it.The prevailing trend in Continental philosophy in the 19th century moved away from the scientific and toward the romantic.The idealism of Friedrich Hegel ruled supreme in German-speaking philosophy departments.As such, philosophers seemed of little use when facing the challenge of non-Euclidean geometry or the rise of atomism, which posited the existence of the unobservable.Some, like Boltzmann's contemporary at Vienna, Mach, took on a thoroughgoing empiricism, arguing that only what could be observed should play a part in our understanding of the world.Diametrically opposed to the Naturphilosophie, this meant the elimination of God and other purely speculative metaphysical entities like the spirit.Influenced by the work of Gustav Fechner, Mach [24] contended that we could do away not only with the idea of the soul, but with the notion of mind altogether, replacing it with a purely materialistic psychophysics.This monism stood as the key move in establishing a general stance: "all metaphysical elements are to be eliminated as superfluous and as destructive of the economy of science (xxxviii)".Having done away with the mind/soul, Mach also sought to cleanse science of Newton's absolute space and absolute time, atoms, and any other theoretical construct that was not reducible to sense perceptions.
Others did not take Mach's positivist route, but sought the last major philosophical figure who did take science seriously, which sparked a rebirth of Kantianism in Germany.Immanuel Kant argues in his Critique of Pure Reason [25] that certain mathematical and scientific notions, such as Euclidean geometry and Newtonian mechanics, are what he terms "synthetic a priori".The logical distinction between analytic and synthetic propositions distinguishes between those like "bachelors are unmarried" and "bachelors are slobs", wherein analytic sentences like the former have a predicate that is contained in the meaning of the subject, whereas synthetic sentences like the latter have a predicate that is not a part of the meaning of the subject.The a priori/a posteriori distinction is epistemological in separating those propositions that could be known without perception from those that require observation.Before Kant, it was supposed that all analytic propositions are a priori and all synthetic propositions are a posteriori, but Kant argues that there is a subset of propositions that are synthetic a priori, i.e., that we know without experience, but which are not merely definitional.Consider the Euclidean axioms.We know that you can draw a circle of any size around any point, but this truth is not merely one that is arrived at through unpacking the notions of circle and point.It is a synthetic proposition in possessing content beyond the definition of the words making it up.
Kant's question, then, is "How are a priori synthetic judgements possible?"His answer is that they are an innate part of the structure of the human mind.They function as the categories by which the mind takes the raw manifold of perception, such as the blur of colors taken in by the eyes, and out of them constructs complex perceptions.The sensory organs provide the content of what we observe, but the necessary categories of the mind provide the form and when the activity of the mind acts upon the raw data fed in from the senses, the result is the observations we make of the world around us.
Because they are innate, they are universal.All humans begin with the same conceptual foundations.Observation can add to it, but all people start with the same set of categories to construct the world.And since these are the structural elements implicit within our observations, no possible experience could contradict them.They are apodictic, viz., necessarily true and in principle unfalsifiable.Because these are the ideas that build the world out of our perceptions, no perception could possibly falsify it, since it built those perceptions.
A number of major figures who championed a revived version of the Kantian synthetic a priori as the basis for their justification of scientific truths were scientists Boltzmann held in the greatest respect. Heinrich Hertz, for example, writes in the prefatory note to his book The Principles of Mechanics: Presented in a New Form [26]: "The subject-matter of the first book is completely independent of experience. All the assertions made are a priori judgments in Kant's sense. They are based upon the laws of the internal intuition of, and upon the logical forms followed by, the person who makes the assertions; with his external experience they have no other connection than these intuitions and forms may have (p.45)". Indeed, Hertz and Boltzmann had a lively correspondence, with Hertz contending, following Kant, that the laws of thought must be synthetic a priori [27].
Boltzmann's response in [28] is to counter this brand of neo-Kantianism, which he thinks is so absurd as to be a joke, with Darwin.
"What then will be the position of the so-called laws of thought in logic?Well, in light of Darwin's theory they will be nothing else but inherited habits of thought.Men have gradually become accustomed to fix and combine the words through which they communicate and which they rehearse in silence when they think, as well as the memory pictures of those words and everything in the way of internal ideas used for the denoting of things, in such a manner as to enable them always to intervene in the world of phenomena in the way intended, and in inducing others to do likewise, that is to communicate with them.These inventions are greatly promoted by storing and suitable ordering of memory pictures and by learning and practicing speech, and this promotion is the criterion of truth.This method for putting together and silently rehearsing mental images as well as spoken words became increasingly perfect and has passed into heredity in such a way that fixed laws of thought have developed. ..One can call these laws of thought a priori because through many thousands of years of our species' experience they have become innate to the individual, but it seems to be no more than a logical howler of Kant's to infer their infallibility in all cases. ..According to Darwin's theory this howler is perfectly explicable.Only what is certain has become hereditary; what was incorrect has been dropped.In this way these laws of thought acquired such a semblance of infallibility that even experience was believed to be answerable to their judgement.Since they were called a priori, it was concluded that everything a priori was infallible (pp.194-195)".
Boltzmann is adopting an early version of the view that would later be famously held by Noam Chomsky, that a result of evolution is an inherent linguistic faculty that is innate and results in a particular intrinsic grammar beneath all human language-use [29]. This explains the correct elements of the Kantian synthetic a priori, Boltzmann contends, without committing us to the flawed apodictic aspect.
Boltzmann's epistemological approach was pragmatic, seeing the forming of beliefs based on observations as evidence as a successful adaptation of our ancestors. The human mind is an artifact of evolution, and so, therefore, must be the structures by which it determines what we ought to believe based upon what it is fed through the sense organs. Darwin, instead of Kant, forms the basis for his understanding of why the human mind is capable of scientific theorizing and testing.
What is most important in the current context, though, and should be emphasized, is that the move Boltzmann is making here epistemologically is to undermine the apodictic nature of Kantian synthetic a priori truths, that is, the necessity of these propositions, preferring instead a view in which they are the result of a Darwinian evolutionary process by which they arise accidentally through a historical process.This is very much homeomorphic to the move Boltzmann makes in 1877 with respect to the Second Law of Thermodynamics, taking it from a proposition whose necessity must be asserted to the result of probabilistic historical processes.
Darwin as a Metaphor for the Scientific Method
Darwin provides not merely the basis on which we should understand why humans are capable of science, but also provides a way of understanding the means by which scientific results become rationally accepted by the scientific community. Boltzmann writes in [30]: "[N]o theory is absolutely true, and equally hardly any absolutely false either, but that each must gradually be perfected, as organisms must according to Darwin's theory. By being strongly attacked, a theory can gradually shed inappropriate elements while the appropriate residue remains (p.153)".
Evolution is not only a theory, but a theory that provides an image for how theories emerge, survive, and grow. Rosa et al. [10] expand upon this metaphorically Darwinian approach in their article "Constructivism and Realism in Boltzmann's Thermodynamic Atomism".
Darwin as Support for the Materialist Worldview
Boltzmann's atomistic worldview not only provided him with a picture of the microscopic workings of thermodynamic systems, but provided a thoroughgoing ontology (that is, a catalogue of everything that exists in reality). Heat had been considered a substance from Aristotle to the advocates of phlogiston theory. The mechanical theory of heat allowed for the simplification of our catalogue of things in the universe, and Boltzmann saw the mechanical theory of heredity as doing the same.
In the German-speaking world of the 19th century, one of the most important scientists connected with evolution was Haeckel, most remembered for his aphorism "Ontogeny recapitulates phylogeny". Coming out of the Naturphilosophie movement, which leaned heavily on research in embryology and morphology paired with a robust religious metaphysic, Haeckel developed an explicitly evolutionary theory, albeit one that was more in line with that of Lamarck than Darwin's approach. But Haeckel [31] added a German philosophical twist to his view, placing Geist (spirit or soul) at the heart of the biological.
"No reproach is more frequently made against the science of to-day, especially against its most hopeful branch, the study of development, than that it degrades living Nature to the level of a soulless mechanism, banishes from the world the ideal, and kills all the poetry of existence.We believe that our unprejudiced, comparative, genetic study of soul-life gives the lie to that unjust accusation.For if our uniform or monistic conception of Nature is rightly founded, all living matter has a soul, and that most wondrous of all natural phenomena that we usually designate by the word spirit or soul is a general property of living things.Far other than believing in a crude, soulless material, after the manner of our adversaries, we must rather suppose that the primal elements of soul-life, the simple forms of sensibility, pleasure, pain, the simple forms of motion, attraction, and repulsion, are in all living matter, in all protoplasm.But the grades of the up-building and composition of this soul vary in different living beings, and lead us gradually upwards from the quiescent cellsoul through a long series of ascending steps to the conscious and rational soul of man (p.173)".
Contrary to the more mechanical picture of Darwin, Haeckel is explicit in his insertion of the metaphysical into the material. Life is a combination of matter and soul, body and spirit.
Boltzmann rejected the metaphysical dualism inherent in the position, asserting that all phenomena from astronomy to psychology could be accounted for in terms of the behavior of the material constituents of the world. The biggest challenge for this sort of metaphysical materialism, of course, is human consciousness. But Boltzmann [12] sees Darwin as having given us the tools for that.
"The brain we view as the apparatus or organ for producing word pictures [Bilder], an organ which because of the pictures' great utility for the preservation of the species has, comfortably with Darwin's theory, developed in man to a degree of particular perfection, just as the neck of the giraffe and the bill of the stork have developed to an unusual length (p.69)".
The human mind is no more mysterious than any other of a range of biological curiosities. Humans are just animals, and our intelligence is just one more example of a notable adaptation. From [32], "From a Darwinian point of view we can grasp furthermore what is the relation of animal instinct to human intellect. The more perfect an animal, the more it shows incipient traces of intellect alongside instinct (p.138)".
A Darwinian Explanation of the Development of Photosynthesis, Mind, and Life Itself
Life requires decreasing entropy locally; as such, an essential element of life is the collection of free energy for the purpose of doing the work needed to maintain life. Using his understanding of radiative entropy, Boltzmann was able to make sense of the process, according to Engelbert Broda [21]. Boltzmann contends that plant photosynthesis arose and developed during a Darwinian struggle for improved supply of free energy: "The dependence of plant life on light had been discovered in London by Jan Ingen-Housz, a Dutchman mostly living in Vienna, in 1779. However, Ingen-Housz did not know why exactly light is needed. Indeed he could not know it, as conservation of energy was unknown in his time, and so the need for a particular source of energy did not seem to exist. The second step had been taken by the discoverer of energy conservation (First Law of Thermodynamics), Julius Robert Mayer, in 1845. He wrote "The plants absorb a force, light, and produce a force: chemical difference". However, in his ignorance of the Second Law Mayer could not make a distinction between useful and useless forms of energy. This distinction, in respect to photosynthesis, was left to the physicist who understood the Second Law better than anybody else, and who explained it in atomistic terms, to Ludwig Boltzmann. In 1884 he had introduced the notion of the entropy of radiation (p.62)". This sort of combination of his approach to thermodynamics and Darwin's natural selection, Broda shows, is not limited to photosynthesis, but also gives an account of the arising of other bioenergetic processes such as fermentation and respiration. Indeed, Broda points to passages in Boltzmann's later lectures in which he makes this form of argument for the arising of consciousness and life itself.
Darwin as a Model for Thermodynamics
Again, while we do not have Boltzmann asserting the connection between Darwinian evolution and the time evolution of thermodynamic systems as a part of his thinking in the 1877 move to a statistical understanding of entropy, we do have instances of Boltzmann publicly connecting the two after the fact. Boltzmann employs Darwin's thought as a Bild, a heuristic model, for the development of statistical mechanics. There is significant textual evidence that the two are connected in Boltzmann's thought after the development of his statistical notion of entropy, which allows for an inference that Darwinian evolution may have served as a structural model for Boltzmann in developing his statistical understanding of thermodynamics.
In [12], Boltzmann connects mental phenomena, which he understands as material interactions that are the result of human evolutionary history, with the physical phenomena of electricity and, more importantly for this point, heat. "Mental phenomena may well be much more remote from material ones than thermal or electric from purely mechanical ones, but to say that the two former are qualitatively while the latter three are only quantitatively different seems to me mere prejudice (72)". While Boltzmann used the term "mechanical" in different ways in various contexts, here he clearly means something along the lines of being governed by the laws of mechanics (as opposed to Maxwell's equations); as such, he is not creating a mechanical picture of mind in the sense that the mind is a result of Newton's laws of motion, but he is creating a mechanical theory of mind which does not require any sort of non-material soul.
It is mere prejudice, a fallacy to be eliminated, to distinguish between mental and thermal systems. They are not identical, but of the same sort, Boltzmann contends. As such, the type of arguments that support evolutionary outcomes would not be of a different epistemic species from the sort we give in making arguments about thermodynamics.
Indeed, we see in his "Reply to a Lecture on Happiness Given by Professor Ostwald" [33] an evolutionary discussion that maps very closely to a discussion of molecules in a gas. "As regards the concept of happiness, I derive it from Darwin's theory. Whether in the course of aeons the first protoplasm developed 'by chance' in the damp mud of the vast waters on the Earth, whether egg cells, spores, or some other germs in the form of dust or embedded in meteorites once reached Earth from outer space, is here a matter of indifference. More highly developed organisms will hardly have fallen from the skies. To begin with there were thus only very simple individuals, simple cells or particles of protoplasm. Constant motion, so-called Brownian molecular motion, happens with all small particles as is well-known; growth by absorption of similar constituents and subsequent multiplication by division is likely explicable by purely mechanical means. It is equally understandable that these rapid motions were influenced and modified by their surroundings. Particles in which the change occurred in such a way that on average (by preference) they moved to regions where there were better materials to absorb (food), were better able to grow and propagate so as soon to overrun all the others (176)".
This approach reduces what is generally the macro-level interaction of members of species with their environment, which gives rise to selection pressures affecting the relative success of mutations, and places it in a microscopic framework in which atomic-level interactions, like Brownian motion, are in play and driven "by chance". Framing the process stochastically while speaking of the average speed of small particles in rapid motion is directly parallel to the sort of calculations Boltzmann was undertaking in understanding the time evolution of gases with notions like the mean free path. The convergence of vocabularies and concerns between the two is not only striking, it is clearly intentional. Boltzmann is explicitly connecting evolution with thermodynamics.
It should be noted that the Professor Ostwald to whom Boltzmann is responding is Wilhelm Ostwald, a Nobel laureate who was one of the last major opponents of atomism, preferring a system based on energy as a foundational concept. From his [34], "What we hear originates in work done on the ear drum and the middle ear by vibrations of the air. What we see is only radiant energy which does chemical work on the retina that is perceived as light. When we touch a solid body we experience mechanical work performed during the compression of our fingertips. ... From this standpoint, the totality of nature appears as a series of spatially and temporally changing energies, of which we obtain knowledge in proportion as they impinge upon the body and especially upon the sense organs which are fashioned for the reception of the appropriate energies (159)".
Ostwald was seeking to do away with particles, making energy the primary metaphysical constituent of the universe. In his evolutionary example, then, Boltzmann is taking organisms, a clear instance of things, and intuitively elevating them so that the evolutionary example rhetorically supports his atomism.
It should be noted that Boltzmann appeals in his example here to the influence of Brownian motion. Ironically, it was Jean Perrin's experimental work on Brownian motion that led Ostwald to finally relent and reject his energetics theory for atomism. Perrin's experiments verified the theoretical work of Albert Einstein [35]. Boltzmann could not have known at the time that Brownian motion would have this effect on Ostwald or that Einstein would explain it in terms of atomic interaction several years later. Indeed, as John Blackmore [36] persuasively argues, it is unlikely that Boltzmann was aware of Einstein's work on this matter.
We see a similar move on the part of Boltzmann in [32]: "We must mention also that most splendid mechanical theory in the field of biology, namely the doctrine of Darwin. This undertakes to explain the whole multiplicity of plants and the animal kingdom from the purely mechanical principle of heredity, which like all mechanical principles of genesis remains of course obscure (p.132)".
Note the phrasing "that most splendid mechanical theory in the field of biology". Boltzmann is creating a class of mechanical theories, of which the mechanical theory of heat is the most obvious example, but then including with a celebratory nod a member of the group in biology.
But the most important element comes a couple of pages later when Boltzmann highlights an unexpected element of the evolutionary process: the expected unexpected, that is, the instances in which random mutations give rise to unfit organisms. "It is well-known that Darwin's theory explains by no means merely the appropriate character of human and animal bodily organs, but also gives an account of why often inappropriate and rudimentary organs or even errors of organization could and must occur (136)".
Darwinian evolution is driven by random mutations (although the genetic mechanism driving them was not understood at the time). That randomness will not always give rise to more fit organisms, but will create dead ends at an expected statistical rate. Yes, we generally focus on the developments that drive the process forward, but given that it is a random process there will also be expected cases of the undesirable and indeed the unexpected.
Evolution can run backward. The eye can develop, but then if a subpopulation takes to inhabiting caves, selection pressures could undo what eons of selection processes had constructed. It is the general case, but not absolutely necessary, that members of species become more complex to fit their environment. This is an instance of Darwinian evolution enacting its own version of Zermelo's objection. It is perfectly possible that a strange set of circumstances could take a species and do and then undo all of the changes. Evolution is a time-oriented theory, yet there is a minuscule possibility that it will return that on which it works to its original stage. It is highly unlikely, but theoretically conceivable. That does not mean that evolution has not been working. It is just an unusual possibility given the statistical nature of the process.
But, if we accept this likelihood in the biological sense, Boltzmann seems to be saying, why would we have any problem with the analogue in the thermodynamic case? Clearly, we should not, and if Boltzmann was using Darwinian evolution, the mechanical theory of inheritance, as a Bild underlying his construction of statistical mechanics, then we have a perfectly reasonable explanation for why none of the objections raised would have seemed troubling to Boltzmann.
Conclusions
In 1877, Ludwig Boltzmann made a stunning shift. His new version of the Second Law of Thermodynamics was a statistical regularity, lacking the standard sort of nomological necessity that had been traditionally asserted as an essential property for a law of nature. It was a bold move that was novel in physics, but not in science writ large. Huttonian geology and especially Darwinian evolution, two theories that were tremendously prevalent in scientific discourse at the time, possessed structural similarities to this statistical approach. In the decades after his shift in the understanding of entropy, Boltzmann refers to both of these theories. This is especially true with respect to Darwin's work, of which Boltzmann was a vocal supporter. Indeed, we see Boltzmann model elements of his approach to the scientific method on Darwin. Evolution plays a central role in his understanding of the acquisition of human knowledge and of the emergence of life and consciousness, and Boltzmann considers ways in which his work on entropy must be a part of biological evolution itself.
All of this was after 1877, but given Boltzmann's time in Berlin with Hermann von Helmholtz, whose work bridged the physical and biological and who was thinking and speaking about Darwin, there is reason to believe that Darwin was on Boltzmann's mind just before his bold new understanding of entropy emerged.
After his view solidified, Boltzmann used Darwinian analogies to explain his work to the public. Boltzmann saw the connection between his approach and Darwin's as close enough to use the latter as a Bild, a conceptual picture capable of making the former clearer to the mind.
As a result of all of this circumstantial evidence, there seems to be warrant for considering the possibility that Darwin helped Boltzmann make the switch that appears in 1877. Again, there is no smoking gun here, but we do have a variety of different sorts of evidence garnered around a hypothesis. The inference based upon that is similar to Darwin's own approach to argumentation in The Origin of Species, an inductive method of inference he learned from his own teacher William Whewell [37], who called such an approach to reasoning "consilience".
What strikes the contemporary mind as odd about this claim is that a biological system could be used as a model for a physical system. Going back to Auguste Comte and through the Logical Positivists (who develop in part from the ideas of Philipp Frank, Boltzmann's own student), the reductionist picture of science has psychology reducing to biology, which is just complicated chemistry, which in turn is nothing but physics. As such, when we look at the origin of ideas in the most basic of the sciences, physics, then surely the only influences are to be found in the discourse among physicists. But recall what Boltzmann himself thought his own era was the age of: the age of Darwin. While it may seem to those who have internalized the reductionist scheme to put the intellectual cart before the scientific horse, Boltzmann's own epistemology freed the scientist to find an appropriate Bild wherever one could.
If this inference is correct, then Darwin and Boltzmann become the scientific Porgy and Bess, with Gershwin's lyric reworked: "The phase space sectors where you'd find a system's vectors ... it ain't necessarily so".
|
v3-fos-license
|
2021-05-04T22:05:27.628Z
|
2021-04-05T00:00:00.000
|
233590298
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1155/2021/6639218",
"pdf_hash": "c368a6b32886ecee8b9e019f8e5c7e92f2652aea",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46175",
"s2fieldsofstudy": [
"Business",
"Engineering",
"Economics"
],
"sha1": "224771e296ed12d6d684a8f2ab4b5c310662b5f0",
"year": 2021
}
|
pes2o/s2orc
|
Two-Phase Optimization Models for Liner Shipping Network Based on Hub Ports Cooperation: From the Perspective of Supply-Side Reform in China
From the perspective of supply-side reform in China, it is hard for COSCO Shipping, a merged company with a strong shipping capacity, to abandon the container shipping market. Meanwhile, the new company could cooperate with new strategic ports along the Maritime Silk Road in liner service. Against this backdrop, this paper aims to optimize the liner shipping network (LSN) from strategic, tactical, and operational levels and help the merged shipping company adjust its operational measures according to market changes. The optimization towards different levels of the decision-making process is a new line of research with high practical value. Specifically, this paper creates two-phase optimization models for the LSN based on the selection of hub ports. In the Network Assessment (NA) phase, the LSNs of the two types of hub ports selected are designed and assessed on strategic and tactical levels, and the primary and secondary routes are identified; in the Network Operation (NO) phase, "path-based flow" formulations are proposed from the operational level, considering operational measures including demand rejection and flow integration. The models in both phases are mixed-integer linear programming (MILP), but are solved by different tools: CPLEX for the NA phase models and the Genetic Algorithm (GA) for the NO phase models due to the computational complexity of the latter problem. Then, a computational experiment is performed on the LSN of COSCO Shipping on the Persian Gulf trade lane. The results have proved the effectiveness of the methodology and inspired important countermeasures for the merged shipping company.
Introduction
The global demand for container shipping had been rapidly increasing from the birth of the containership in the 1950s to the outbreak of the subprime crisis [1]. Due to the limited shipbuilding capacity, however, container shipping suffered from a long-lasting capacity bottleneck, which was not resolved until about 1995. Since then, expansion of shipping capacity has grown explosively and maintained a continuous lead over the demand increase. After 2004, the shipping capacity utilization rate, i.e., the ship loading rate, exhibited an obvious decline, heralding the dawn of the "oversupply" period in container shipping [2]. Since the global recession that began in 2008, the demand growth of the shipping industry has slowed and fallen more in line with GDP growth. In 2019, the worrying trend of the falling trade-to-GDP ratio still continues. Both the US-China trade war and the global sulfur limit implemented by the International Maritime Organization (IMO), the regulatory authority for international shipping, put forward potential threats to the demand side of the shipping industry [3]. It is predicted that shipping oversupply will persist and be an even greater cause for concern [4].
In order to deal with the oversupply issue, governments and the shipping industry have been making efforts to conduct supply-side reform. The supply-side reform consists of a series of parallel measures and regulations, including annual capacity limits and mergers of shipping companies. The most intuitive way is to directly control the growth of freight capacity. For example, the Chinese government is imposing gradually stricter macrocontrol on maritime freight capacity. Currently, any expansion of a fleet that transports bulk liquid hazardous goods needs to be scrutinized [5]. It can be expected that the control of containership capacity will be put forward in the upcoming future in order to eliminate the gap between supply and demand in the maritime industry.
In comparison with the annual capacity limit, which seems to lack mature practice, mergers and alliances are an obvious trend in recent years leading to the concentration of shipping capacity. There have been several successful cases of mergers in the maritime industry. The largest five carriers handled 27% of all TEUs in 1996, 46% in 2008, and 64% in 2017 [6]. A typical example is the merger between China Ocean Shipping Company (COSCO) and China Shipping Company (CSCL) in 2016, marking a major move in the supply-side reform of China's shipping industry [7]. The two leading shipping companies integrated into COSCO Shipping Group (COSCO Shipping), which had become the world's 3rd largest shipping company by 2019 [8].
The rationale of the COSCO/CSCL merger is entirely sound, as they both had designed many similar services, and the unnecessary competition had deteriorated their financial performance. Besides eliminating competition, there are more benefits awaiting the shipping companies through optimizing their LSNs after mergers, which is investigated in this paper. In practice, after mergers, the LSNs of the acquired shipping companies need to first go through strict assessment before the services are adjusted. The Network Assessment (NA) phase and Network Operation (NO) phase differ greatly in the content and process of the decision-making of the shipping companies [9]. Both phases need to be considered for merged shipping companies to obtain sustained competitiveness [10].
In this paper, two-phase optimization models are proposed to investigate the decision-making process in the NA and NO phases for the LSN based on strategic ports, aimed at maximizing the actual profits of a shipping company in the context of supply-side reform. Various factors are considered to better reflect the NA phase and NO phase in practice, such as the cooperation with different hub ports, the transshipment of cargoes, the rejection of unprofitable demand, and the fluctuation of demands and freight rates. The remainder of this paper is organized as follows: Section 2 reviews the relevant literature and summarizes the contributions of this study; Section 3 presents a clear description of the problem; Section 4 establishes the two-phase optimization model; Section 5 details the GA-based algorithm for the LSN in the NO phase, alongside CPLEX, enabling the solutions for LSNs in the NA phase; Section 6 carries out a computational experiment on the LSN of COSCO Shipping; Section 7 wraps up this paper with some meaningful conclusions.
Literature Review
There are three decision-making levels for shipping companies designing an LSN: strategic, tactical, and operational [11]. At the strategic level, shipping companies often make long-term decisions that may cover a planning horizon of up to 30 years. Containership deployment is concerned with the structure (size) and scale (number) of containerships [12,13]. Another strategic decision is route design. The aim of route design is to determine which ports the containerships should visit and in what order [14]. Strategic decisions clearly affect the decision-making at the tactical level by defining the boundaries for these decisions. At the tactical level, the focus lies in frequency determination [15], sailing speed optimization [16,17], and schedule design [18,19]. Tactical level decisions are made every three to six months in view of changing demand for container shipping [20,21]. At the operational level, shipping companies determine whether to accept or reject freights [22], how to flow accepted freights [23], and how to reroute or reschedule containerships to cope with unexpected market changes [24]. There is some interplay between the decisions made at the three different levels [25].
Most existing literature on the optimization of the LSN is devoted to the strategic and tactical levels. Wang and Meng [13] give a literature survey on liner fleet deployment. Ronen [26] pioneered the study on ship deployment and route design in 1983. Later, Rana and Vickson [27], Fagerholt [28], Christiansen et al. [29], Gelareh and Pisinger [30], and Sheng et al. [31] deepened the research based on these strategic decisions. Meng et al. [32] and Dulebenets et al. [33] reviewed the past research on container scheduling problems. Dulebenets [34], Wang et al. [35], and Alharbi et al. [18] studied ship schedule problems considering port time windows. Because of the high costs of containership deployment and route design, and the complexity of the scheduling problems, the latest literature mainly applies operations research methods to address the strategic and tactical problems in LSN design. In recent years, much attention has been paid to the operational optimization of the LSN. Some scholars highlighted freight booking. In essence, the demand for container shipping bears on the decision-making of all stakeholders, including the ports and the shipping companies. For instance, Brouer et al. [36], Song and Dong [37], and Daniel et al. [38] presented freight booking decisions generated from LP models where the freight flows are treated as a continuous decision variable. Liu et al. [39] and Wang et al. [40] pointed out the possibility of increasing the port handling rates while optimizing ship fuel cost at the same time. The cooperation between shipping companies and port operators was also investigated by Venturini et al. [41] and Dulebenets [42] from multiobjective perspectives. For some other scholars, containership rerouting was regarded as a special problem of operational optimization [43]. The LSN design problem is NP-hard and computationally challenging [44], and we cannot expect to find a polynomial-time algorithm that will produce the optimal solution for a general LSN design problem unless P=NP. Considering that the LSN design problem is already NP-hard, efficient heuristic-rules-based methods might be expected to address large-scale realistic systems [45]. From the above discussion, it is clear that the strategic and tactical decisions are often an input to the operational optimization.
The idea of combining different levels of decision-making has been absorbed in some studies in recent years, known as two-phase optimization. By generating the set of routes first, the container flows can be optimized based on the given set of routes in the second phase [46,47]. The operational optimization of the LSN can also be viewed as the fine-tuning and correction of the strategic and tactical solutions [48]. Despite the aforementioned advancements in the research on the LSN design problem, there are still some practically significant issues that have seldom been addressed. For example, liner shipping consolidation through mergers and the macrocontrol of excessive new capacity are regarded as key challenges for the maritime industry in 2019; however, they have been ignored by researchers so far [49]. This research fills in the gap in the existing literature and makes contributions to the research in the LSN design problem as follows. Firstly, we investigate the LSN design problem for shipping companies under the context of supply-side reform. Various measures of supply-side reform are considered in this paper, including the macrocontrol of capacity and the mergers of shipping companies. The decision-making process is divided into the NA phase and NO phase, and two-phase optimization models for the LSN are developed accordingly. Secondly, we look for alternative solutions to the LSN design problem in the NO phase with a GA-based algorithm. The proposed method can efficiently solve the "path-based flow" formulations. Thirdly, this paper gives out several countermeasures for shipping companies from the perspective of supply-side reform in China, e.g., the selection of hub ports, demand rejection, and the idea of flow integration. In addition, the scenario analyses reveal how shipping companies can flexibly adjust their operational measures according to actual market indicators such as demand and freight rates.
Problem Description
We consider the LSN optimization for a shipping company in the context of supply-side reform, typically a merger or acquisition. The NA and NO phases after a merger are analyzed: selecting the most profitable route in the NA phase from all the similar preset routes that have been designed by the different acquired shipping companies, and figuring out the optimal plan for flowing cargoes in the NO phase according to the actual shipping market. The objectives of both phases are to maximize profits. Detailed information about the two phases is stated in Section 3.1 and Section 3.2, respectively. The elements of the LSN are defined as follows to avoid ambiguity: (1) Port calls: a typical liner shipping route usually contains at least several fixed port calls, and is thus also named a multiport calling (MPC) service [50]. (2) Hub ports: when operating along a liner service, the containerships are allowed to call twice at hub ports, but only once at any other ports. As commonly observed in practice, each route is limited to one single hub port. The shipping companies can cooperate with different hub ports, which can be classified as traditional hub ports (THPs) and emerging hub ports (EHPs). In addition, hub ports are able to transship cargoes due to better facilities.
(3) Routes: a route in the LSN may have 10-20 legs, where a leg is a directed arc between two consecutive ports [51,52]. (4) Cargo flows: cargo flow refers to the movement of cargoes on a leg. A flow path is the directed path consisting of all the legs between the origin port and the destination port. (5) Demands: there are several pairs of origin and destination (O-D pairs) of cargoes along a route, generating shipping demands. The market changes are represented by the variation of demands and freight rates for container shipping [53]. Shipping companies can hardly control the freight rates (e.g., CCFI and SCFI). The only thing they can do regarding the shipping market is to decide whether to satisfy or reject the demands, which can be called "cherry-picking" [54].
The LSN Design Problem in NA Phase.
Suppose two shipping companies, represented by A and B, respectively, are merged into a new shipping company C. In the NA phase, there are already similar routes established by the acquired shipping companies A and B. Such similar preset routes may have been independently designed to satisfy the demand in the same regions, which leads to unnecessary competition. Despite the similarities, the selection of hub ports contributes to the differences among the routes. For instance, A has established a cooperative relationship with traditional hub ports (THPs); i.e., the containerships operated by A are allowed to call twice at the THP. However, B noticed that the shipping demands generated from emerging hub ports (EHPs) are growing rapidly and is thus more willing to cooperate with EHPs [55]. The differences between the preset routes result in different profits. Therefore, for shipping company C, which can cooperate with either THPs or EHPs, it is necessary to assess the profitability of the preset routes in order to make adjustment plans. The assessment is based on the prediction of the quantities of demands Q od and freight rates e od in the next 10-30 years, according to experts' knowledge of the market and the development of maritime policies. For any cooperation strategy with hub ports, the decision-maker can construct a model with predicted demands as input to design the corresponding LSN. The results of the assessment indicate cooperating with which type of hub port (THP or EHP) is more likely to be profitable. Here, for simplicity, we define the more profitable route as the primary route and the less profitable one as the secondary route.
Then, shipping company C should adjust the container flows to the primary routes, following the idea of aggregating flows on fewer routes in Krogsgaard et al. [56]. In other words, the secondary route will no longer need to flow cargoes, saving operation cost.
The LSN Operation Problem in NO Phase.
The assessment results in the NA phase, based on predicted demand, give a rough principle that more cargoes should flow on the primary route. In the NO phase, in order to start operation in practice, shipping company C needs to develop more detailed plans on how to adjust cargo flows, which involve how to pick up, unload, and transship containers at any port of call according to the actual market situation.
As shown in Figure 1(a), two similar routes have been designed according to different preferences of hub ports and are named as the primary route and the secondary route based on the predicted demands in the NA phase, respectively. The different legs of the two routes are painted in red. Here, the demand of an O-D pair (o, d) should be transshipped at a hub port along the flow path s 1 od. By adopting the idea of "flow integration," shipping company C can aggregate the cargo flows onto the more profitable route.
In the NO phase, the decision-making is based on actual demands and freight rates, which may have deviations ΔQ od and Δe od from the prediction. It should be noticed that the demands and freight rates are time-varying; hence, it is necessary to make timely and pertinent adjustments to the LSNs in order to achieve low-cost operation. In addition, when operating the LSNs, shipping companies prefer to reject unprofitable cargoes if allowed [57], e.g., the shipping demand (o 1 , d 1 ) in Figure 1(b). In this paper, the fluctuation of market indicators is specifically analyzed in Section 6. "Flow integration" and "demand rejection" are reflected in the model in Section 4 with the aim of maximizing profits, making the operation of LSNs more flexible. In conclusion, for each O-D pair, shipping company C in the NO phase needs to figure out how many containers should be transported through s 1 od and s 2 od and how many containers should be rejected.
Mathematical Model
The assumptions of the models are listed as follows: (1) Without considering the impact of natural disasters and local wars on the LSN, any demand between an O-D port pair is a long-standing issue that changes with global trade. (2) Without considering the difference between types of containerships, the voyage expense incurred by containership deployment is fixed, and all containerships sail at the agreed speed [58]. (3) There is no limit on the loading/unloading capacities of all ports; that is, any port can handle the maximum containership capacity. The terminal handling charges are fixed at each port, but vary among ports [59]. (4) The emission regulations of MARPOL-VI and EU-ETS on ports and containerships are not considered, as their impacts are restricted to certain areas and are negligible for long-haul liner services [60].
Formulation for LSN Design Problem in NA Phase.
The LSN design problem in the NA phase based on hub ports selected as THPs is formulated as Model (I). The notations used in the model in the NA phase are shown in Table 1. Here, we consider that the government may control fleet expansion in order to resolve oversupply in the maritime industry. Hence, we introduce a parameter Ω to represent the possible maximum limit of containership capacity that can be deployed for a voyage circle imposed by the government.
Having defined the notations, we have Model (I), consisting of objective function (1) and constraints (2)-(17). Objective function (1) maximizes the predicted profits of the LSN based on the THPs. Constraints (2) and (3) specify that the containership is allowed to call only once at all ports other than the THPs; that is, these ports have only one incoming leg and one outgoing leg. Constraints (4)-(7) can be combined to define that the number of incoming legs and outgoing legs for each THP is either one or two. Constraints (8) guarantee that the number of legs that enter a THP is equal to the number of legs that leave a THP. Constraints (9) guarantee that the difference of the cargo flows between incoming legs and outgoing legs for every port is equal to the quantity of demand surplus/deficit. Constraints (10) require that the flows on the outgoing legs satisfy the total quantity of the demand from any port o ∊ O as an origin port, as do constraints (11) for any port d ∊ D as a destination port. Constraint (12) stipulates that the whole transit time for all legs in the LSN must obey the fixed transit time. Constraint (13) states that the flows on every leg should not exceed the maximum containership capacity controlled by the government. Constraint (14) rules that the flows on a leg must be carried by enough containerships. Constraints (15)-(17) define the domain of the decision variables.
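To make the structure of this NA-phase formulation concrete, the following minimal sketch builds a Model (I)-style route-design MILP in Python with PuLP. All port names, distances, demands, rates, and the capacity and time budgets are placeholder values, and subtour-elimination constraints are omitted, so this is only an illustration of how the degree, flow-conservation, capacity, and transit-time constraints described above fit together, not the paper's exact model.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

ports = ["HUB", "P1", "P2", "P3"]                 # "HUB" may be called twice; names are placeholders
legs = [(i, j) for i in ports for j in ports if i != j]
dist = {leg: 500.0 for leg in legs}               # nautical miles (placeholder values)
demand = {("P1", "P3"): 800, ("P2", "HUB"): 400}  # TEU per O-D pair (placeholder values)
rate = {od: 1000.0 for od in demand}              # USD/TEU (placeholder values)
fuel_per_nm, speed, cap_limit, time_budget_h = 167.454, 22.0, 5000, 600.0

m = LpProblem("NA_phase_route_design", LpMaximize)
x = LpVariable.dicts("leg", legs, cat=LpBinary)   # 1 if leg (i, j) is sailed
f = LpVariable.dicts("flow", legs, lowBound=0)    # TEU flowing on leg (i, j)

# Objective: predicted revenue minus the fuel cost of the selected legs.
m += lpSum(rate[od] * demand[od] for od in demand) - lpSum(fuel_per_nm * dist[l] * x[l] for l in legs)

for p in ports:
    outs = [x[(p, j)] for j in ports if j != p]
    ins = [x[(i, p)] for i in ports if i != p]
    m += lpSum(outs) == lpSum(ins)                # incoming legs = outgoing legs
    if p == "HUB":
        m += lpSum(outs) >= 1                     # the hub is called once or twice
        m += lpSum(outs) <= 2
    else:
        m += lpSum(outs) == 1                     # every other port is called exactly once

for p in ports:                                   # flow conservation: net outflow = demand surplus/deficit
    surplus = sum(q for (o, d), q in demand.items() if o == p) - \
              sum(q for (o, d), q in demand.items() if d == p)
    m += lpSum(f[(p, j)] for j in ports if j != p) - lpSum(f[(i, p)] for i in ports if i != p) == surplus

for l in legs:                                    # flows only on selected legs, within the capacity limit
    m += f[l] <= cap_limit * x[l]
m += lpSum(dist[l] / speed * x[l] for l in legs) <= time_budget_h   # transit-time budget of a voyage circle

m.solve(PULP_CBC_CMD(msg=0))
print("selected legs:", [l for l in legs if x[l].value() > 0.5])

Because the predicted revenue term is a constant for fixed demand, the solver effectively minimizes the installation (fuel) cost of the selected legs, which mirrors the later observation that the optimization of G 1 and G 2 is aimed at minimizing the installation cost.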
Unlike the setting for the THPs in constraints (4)-(7), the number of incoming legs and outgoing legs for the EHP is determined by a corresponding set of constraints defined on the EHPs. The LSN design problem in the NA phase based on hub ports which are the EHPs is given as Model (II).
Formulation for LSN Operation Problem in NO Phase.
The LSN design problem in the NO phase, to determine the optimal cargo flows, is formulated as Model (III). As defined in Section 3, the flow path of the demand generated from an O-D pair on the primary route is s 1 od, and that on the secondary route is s 2 od. In Model (III), we define c i as the loading/unloading cost of port i ∊ N. The decision variables in the NO phase are the cargo flows on the flow paths s 1 od and s 2 od for each O-D pair.
Objective function (22) maximizes the actual profits of the shipping company through demand rejection and flow integration, i.e., it minimizes the difference between the operation costs and the temporal revenues. The operation costs in the NO phase refer to the total loading/unloading cost along the designed path, which is incurred once at the origin and destination ports and twice at the ports of call. Similar to related studies with two-phase optimization, the operation costs in the NO phase only consist of the variable costs related to cargo flows, excluding the voyage expenses considered in the NA phase, because the voyage expense of the LSN is fixed once the LSN is established. Constraints (23) require that the accepted demand, i.e., the total cargo flow on the outgoing leg for the origin port (including cargo flows on the different flow paths s 1 od and s 2 od), should not exceed the overall demand of each O-D port pair. Constraints (24) and (25) stipulate that the flow on any leg should not surpass the maximum limit of containership capacity for a voyage circle. Constraints (26)-(29) ensure the balance between the flow on incoming legs and outgoing legs for any port along the designed paths.
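As an illustration of how this "path-based flow" accounting works, the short Python sketch below evaluates the profit of one candidate flow plan: for each O-D pair it charges the handling cost once at the origin and destination ports and twice at intermediate ports of call, accumulates leg loads against a capacity limit, and treats unassigned demand as rejected. The paths, costs, and demand figures are hypothetical, and the function is only a hand-built stand-in for objective (22) and constraints (23)-(25), not the exact formulation.

# Illustrative evaluation of a path-based flow plan in the NO phase (hypothetical data).
def plan_profit(demands, rates, paths, handling_cost, leg_capacity, plan):
    """plan[(o, d)] = (flow_on_primary, flow_on_secondary); the remainder is rejected."""
    leg_load = {}
    profit = 0.0
    for (o, d), (q1, q2) in plan.items():
        if q1 + q2 > demands[(o, d)]:
            return float("-inf")                      # infeasible: accepted flow exceeds demand
        for q, path in ((q1, paths[(o, d)]["primary"]), (q2, paths[(o, d)]["secondary"])):
            if q == 0:
                continue
            profit += rates[(o, d)] * q               # freight revenue for this path
            for idx, port in enumerate(path):
                visits = 1 if idx in (0, len(path) - 1) else 2   # once at O/D, twice at ports of call
                profit -= visits * handling_cost[port] * q
            for leg in zip(path[:-1], path[1:]):      # accumulate leg loads for the capacity check
                leg_load[leg] = leg_load.get(leg, 0) + q
    if any(load > leg_capacity for load in leg_load.values()):
        return float("-inf")                          # infeasible: a leg exceeds containership capacity
    return profit

# Tiny usage example with made-up numbers.
demands = {("P1", "P3"): 1000}
rates = {("P1", "P3"): 900.0}
paths = {("P1", "P3"): {"primary": ["P1", "EHP", "P3"], "secondary": ["P1", "THP", "P3"]}}
handling = {"P1": 50.0, "P3": 60.0, "EHP": 40.0, "THP": 70.0}
print(plan_profit(demands, rates, paths, handling, leg_capacity=800, plan={("P1", "P3"): (700, 200)}))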
Solution Approach
The resulting models (I)∼(III) are all MILP problems. Models (I)∼(II) will be solved by a standard solver such as CPLEX [61], but we cannot guarantee that CPLEX would find the optimal solution for Model (III) because of the 5- and 6-index formulation required to represent the flow of every path in the NO phase. Consequently, we propose using a GA-based algorithm for several reasons: unlike other metaheuristics such as simulated annealing [62] and tabu search [63] that work with a single solution, GA deals with a population of solutions, and GA has been successfully applied to previous applications involving LSN design problems [64,65]. The proposed solution approach can be stated as follows: CPLEX explores the space of containership deployment and route design and finds feasible solutions. From every solution, a valid LSN configuration is derived. Once a valid configuration is found, the problems of selecting the demands and switching the paths are solved for this configuration by the GA-based algorithm, and the optimal flows and paths are found for that network configuration. By this algorithm, a set of candidate solutions (population) is retained in each iteration (a.k.a. generation or trial), and the best candidates are identified based on the principle of "survival of the fittest" through the genetic operations of selection, crossover, and mutation, forming a new generation of candidate solutions. This process is repeated until reaching the maximum number of iterations Gmax. Featured by the introduction of an efficient solution representation, the proposed GA-based algorithm is described in Figure 2, and the specific steps are detailed in the following analysis.
Step 2. Fitness function: each solution satisfying the constraints is deemed a chromosome. This paper attempts to minimize the difference between the operation costs and the temporal revenues. Here, the fitness function is set up based on the reciprocal of the objective function in equation (19). The fitness values are ranked in ascending order to find the maximum value.
Step 3. Selection: before crossover, two parent chromosomes are selected based on fitness. Then, a roulette selection procedure is adopted for our solution framework. First, calculate the fitness f_c of each chromosome c by the fitness function. Second, calculate the selection probability P_r^c = f_c / Σ_c f_c for each chromosome. Third, calculate the cumulative probability q_c = Σ_{i=1}^{c} P_r^i, where c = 1, 2, . . ., pop_size and pop_size is the population size. Fourth, generate a random number r. Finally, if r ≤ q_1, then select the first chromosome; otherwise, select the i-th chromosome such that q_{i-1} < r ≤ q_i.
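A compact sketch of this roulette-wheel procedure, with hypothetical fitness values, might look as follows; the selection and cumulative probabilities correspond to P_r^c and q_c above.

import random

def roulette_select(fitnesses):
    total = sum(fitnesses)
    probs = [f / total for f in fitnesses]            # P_r^c = f_c / sum of all fitness values
    cum, running = [], 0.0
    for p in probs:                                   # q_c = cumulative selection probability
        running += p
        cum.append(running)
    r = random.random()                               # random number r in [0, 1)
    for idx, q in enumerate(cum):
        if r <= q:
            return idx                                # first chromosome with q_{c-1} < r <= q_c
    return len(cum) - 1

fitnesses = [0.8, 0.5, 1.2, 0.9]                      # hypothetical fitness values for pop_size = 4
parent_a, parent_b = roulette_select(fitnesses), roulette_select(fitnesses)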
Step 4. Crossover: a single-point crossover operator is used. In each crossover, we randomly select a cut-point in the chromosome and exchange the right parts of the two selected parent chromosomes to generate one or more children. The crossover probability is set as P_c, such that only a fraction P_c of the chromosomes undergo the crossover process. The crossover procedure is repeated until the number of child chromosomes reaches pop_size.
Step 5. Mutation: through mutation, a new solution can be derived from an old solution.
The mutation operator is employed in each generation of chromosomes at an equal probability (mutation rate) P_m. Specifically, the first term of the chromosome is flipped by the uniform mutation operator, and the second term alters one gene from its original value by the displacement mutation operator. An example of the crossover and mutation procedures is shown in Figure 4.
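The following sketch illustrates single-point crossover and a simple per-gene mutation on a flat integer chromosome; this encoding is assumed here only for illustration and may differ from the exact representation used in the paper.

import random

def single_point_crossover(parent1, parent2, p_c=0.90):
    if random.random() > p_c or len(parent1) < 2:
        return parent1[:], parent2[:]                 # no crossover for this pair
    cut = random.randint(1, len(parent1) - 1)         # random cut-point inside the chromosome
    child1 = parent1[:cut] + parent2[cut:]            # exchange the right parts
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2

def mutate(chromosome, gene_bounds, p_m=0.01):
    mutant = chromosome[:]
    for i, (low, high) in enumerate(gene_bounds):
        if random.random() < p_m:
            mutant[i] = random.randint(low, high)     # reset the gene uniformly within its bounds
    return mutant

c1, c2 = single_point_crossover([700, 200, 0], [500, 300, 100])
c1 = mutate(c1, gene_bounds=[(0, 1000)] * 3)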
Step 6. Infeasible solution disposal: after crossover and mutation, if the solution represented by a chromosome is infeasible, the above steps are repeated from Step 2 until the termination condition is satisfied. In the initial population, there might be some chromosomes that fail to obey one or more constraints. The solutions naturally satisfy constraints (24)-(27) because of the "path-based flow" coding, so each solution only needs to be verified against constraints (20)-(23). If constraints (20)-(23) are not satisfied, the chromosome's fitness value is lowered in proportion to its degree of constraint violation.
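A minimal sketch of this penalty treatment is given below; the penalty weight and the way violation degrees are measured are assumptions, since the paper does not report its exact penalty parameters.

```python
def penalized_fitness(raw_fitness, violations, penalty_weight=1000.0):
    """Lower a chromosome's fitness in proportion to its total constraint violation.

    violations: iterable of non-negative violation degrees for constraints (20)-(23);
    a feasible chromosome has all violations equal to zero.
    penalty_weight is an assumed tuning parameter, not a value from the paper.
    """
    total_violation = sum(max(0.0, v) for v in violations)
    return raw_fitness - penalty_weight * total_violation
```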
Computational Experiment and Discussion
To assess the performance of the proposed algorithm on different test problems, the experiments use the well-known standard dataset of the Persian Gulf trade lane, which consists of 14 ports served by COSCO Shipping in 2018. All data are generated from real information without distorting the original structure. The voyage distance (d_{i1,i2}) of any leg is measured with BLM Shipping (see Figure 4). Here, we adopt the containership type M7 with capacity π = 10000 (TEU). To calculate the voyage expense, we assume that the total fixed cost related to chartering and maintaining a vessel and providing salaries and insurance for seamen is 8000000 (USD/YEAR) [58]. The fuel cost is 167.454 (USD/NM) at a sailing speed of 22 (NM/HOUR) [66]. The results of models (I)∼(II) are calculated by ILOG-CPLEX 12.5. Given the fixed limit of annual containership capacity controlled by the government, if the transit time of a voyage circle W is reduced, the service frequency of a containership within a year will increase, and thus the maximum containership capacity for a voyage circle Ω will fall, exerting pressure on the shipping capacity of COSCO Shipping.
Thirty different {W, Ω} combinations are tested, and the results are listed in Table 2. Here, for simplicity, the route design based on cooperation with THPs is called G_1, while the route design based on cooperation with EHPs is called G_2. To compare the maximum predicted profits in the NA phase, the G_1 and G_2 results of COSCO Shipping are shown in Figure 5 for the combination {W = 155, Ω = 662466}. The total profit is fixed and predicted against the demands and freight rates between the origin and destination ports, so the optimization of G_1 and G_2 is in effect aimed at minimizing the installation cost. Through comparison, the following conclusions are drawn. First, in G_1, each containership calls twice at all the THPs; similarly, containerships call twice at all the EHPs in G_2. By calling twice at hub ports, the voyage distance per leg can be shortened, saving fuel cost. Second, contrary to the stereotype that calling at the THPs minimizes the installation cost, the total cost of G_1 is greater than that of G_2.
The LSN in the NO Phase.
After comparing the predicted profits, we took G_2 as the primary route and G_1 as the secondary route. The LSN in the NO phase is called G_3 for simplicity. The parameters for the model solution are set as follows: the maximum number of iterations Gmax = 8000, the population size pop_size = 100, the crossover probability P_c = 0.90, and the mutation probability P_m = 0.01. The convergence of G_3 in different scenarios (see Figure 6) is obtained by running the algorithm in Matlab R2013a on a Lenovo laptop with an Intel® Core™ i5-6500 processor (3.20 GHz; 8 GB RAM).
In the NO phase, the actual profit of COSCO Shipping is 907399279.57 (USD) when ΔQ_od ∊ [−4617, 5192] (TEU) and Δe_od ∊ [−368.07, 0] (USD/TEU). Table 3 shows how COSCO Shipping adjusted G_3 based on the primary route and the secondary route. The overall demand acceptance rate is 86.85%, indicating that demand rejection is necessary when maximizing profits.
In addition to ΔQ_od and Δe_od, containership deployment and route design also influence the shipping capacity utilization rate of COSCO Shipping, making it difficult to observe how the shipping company selectively accepts demand. Hence, the acceptance rates of the demand between different O-D pairs are contrasted in detail, revealing that the demand variation ΔQ_od has a decisive impact: COSCO Shipping accepts more demand at higher ΔQ_od and rejects more at lower ΔQ_od. Therefore, the demand variation has a greater impact than the freight rate change on the decision-making of demand acceptance. Furthermore, setting aside the profitability of accepting the demand of particular O-D pairs, the high acceptance rates are concentrated on the demand that must flow through the hub ports {4, 6, 7, 9}, as highlighted in bold in Table 3. In addition, the primary and secondary routes carried 67.5% and 32.5%, respectively, of the total demand accepted by COSCO Shipping. This result shows that the primary paths are fundamental to the LSN optimization, while the secondary paths are a reasonable complement to the merged paths. Under Scenarios 1-3, the actual profits of COSCO Shipping are 902148715.92 (USD), 896171319.02 (USD), and 900705361.54 (USD), respectively, down by 0.58%, 1.24%, and 0.74% from those in Scenario 0 (see Figure 7). In general, the decline in ΔQ_od and Δe_od causes only minor negative impacts on the actual profits. This does not mean, however, that the fluctuations of market indicators have little effect on the actual profits of shipping companies: without LSN optimization measures such as demand rejection and flow integration, the negative impacts could be very significant. Therefore, it is safe to say that the negative impacts of ΔQ_od and Δe_od on the actual profits can be ameliorated by LSN optimization measures. In other words, the decision-making process comprising the NA phase and NO phase proposed in this paper can efficiently help merged shipping companies reduce the negative impacts of a depressed market.
The LSN in the NO Phase under Scenarios 1-3.
Under Scenarios 1-3, the overall demand acceptance rates of COSCO Shipping are 90.91%, 89.33%, and 90.79%, respectively, up by 4.68%, 2.86%, and 4.54% from those in Scenario 0 (see Figure 8). Comparing the demand acceptance rates in Scenarios 0 and 1 shows that the shipping company may accept more demand when the overall demand level decreases, which seems to contradict the observation in Section 6.1. However, comparing the demand acceptance rates in Scenarios 2 and 3 reveals that the observation in Section 6.1, namely that the shipping company accepts more demand at higher ΔQ_od, only holds when the overall freight rate level is low. Generally, in a depressed market where both quantities and freight rates of demands are lower, the merged shipping company should reject more demand. Therefore, demand rejection decisions should be adjusted according to both demands and freight rates. The shipping company must focus on surveying market indicators based on historical data (as well as experts' knowledge of the market and management policies).
Finally, the results indicate that shipping companies should attach more importance to EHPs when designing and optimizing LSNs. On the one hand, EHPs are more likely to generate demand because they are usually located in rapidly developing economies. Scenario 3 assumes an increase of [5%, 15%] in the demands that take the EHPs as the origin and destination ports. The results show that the EHPs contributed a 1.44% growth in demand, which leads to a 0.51% increase in the actual profits of shipping companies. On the other hand, shipping companies should increase the acceptance rate for the demands taking the EHPs as the origin and destination ports, as shown in Table 4.
Conclusion and Future Research
This paper aims to help COSCO Shipping address the LSN design problem with several candidate hub ports for cooperation in regions along the Maritime Silk Road, from the perspective of supply-side reform in China. For this purpose, we proposed two-phase optimization models for the LSN at the strategic, tactical, and operational levels. Unlike traditional optimization approaches, our work divides the decision-making process into a Network Assessment (NA) phase and a Network Operation (NO) phase and considers external factors such as market changes and hub port cooperation. In addition, our analyses highlighted two crucial operational measures: demand rejection and flow integration. The optimization models for both phases are MILPs. The models in the NA phase are programmed in CPLEX, and those in the NO phase are solved by a GA-based algorithm. Building on the assessment of designing LSNs through cooperation with different types of hub ports based on predictions in the NA phase, a "path-based flow" model in the NO phase is specially developed, and an easy-to-implement GA-based algorithm is designed to compute optimal solutions efficiently. A computational experiment is then performed on the Persian Gulf trade lane of COSCO Shipping. The experimental results prove the effectiveness of the GA and inspire the following countermeasures.
Firstly, when designing LSNs based on cooperation with hub ports in the NA phase, the merged shipping company should increase the number of legs in the designed LSNs, e.g., calling twice at hub ports, in order to save total installation cost. More importantly, the total installation cost could be further reduced by shifting the selection of hub ports from THPs to EHPs. Secondly, the shipping company should reject more cargo when the actual market is unsatisfactory, i.e., when both quantities and freight rates of demands are lower. The scenario analyses show that LSN optimization measures, including demand rejection and flow integration, can efficiently help shipping companies reduce the negative impacts of a depressed market.
Thirdly, the shipping company should increase the acceptance rate for demands that take the hub ports, especially the EHPs, as the origin and destination ports. In general, both the design and operation of LSNs should be flexibly adjusted according to demand prediction. If some ports are expected to generate greater demand than others, adjusting the hubs of the LSN and accepting more demand related to these EHPs could achieve better performance.
It must be noted that this study does not tackle all the decision-making problems at the strategic, tactical, and operational levels of LSPs in the NA and NO phases. To further optimize the LSNs, future research will dig deeper into the following issues: better prediction of future demand helps identify emerging ports and optimize the LSNs; a greater understanding of LSN structures, which consist of butterfly services, pendulum services, and even more complex services, helps explore more flexible and cost-efficient solutions; and the operational adjustment after shipping company mergers or alliance formation deserves more attention.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
|
v3-fos-license
|
2018-12-16T09:58:55.530Z
|
2017-01-01T00:00:00.000
|
54941982
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/amse/2017/6702183.pdf",
"pdf_hash": "b972f0ffebae334d471f390e6ba90c6306f72e41",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46176",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "b972f0ffebae334d471f390e6ba90c6306f72e41",
"year": 2017
}
|
pes2o/s2orc
|
Optimal Shape Control of Piezoelectric Intelligent Structure Based on Genetic Algorithm
Shape variations induced by mismachining tolerance, the humidity and temperature of the working environment, material wear and aging, and unknown external load disturbances have a relatively large influence on the dynamic shape of a mechanical structure. By integrating piezoelectric elements into the main mechanical structure, active control of the structural shape is realized through the inverse piezoelectric effect. This paper presents a mathematical model of shape control for piezoelectric intelligent structures. We also applied a genetic algorithm and, for a piezoelectric intelligent cantilever plate subjected to a certain load, analyzed the optimal shape control results of the piezoelectric materials from different perspectives (precision preference or cost preference). The mathematical model and results indicate that, by optimizing a certain number of piezoelectric actuators, high-precision active shape control can be realized.
Introduction
A variety of high-speed and high-precision mechanical structures require extremely precise shape and position when in operation, and slight variations greatly influence the dynamic performances of these structures [1,2]. For instance, mismachining tolerance, humidity and temperature of the working environment, material wear and aging, and unknown external load disturbances may affect the shape of a structure. Consequently, active shape control and compensation are required in real operation. Piezoelectric elements are integrated into the main structure, the inverse piezoelectric effect is utilized, and the piezoelectric elements are deformed via an external voltage, which is passed on to the main structure, thus realizing active control of the structural shape.
Due to structural complexity and the inherent electromechanical coupling effect of a piezoelectric intelligent structure, studies on structural shape control are typically based on a mathematical model of the intelligent structure using a finite element method. Following conclusions from intelligent structure theory, Wada et al. [3] proposed a shape-variable and self-adaptive intelligent structure framework to meet future requirements of structures such as space stations, aircraft, and satellites. Based on a hierarchical theory, Donthireddy and Chandrashekhara [4] established a finite element numerical model of laminated piezoelectric beams, studied the shape control problem of these beams, and analyzed the influence of stacking sequence and boundary conditions on shape control. Varadarajan et al. [5] performed shape control of laminated composite plates with piezoelectric actuators; using the minimum error function between the ideal shape and the controlled shape, the voltage distribution was optimized and an active feedback control algorithm was established. Chee et al. [6,7] established the static shape control equations of intelligent composite piezoelectric beams and plates based on a high-order displacement function and classic beam and plate theory, respectively, and applied the generalized function to analyze the specific control effect. Lastly, Lin and Nien [8] established a finite element model of laminated plate shape control using piezoelectric actuators, considered the influence of actuator and sensor locations on shape control, calculated structural stress, and concluded that the internal stress of the piezoelectric actuator has a large influence on shape control.
It is important to optimize the design of the embedded depth, thickness, and location of the piezoelectric elements, as well as the control voltage, to realize the shape control effect. Based on a nonlinear constitutive piezoelectric equation and according to the error function and control energy, Sun and Tong [9] utilized the finite element method and the Lagrange multiplier method to define an optimization algorithm for the control voltage of piezoelectric actuators in static nonlinear deformation control, and further demonstrated the effectiveness of this algorithm through simulation experiments. Barboni et al. [10] used a method combining a dynamic influence function and closed-loop feedback to study the optimal location of a pair of piezoelectric actuators, thus maximizing piezoelectric intelligent beam displacement. Another study considered the influence of the location, dimensions, and voltage of piezoelectric actuators on shape control and performed multi-objective optimization of cantilever shape control, minimizing beam deflection under an external load [11]. Zhang et al. [12] studied the integrated optimal configuration of piezoelectric actuators and sensors in a flexible structure system by applying the genetic algorithm, while Da Mota Silva et al. [13] used the variance between the preset and real displacement of a specific node to study the static shape control of a self-adaptive structure, established a finite element model of the structural system, and determined the optimal driving voltage of the piezoelectric elements by applying the genetic algorithm; experiments verified the applicability and effectiveness of this algorithm. Based on shear deformation beam theory and linear piezoelectric theory, Hadjigeorgiou et al. [14] established a numerical model of piezoelectric beams and utilized a genetic optimization algorithm to optimize the voltage of the piezoelectric actuator elements. Numerical results indicated that fewer piezoelectric driving elements achieved the same expected deformation effect following optimization.
In this paper, a piezoelectric intelligent structure was applied to perform active shape control, and the actuation locations, geometric parameters, and working voltages were solved for based on the expected shape or displacement values. Thus, this is a typical multiobjective optimization problem. First, according to the shape function of the piezoelectric intelligent structure and the finite element dynamic equation, the relationship between the shape or displacement of the structure and its piezoelectric properties, actuation locations, geometric parameters, and input voltages under a constant mechanical load was deduced. Then, from different perspectives (precision preference or cost preference), a proper objective function was selected to perform multiobjective optimization of the number, location, and working voltage of the piezoelectric driving elements, thus providing optimal shape control results that satisfy the precision requirements.
Piezoelectric Intelligent Structure
Shape Control Theory. The displacement field of the rectangular plate element is interpolated as in (1). The relationship between the natural coordinates and the geometric coordinates is given in (2) and (3), where the two scaling parameters are the length and width of the rectangular element, respectively. The shape function is expressed according to (4). After substituting (2), (3), and (4) into (1), the displacement of any point inside the finite element is expressed as in (5), where the element nodal vector collects the deflection and rotation degrees of freedom of the four corner nodes, and the shape functions of every degree of freedom are listed in (6). The linear constitutive equations of the piezoelectric material can be written as in (7) [15], where c^E is the elastic stiffness matrix, the superscript T denotes the transpose, E is the electric field vector, D is the electric displacement vector, ε^S is the dielectric constant matrix, and e is the piezoelectric stress/charge constant matrix.
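The display form of the constitutive equation referenced as (7) did not survive extraction; a standard stress-charge (e-form) statement consistent with the variables listed above would read as follows, where T and S denote the stress and strain vectors (symbols assumed here, as they are not named in the surviving text):

```latex
\mathbf{T} = \mathbf{c}^{E}\mathbf{S} - \mathbf{e}^{\mathsf{T}}\mathbf{E}, \qquad
\mathbf{D} = \mathbf{e}\,\mathbf{S} + \boldsymbol{\varepsilon}^{S}\mathbf{E}
```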
The finite element dynamic equation of the piezoelectric intelligent structure is expressed as in (8), where q and φ are the displacement vector and the potential vector of each node in the overall structure, respectively; M, C, and K_qq are the overall mass, damping, and stiffness matrices of the integrated piezoelectric intelligent structure, respectively; K_qφ = K_φq^T is the overall force-electric coupling stiffness matrix of the integrated piezoelectric material; K_φφ is the overall dielectric stiffness matrix of the piezoelectric material; and F and Q are the structure's external mechanical load vector and electric load vector, respectively. The detailed derivation of these element matrices can be found in [16].
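The display form of equation (8) is likewise missing; a commonly used partitioned statement consistent with the matrices defined above would be the following (a reconstruction of the standard form, not necessarily the authors' exact notation):

```latex
\mathbf{M}\ddot{\mathbf{q}} + \mathbf{C}\dot{\mathbf{q}} + \mathbf{K}_{qq}\mathbf{q} + \mathbf{K}_{q\phi}\boldsymbol{\phi} = \mathbf{F}, \qquad
\mathbf{K}_{\phi q}\mathbf{q} + \mathbf{K}_{\phi\phi}\boldsymbol{\phi} = \mathbf{Q},
\quad \text{with } \mathbf{K}_{\phi q} = \mathbf{K}_{q\phi}^{\mathsf{T}}
```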
Considering the active static shape control of the piezoelectric intelligent structure, the dynamic response terms in the equation are neglected, and (8) then becomes (9), which can be rewritten as (10).
Shape Control Mechanism of the Intelligent Structure
The static condensation method was applied to simplify (10) and obtain (11). From (11), the external load of the piezoelectric intelligent structure system consists of two parts: the mechanical load F and the electric load F_φ. The electric load is determined by the dielectric properties, geometric parameters, and input voltage of the piezoelectric driving elements. Under the condition that the mechanical load remains constant, because of the inverse piezoelectric effect, the displacement of the controlled structure can be changed by changing the driving element input voltages.
The external electric field applied to the piezoelectric intelligent structure's sensing units is zero; therefore, the potential output is obtained from the second equation in (10), where −K_φφ^−1 K_φq is defined as the displacement sensitivity matrix.
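To make the static relations above concrete, the following NumPy sketch assembles the displacement response and the sensor output from given block matrices; the sign convention for the electric load and the block-matrix names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def static_response(K_qq, K_qphi, K_phiphi, F_mech, phi_act):
    """Hedged sketch of the static shape-control relations described above.

    Displacement: q solves K_qq q = F_mech + F_elec, where the electric load
    F_elec = -K_qphi @ phi_act follows from prescribing the actuator voltages
    (the sign convention is an assumption).
    Sensor output: phi_s = -(K_phiphi^-1) K_phiq q, i.e. the displacement
    sensitivity matrix applied to q.
    """
    F_elec = -K_qphi @ phi_act
    q = np.linalg.solve(K_qq, F_mech + F_elec)
    sensitivity = -np.linalg.solve(K_phiphi, K_qphi.T)   # -K_phiphi^{-1} K_phiq
    phi_sensor = sensitivity @ q
    return q, phi_sensor
```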
Expressions for the element stiffness matrix, force-electric coupling stiffness matrix, and dielectric matrix of a finite element of the piezoelectric intelligent structure are given in (14), where the associated matrices consist of partial derivatives of the displacement shape function and the potential shape function of the element along the displacement and potential directions, respectively. According to the boundary conditions, the element matrices in (14) are assembled to obtain the overall stiffness matrix, force-electric coupling stiffness matrix, and dielectric matrix. From (14), the force-electric coupling stiffness matrix and dielectric matrix of the system are not only influenced by parameters of the material itself, but are also closely related to the location and number of piezoelectric driving units. Therefore, by changing the location, number, and input voltage of the piezoelectric driving units, the overall displacement q in (11) may be changed to achieve shape control. A basic piezoelectric intelligent structure consists of a main structural layer, an upper piezoelectric layer, and a lower piezoelectric layer. If one piezoelectric layer is used as the sensing unit, the other layer is used as the driving unit; the output potential of the sensing unit is amplified by the feedback control system, and the output voltage required by the driving unit is calculated via the feedback gain and control law. In this way, active static shape control of the piezoelectric intelligent structure is fulfilled. The static shape control system of a simple piezoelectric intelligent structure is shown in Figure 1.
Specific Configuration of the Active Shape Controlled Genetic Algorithm
The locations and working voltages of the piezoelectric actuators and sensors are the optimization variables of static shape control in a piezoelectric intelligent structure. Based on the finite element model of the overall structure, the structure shown in Figure 1 is discretized into finite elements. Assume the piezoelectric actuators/sensors on the upper and lower surfaces of each finite element are independently controllable, with the minimum variance between the actual and expected displacement of the structure as the target function, and with the location, number, and driving voltage of the piezoelectric plates as the design variables. A computational program for the genetic optimization algorithm was then compiled to seek the optimal solution that satisfies the expected shape of the piezoelectric intelligent structure.
Among the design variables, the location and number of the piezoelectric plates are binary variables, while the driving voltage is a continuous variable. In this paper, a parameter cascade method was applied to code the location and driving voltage of the piezoelectric plates, and the codes were then concatenated to form the individual encoding that represents all the parameters.
Assume variables X1 and X2 represent the location and driving voltage of the piezoelectric plates, respectively.
The binary-encoded method and the float-encoded method were applied to encode X1 and X2, respectively. Here, X1 is a discrete variable: using the binary-encoded method, a gene value of 0 represents a finite element that is not equipped with an actuator/sensor pair, a gene value of 1 represents a finite element that is so equipped, and the length of X1 equals the number of finite elements in the piezoelectric intelligent structure plate. Assuming the intelligent structure is equipped with a given number of actuator/sensor pairs, the number of ones in X1 equals that number. X2 is a continuous, float-encoded variable that directly reflects the actual value of the design variable and has the same length as X1; each of its genes is a real number within the given driving voltage range. Figure 2 illustrates the individual string structure that applies the parameter cascade code.
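A minimal Python sketch of this parameter-cascade encoding is shown below; the function name and the way voltages are sampled are illustrative assumptions.

```python
import random

def init_chromosome(n_elements, v_max):
    """Parameter-cascade coding: a binary string X1 marking which finite elements
    carry an actuator/sensor pair, cascaded with a float string X2 of driving
    voltages in [-v_max, v_max] (one gene per element; only genes with X1 = 1
    are actually applied)."""
    x1 = [random.randint(0, 1) for _ in range(n_elements)]
    x2 = [random.uniform(-v_max, v_max) for _ in range(n_elements)]
    return x1, x2

# e.g. a 32-element cantilever plate with voltages limited to +/-100 V
x1, x2 = init_chromosome(32, 100.0)
```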
The genetic strategy was determined according to the form of the individual parameter cascade code, and the optimization model of piezoelectric intelligent structure shape control was established. The resulting optimization system consists of a target function of (X1, X2) for the genetic algorithm optimization, a restriction (constraint) function of (X1, X2), and an individual fitness function.
In real optimization of static shape control with different optimization objectives, the corresponding optimization configuration differs. When the economy of the control is considered, the cost, number, and voltage of the actuators/sensors require optimization: a minimum number of piezoelectric actuators/sensors and a minimum driving voltage are used to obtain an ideal controlled shape. The genetic algorithm configuration is then described using a bi-objective optimization model, given in (19), whose quantities are the number of structure elements, the number of structure nodes, the optimized displacement of each node, the targeted displacement of each node, and the admissible driving voltage range. In this model, the first target function of the genetic optimization algorithm represents the number of configured piezoelectric plates, the second target function is the sum of the absolute driving voltages of the plates, the restriction function requires the variance between the optimized and targeted shapes to satisfy the control precision, and the two fitness functions are the differences between the target functions and their theoretical maximum values, respectively. Another optimization configuration uses a given number of piezoelectric actuators/sensors and optimizes the configuration locations and driving voltages of the piezoelectric plates to achieve the best shape control effect within the precision requirement. In this second configuration, the given number of piezoelectric actuators/sensors appears as a constraint, while the other variables are the same as those in (19). Under this configuration, the target function of the genetic algorithm becomes the variance between the optimized shape and the target shape, the restriction function requires the number of piezoelectric plates to equal the given value, and the fitness function becomes the reciprocal of one plus the target function.
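A hedged sketch of how such a chromosome could be evaluated under the first (cost-oriented) configuration is given below; the function signature and the externally supplied shape error are assumptions, since the actual objective values are computed from the finite element model.

```python
def evaluate_cost_config(x1, x2, shape_error, eps=1e-3):
    """Bi-objective evaluation in the spirit of the cost-oriented configuration:
    j1 = number of active piezoelectric plates, j2 = sum of absolute driving
    voltages of the active plates, subject to shape_error <= eps (mm^2).
    shape_error is assumed to be provided by the finite element analysis."""
    j1 = sum(x1)
    j2 = sum(abs(v) for v, active in zip(x2, x1) if active)
    feasible = shape_error <= eps
    return j1, j2, feasible
```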
Referring to the specific operational procedure of the genetic algorithm and based on the finite element model of piezoelectric intelligent structure, a computational procedure was compiled using Matlab, and a series of genetic algorithm optimization procedures were designed to realize active control optimization of the piezoelectric intelligent structure.
Numerical Analysis
Considering the static shape control of a cantilever plate whose upper and lower surfaces are covered with piezoelectric intelligent layers, the finite element shape control theory and the genetic algorithm optimization control were verified. The finite element model of the structure, which was divided into 32 rectangular plate elements, is shown in Figure 3. The upper and lower surfaces of the structure were symmetrically attached with piezoelectric thin film PZT-5H (1 mm thickness). The main structural material in the middle layer was Al, with a thickness of 2.5 mm. The property parameters of each material in the piezoelectric intelligent structure are listed in Table 1. Lastly, the middle point of the structural free end was subjected to a constant load, and the shape control target was defined as zero displacement of the free end. Structure optimization was then performed. The shape control precision requirement was set as less than 1 × 10−3 mm², the maximum number of generations as 1000, the crossover probability of the genetic algorithm as 0.625, and the mutation probability as 1/64. Then, the optimization calculation of the genetic algorithm was performed.
Configuration Analysis of the Genetic Algorithm
Firstly, we give 32 random binary-encoded discrete design parameters as the initial values. The displacement of the free end can be calculated through (10), and the restriction function provides the variance between the actual and expected displacement at the free end. If the shape control precision requirement is not met, i.e., the variance is more than 1 × 10−3 mm², mutation is applied to change the design parameters G1 and G2 into a new generation according to the mutation probability. The new-generation design parameters are then used to start the next iteration.
After 1000 iterations, an optimization configuration that satisfied the shape control precision requirement was obtained. Figure 4 shows the performance tracking process of the restriction function value. The restriction function value rapidly converged and approached the required precision value after 350 iterations; after 1000 iterations, the precision reached 9 × 10−4 mm². Figures 5 and 6 show the iterative evolution of the target function value and the fitness function value, respectively. Similarly, in the early stage of genetic optimization, the target function value converged quickly, and after 645 iterations the number of configured actuators evolved from 32 to 6, less than one-fifth of the initial design, which greatly reduced the control cost. A configuration information diagram of each piezoelectric actuator in the iterative evolution process is shown in Figure 7, and Table 2 shows the working voltage of each piezoelectric actuator under the best configuration. It should be noted that because the variance between the actual and desired displacement at the free end was set as the restriction function, symmetry of the numerical results along the central axis of the beam cannot be expected from the genetic optimization; for example, the voltages of elements 10 and 11, or 17 and 20, come out with opposite signs in Figure 7 and Table 2. Figures 8, 9, and 10 show shape diagrams of the piezoelectric intelligent cantilever at the initial status, during evolution, and after optimization, respectively.
Configuration Analysis of the Genetic Algorithm: Optimizing Shape Control Precision.
With the number of actuators fixed, the optimal values of the actuator locations and working voltages were obtained via the genetic algorithm optimization to achieve the best control effect. The genetic algorithm optimization was performed according to (19); the parameter cascade encoding method was adopted, and 32 binary-encoded discrete design variables were chosen to represent the configuration information regarding the location and number of piezoelectric actuators. The 32 float-encoded continuous design variables represented the working voltages within the range [−100, 100]. The shape control precision was selected as the target function (the sum of squared differences between the optimized and target displacements of the 5 nodes at the free end), and the shape control precision requirement was set as less than 1 × 10−5 mm². The given number of piezoelectric actuators was 10, and the corresponding restriction function value was 16. Lastly, we set the maximum number of generations as 2000, the crossover probability of the genetic algorithm as 0.625, and the mutation probability as 1/64. Then, the optimization calculation of the genetic algorithm was performed.
Figure 11 shows the target function over 2000 iterations. The function rapidly converged during the initial iterative stages and approached the required precision after 400 iterations. Figure 12 shows the iterative variation of the corresponding fitness function. For a given number of actuators, a relatively high control precision can be reached with a proper optimization configuration. After 2000 iterations, the shape control precision of the piezoelectric actuators reached 5.21 × 10−6 mm². Figure 13 shows the configuration diagram of the piezoelectric actuators at different evolution periods during iteration, while Figures 14 and 15 provide shape deformation diagrams of the piezoelectric intelligent plates during the optimization control process. Lastly, Table 3 shows the working voltage of each piezoelectric actuator under the optimal configuration following iteration.
Conclusions
(1) Based on the dynamic finite element equation of the piezoelectric intelligent structure, the static shape control principle of treating the piezoelectric material as an actuator was analyzed, demonstrating that the shape control effect is not only influenced by the parameters of the material itself, but is also closely related to the location and number of piezoelectric driving elements. (2) By utilizing the operational characteristics of the genetic algorithm (such as being inherently parallel, stochastic, and self-adaptive), each finite element of the piezoelectric intelligent structure was treated separately, and the computational and basic operational procedures of the optimal shape control genetic algorithm were designed to find the optimal solution that satisfies the expected shape. (3) Based on the modified parameter cascade encoding method, we used the binary-encoded method and the float-encoded method to code the location and driving voltage of the piezoelectric plates, respectively. We proposed a genetic algorithm optimization mode for shape control and, from the perspectives of control cost economy and shape control precision, designed a computational program for the genetic algorithm.
Figure 1: Active static shape control system of a piezoelectric intelligent plate.
Figure 2: Individual bit string structure of parameter cascade coding.
Figure 3: Finite element model of cantilever intelligent structural plate.
Figure 4: Variation of restriction function value.
Figure 5: Variation of target function value.
Figure 6: Variation of the fitness function value.
Table 1: Property parameters of cantilever intelligent structural plate.
Table 2: Optimal working voltage of piezoelectric actuators under optimal configuration.
Table 3: Optimal working voltage of piezoelectric actuator.
(4) Given a piezoelectric intelligent cantilever with a load on one end, static shape optimization control results of piezoelectric materials based on the genetic algorithm were analyzed. Conclusively, our mathematical model and results indicated that active shape control with high precision can be realized by optimizing a certain number of piezoelectric actuators.
|
v3-fos-license
|
2022-06-04T05:12:20.578Z
|
2022-06-02T00:00:00.000
|
249312337
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10639-022-11085-6.pdf",
"pdf_hash": "37119981a058904c6237a492897c04d0b4895d70",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46177",
"s2fieldsofstudy": [
"Education",
"Sociology",
"Computer Science"
],
"sha1": "37119981a058904c6237a492897c04d0b4895d70",
"year": 2022
}
|
pes2o/s2orc
|
Exploring factors influencing pre-service and in-service teachers´ perception of digital competencies in the Chinese region of Anhui
The emergence of the Covid-19 pandemic has accelerated the wave of digital social transformation worldwide and pushed the "Accelerator Key" for the digital transformation of education in 2020. This transformation has also had a comprehensive impact in China. Taking Anhui province as a case study, this research explores socio-demographic factors influencing the digital competence level of pre- and in-service teachers of primary and secondary education in China. The quantitative methodological approach emphasizes the study subjects' perception of their digital competencies in three factors: basic technology literacy, technical support learning, and technical support teaching. The study involved 250 pre-service teachers and 248 in-service teachers. The main findings are: (1) participants have good consciousness of and attitudes towards using ICT in their daily work, but their educational practice is weak; (2) in-service teachers have a digital competence level generally higher than pre-service teachers', which might be because their professional practice prompts them to reflect on their perceptions of and attitudes towards technology in education; (3) for in-service teachers, there are significant differences between their digital competence level and age, years of teaching experience, and educational background; (4) current ICT courses have no influence on in-service teachers' digital competence level, implying that the current ICT training system may have problems. The study provides insights for improving pre-service teachers' digital competence education in universities and developing well-designed ICT training courses for in-service teachers.
Introduction and conceptual framework
Over the past five years, the Chinese digital economy has developed prosperously. Many people meet their daily needs using technology, and high-tech exploration has accelerated as well. According to the report from the China Internet Network Information Center (CNNIC) (2021), by the end of 2020 China had achieved full coverage of Internet infrastructure, and the proportion of Chinese users accessing the Internet through their mobile phones reached 99.7%. Besides, the number of Internet users has grown steadily: Internet penetration has reached 70.4%, with most users aged 20-29 (19.9%), 30-39 (20.4%), and 40-49 (18.7%). In the same year, the Internet penetration rate of minors reached 94.9%, and the proportion of underage netizens who use the Internet to study was 89.9% (Youth Rights Protection Department of the Central Committee of the Communist Youth League, 2021). During the pandemic period in early 2020, the average time spent online per netizen in China increased significantly to 30.8 h per week. Even after the pandemic, the per capita weekly time spent online was still 26.2 h (Youth Rights Protection Department of the Central Committee of the Communist Youth League, 2021).
The emergence of the epidemic has not only accelerated the wave of comprehensive digital social transformation in China, but has also pushed the "Accelerator Key" for the digital transformation of education (García-Peñalvo, 2021; Huang, 2020; Yan et al., 2021; Zhu, 2020). According to the Ministry of Education of the People´s Republic of China (2021), one of its development goals in 2021 is to accelerate the high-quality development of education informatization, actively develop "Internet + Education," and comprehensively guarantee the network security of the education system. In this regard, the Ministry of Education focuses on informatization to promote new educational facilities and research and to build a high-quality education support system. On the other hand, with the objectives of improving principals' information leadership, teachers' information-based teaching ability, and training teams' information guidance ability, the Opinions on the Implementation of the "National Primary and Secondary School Teachers' Information Technology Application Ability Improvement Project 2.0" (2019) had been put forward before the pandemic.
As Ilomäki et al. (2011) mentioned, digital competence is an evolving policy-related concept, which has been used in OECD (2018), EU (2013), and UNESCO (2018) policy papers. The European Commission (2018) defined digital competence as involving the confident and critical use of Information Society Technology (IST) for work, grounded in basic ICT skills: the use of computers to retrieve, assess, store, produce, present, and exchange information, and to communicate and participate in collaborative networks via the Internet. The DigComp frameworks (Carretero et al., 2017; Ferrari et al., 2013; Vuorikari et al., 2016) were formulated following this concept. This framework has been applied at larger scales, particularly in the context of education, training, and lifelong learning, as an assessment tool of digital competence.
The terms "Teacher's ICT competency" or "Teacher's IT competency" Rao et al., 2019;Tang et al., 2019;Yao et al., 2019;Zhang et al., 2019;X. M. Zhang et al., 2019) have been used most frequently by the researchers or policymakers in China, which is a concept initially based on ICT Competency Framework for Teachers (UNESCO, 2011). Since the diagnostic information provided in the existing theoretical frameworks of digital competence in the Chinese environment seems insufficient or inadequate to support the current development status of IT applications in China education, Chinese scholars frequently cite and use theoretical frameworks from foreign countries or regions in recent years.
This study aims to measure pre-service and in-service teachers' digital competence levels and explore the relationship between the influencing factors and their digital competence level using a theoretical framework validated in the Chinese context. The results of this study will yield insights for working on pre-service teachers' digital competence education in universities and for developing well-designed ICT training courses for in-service teachers. This study is conducted in an important eastern economic development region: Anhui province.
The paper has been organized in the following way: The next section is the literature review, including an overview of teachers' digital competence in China. The third section is the study's methods, describing participants, the instrument, data analysis methods, and the results of reliability and validity of the questionnaire. Then, the results of this study and its related discussion have been presented respectively in the fourth and fifth parts. Finally, the last section summarizes the main conclusions of the study.
Status of teacher's digital competence in China
Since 2015, teachers' digital competence has been an important research topic in China. It is generally agreed that the informatization level of the whole country is unbalanced among the eastern, central, and western regions (Fan & Song, 2016; Zhao & Qian, 2018). The eastern area has a higher informatization level than the central and western areas. However, the development of informatization in the western and central areas is faster than in the eastern area, and the informatization level in the central area tends to catch up with the eastern area (Kuang et al., 2018). In the same way, teachers' digital competence level in the western and central areas is generally inferior to that in the eastern regions, above all in teaching practice with ICT tools (Wang & Ren, 2020; Yang & Hu, 2019).
As Li, Wu, et al. (2016b) mentioned, the value of digital teaching facilities and teaching resources is seriously underestimated due to the lack of experience and knowledge in integrating advanced IT into teaching. Primarily, teachers do not make full use of the latest resources available on the Internet to deepen students' learning content, nor do they take advantage of technology-based information retrieval and processing to propose more activities that promote students' interest, participation, and depth of learning. Moreover, teachers have insufficient competence to design and organize technology-based activities for students to carry out cooperative learning in the classroom (Tang et al., 2019; Yao et al., 2019).
Factors influencing teacher's digital competence in China
Over the last twenty years, several review studies have shown that various factors influence teachers' use of ICT (Drent & Meelissen, 2008; Mumtaz, 2000; Spiteri & Chang Rundgren, 2020). With the passage of time and the development of society, the factors that affect teachers' use of information technology (IT) are also changing. First of all, Kong & Zhao (2017) and Wang & Ren (2020) concluded that technical foundation, school system, teacher training, and environment have a significant direct or indirect impact on teachers' digital competence. Then, some Chinese scholars investigated influencing factors based on the technology acceptance model (TAM). Zhang et al. (2015), Xu & Hu (2017), and Li et al. (2017) reported that student interaction feedback, as an external factor, could directly affect teachers' IT application behavior. On the other hand, Zhang et al. (2018) and Li et al. (2018) found that group influence, performance expectations, and convenience conditions, as natural influencing factors, can affect teachers' IT application behavior, but that self-efficacy is a vital factor. Other researchers indicated that teachers' age, years of teaching experience, and teaching subjects are associated with significant differences in their level of digital competence. For instance, Li et al. (2016) reported that teachers' age is an internal factor that significantly impacts their level of digital competence. Additionally, some researchers have been interested in the topic of teacher training for integrating technology into the teaching process (Huang et al., 2016; Li & Huang, 2018; Wu & Yang, 2016).
Pre-service teachers' digital competence has also received attention in recent years (Li et al., 2019a). The research on pre-service teachers' digital competence shows that its development has been insufficient. Firstly, there are still gaps in the IT hardware environment, hardware and software equipment, and independent campus networks, including the shortage of IT teachers and the lack of access to educational information resources (Zhou et al., 2016). According to Zhou et al. (2017), pre-service teachers' digital competence is low in three respects: the willingness to apply IT to optimize teaching, the ability to design and organize IT applications, and professional development awareness.
Previous studies have demonstrated several influencing factors for Chinese pre-service or in-service teachers' digital competence. However, no study has focused on Anhui province specifically or compared these two groups. Thus, the objectives of this study are to assess and analyze Chinese pre-service and in-service teachers' perception of digital competence and to explore the relationship between socio-demographic factors (age, educational degree level, ICT courses, years of teaching experience) and their digital competence level in Anhui province. In this regard, we propose the following research questions: 1. What is the status of pre-service and in-service teachers' perceptions of digital competence in China? 2. Which of the analyzed factors influence the level of digital competence of pre-service/in-service teachers? Furthermore, which are the stronger ones?
Method
This study proposed a diagnostic evaluation from a quantitative paradigm with a non-experimental-cross-sectional design. We explored relationships between the socio-demographic factors and pre-service and in-service teachers' perceived digital competence level, explicitly examining three areas: basic technological literacy, technical support learning skills, and technical support teaching skills.
Participants
The sample was retrieved online from both pre- and in-service teachers in China's Anhui province between February and May 2021. A non-probabilistic sampling procedure (voluntary response sample) was applied. Thus, we initially contacted via WeChat those members of the population for whom we had contact information. Finally, a total of 498 answers were collected. Most participants (116) are from Hefei, the capital and largest city of Anhui province (Fig. 1).
Fig. 1 Geographical distribution of the sample
The sample was divided into in-service teachers (n = 248) and pre-service teachers (n = 250). For in-service teachers, there are 136 female (54.84%) and 112 male (45.16%) participants; for pre-service teachers, there are 122 female (48.8%) and 128 male (51.2%) participants. Therefore, both groups have a balanced gender distribution. Figure 2 shows the educational background of pre- and in-service teachers. Most participants have a bachelor's degree (56% of in-service teachers and 48% of pre-service teachers) and very few have a Ph.D. (2% of in-service teachers and 1% of pre-service teachers). Table 1 shows the descriptive statistics of pre- and in-service teachers' age, in which the mean age of pre-service teachers is 21.55 and the mean age of in-service teachers is 31.82. Moreover, the mean teaching experience of in-service teachers is 7.92 years. According to p25 and p75, half of the teachers have between 3 and 10 years of experience.
Instrument
Based on the previous literature review, we considered using the instrument proposed by Yan et al. (2018). It is formed by three fundamental measured factors (Basic Technology Literacy, Technical Support Learning, and Technical Support Teaching), and each factor consists of three dimensions (Fig. 4). This instrument is based on the Chinese theoretical framework "Information Technology Application Ability Standards for Primary and Secondary School Teachers (Trial)" (Ministry of Education of the People´s Republic of China, 2014), and it has been validated for Chinese pre-service teachers. Since there is no suitable ICT assessment tool for current pre-service teachers in China, evaluating their digital competence is challenging, and it is difficult for training units to improve their digital competence level. Yan et al. (2018) designed and validated this instrument to effectively diagnose pre-service teachers' self-perceived digital competence and provide a scientific basis for pre-service teachers' digital competence training. Hence, the scale included in the questionnaire used in this study was translated and validated from this instrument. The questionnaire consisted of two parts: (1) initial socio-demographic questions; and (2) sixty subjective five-level Likert response questions (strongly agree [5], agree [4], neither agree nor disagree [3], disagree [2], and strongly disagree [1]). This study sought to determine the accuracy and validity of the subjective self-assessment of digital competence of the study subjects through the socio-demographic questions. By determining the impact of socio-demographic and experience variables, the results can point to factors influencing instruction design and program development for pre-service and in-service teachers.
Data collection
The online questionnaire was reviewed for institutional review board privacy and security requirements before being sent to undergraduate students in the educational field at a large public university in Anhui province. The research objective was explained, and the collaboration of the students (pre-service teachers) was requested by encouraging them to participate in the study. At the same time, the questionnaire was sent to in-service teachers working in primary and secondary schools in Anhui province. For data collection, the questionnaire was administered during free time, so its application would not interfere with the usual rhythm of the classes. Finally, the survey was completed by 625 anonymous participants, of whom 498 remained for inclusion in the study after identifying and cleaning the data from incomplete or low-credibility questionnaires.
Data Analysis
All the data obtained for this study were analyzed with SPSS version 26 and JASP version 0.14.1. Firstly, Confirmatory Factor Analysis (CFA) techniques were used to validate the theoretical structure of the instrument. Then, well-known indices (GFI, SRMR, NFI, RFI, CFI) and a chi-square test were applied to assess the goodness of fit of the model. The average variance extracted (AVE) and composite reliability (CR) were computed for the overall explained variance and the internal consistency.
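For reference, CR and AVE can be computed from standardized factor loadings as in the following Python sketch (assuming uncorrelated measurement errors; the example loadings are hypothetical, not values from this study).

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum(1 - loading^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + np.sum(1.0 - lam ** 2))

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

# hypothetical loadings for one dimension (illustration only)
print(composite_reliability([0.72, 0.68, 0.75, 0.61]))
print(average_variance_extracted([0.72, 0.68, 0.75, 0.61]))
```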
Descriptive, correlational, and inferential statistics were used to analyze the socio-demographic questions, factors, and dimensions. Lastly, after applying the Shapiro-Wilk test and computing skewness and kurtosis to analyze the normality assumption, we applied the Pearson correlation coefficient to compare scale variables and parametric (t-test or one-way ANOVA) or non-parametric (Mann-Whitney or Kruskal-Wallis) tests. A significance level of 5% has been used in all hypothesis contrasts, and the appropriate effect size statistic (Cohen's d, eta squared, or rank-biserial correlation) has been included.
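A minimal Python/SciPy sketch of this decision pipeline for one dimension is given below; the arrays are illustrative, not the study data, and the rank-biserial formula shown is one common variant.

```python
import numpy as np
from scipy import stats

def compare_groups(pre, inservice, alpha=0.05):
    """Check normality per group, then run a t-test or a Mann-Whitney test
    with the rank-biserial correlation as the effect size."""
    pre, inservice = np.asarray(pre, float), np.asarray(inservice, float)
    normal = (stats.shapiro(pre).pvalue > alpha and
              stats.shapiro(inservice).pvalue > alpha)
    if normal:
        t, p = stats.ttest_ind(pre, inservice)
        return {"test": "t-test", "stat": t, "p": p}
    u, p = stats.mannwhitneyu(pre, inservice, alternative="two-sided")
    rbc = 1.0 - 2.0 * u / (len(pre) * len(inservice))  # rank-biserial correlation
    return {"test": "Mann-Whitney", "stat": u, "p": p, "effect_size": rbc}
```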
Reliability and validity of the questionnaire
To calculate the reliability of each of the nine dimensions and the three factors, Cronbach's alpha and the CR coefficient were used to determine the internal consistency (reliability), and CFA with the Diagonally Weighted Least Squares parameter estimation technique was applied to study the factorial validity of the scale. Table 2 shows the measurement model fit indices in the CFA, evidencing that the model fit of all three factors is good. The p-values of the chi-square tests and the ratio of chi-square to degrees of freedom show a good fit. Regarding the other fit indices, the table shows excellent values for the three factors: the values of SRMR were less than 0.05, the values of GFI, NFI, and RFI were greater than 0.90, and the values of CFI were close to 1. So, the global fit of the model in the three factors was good. Table 3 shows that Cronbach's alpha is greater than 0.8 throughout, indicating that the reliability of the scale is acceptable. The CR and AVE values for convergent validity show that CR is greater than 0.6 throughout. The average variance extracted (AVE) of four dimensions (FA2, FA3, FB2, FB3) is greater than 0.4, and the factor loadings reached good values (higher than 0.50), indicating that the reliability of this model is good.
Table 3 Results of the CFA, factor loadings, and reliabilities of the model
Descriptive analysis
The following are the results obtained from the pre-service and in-service teachers. They answered 60 questions grouped into three core factors: Basic Technology Literacy (17 items), Technical Support Learning (17 items), and Technical Support Teaching (26 items). As mentioned above, to avoid bias, participants responded on a Likert-type scale of 1 to 5. Table 4 shows the descriptive statistical results by dimensions and factors for all participants, reporting the means, standard deviations, minimum and maximum, and the P25, P50, and P75 percentiles.
Regarding pre-service and in-service teachers' digital competence in Basic Technology Literacy, the mean values of its three dimensions were 3.96, 3.86, and 4.14. For Technical Support Learning, the means of its three dimensions were 3.86, 3.88, and 3.82. For Technical Support Teaching, the means are 3.88, 3.86, and 3.86. Based on the means of the nine dimensions in Table 4, Fig. 5 demonstrates that the overall trend of the means is a decline. As the means were all over 3.8, participants believed they had a considerably good level of basic technology literacy, technical support learning, and technical support teaching. The figure also shows that the participants' attitudes towards information ethics and security are above 4.1. Table 5 compares the digital competence levels of pre- and in-service teachers, with significant differences in two dimensions (FA1 Consciousness and Attitude and FA2 Technical Environment). The rank-biserial correlations in these two contrasts evidence small effect sizes of the differences, indicating that in-service teachers have a more robust digital consciousness and a better technical environment than pre-service teachers. Based on the results in Table 5, Fig. 6 (comparing the means of the three measured factors in the two groups) demonstrates the general tendency of the participants' digital competence level in the three areas. In-service teachers' digital competence is better than pre-service teachers', and both groups have the highest level in Basic Technology Literacy and the lowest level in Technical Support Teaching.
Educational background
Regarding in-service teachers, there are significant differences between education degree levels and some areas of digital competence (Table 6): FB3 Research and Innovation (p = .007), FC1 Resource Preparation (p = .010), FC2 Process Design (p = .029), and the factors Technical Support Learning (p = .030) and Technical Support Teaching (p = .020), all below the .05 threshold. These results indicate that in-service teachers with higher education degrees have better digital competence in research and innovation, resource preparation, and process design. In general, in-service teachers with a higher education level have better digital competence in technical support learning and teaching.
Following the Kruskal-Wallis test, Dunn's post-hoc test was applied (Table 7) for FB3 Research and Innovation, FC1 Resource Preparation, FC2 Process Design, Technical Support Learning, and Technical Support Teaching.
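A minimal sketch of this test sequence is shown below: a Kruskal-Wallis test across degree groups followed by Dunn's post-hoc comparisons. The data are invented for illustration, and the Dunn step assumes the third-party scikit-posthocs package is available.

```python
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp  # assumed dependency; pip install scikit-posthocs

# Illustrative data: digital-competence scores grouped by education degree.
df = pd.DataFrame({
    "score": [3.2, 3.8, 4.1, 3.5, 4.4, 4.6, 3.9, 4.8, 4.7, 3.1, 3.6, 4.2],
    "degree": ["BA", "BA", "BA", "BA", "MA", "MA", "MA", "PhD", "PhD", "BA", "MA", "PhD"],
})

groups = [g["score"].values for _, g in df.groupby("degree")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")

# Dunn's post-hoc test with Bonferroni correction for pairwise comparisons,
# applied only when the omnibus test is significant.
if p_value < 0.05:
    pairwise = sp.posthoc_dunn(df, val_col="score", group_col="degree", p_adjust="bonferroni")
    print(pairwise)
```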
On the other hand, Fig. 7 shows that teachers with a master's degree or Ph.D. have the highest digital competence level, especially in the factor of Technical Support Learning (FB1 Self-learning, FB2 Communication and Collaboration, FB3 Research and Innovation) and the factor of Technical Support Teaching (FC1 Resource Preparation, FC2 Process Design, FC3 Practice Reserve).
For pre-service teachers, there are no significant differences between any area of digital competence and educational background; the p-values for each dimension are all greater than 0.05 (Table 8). Pre-service teachers' education degree level therefore does not influence their digital competence level.
Age (pre-service and in-service teachers) & years of teaching experience (in-service teachers)
For pre-service teachers (Table 9), there are significant differences between age and the areas of Basic Technology Literacy, Technical Support Learning, and Technical Support Teaching, as well as between age and the dimensions of consciousness and attitude, communication and collaboration, resource preparation, process design, and practice reserve, all with p-values below 0.05 and small effect sizes. These results indicate that older pre-service teachers perceive their digital competence as higher than younger ones do. Age-related differences were also observed for in-service teachers (Table 9): there are significant differences between age and technical support learning and the dimensions of technical environment, communication and collaboration, and research and innovation. These results suggest that younger in-service teachers perceive a better technical environment and rate their digital competence in communication and collaboration and in research and innovation, as well as in technical support learning overall, higher than older teachers do. Secondly, Table 9 shows significant differences between in-service teachers' teaching experience and Technical Support Learning and Technical Support Teaching, as well as the dimensions of technical environment, communication and collaboration, research and innovation, resource preparation, process design, and practice reserve, with p-values below 0.05 and small effect sizes. These results mean that teachers with more teaching experience report lower digital competence in the aspects mentioned.
ICT training courses
The Mann-Whitney test (Table 10) shows significant differences between pre-service teachers' ICT training coursework and their self-perception of digital competence in consciousness and attitude and in technical environment. This means that pre-service teachers believed that ICT training courses influence their consciousness, attitude, and technical environment, but have not helped them in technical practice. Table 11 shows no significant differences between in-service teachers' ICT training programs and any aspect of digital competence, indicating that current ICT training programs have not significantly impacted in-service teachers' digital competence.
Discussion
The focus of this study was not only to measure pre-service and in-service teachers' digital competence levels but also to explore the socio-demographic factors influencing their perceptions of digital competence in China, based on a sample from Anhui province. This sample can reflect the basic level of Chinese teachers' digital competence. An instrument designed by Yan et al. (2018), validated for Chinese pre-service teachers, was applied in this study.
The descriptive results of this study showed that both pre-service and in-service teachers have a good perception of digital competence in the areas of basic technology literacy, technical support learning, and technical support teaching. This finding is in line with the results of Chen et al. (2019), Galindo-Domínguez & Bezanilla (2021), and Valtonen et al. (2021), which similarly demonstrated that Chinese pre-service and in-service teachers have a good perception of digital competence. Secondly, both groups of participants showed good consciousness of and attitudes towards using IT in their daily work life, and their information ethics and security awareness was quite good. These results are in line with earlier studies (Chen et al., 2020; Li et al., 2019b; Ma et al., 2019), but contrast with the results of Chen, Zhou, Wang, et al. (2020) regarding information security cognition and problem-solving skills. Thirdly, this study also suggested that Chinese pre-service and in-service teachers' technical support practice is not strong in the teaching and learning aspects, which replicates the findings of earlier studies in other countries (Charbonneau-Gowdy, 2015; Munyengabe et al., 2017; Ogodo et al., 2021; Valtonen et al., 2015; Wikan & Molster, 2011).
This study found that in-service teachers had higher perceived digital competence than pre-service teachers in the three measured areas. For consciousness and attitude and for technical environment, in-service teachers showed a significantly higher level than pre-service teachers; Chen et al. (2019) suggested that increasing the frequency of ICT use would probably enhance teachers' digital competence. The findings of this study show that although the current university ICT course significantly predicted pre-service teachers' perception, it did not affect their educational practice. Firstly, these results reflect the governmental achievements in information construction for K-12 education. Secondly, they indicate that for in-service teachers, frequent professional practice may prompt them to reflect on their attitudes towards technological education and help them adjust their digital competence, skills, and knowledge to the requirements of technology-supported teaching.
Factors influencing pre- and in-service teachers' digital competence were investigated. Firstly, for in-service teachers, this study finds that younger teachers have a higher digital competence level than older teachers in terms of technical support learning. This result is similar to what Barahona et al. (2020) and Li et al. (2016) reported: in-service teachers' age significantly impacts their level of digital competence, with younger teachers generally showing higher digital competence than older teachers. On the other hand, this study indicates that in-service teachers with less teaching experience possess higher digital competence levels, contrasting with the findings of Hinojo-Lucena et al. (2019) and Pozo Sánchez et al. (2020). Secondly, Zhao et al. (2021) found that in-service teachers with a higher educational background have a better self-perception of their digital competence level, which is in line with this study's result that teachers with higher education degrees have a better level of digital competence in the technical support learning and teaching aspects. This implies that people with higher education may be more willing to learn and use ICT in their professional practice.
For pre-service teachers, age affects their perception of digital competence, but there are no significant differences in their perception of digital competence by gender or educational background. The relation between age and digital competence level for pre-service teachers indicates that older pre-service teachers have a higher perception of digital competence than younger ones in all three factors. On the other hand, this study confirms the findings of previous studies indicating that gender, as a socio-demographic factor, has no impact on either in-service or pre-service teachers' perception of digital competence (Cabero Almenara, 2017; Tondeur et al., 2018). However, this finding is opposed to the results of Guillén-Gámez et al. (2021).
The Ministry of Education of the People's Republic of China (2019) promotes the development of teachers' IT ability training in various regions through demonstration projects. Each in-service teacher should receive more than 50 hours of training every 5 years, of which at least 50% should be practical application hours. Moreover, a series of governmental documents have been issued with the objective of improving teachers' digital competence, such as the Guidance from the Ministry of Education on strengthening the application of the "three classrooms" (Ministry of Education of the People's Republic of China, 2020a) and the Guide for Online Training of Kindergarten Teachers in Primary and Secondary Schools (Ministry of Education of the People's Republic of China, 2020b).
Previous studies in different countries indicated that pre-service teachers' ICT training significantly impacts their future ICT use in learning processes and strengthens their instructional practice (Al-Abdullatif, 2019; Aslan & Zhu, 2016; Cabello et al., 2020; Valtonen et al., 2021). For instance, Tondeur et al. (2018) suggest that pre-service teachers' self-perception of digital competence has a significant impact on their future pupils' ICT use. Since digital competence for teaching is a powerful skill for any education professional, Chinese universities are committed to planning, designing, and evaluating digital competence throughout their degree programmes.
Current Chinese teachers' digital competence training draws on Western countries' experience, and a series of reform-minded teaching practices have been applied. Similar to Li, Wu, et al. (2016a), this study found that, despite the influential policy recommendation documents, the current ICT training programs have no impact on pre-service or in-service teachers' digital competence. This indicates that the reform-minded teaching practices that mentors developed do not necessarily guarantee effective mentoring to support teachers' IT learning and teaching reform. Therefore, further training (higher education or ICT training courses) should be guided to make the most of digital tools in professional practice. As Wang (2001) noted regarding collaboration in teaching and the planning of teaching, teacher educators should pay attention to the influence of digital instructional contexts on mentoring and the kinds of learning opportunities that mentoring creates for teachers in different digital contexts. When designing mentoring programs and arranging mentoring relationships, teacher educators need to consider how to restructure school contexts and help teachers learn how to instruct students.
Conclusions
In recent years the Chinese government has built an excellent digital environment; by 2020, China had achieved full coverage of Internet infrastructure. Covid-19 introduced considerable changes to the country's economy and lifestyle, including in the educational field. Although the epidemic was brought under control within two months in China, teachers' digital competence received great attention in practice during the Covid-19 pandemic. From this perspective, our study focuses on pre-service and in-service teachers from one province to explore the factors influencing their perception of digital competence.
According to the findings of this study, Chinese pre-service and in-service teachers have a good perception of digital consciousness and attitude, particularly regarding information ethics and security awareness. However, both pre-service and in-service teachers believed that their educational practice in the technical support teaching and technical support learning areas is insufficient. Besides, in-service teachers demonstrated a higher perception of digital competence in the three areas than pre-service teachers. Furthermore, we also found that several factors (e.g., educational background, age, years of teaching experience, and ICT training courses) influence pre-service or in-service teachers' perception of digital competence. First, in-service teachers with higher education have a higher perception of digital competence, particularly in the technical support teaching and technical support learning areas. Then, the age and years of teaching experience of in-service teachers were negatively correlated with the perception of digital competence, whereas pre-service teachers' age was positively correlated with it. Therefore, this study indicates that age is a decisive factor influencing the digital competence level of pre-service and in-service teachers. Based on these findings, we offer insights for improving pre-service teachers' digital competence education in universities and for developing well-designed ICT training courses for in-service teachers.
This study has some limitations. Regarding data collection, because the sample consisted of primary and secondary education teachers in Anhui province, the results cannot simply be generalized to the whole country. In addition, the study used an online questionnaire to gather the data, which may have excluded participants with a low level of digital competence who were unwilling to answer the questionnaire. Regarding the findings, the study was limited in its ability to investigate how different current training courses impact pre- and in-service teachers' attitudes and behavioural intentions towards the use of ICT. Finally, the instrument was designed for pre-service teachers and may therefore underestimate in-service teachers' digital competence.
This research has raised many questions in need of further investigation. Given the complexity of digital competence and its interrelated factors shown by this quantitative study, exploratory qualitative analyses could be considered to examine these results more profoundly and comprehensively. On the other hand, a longitudinal study could analyse the evolution of in-service teachers' digital teaching competence during a long training course, and a longitudinal study investigating how pre-service teachers' digital training courses influence their future work could examine the perceptions of the different subjects involved.
|
v3-fos-license
|
2018-12-29T13:31:19.948Z
|
2016-08-09T00:00:00.000
|
132482884
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.biogeosciences.net/14/2527/2017/bg-14-2527-2017.pdf",
"pdf_hash": "b76026df7656787833aea70805d3f2987d04c107",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46178",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"sha1": "b76026df7656787833aea70805d3f2987d04c107",
"year": 2016
}
|
pes2o/s2orc
|
Consistent EO Land Surface Products including Uncertainty Estimates
Earth Observation (EO) land products have been demonstrated to provide a constraint on the terrestrial carbon cycle that is complementary to the record of atmospheric carbon dioxide. We present the Joint Research Centre Two-stream Inversion Package (JRC-TIP) for retrieval of variables characterising the state of the vegetation-soil system. The system provides a set of land surface variables that satisfy all requirements for assimilation into the land component of climate and numerical weather prediction models. Being based on a one-dimensional representation of the radiative transfer within the canopy-soil system, such as those used in the land surface components of advanced global models, the JRC-TIP products are not only physically consistent internally, but also achieve a high degree of consistency with these global models. Furthermore, the products are provided with full uncertainty information. We describe how these uncertainties are derived in a fully traceable manner, without any hidden assumptions, from the input observations, which are typically broadband white sky albedo products. Our discussion of the product uncertainty ranges, including the uncertainty reduction, highlights the central role of the leaf area index, which describes the density of the canopy. We explain the generation of products aggregated to coarser spatial resolution than that of the native albedo input and describe various approaches to validation of JRC-TIP products, including the comparison against in-situ observations. We present a JRC-TIP processing system that satisfies all operational requirements and explain how it delivers stable climate data records. As many aspects of JRC-TIP are generic, the package can serve as an example of a state-of-the-art system for retrieval of EO products, and this contribution can help the user to understand advantages and limitations of such products.
Introduction
This special issue addresses the consistent assimilation of multiple data streams into biogeochemical models. Among the available data streams, long-term high precision observations of the atmospheric carbon dioxide concentration (see, e.g., Houweling et al., 2012) provide an indispensable constraint for the (process parameter) calibration of terrestrial biosphere models in Carbon Cycle Data Assimilation Systems (CCDAS, Rayner et al., 2005). The strength of this constraint is quantified by significant reductions of uncertainty in simulated terrestrial carbon fluxes diagnosed over (Kaminski et al., 2002; Rayner et al., 2005) or predicted after (Scholze et al., 2007; Rayner et al., 2011) the assimilation window. In recent multi-data stream assimilation studies at global scale (Scholze et al., 2016; Schürmann et al., 2016) the constraint through the flask sampling network has proven essential to achieve realistic magnitudes of the terrestrial carbon sink. The flask sampling network alone does, however, only constrain a sub-space of the space of unknown process parameters. Thus, additional, complementary constraints are required to further reduce uncertainties in the system. Such complementarity has been demonstrated for Earth Observation (EO) products (Gobron et al., 2007; Pinty et al., 2011b) of the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), which provide information on, e.g., the vegetation phenology and colour. The effect on carbon and water fluxes of assimilating FAPAR in addition to atmospheric carbon dioxide samples is, for example, quantified by Kaminski et al. (2012) and Schürmann et al. (2016).
The assimilation of an EO product such as FAPAR requires the capability to simulate (by a so-called observation operator) its counterpart from the model's prognostic variables, i.e. the variables that the integration scheme of the model's dynamical equations steps forward in time (Kaminski and Mathieu, 2016). For a land product such as FAPAR, the construction of the observation operator requires solving the equations for the radiative transfer (RT) within the canopy-soil system. The RT within the canopy is complicated as the leaves, which scatter the solar radiation, are large (compared to the wavelength) and vary in their orientation and optical properties. For large-scale terrestrial models it is (at least computationally) infeasible to resolve the small-scale three-dimensional heterogeneity of the canopy. The most advanced RT representations in such models are one-dimensional approximations relying on so-called two-stream (or two-flux) approaches.
The retrieval of a set of EO products describing the evolution of the canopy-soil system, e.g. leaf area index (LAI) or FAPAR, also has to rely on an RT model, in EO terminology called the forward model, to simulate the partitioning of the incoming solar radiation into the contributions from the individual radiative fluxes, i.e. those absorbed in, transmitted through, and reflected by the canopy. In order to exploit the full potential of EO, this forward model should be as close as possible to the RT model used in the observation operator for assimilation. The joint retrieval of a set of EO products with the same RT model is a pre-requisite to assure physical consistency (including conservation of energy) of the retrieved products. The use in a CCDAS requires the retrieval product to be provided with a (typically space- and time-dependent) uncertainty estimate. For assimilation of multiple products, the products need to be quality assured, i.e. they need to be validated against independent information. Finally, the retrieval algorithm must be efficient enough to allow global-scale processing, preferably near real time.
The Joint Research Centre Two-stream Inversion Package (JRC-TIP, Pinty et al., 2007, 2008) is a retrieval package that fulfils the above conditions. It is built around a two-stream model (Pinty et al., 2006) of the RT in the canopy-soil system (see section 2) and applies a joint inversion approach (Tarantola, 2005; see section 3) that combines the information in observed radiative fluxes with prior information on the model parameters (see section 4.1). Its products are posterior estimates of the model parameters, i.e. effective LAI, spectrally variant background reflectance, and effective canopy reflectance and transmittance (where effective indicates model dependence, see section 2), and all radiant fluxes, including (but not limited to) the model counterparts of the ones that have been observed.
The retrieved products are available with uncertainty estimates and their covariance (sometimes termed error covariance). The package is highly flexible: it can be operated for any combination of narrowband, broadband, or hyperspectral radiation flux observations (Lavergne et al., 2006) and on all spatial scales above 100 m (where lateral flux components can safely be neglected), even for high canopies. The radiative flux that is accessible to observations from space is the reflected sunlight, i.e. the albedo, once a complex series of procedures to remove atmospheric effects has been applied, together with performing the required integration over exiting and/or Sun illumination angles.
Hence, for EO applications JRC-TIP is typically set up to use observed albedo as input. Healthy green vegetation is characterised by a strong albedo difference between the visible (VIS) and near infrared (NIR) domains of the spectrum. Accordingly, the system is typically operated on albedo input in these two wavebands. In this configuration it has been applied to broadband albedos derived from MODIS (Pinty et al., 2007, 2008, 2011a, b), MISR (Pinty et al., 2007, 2008), and GlobAlbedo (Disney et al., 2016). Section 4 describes enhancements of robustness and efficiency through the use of so-called TIP tables, i.e. look-up tables of quality-controlled retrievals over a fine discretisation of the input space (Clerici et al., 2010; Voßbeck et al., 2010). Section 4 also discusses products from a large-scale processing exercise (Pinty et al., 2011a, b) based on MODIS collection 5 broadband albedo input, with a focus on the reported uncertainty estimates. Validation of JRC-TIP products is described in section 5.
Radiative Transfer Model
The two-stream model at the core of JRC-TIP is described in full detail by Pinty et al. (2006). We, hence, restrict ourselves to a brief summary of the main features. The model is designed to solve the radiation balance for the canopy-soil system (see figure 1). It simulates the solar radiant fluxes scattered by, transmitted through, and absorbed in a vegetation canopy, accounting for the multiple interactions between the vegetation layer and its underlying background (see figure 2).
This 1-D model provides a solution to the black background problem, which follows the two-stream formulation established originally by Meador and Weaver (1980). It ensures the correct balance between the scattered, transmitted and absorbed radiant fluxes not only for structurally homogeneous but also for heterogeneous canopies. The applicability to heterogeneous canopies relies on the finding by Pinty et al. (2004, section 3.3) that a solution to a 3-D flux problem satisfying the conditions imposed by a "radiatively independent volume" can always be achieved using a 1-D representation. The model's canopy state variables required for the correct flux representation are, however, so-called effective variables. They deviate from the true canopy variables and are thus only meaningful in the context of this model. These effective variables are a spectrally invariant quantity, namely the Leaf Area Index (LAI), and spectrally dependent parameters, namely the leaf single scattering albedo w_l = r_l + t_l and the ratio d_l = r_l/t_l (identified here as the asymmetry factor), where r_l and t_l correspond to the leaf reflectance and transmittance, respectively. The albedo of the background, r_g, is itself defined as the true (by contrast to effective) value and retrieved as such. And clearly, for all fluxes true values are simulated.
The possibility to use a (1D) two-stream representation to solve a flux problem irrespective of the 3D complexity of the scene conditions means that the model can be operated in inverse mode to retrieve a set of state variables for the canopy-soil system that allows an accurate flux representation. The model is implemented in a numerically efficient, modular, and portable form, to simplify its integration into climate and numerical weather prediction (NWP) models.
Inverse Model
JRC-TIP applies the joint inversion approach of Tarantola (2005) (discussed as Bayesian inversion by Rayner et al. (2016) in this special issue): it estimates the state vector (in the following also called parameter vector) from a given set of observations and the available prior information. The a priori state of information is quantified by a probability density function (PDF) in parameter space, the observational information by a PDF in observation space, and the information from the model by a PDF in the joint space, i.e. the Cartesian product of parameter and observation spaces. The inversion combines all three sources of information and yields a posterior PDF in the joint space. Prior and observational PDFs are difficult to specify. We use Gaussian shapes with respective mean values denoted by x_0 and d and respective covariance matrices denoted by C(x_0) (prior parameter uncertainty) and C(d) (data uncertainty). The data uncertainty is the sum of uncertainties due to errors in the observational process, C(d_obs), and errors in our ability to correctly model the observations, C(d_mod):

C(d) = C(d_obs) + C(d_mod)    (1)

Some observational products provide uncertainty ranges and their correlation, i.e. the entire C(d_obs).
If this is not the case, we often assume uncorrelated uncertainties, i.e. zero off-diagonal elements.
The diagonals are populated with the squares (i.e. variances) of the 1-sigma uncertainty ranges, for which we typically proceed as follows: in C(d_obs) we often use values proportional to d with a floor value. As the value in C(d_obs) typically considerably exceeds that in C(d_mod) (see section 5), we neglect the latter. The exception is for small values of d, where the floor value is supposed to represent C(d_mod). Note that, in the typical setup, with d being broadband albedo products, there is no additional contribution from representation error (see, e.g., Heimann and Kaminski, 1999; Kaminski et al., 2010), as the model and the observations are defined on the same space-time grid. For later use it is convenient to have two separate notations for the model simulation of a flux vector from a given state vector x. For the simulation of the full vector of all flux components y we use N, and when the flux vector is restricted to those components for which we have observations y_obs we use M, i.e. y = N(x) and y_obs = M(x).
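To make the covariance construction concrete, the sketch below builds a diagonal C(d) from an albedo vector using a relative one-sigma uncertainty and a floor value; the 5% relative uncertainty and 2.5e-3 floor quoted later for the MODIS processing are used here purely as illustrative defaults, and the albedo values are invented.

```python
import numpy as np

def build_data_covariance(albedo: np.ndarray,
                          rel_uncertainty: float = 0.05,
                          floor: float = 2.5e-3) -> np.ndarray:
    """Diagonal C(d): variance = (max(rel * albedo, floor))^2, off-diagonals zero."""
    sigma = np.maximum(rel_uncertainty * albedo, floor)
    return np.diag(sigma ** 2)

# Illustrative broadband albedo pair (VIS, NIR).
d = np.array([0.04, 0.25])
C_d = build_data_covariance(d)
print(C_d)
```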
The inverse model is flexible with respect to the number and width of spectral bands that are simulated and the subset of simulated fluxes y_obs that are observed. Every combination is feasible; Lavergne et al. (2006) provide examples.
Since the model is only weakly non-linear, we can approximate the posterior PDF by a Gaussian PDF. The corresponding marginal PDF in parameter space is thus also Gaussian, with mean value x and covariance C(x). The mean x is approximated by the maximum likelihood point, i.e. the minimum of the misfit function

J(x) = 1/2 [ (M(x) - d)^T C(d)^-1 (M(x) - d) + (x - x_0)^T C(x_0)^-1 (x - x_0) ],    (4)

and C(x) is approximated by the inverse of the misfit function's Hessian, H, evaluated at the minimum:

C(x) = H^-1.    (6)

To understand this relation it is instructive to look at the case of a linear model (denoted by M). The Hessian is then the sum of two terms, one reflecting the strength of the constraint by the prior information, and the other reflecting the observational constraint. Typically, adding the observational constraint increases the curvature of the cost function, which via equation 6 translates to a reduction in uncertainty compared to the prior. One of the uncommon counter-examples is provided by Lavergne et al. (2006).
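The quadratic misfit function and the Hessian-based posterior covariance described above follow the standard Gaussian Bayesian least-squares form; the sketch below illustrates this machinery for a linear forward model M, with all matrices and values purely illustrative rather than taken from JRC-TIP.

```python
import numpy as np

# Illustrative linear forward model y_obs = M x (2 observations, 2 parameters).
M = np.array([[0.6, 0.1],
              [0.2, 0.9]])
x0 = np.array([1.0, 0.5])                 # prior mean
C_x0 = np.diag([25.0, 0.1])               # prior covariance (loose prior on first parameter)
d = np.array([0.8, 0.7])                  # observations
C_d = np.diag([0.05**2, 0.05**2])         # data covariance

# Misfit J(x) = 0.5 [(Mx - d)^T C_d^-1 (Mx - d) + (x - x0)^T C_x0^-1 (x - x0)]
def misfit(x):
    r_obs, r_pri = M @ x - d, x - x0
    return 0.5 * (r_obs @ np.linalg.solve(C_d, r_obs) + r_pri @ np.linalg.solve(C_x0, r_pri))

# For a linear model the Hessian is constant: H = M^T C_d^-1 M + C_x0^-1,
# the posterior covariance is C_x = H^-1, and the posterior mean minimises J.
H = M.T @ np.linalg.inv(C_d) @ M + np.linalg.inv(C_x0)
C_x = np.linalg.inv(H)
x_post = C_x @ (M.T @ np.linalg.inv(C_d) @ d + np.linalg.inv(C_x0) @ x0)

print("posterior mean:", x_post)
print("posterior covariance:\n", C_x)
print("J at posterior mean:", misfit(x_post))
```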
From the optimal parameter set we can simulate (see equation 3) all radiant fluxes (including the non-observed ones). To assess the strength of the observational constraint on a simulated radiant flux, we use N', the first derivative of N, to propagate the posterior parameter uncertainties forward into the uncertainty of the simulated vector of radiant fluxes, C(y):

C(y) = N' C(x) N'^T.    (7)

Equation 7 is particularly useful for comparing the TIP results with independent observations. Evaluating equation 7 for the prior uncertainty C(x_0) instead of the posterior uncertainty C(x), i.e. for a case without observational constraint, yields a prior uncertainty for the flux:

C(y_0) = N' C(x_0) N'^T.    (8)

For any component of the flux vector we can quantify the added value/impact of the observations by the uncertainty reduction or knowledge gain relative to the prior,

1 - sigma(y_i)/sigma(y_i,0),    (9)
where sigma(y_i) and sigma(y_i,0) respectively denote the 1-sigma uncertainty ranges, the squares of which populate the diagonals of C(y) and C(y_0). For example, if sigma(y_i) is 90% of sigma(y_i,0), then the uncertainty reduction is 10%; i.e. we have increased our knowledge of y by 10%.
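A short numerical illustration of equations (7)-(9) is given below; the Jacobian and covariance matrices are invented placeholders used only to show the propagation and the per-component knowledge gain.

```python
import numpy as np

# Illustrative Jacobian of the full flux simulation N with respect to the state x.
N_prime = np.array([[0.4, 0.3],
                    [0.1, 0.8],
                    [0.5, 0.2]])          # 3 flux components, 2 state variables

C_x0 = np.diag([25.0, 0.1])               # prior state covariance (illustrative)
C_x = np.array([[0.9, 0.02],
                [0.02, 0.05]])            # posterior state covariance (illustrative)

# Equations (7)/(8): propagate state uncertainty to flux uncertainty.
C_y = N_prime @ C_x @ N_prime.T
C_y0 = N_prime @ C_x0 @ N_prime.T

sigma_y = np.sqrt(np.diag(C_y))
sigma_y0 = np.sqrt(np.diag(C_y0))

# Equation (9): uncertainty reduction (knowledge gain) per flux component.
reduction = 1.0 - sigma_y / sigma_y0
print("uncertainty reduction per flux component:", np.round(reduction, 3))
```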
The simultaneous retrieval of all state variables and the associated fluxes within a single model assures physical consistency between the derived products. This includes the simulated counterparts y_obs of the observed flux components.
This inversion approach is relatively generic, i.e. it similarly applies to further RT models in the optical domain (see, e.g., Lavergne et al., 2007; Lewis et al., 2012) or other spectral domains, e.g. the passive microwave domain (see also Kaminski and Mathieu, 2016).
Equation 4 is minimised by a so-called gradient algorithm that relies on code for evaluation of J and its gradient. Further derivative code is used to evaluate equations 5 and 7.
b Value adopted for the bare soil case with a correlation between the two spectral domains of 0.8862 set in C(x0).
c Value adopted under occurrence of snow with a correlation between the two spectral domains of 0.8670 set in C(x0).
4 Operational Processing
Prior Information
The radiative flux component that is accessible to observations from space is the reflected flux (albedo). As photosynthesis is driven by absorption in the VIS, our focus is on the flux partitioning in this domain of the spectrum. Pinty et al. (2009) demonstrate that under typical, non-snow conditions and with known optical properties at leaf level, the background reflectance largely determines the albedo in the VIS, and the effective LAI the albedo in the NIR. Hence, it is favourable to operate JRC-TIP in both VIS and NIR, with albedo observations in these two broad bands. Including the NIR brings in one additional observational constraint but also adds three spectrally variant state variables to the inverse problem. This is partly compensated by (approximately) known relations of the background reflectance across the VIS and NIR domains. This relation, known as the soil line, translates to our inversion formalism as a correlated uncertainty and is visualised by the ellipsoids (indicating the 1.5 sigma uncertainty ranges) shown in figure 3. The prior information is provided in table 1. We stress the high prior variance of 25 for the effective LAI, a deliberately conservative assumption that results in a low weight on the prior term in equation (4).
Observations
The specific quantities to be discussed in section 4.4 have been retrieved (as described by Pinty et al. (2011a)) from the MODIS collection V005 (MCD43B3) broadband white sky albedo (WSA) products at 1 km resolution (Schaaf et al., 2002). The WSA product uses a synthesis period of 16 days, in which observed reflectances under various illumination angles are used to calculate the spherical integral (isotropic illumination). The albedo product provides a data set every 8 days, such that filtering out every second data set yields a sequence of data sets in which each member is based on its own 16-day synthesis period. This procedure maximises the temporal independence of the observational input for JRC-TIP. The MODIS collection V005 WSA product provides a quality flag associated with all spectral bands, but no covariance of uncertainty. As described in section 3, we populate the non-diagonal elements of C(d) with 0. For the diagonal elements, the quality flag is used such that good (other) quality is mapped onto a one-sigma uncertainty range of 5% (7%) relative to the flux, and a floor value of 2.5 x 10^-3 is set. All other observations are discarded. In addition, the MODIS snow indicator is used to trigger a switch of the prior for the background reflectance from the non-snow to the snow version.
Robustness and Efficiency
In the above-described setup all observations are restricted to broadband VIS/NIR albedo pairs, which can theoretically take values in the two-dimensional domain [0, 1] x [0, 1] (the albedo plane). Observations retained for processing with JRC-TIP fall either in the 5% or the 7% uncertainty case.
For both uncertainty cases we now apply JRC-TIP over a discretisation of the albedo plane with a step size of 10^-3 (i.e. a factor of 2.5 below the minimum uncertainty) on both axes (Clerici et al., 2010; Voßbeck et al., 2010). This provides us with a set of 2 x 1000 x 1000 = 2 million JRC-TIP retrievals, which populate the 2-D space of theoretically possible albedo input observations densely enough for all practical purposes. We note that, in practice, only a sub-domain of the albedo plane is covered by observations; figure 4 shows the locations in the albedo plane of all albedo pairs used by Pinty et al. (2011a) for their processing of the year 2005, excluding those with a snow flag. Switching from a non-snow to a snow prior adds a factor of 2, i.e. there are in total 4 million retrievals for the polychromatic leaf scenario.
We denote the above-described set of retrievals as the TIP table. Once the TIP table is generated, the retrieval for any given albedo input pair can be performed through a look-up in the TIP table (Voßbeck et al., 2010). We stress that the role of the TIP table is different from the traditional use of look-up tables (LUTs) in retrieval schemes: while traditional LUTs relate input to output of the forward model (i.e. state variables to albedos), the TIP table relates input to output of the inverse model JRC-TIP (i.e. albedos to the complete set of variables retrieved by TIP, including uncertainty ranges and auxiliary information). The use of the TIP table in a processing system has four advantages over the use of a standard JRC-TIP retrieval. 2. Second, it simplifies quality control: Clerici et al. (2010) and Voßbeck et al. (2010) describe a number of iterative procedures to enhance the quality of the retrievals in the TIP table. They exploit, for example, the requirement of a smooth dependence of the solution on the input albedos to detect outliers. 3. Third, the TIP table approach assures stability of any Climate Data Record (CDR) that is generated from a stable albedo CDR: by construction, JRC-TIP will always retrieve the same values for all variables and uncertainties from the same albedo input with the same uncertainty range. For the standard JRC-TIP retrieval this is only guaranteed when the computing environment remains unchanged.
4. Fourth, a processing system relying on the TIP table is agile. When a component of the JRC-TIP retrieval procedure is improved, the only change required in the processing system is an update of the TIP table.
For albedo input products that provide per-pixel uncertainty ranges, the TIP table uses a finer discretisation and further dimensions in the uncertainty space, but the same general approach applies.
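As an illustration of the TIP-table idea, the sketch below quantises a VIS/NIR albedo pair onto a 10^-3 grid and uses the quantised pair as a key into a dictionary of pre-computed retrievals; the retrieval values shown are placeholders, not actual JRC-TIP output.

```python
import numpy as np

STEP = 1e-3  # discretisation of the albedo plane

def table_key(albedo_vis: float, albedo_nir: float, snow: bool, high_quality: bool):
    """Quantise an albedo pair (plus prior/uncertainty case) to a TIP-table key."""
    i = int(round(albedo_vis / STEP))
    j = int(round(albedo_nir / STEP))
    return (i, j, snow, high_quality)

# A toy "TIP table": in the real system every entry would hold the full set of
# retrieved variables, their covariance, and auxiliary quality information.
tip_table = {
    table_key(0.040, 0.250, snow=False, high_quality=True): {
        "lai_eff": 1.8, "lai_sigma": 0.6, "fapar": 0.45, "fapar_sigma": 0.08,
    },
}

def lookup(albedo_vis, albedo_nir, snow=False, high_quality=True):
    return tip_table.get(table_key(albedo_vis, albedo_nir, snow, high_quality))

print(lookup(0.0401, 0.2499))  # rounds onto the same grid cell -> same retrieval
```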
For example, Disney et al. (2016) use a two-dimensional uncertainty space, with one dimension each for the uncertainties in VIS and NIR; an extra dimension for their correlation was not included.

We analyse the JRC-TIP products over the range of the albedo input plane that is actually covered by observations, more specifically the range covered by the MODIS collection 5 albedo 1 km input products for the year 2005 that were processed by Pinty et al. (2011a, b). We focus on snow-free background conditions, i.e. all prior values and uncertainties are spatially invariant. We show for effective LAI and background reflectance in the VIS (figure 4), as well as for effective canopy single scattering albedo in the VIS and FAPAR (figure 5), the retrieved mean values (top panels) and one-sigma uncertainty ranges (middle panels), as well as the uncertainty reduction/knowledge gain as defined by equation (9) (bottom panels), over the albedo plane. The first point to note is the limited sub-set of the albedo plane that is covered by actual albedo observations. A further point to highlight is the fundamental role of the effective LAI: high effective LAI values correspond to relatively high posterior LAI uncertainty and little knowledge gain, because the dense canopy can only be penetrated to a limited extent. For the same reason, we can infer little information on the background under dense canopies, i.e. there is a high posterior uncertainty and little knowledge gain. By contrast, given the large amount of canopy material, we can substantially reduce the uncertainty in the single scattering albedo, i.e. we have a large knowledge gain. Low effective LAI characterises an almost transparent canopy: uncertainty on LAI and background reflectance is low and there is a high knowledge gain from the observations. The low amount of canopy material limits the knowledge gain for the single scattering albedo, i.e. we are left with a relatively high uncertainty. In this regime the observed albedo is determined by the background reflectance (shown for the visible domain in panel b of figure 4). The pattern of the mean value for FAPAR is similar to that for LAI. The uncertainty is, however, different: while the LAI uncertainty grows steadily with LAI itself, the FAPAR uncertainty exhibits two separated domains of high uncertainty. On a line of constant WSA NIR, one peak is located at WSA VIS around 0.03 and the other around 0.13. As pointed out by Pinty et al. (2011b), this reflects the influence of the soil background, which for LAI values in the range from 0.3 to 0.5 exhibits an equally complex uncertainty structure (panel d of figure 4). The minimum of the misfit function J of equation (4) is displayed in the bottom right panel of figure 5. Another point is that the aggregation also needs to be performed on the uncertainty. This requires a specification of the spatial uncertainty correlation and is certainly less complicated at the albedo level than at the level of JRC-TIP products.
Validation
The validation of JRC-TIP and its generated products is achieved through a variety of complementary stages. The first one consists in assessing the performance of the direct model, namely the two-stream model that is further used in inverse mode to generate the JRC-TIP products. This performance can be thoroughly benchmarked against comprehensive 3-D Monte-Carlo models for a series of virtual canopies exhibiting different levels of complexity regarding the radiation transfer regime that these canopies can represent (see section 3 of Pinty et al., 2006). The RAdiation transfer Model Intercomparison (RAMI) initiative (http://rami-benchmark.jrc.ec.europa.eu) offers such a platform for a range of simple and very complex canopy scenarios (Pinty et al., 2001; Widlowski et al., 2007).
The 1D model implemented in JRC-TIP was found to be in very good agreement, i.e., better than 3% in most cases, with albedos from accurate and realistic simulations of complex 3D scenarios in both the red and near-infrared spectral regions.
While this first set of RAMI exercises addressed the accuracy of simulated albedo, i.e. C(d_mod) in equation (1), a further exercise in the RAMI framework (termed RAMI4PILPS) addressed the accuracy and consistency of the absorbed, reflected, and transmitted radiative fluxes retrieved by inverse models of the soil-vegetation-atmosphere transfer (Widlowski et al., 2011). This exercise thus offers the possibility to assess the performance of JRC-TIP with regard to its ability to partition the incoming solar radiation. For the extreme conditions of computer-reconstructed 'actual' canopy scenarios, with a range of sun zenith angles and vegetation backgrounds including snow-covered conditions, the vast majority of the absorbed flux values (i.e. FAPAR) falls within +/-10% relative to the values estimated by the reference Monte-Carlo model (see section 3 of Widlowski et al., 2011).

The capability of JRC-TIP to reconstruct solar fluxes that can currently be measured in situ by dedicated instruments, e.g. direct or diffuse canopy albedos and transmission, offers a definite way to assess the performance of the procedure. However, the crux of the matter with such an exercise lies in the large spatial variability of the canopy at various scales, such that the spatial and temporal sampling of a given site must be achieved carefully and quite extensively. A first attempt to evaluate the JRC-TIP products generated from MODIS white sky albedo input values over a fluxnet site is described in Pinty et al. (2011c). In this study the authors capitalised on an ensemble of LAI-2000 measurements systematically acquired over multiple years along a 400 m transect, as well as a series of photos taken from a tower emerging from the top of this deciduous mid-latitude forest. We note the one-to-one relation between the direct transmission, T_UnColl, and the effective LAI through the Beer-Bouguer-Lambert law, where mu_0 denotes the cosine of the sun zenith angle (Pinty et al., 2006, 2009), i.e. mu_0 = 1 when the sun is at nadir.
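The exact expression used in Pinty et al. (2006, 2009) is not reproduced in the text above; as a hedged illustration of the one-to-one mapping between direct transmission and effective LAI, the sketch below assumes the common Beer-Bouguer-Lambert form with an extinction factor of 0.5 (uniform leaf angle distribution), which is an assumption rather than a quotation of the JRC-TIP formulation.

```python
import numpy as np

def direct_transmission(lai_eff: float, mu0: float = 1.0, g: float = 0.5) -> float:
    """T = exp(-g * LAI_eff / mu0); g = 0.5 corresponds to a uniform (spherical)
    leaf angle distribution and is assumed here for illustration only."""
    return float(np.exp(-g * lai_eff / mu0))

def effective_lai(transmission: float, mu0: float = 1.0, g: float = 0.5) -> float:
    """Invert the relation to recover effective LAI from measured direct transmission."""
    return float(-mu0 / g * np.log(transmission))

print(direct_transmission(2.0))   # ~0.368 for LAI_eff = 2 with the sun at nadir
print(effective_lai(0.368))       # ~2.0
```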
Figure 8 shows the in-situ observations together with the direct transmission derived by JRC-TIP from MODIS collection 5 broadband WSA products at 500 m and 1 km. Grey and blue shaded ranges indicate the spatial variability along the transect at which the observations were collected, and the red error bars indicate the uncertainty range that is part of the retrieved product. The left-hand panel is based on 500 m MCD43 input albedos and exhibits a slightly better fit to the in-situ observed fluxes than the right-hand panel, which is based on the MCD43 1 km albedo product. In this example the root mean squared error (RMSE) is used (see the upper right corner of each panel) as a simple metric that quantifies the fit. Temporal correlation or more sophisticated metrics that take the uncertainty in products and in observations into account are possible alternatives. We point out that the uncertainty ranges displayed for observed and retrieved transmittance capture different aspects of uncertainty: while the ranges in the observations cover spatial variability along the transect, the product error bars refer to the pixel average and indicate the one-sigma uncertainty range that is consistent with the uncertainties in the prior and in the albedo input. In general, the results show good consistency between the JRC-TIP products and this ensemble of information, given that the MODIS sub-pixel variability corresponds to a range of values that is analogous to the uncertainties associated with the JRC-TIP retrievals. For a single period (from mid to end of January) the direct transmission derived by JRC-TIP from both products is completely outside the observed range. For the 500 m resolution, we trace this back to the input albedos, which is very likely due to snowy background conditions that remained undetected in the MODIS product, i.e. the snow flag was not raised. The inversion procedure, being operated with non-snow priors in this case, needs to minimise the misfit function J (see equation (4)), which quantifies the misfit between modelled and observed albedos and the deviation of the parameters from their priors.
In order to best fit this high observed albedo in the VIS without being penalised by a high prior term in J, the minimisation procedure increases the background reflectance in the VIS (panel (c) in figure 9) and turns off the vegetation contribution by setting LAI close to zero (panel (b) in figure 9), which explains the high direct transmission derived for this period (figure 8) and also means that there is no absorption of the incoming radiant flux by the vegetation (figure 9, panels (e) and (f)). For the time period in question, the graphs also include, in magenta colour, a second retrieval with a snow prior. The corresponding LAI, transmission in the VIS, and absorption in the VIS and the NIR are then much closer to the values for the preceding and succeeding periods, and the background reflectance is closer to the soil line for snow. We note that our global-scale processing setup scans non-snow retrievals using several conditions for outliers, which may then be corrected by a snow retrieval. Besides establishing this simple protocol to validate the JRC-TIP products against in-situ data, Pinty et al. (2011c) also highlighted the lack of critical, although not challenging, measurements of, for instance, the background albedo and its spatio-temporal variability at site level. This is a typical but very unfortunate situation, as the combination of the direct transmission (i.e. effective LAI) and the background reflectance largely determines the partitioning of the incoming flux between the canopy and the soil. It has so far been very challenging to identify other sites where comparable datasets acquired in situ over time are available for an in-depth validation exercise.
Another level of validation is the comparison of JRC-TIP products against products derived with alternative retrieval approaches.An example is presented by Disney et al. (2016), who compare effective LAI and FAPAR products derived by JRC-TIP with the operational MODIS LAI and FAPAR 410 products (Knyazikhin et al., 1998) at site, regional, and hemispheric scales.
A final level of validation is implicitly performed by the product users in their respective applications. Such applications include analyses of the consistency of the long-term CDR and its interannual variability, as demonstrated for FAPAR by Gobron (2015). Sippel et al. (2016) use the deviation of the 2012 spring and late-summer FAPAR from the respective long-term means to analyse the effect of a drought event on vegetation activity over North America and to explain the response mechanism of the carbon balance as inferred from other data streams (Wolf et al., 2016). A consistency check against other data streams and a process model is provided by simultaneous assimilation of the FAPAR product with further data streams, in particular the atmospheric carbon dioxide record (Kaminski et al., 2013; Schürmann et al., 2016). Consistency with further data streams is also implicitly checked in diagnostic model setups, for example when the FAPAR product is used as a forcing field for simulation of photosynthesis (Chevallier et al., 2016).
Conclusions
The JRC-TIP is a highly flexible retrieval system that delivers a set of radiatively consistent land surface products. These products include all radiant fluxes (absorbed, transmitted, and reflected) and the complete set of state variables that parameterise the two-stream model at its core. This two-stream model provides a one-dimensional approximation of the radiative transfer within the canopy-soil system, of the kind typically implemented in advanced land components of climate models. This renders the retrieved (model-dependent) state variables (such as the effective LAI) as compliant as possible with climate model applications (climate model compliance). The retrieved fluxes have a clear physical definition and are, thus, model independent. Hence, among the JRC-TIP products the fluxes are particularly suitable for assimilation into terrestrial models. Even in this case it is, nevertheless, crucial to have in the terrestrial model an observation operator that provides a correct mapping from the state variables onto the simulated counterpart of the flux component that is being assimilated. All JRC-TIP products include estimates of uncertainty, including their covariance, that are consistently derived in a fully traceable manner through rigorous uncertainty propagation from prior and observational information in a two-step procedure. The first step derives uncertainty estimates for the state variables and the second step maps these uncertainty estimates forward to the simulated fluxes.

For global-scale processing, JRC-TIP is operated on broadband albedo products (including snow information) derived from EO with space- and time-invariant priors (except in the event of snow), such that the retrieved products are exclusively based on the EO input. Owing to this low-dimensional space of observational input, an operational system can be set up to retrieve products from a database (TIP table) of pre-calculated, quality-controlled JRC-TIP solutions (including full uncertainty quantification). Such a system is computationally extremely efficient, robust, and agile. By construction it generates temporally stable climate data records from any albedo input record that fulfils this condition.

JRC-TIP products are typically provided in the native resolution of the albedo input product, i.e. on grids that are much finer (e.g. a few 100 to a few 1000 m) than typical resolutions of continental- to global-scale terrestrial models. To ensure their radiative consistency and climate model compliance, products on grids coarser than this native resolution have to be derived by first aggregating the albedo input and then applying JRC-TIP.

The JRC-TIP methodology is to a large extent generic (see, e.g., Kaminski and Mathieu, 2016) and can be generalised to further RT schemes. This holds in particular for the two-step procedure that first solves for the state variables and from there simulates a set of target quantities.
Figure 1: Schematic partitioning of the incoming solar radiation in the canopy-soil system.
Figure 2: Decomposition of the total flux into three contributing fractions. The two-stream solution applies to the black background contribution (left-hand side).
Figure 4: Mean value (upper panel), uncertainty (middle panel), and uncertainty reduction (bottom panel) for effective LAI (left) and background reflectance in the VIS (right).
Figure 5: Mean value (upper panel), uncertainty (middle panel), and uncertainty reduction (bottom panel) for effective canopy single scattering albedo in the VIS (left) and FAPAR (right).
Figure 7: Relative differences of JRC-TIP FAPAR aggregated to 0.5° in two ways: 'wrong' denotes aggregation of the JRC-TIP FAPAR generated at 0.01°, whereas 'correct' denotes aggregation of the input albedo products and subsequent application of JRC-TIP.
a Value associated with the 'green' leaf scenario.
|
v3-fos-license
|
2024-02-02T16:02:46.130Z
|
2024-01-02T00:00:00.000
|
267379577
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://biomedicineonline.org/index.php/home/article/download/3498/1114",
"pdf_hash": "c6af821b07ea906a39c72195d3e275cc220d4045",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46182",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"sha1": "8512c70003bb10a8cb55e93b651c5905d059c526",
"year": 2024
}
|
pes2o/s2orc
|
In vitro cytogenetic and cytotoxic activity of Xanthium strumarium plant extract on human breast cancer cell line
Introduction and Aim: A large majority of people rely on traditional medicinal plants for their everyday health care needs. One of these medicinal plants is Xanthium strumarium. The purpose of this study was to assess the total phenolic content and antioxidant activity of X. strumarium extract, as well as to test its activity against micronucleus formation and its cytotoxicity against breast cancer cell lines. Materials and Methods: The total flavonoid content of the ethanolic extract of X. strumarium was estimated. The plant extract was assessed for its DPPH radical scavenging activity and compared to standard vitamin C. The cytotoxicity of the plant extract was evaluated by an MTT test on MCF-7 cancer cell lines. Results: The total flavonoid content of the plant extract was 215.63 ± 5.85 µg/ml. The DPPH results varied with concentration and were significantly reduced at 100 and 200 mg/ml of the extract in comparison to vitamin C. In contrast to the negative control, micronucleus formation in the blood of breast cancer patients was reduced, notably at a plant concentration of 200 mg/ml (0.0065 ± 0.0006 mn/cell), whereas it was higher in the untreated culture (0.0230 ± 0.0013 mn/cell). As the plant concentration increased from 6.25 to 200 µg/ml, cell viability dropped from 86.27 ± 0.70% to 50.04 ± 3.32%. Conclusion: This investigation indicates that the ethanolic extract of X. strumarium exhibits antioxidant capability and can exert cytotoxic effects on breast cancer cells.
INTRODUCTION
The genus Xanthium (Family: Asteraceae), commonly known as 'cockleburs', comprises flowering herbs that are widely distributed throughout America and Eastern Asia (1). Xanthium species have been used in traditional Chinese and Indian medicine since ancient times (2). Pharmacological and phytochemical studies have shown the plant X. strumarium to have anti-inflammatory, analgesic, antibacterial, anticancer, antifungal, antihyperglycemic, antimitotic, antitrypanosomal, antimalarial, and diuretic properties (1)(2)(3)(4). Even though X. strumarium is used medicinally, a few studies have reported that ingestion of X. strumarium causes deleterious toxic side effects resulting in death in farm animals (5,6) and hepatotoxicity in humans (7). Recent studies have shown that plants with potential cytotoxic activity can be successfully used in treating human cancers (8,9). Among the hundreds of cancer types, breast cancer is the most prevalent cancer worldwide (10)(11)(12)(13).
Several cytotoxic phytocompounds have been reported from Xanthium species (2). Xanthanolides from Xanthium species have been reported to have antitumor activity (14,15). Xanthatin and xanthinosin from X. strumarium L. have been reported to be potential anticancer agents (16). Sesquiterpene lactones are the main bioactive constituents isolated from Xanthium species and have been reported to exhibit antioxidant activity and cytotoxicity against different cancer cell lines (15)(16)(17). The aim of this research was to assay the phytochemical constituents of the ethanolic extract of X. strumarium and to investigate its cytogenetic and cytotoxic potential on the human breast cancer cell line MCF-7.
Plant material
Fresh aerial fragments of Xanthium strumarium were collected between March 2020 and January 2021 from the northern region of Iraq (Erbil city).The plant material was sent to the Herbarium division of the Department of Biology of the Sciences College, Baghdad University for identification and documentation.
X. strumarium ethanol extraction
In the laboratory, the fresh plant was rinsed with distilled water, after which it was cut into pieces, dried and powdered. To obtain the ethanolic extract of the plant material, about 50 grams of plant powder was mixed with 70% ethanol and extracted at 65°C for three hours using a Soxhlet apparatus. The crude extract obtained was filtered twice: once with clean Whatman no. 1 filter paper and then with a muslin cloth. The filtrate was dried by evaporation at a temperature between 40 and 45°C. The concentrated extract obtained was stored in a plastic container at 4°C until use (18).
Estimation of total flavonoids
The plant extract was analyzed using the aluminium chloride colorimetric method to determine the concentration of its most active constituent class, the total flavonoids (19). In summary, the ethanolic extract (3.2 mg) was dissolved in 5 ml of 50% methanol, and then 1 ml of a 5% (w/v) sodium nitrite solution was added. Next, 1 ml of a 10% (w/v) aluminium chloride solution was added to the mixture, and it was left undisturbed for 5 minutes. Then, 10 ml of a 10% (w/v) NaOH solution was added. The mixture was diluted to a total volume of 50 ml with distilled water and mixed well. The absorbance of the mixture was measured with a spectrometer at 450 nm after 15 minutes. A standard curve was prepared using rutin as the reference flavonoid, with concentrations of 2.5, 5, 10, 20, 40, and 80 µg/ml.
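As a minimal sketch of how such a rutin standard curve is typically used to express total flavonoid content, assuming a linear absorbance-concentration relationship; the absorbance values below are invented for illustration and are not the study's data.

import numpy as np

# Rutin standard concentrations (µg/ml) as listed in the text.
conc = np.array([2.5, 5, 10, 20, 40, 80], dtype=float)
# Hypothetical absorbances at 450 nm for the standards (illustrative only).
abs_std = np.array([0.04, 0.08, 0.15, 0.29, 0.57, 1.12])

# Least-squares fit of the calibration line A = m*C + b.
m, b = np.polyfit(conc, abs_std, 1)

def flavonoid_content(abs_sample):
    """Convert a sample absorbance to rutin-equivalent concentration (µg/ml)."""
    return (abs_sample - b) / m

print(f"slope = {m:.4f}, intercept = {b:.4f}")
print(f"sample at A = 0.31 -> {flavonoid_content(0.31):.1f} µg/ml rutin equivalents")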
Estimation of DPPH radical scavenging activity
The assessment of DPPH radical scavenging activity was conducted according to a previously reported procedure (20)(21). A portion of 0.1 ml of either the plant extract or the standard (vitamin C), at concentrations of 0.625, 0.125, 0.250, and 0.500 mg/ml, was combined with 3.9 ml of the DPPH solution. The absorbance of each solution was measured at 517 nm using a spectrophotometer after incubation at 37°C for 30 minutes. The effectiveness of a compound in scavenging DPPH radicals was evaluated using the following equation:
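The standard form of this calculation, which the description above matches (given as the likely intended form, not as recovered text), is:

DPPH scavenging activity (%) = [(A_control − A_sample) / A_control] × 100

where A_control is the absorbance of the DPPH solution without extract and A_sample is the absorbance measured after incubation with the extract or vitamin C.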
Detection of micronucleus production in blood of breast cancer patients
This test involved female breast cancer patients (n=10) and healthy individuals (n=10), all aged between 25-40 years. Breast cancer patients had been referred to the Baghdad Teaching Hospital for evaluation and treatment. Using a disposable heparin-coated syringe, peripheral blood (5 ml) was drawn from each participant. Five plant extract concentrations (6.2 to 200 µg/ml) were tested for their effect on the formation of micronuclei in cultured blood cells from both patients and controls.
Cell micronucleus test
The cell micronucleus test was done according to the protocol of Schmid, W (22). Two millilitres of fully prepared, ready-to-use RPMI-1640 culture medium were supplemented with 0.1-0.3 millilitres of PHA. Subsequently, the culture tube was filled with 0.5 millilitres of blood and 0.1 millilitres of the plant extract at different concentrations (6.2-200 µg/ml). The six cultures were incubated at 37°C for 72 hours, followed by centrifugation at 800 rpm for 5 minutes. The cell pellet was then resuspended in a 0.1 M hypotonic KCl solution warmed to 37°C and gently agitated every five minutes for 30 minutes in a water bath maintained at 37°C. Following centrifugation of the suspension at 800 rpm for 5 minutes, the supernatant was removed, and the total volume was adjusted to 5 ml by adding a small amount of a cold fixative (4°C). The fixed cells were spread on a clean slide and allowed to air dry. The slide was stained for 15 minutes with Giemsa stain, rinsed with distilled water, and air dried. The slide was examined under a 100X oil immersion lens to assess the presence of micronuclei in the cells. A sample of 1000 cells was selected at random for analysis, and the micronucleus index score was calculated using the following equation:
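The conventional definition of the micronucleus index, which the scoring procedure above implies (given as the likely intended form, not as recovered text), is:

Micronucleus index = (number of micronuclei counted) / (total number of cells scored)

with 1000 cells scored per sample in this study.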
Cytotoxic screening and MTT assay for X. strumarium extract on MCF-7 cancer cell line
The MCF-7 breast carcinoma cell line was used for cytotoxicity screening of the X. strumarium ethanol extract. Using a ready-to-use MTT kit, plant extract concentrations of 6.2, 12.5, 25, 50, 100, and 200 µg/ml were used to assess the cytotoxic effects of X. strumarium on the MCF-7 breast cancer cell line in vitro. The cell line was maintained in accordance with (23,24). In summary, MCF-7 cells were seeded into 96-well microtiter plates and kept at 37°C for 24 hours to allow them to adhere. Varying amounts of the plant extract were then applied to each well and incubated for an additional 24 hours. Subsequently, 10 µl of MTT solution was added to each well, and the plates were incubated again at 37°C and 5% CO2 for 4 hours. A volume of 100 µl of the solubilizing solution from the kit was then added, and after 5 minutes the absorbance was measured at 570 nm with an ELISA microplate reader (Bio-Rad, USA) to quantify formazan production.
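A minimal sketch of how percent viability and an IC50 can be derived from MTT absorbance readings, assuming background-corrected optical densities and normalization to an untreated control (not necessarily the exact normalization used here); the numbers and the linear-interpolation IC50 are illustrative, not the study's data or curve-fitting method.

import numpy as np

conc = np.array([6.25, 12.5, 25, 50, 100, 200])                 # µg/ml, as tested
od_treated = np.array([0.81, 0.76, 0.63, 0.52, 0.43, 0.36])     # hypothetical A570 values
od_control = 0.90                                               # hypothetical untreated A570

viability = 100.0 * od_treated / od_control                     # percent viability vs control
inhibition = 100.0 - viability

# Crude IC50 estimate by linear interpolation on a log-concentration axis.
log_ic50 = np.interp(50.0, inhibition, np.log10(conc))
print("viability (%):", np.round(viability, 1))
print(f"IC50 ≈ {10**log_ic50:.1f} µg/ml")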
Statistical analysis
Analysis of variance (ANOVA) was performed using SPSS version 13.1 to determine differences between means. The values of the studied parameters are expressed as mean ± standard error.
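The analysis was performed in SPSS; purely as an illustration, an equivalent one-way ANOVA could be run in Python with SciPy on the group values (the replicate values below are placeholders, not the study's data):

from scipy import stats

# Hypothetical micronuclei-per-cell counts for three treatment groups (placeholders).
untreated = [0.023, 0.022, 0.024]
dose_100  = [0.009, 0.008, 0.009]
dose_200  = [0.006, 0.007, 0.006]

f_stat, p_value = stats.f_oneway(untreated, dose_100, dose_200)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant difference between means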
RESULTS
The flavonoid content in the ethanolic extract of X. strumarium in this study was estimated to be 215.63 ± 5.85 µg/ml. The DPPH radical scavenging ability estimated for 200, 100, 50, 25, and 12.5 mg/ml of the plant extract was 75.08 ± 1.59, 63.23 ± 3.742, 52.08 ± 2.780, 39.66 ± 4.169 and 28.70 ± 3.108, respectively (Table 1), with an IC50 value of 50.89 μg/ml. Significant DPPH radical scavenging activity was observed at the 100 and 200 µg/ml concentrations of the X. strumarium extract in comparison to vitamin C at the same concentrations (Table 1).
Effect of X. strumarium extract on inhibiting micronucleus formation
The presence of micronuclei in breast cancer aspirate was observed (Fig. 1). Micronucleus formation in lymphocyte cultures of breast cancer patients in the presence of different concentrations of the ethanolic extract is given in Table 2. The results indicate that the plant extract reduced micronucleus formation in a concentration-dependent manner, with the maximum reduction at 200 µg/ml. The micronucleus formation ratios were 0.0065 ± 0.0006, 0.0087 ± 0.0008, 0.0124 ± 0.0009, 0.0136 ± 0.0008 and 0.0170 ± 0.0007 micronuclei/cell for 200, 100, 50, 25 and 12.5 µg/ml, respectively, compared with 0.0230 ± 0.0013 micronuclei/cell for the untreated breast cancer cell culture.
MTT test plant extract cytotoxicity
The percentage viability of treated cells was determined by comparing them to the normal cell line WRL-68. Table 3 shows that cell viability declined as the plant extract concentration rose. The greatest reduction in MCF-7 cell viability (%) was observed at a concentration of 200 µg/ml (50.04 ± 3.32), while the smallest reduction was recorded at 12.5 µg/ml (86.27 ± 0.70). Across all examined doses of the plant extract, the viability of MCF-7 cells was considerably decreased compared to normal WRL-68 cells (Table 3). The plant extract demonstrated cytotoxic action with an IC50 value of 24.02 μg/ml. From the analysis of the plant extract's impact on the WRL-68 normal cell line, an IC50 value of 404.3 μg/ml was determined (Table 3, Fig. 2).
DISCUSSION
In recent years, phytochemicals extracted from plants have been widely used in traditional medicine for the prevention and treatment of various health problems. Phytochemical analysis of the X. strumarium ethanolic extract in the present study showed the extract to be rich in flavonoids with a high scavenging ability. This is in line with previous studies in which both leaves and stems of X. strumarium have been reported to contain flavonoids as one of their phytochemical components and are considered sources of antioxidant and scavenging activity (25,26). The MTT assay for the anticancer activity of the ethanolic extract in this study revealed cytotoxic activity on MCF-7 breast cancer cells with an IC50 of 24.02 μg/ml. In addition, the extract also reduced micronucleus formation in breast cancer cells, which indicates that X. strumarium ethanolic extract has the potential to reduce DNA damage at the chromosome level. Our results have shown X. strumarium ethanolic extract to have antiproliferative and antioxidant effects, which is significant because antiproliferative activity has been linked to an increase in apoptosis (27). There have been many attempts to treat cancer with alternative approaches, such as genetically engineered gene transfer systems (28), bacterial enzymes (29), or oncolytic viruses (30,31). On the basis of the present results, more research on the anticancer potential of X. strumarium is warranted, since its role as a source of phytochemicals for cancer therapies could be built on these findings (32,33).
CONCLUSION
This study showed that X. strumarium's ethanolic extract can induce apoptosis in the MCF-7 breast cancer cell line and exhibit antioxidant and anti-cancer properties.
Fig. 2 :
Fig. 2: Cytotoxicity potential of plant extract on breast cancer and normal cells
Table 1 :
DPPH radical scavenging activity of X. strumarium ethanolic extract and Vitamin C. *significant
Table 2 :
Micronucleus formation in lymphocyte cultures of breast cancer patients
|
v3-fos-license
|
2018-04-03T04:41:21.335Z
|
2017-10-31T00:00:00.000
|
205613681
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-017-13758-6.pdf",
"pdf_hash": "72378ea796d19528ad3d769731d6604caa9b6624",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46183",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "544f506d734cf751cfb92787ff6e4c6f0251d389",
"year": 2017
}
|
pes2o/s2orc
|
Enhanced flux pinning in YBCO multilayer films with BCO nanodots and segmented BZO nanorods
The flux pinning properties of the high temperature superconductor YBa2Cu3O7−δ (YBCO) have been conventionally improved by creating both columnar and dot-like pinning centres into the YBCO matrix. To study the effects of differently doped multilayer structures on pinning, several samples consisting of a multiple number of individually BaZrO3 (BZO) and BaCeO3 (BCO) doped YBCO layers were fabricated. In the YBCO matrix, BZO forms columnar and BCO dot-like defects. The multilayer structure improves pinning capability throughout the whole angular range, giving rise to a high critical current density, J c. However, the BZO doped monolayer reference still has the most isotropic J c. Even though BZO forms nanorods, in this work the samples with multiple thin layers do not exhibit a c axis peak in the angular dependence of J c. The angular dependencies and the approximately correct magnitude of J c were also verified using a molecular dynamics simulation.
Figure S2. TEM images of the samples (discussed under TEM characterizations below). Figure S3. The critical current densities at 40 K (a) in the case B||c axis of YBCO and (b) B||ab plane, extracted from the angular dependencies. The part of the data for m100 is missing due to current limitations of the measurement system.
TEM characterizations
The TEM image (Fig. S2(a)) shows BCO particles in m1000ZC. The diameter of the particles is roughly (3±1) nm. Other defects seen in the BCO layer of 1000ZC (Fig. S2(b)) are strained zones, dislocations, basal dislocations and Y124 intergrowths. Additionally, the BCO layer contains very long stacking faults that are seen as white lines in the images. The BZO layer also has stacking faults, although shorter ones. The interface between the BZO layer and the BCO doped layer is good, although there are some dislocations to accommodate strain changes between the BCO and BZO doped layers. At the STO substrate/film interface, there is a 15-20 nm thick layer that is not fully crystallized. At the BZO/BCO interface, accumulation of BCO can be seen, sometimes above BZO nanorods. As a whole, the effect of these defects on pinning is smaller compared to that of correlated pinning centres [S4, S5]. As measured by TEM, the thickness of the m250 sample is 310 nm, somewhat less than the 380 nm of m1000ZC. The m250 has a 6 nm disordered layer at the film/substrate interface. The nanorods in m250 are similar to those in 1000ZC, although shorter because of the smaller layer thickness. The twin regions are small, which means that there is also a correspondingly large number of twin boundaries. The sample m100 is 305 nm thick with a 5 nm layer of disordered YBCO next to the substrate interface. The BZO here is not splayed but straight, and the BZO layer thickness is on average only 16 nm. The thickness and spacing of the nanorods are the same as in samples with thicker layers. The CZ sample, on the other hand, has both good and strained areas, and altogether the sample is very strained. There are no BZO columns, but some nanodots have formed nanocolumn-like features.
Superconducting properties
To get a good picture of the pinning properties of the samples, measurements of J c (θ) were made at 10, 40 and 77 K in 1, 2, 4, 6 and 8 T. The magnetic field dependencies of J c at 40 K, extracted from the angular data, show clear differences between samples. In the case of B||c axis of YBCO (Fig. S3(a)), the samples having a c-axis peak due to BZO have a high J c among the deposited samples. Also, samples with nanorods but no c-axis peak have a rather high J c value. Only the m250 deteriorates more with increasing field than other samples with a similar self-field J c. In the ab direction (Fig. S3(b)), the Z sample with a monolayer of BZO is no longer among the highest-J c samples. Here, the high J c values of samples with a large number of layers can be seen most clearly. Only the J c of the C sample decreases faster under the external magnetic field than other samples with a similar value of J c in the self-field.
Simulation details
All vortices in the simulation were divided into 40 vertical parts, i.e. the simulation was formed of stacked layers. All the layers were subject to periodic boundary conditions in the ab plane. Thus, vortices leaving the sample due to the Lorentz force re-enter from the other side. Inside each layer, a vortex can interact with other vortices in the same layer. Between the layers, the line tension of a vortex acts as a binding force. Because of the layered structure, the simulation cannot be used to describe situations close to the case B||ab plane of YBCO. The only defects introduced to pin vortices were either nanorods (radius 3 nm) or nanodots (radius 1.5 nm). The distribution of the nanorods was taken from previous experiments [S6], but for computational reasons their mutual distances were doubled. If there was another layer of nanorods, as in the "m250 equivalent" sample, the positions of the nanorods in that layer were varied slightly. This was done to avoid having nanorods on top of each other, as suggested by the experimental results in this work. The positions of the nanodots were random. No other defects typical for YBCO were introduced because we wanted to see the effect of these defects in particular. The defects were arranged so that there were first nanorod layers of a certain thickness and then equally many nanodot layers. This was repeated until the total thickness of the simulation was 40 layers. To speed up the stabilization of the vortex lattice, the vortices were initially set into a hexagonal lattice [S7]. Temperature is not implemented in the model, i.e. it is 0 K. The simulation was written in Python using molecular dynamics with the velocity Verlet algorithm.
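The simulation code itself is not included in the paper; the following is a minimal single-layer sketch of the type of update described (Lorentz driving force, vortex-vortex repulsion, attractive pinning sites, periodic in-plane boundaries, velocity Verlet integration). The force laws, masses and all numerical constants are placeholders, not the authors' parameters, and the damping force is evaluated with the previous velocities for simplicity.

import numpy as np

rng = np.random.default_rng(0)
L = 500.0                          # box size (nm), periodic in both in-plane directions
n_vortex, n_pin = 30, 40
pos = rng.uniform(0, L, (n_vortex, 2))
vel = np.zeros_like(pos)
pins = rng.uniform(0, L, (n_pin, 2))

mass, damping, dt = 1.0, 0.5, 0.01   # placeholder inertia, viscous damping, time step
f_lorentz = np.array([0.2, 0.0])     # driving force from the applied current
r_pin, f_pin = 3.0, 1.0              # pinning range (nm) and strength (placeholders)

def min_image(d):
    return d - L * np.round(d / L)   # periodic minimum-image convention

def forces(x, v):
    f = np.tile(f_lorentz, (n_vortex, 1)) - damping * v
    for i in range(n_vortex):
        d = min_image(x[i] - np.delete(x, i, axis=0))
        r = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        f[i] += np.sum(np.exp(-r / 10.0) * d / r, axis=0)                         # vortex-vortex repulsion
        dp = min_image(pins - x[i])
        rp = np.linalg.norm(dp, axis=1, keepdims=True) + 1e-9
        f[i] += np.sum(f_pin * np.exp(-(rp / r_pin) ** 2) * dp / rp, axis=0)      # attractive pinning wells
    return f

f = forces(pos, vel)
for step in range(2000):                                   # velocity Verlet integration
    pos = (pos + vel * dt + 0.5 * f / mass * dt**2) % L
    f_new = forces(pos, vel)
    vel += 0.5 * (f + f_new) / mass * dt
    f = f_new
print("mean vortex speed:", np.linalg.norm(vel, axis=1).mean())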
The J c was found iteratively by the bisection method. The simulation was run with one current, and the next current was adjusted according to the stability of the state found. The solution was considered stable if the vortices moved less than twice the coherence length of YBCO in the ab plane. This was checked by comparing the positions of the vortices 1,000 and 500 steps earlier with the present coordinates. If this condition was not fulfilled, stability was instead determined from the speed of the vortices: if it was below 200 m/s, the simulation was considered stable. Both stability conditions were checked at the same time, at regular intervals.
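A minimal sketch of the critical-current search described above, assuming a hypothetical run_md(current) routine (for example, the sketch above) that reports the vortex displacement between checks and the drift speed; the thresholds follow the text, while the current range, tolerance and the ab-plane coherence length value are placeholders.

def is_stable(displacement_nm, speed_ms, xi_ab_nm=2.0):
    """Stability criteria from the text: displacement below twice the ab-plane
    coherence length (xi_ab_nm is a placeholder value), or drift speed below 200 m/s."""
    return displacement_nm < 2.0 * xi_ab_nm or speed_ms < 200.0

def find_jc(run_md, j_low=0.0, j_high=1.0e11, tol=1.0e8):
    """Bisection on the drive current density: the next trial current is chosen
    according to whether the previous state was stable (pinned) or not."""
    while j_high - j_low > tol:
        j_mid = 0.5 * (j_low + j_high)
        displacement, speed = run_md(j_mid)     # hypothetical MD run at this current
        if is_stable(displacement, speed):
            j_low = j_mid                       # still pinned -> J_c is higher
        else:
            j_high = j_mid                      # depinned -> J_c is lower
    return 0.5 * (j_low + j_high)

def dummy_run_md(j):
    # Placeholder for the MD sweep: pretend depinning happens above 3e10 A/m^2.
    return (10.0, 500.0) if j > 3e10 else (1.0, 50.0)

print(f"J_c ≈ {find_jc(dummy_run_md):.3g} A/m^2")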
|
v3-fos-license
|
2018-12-19T04:17:19.948Z
|
2017-01-01T00:00:00.000
|
114259135
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/22/matecconf_icmaa2017_04012.pdf",
"pdf_hash": "18515e76b3f99366e946231b82fab6adeaf7e68b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46185",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "18515e76b3f99366e946231b82fab6adeaf7e68b",
"year": 2017
}
|
pes2o/s2orc
|
Analysis of windlass sprocket wheel intensity
The multi-body dynamics model of windlass was established by RecurDyn and Pro/E in this paper, and the multi-body dynamics simulation was also conducted. According to the results of the multi-body dynamics simulation, structural static analysis on sprocket wheel in dangerous conditions were executed by ANSYS Workbench. The results showed that there are three bearing situations in the meshing transmission process of sprocket wheel and chain: Single tooth bearing, two tooth bearing and carrying equal contact force and two tooth bearing but one tooth contact force reaches the maximum value. The equivalent stress value at the contact position of sprocket wheel and chain exceeds the yield stress of the chain wheel material, but it does not cause damage to chain wheel. In addition to the contact location, the equivalent stress value of the remaining parts of sprocket wheel are in the allowable stress range of sprocket wheel material, comply with the requirement of strength design.
Introduction
The windlass is a piece of equipment that overcomes the external forces in the working phase of the anchor and keeps the ship stable. It ensures the safety of the ship when it leaves the dock and in case of emergency braking. Safe and reliable operation of the windlass directly affects the normal operation and operational safety of the ship. Anchor handling on the windlass is implemented through the meshing transmission between the sprocket wheel and the chain. Therefore, the normal operation of the sprocket wheel is essential for the windlass.
Design and verification of windlass components are currently based on empirical formulas and simple mechanical models [1]; for example, in the work of Zhou Zhong-wang and Fang Lian-xing [2][3], the loads applied to the sprocket shaft and the base in finite element analysis differ from the actual loads during windlass operation. In this paper, the sprocket wheel was studied: the multi-body dynamics model of the windlass was established in RecurDyn and a multi-body dynamics simulation was conducted. The mechanical characteristics of the sprocket wheel during operation were analyzed and its dangerous working conditions were determined. The strength of the sprocket wheel was then analyzed in ANSYS Workbench.
Brief introduction of sprocket wheel
Since the shape of the sprocket wheel is complex, it is generally made by casting [4], and chains are also blank (as-cast) parts; both therefore have large manufacturing errors. As a result, slip occurs in the sprocket wheel nest during the meshing transmission between sprocket wheel and chain, causing abrasion of the sprocket tooth surfaces. The meshing transmission between sprocket wheel and chain is a circular chain meshing transmission, with frequent shocks between sprocket wheel and chain during the meshing process [5]. Sufficient strength of the sprocket wheel is therefore required.
In this paper, the chain diameter of the hydraulic windlass is 78 mm. The sprocket wheel is a five-tooth type A wheel. Its material is GS-52, with a yield strength of 260 MPa. Its tooth structure is shown in Fig. 1. According to the People's Republic of China shipbuilding industry standard CB/T 3179-1996 (sprocket wheel) [6], the main structural parameters are as follows: the pitch circle diameter is 1062 mm, i.e. the diameter of the circle circumscribing the regular polygon formed by the chain centre line when the sprocket wheel meshes with the chain; it is used as the calculation diameter when the sprocket wheel transfers torque. The speed calculation diameter is 993 mm, used as the calculation diameter that determines the linear speed of the anchor chain during anchor handling.
Windlass multi-body dynamics model building and analysis of results
Import and process of solid models of windlass
The windlass, chain and chain stopper were assembled interference-free in Pro/E [7] and saved as a Parasolid file, which was then imported into RecurDyn. The units were set to "MMKS", the direction of gravity was adjusted to the positive Z-axis direction, and the remaining parameters kept their default settings. The Merge function was used to merge the parts of the windlass; the assembly model of the windlass after merging is shown in Fig. 2. The names of the components are given in Table 1.
Windlass model constraints exerts
According to the working principle of the various components of the windlass, constraints were added for each component, as shown in Table 2.
Windlass model exposure modeling
The contacts between the various components of the windlass use the Extended Surface To Surface contact in RecurDyn, for example the contacts between chain and chain stopper, between chain links, and between sprocket wheel and chain.
Windlass model-driven load
The speed of the anchor must be greater than or equal to 0.15 m/s in the working phase of the anchor. The angular velocity of the gearbox pinion is 24 rad/s. To reduce the impact of sudden changes in velocity, the load is applied via a STEP function [8][9]. The gearbox pinion angular velocity-time curve is shown in Fig. 3, and a sketch of such a ramp is given below.
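As an illustration of how such a smooth ramp can be written, the cubic STEP form commonly used in multi-body codes is sketched below; the ramp from 0 to 24 rad/s follows the text, while the 1 s ramp time is an assumed placeholder and the function is not the exact RecurDyn built-in.

def step(x, x0, h0, x1, h1):
    """Smooth cubic ramp from h0 at x0 to h1 at x1 (zero slope at both ends)."""
    if x <= x0:
        return h0
    if x >= x1:
        return h1
    u = (x - x0) / (x1 - x0)
    return h0 + (h1 - h0) * u * u * (3.0 - 2.0 * u)

# Gearbox pinion angular velocity ramped from 0 to 24 rad/s over an assumed 1 s.
for t in [0.0, 0.25, 0.5, 0.75, 1.0, 2.0]:
    print(t, round(step(t, 0.0, 0.0, 1.0, 24.0), 3))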
Analysis of Kinetic simulation results
The contact forces between sprocket wheel and chain during the working phase of the anchor are shown in Fig. 5. The results in Fig. 5 can be summarized as follows: 1. No more than two chain links carry load at any time during the meshing transmission between sprocket wheel and chain. 2. Points of intersection of adjacent curves represent two teeth bearing load and carrying equal contact force. 3. With the rotation of the sprocket wheel, the tooth contact force increases and slip occurs in the chain nest; impact between sprocket wheel and chain appears, and the contact force jumps to its maximum value, i.e. the times at which the contact force of each sprocket tooth reaches its maximum in the diagram. In this case two teeth bear load, but the contact force on one tooth reaches the maximum value.
Strength analysis of sprocket wheel
According to the contact force results for sprocket wheel and chain, there are three bearing situations in the meshing transmission process: single tooth bearing; two teeth bearing and carrying equal contact force; and two teeth bearing with the contact force on one tooth reaching the maximum value. The third case is the dangerous condition from the perspective of sprocket wheel strength. The tooth load values are: FX1 = 880.92 N, FY1 = 108268.8 N, FZ1 = 331395.91 N, FX2 = 466.63 N, FY2 = 8110.92 N, FZ2 = 6391.01 N.
The sprocket wheel and the sprocket wheel gear are connected by five bolts and five cylindrical pins. The effective areas of the bolts and cylindrical pins are 1152 mm² and 1440 mm², respectively. The torque on the sprocket wheel in the working phase of the anchor is T = F·D/2 = 290 kN × 1062 mm / 2 = 153990 N·m. The results show that the bearing pressures at the cylindrical pin holes and bolt holes are 12.98 MPa and 16.23 MPa, respectively.
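A quick numerical check of the torque quoted above (the bearing pressures depend on load-distribution assumptions not spelled out in the text and are not re-derived here):

F = 290e3          # chain force, N (290 kN)
D = 1.062          # pitch circle diameter, m (1062 mm)
T = F * D / 2.0    # torque transmitted by the sprocket wheel
print(f"T = {T:.0f} N·m")   # ≈ 153990 N·m, matching the value in the text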
In ANSYS Workbench, the material properties of the sprocket wheel are defined as follows: elastic modulus 202 GPa, Poisson's ratio 0.3, density 7800 kg/m³. The sprocket wheel was meshed, the pressure loads were applied to the bolts and cylindrical pins, the tooth loads were added, boundary conditions were defined, and the static analysis was performed. The mesh model of the sprocket wheel is shown in Fig. 6, and the finite element analysis model in Fig. 7. The strength analysis of the sprocket wheel shows the following: the contact between sprocket wheel and chain is in theory a point contact, so the stress at the contact position of sprocket wheel and chain is very large, with a maximum equivalent stress of 1567.6 MPa. When this stress exceeds the yield limit of the sprocket wheel material, plastic deformation occurs at the contact position of the sprocket wheel and chain. As the plastic deformation develops, the contact area between sprocket wheel and chain increases and the stress value declines sharply. Therefore, the sprocket wheel will not be damaged in the actual working process. Outside the contact position of the sprocket wheel and chain, the equivalent stress values of the remaining parts of the sprocket wheel do not exceed 111.97 MPa, less than the allowable stress of the sprocket wheel material (0.9 × 260 MPa = 234 MPa), meeting the strength requirements of the sprocket wheel.
Conclusion
The mechanical characteristics were analyzed in the meshing transmission process of sprocket wheel and chain by RecurDyn.The results showed that: there are three bearing situations in the meshing transmission process of sprocket wheel and chain: Single tooth bearing, two tooth bearing and carrying equal contact force and two tooth bearing but one tooth contact force reaches the maximum value.The third case is a dangerous condition from the perspective of sprocket wheel strength.
As can be seen from the results of static analysis, the equivalent stress value at the contact position of sprocket wheel and chain exceeds the yield stress of the chain wheel material, but it does not cause damage to chain wheel.In addition to this location, the equivalent stress values of the remaining parts of the sprocket wheel are in the allowable stress range of sprocket wheel material, comply with the requirement of strength design.
The equivalent stress cloud graph and the related conclusions of the sprocket wheel through the strength analysis provide a reference for structure optimized design and lightweight design of the windlass.
Figure 5 .
Figure 5. Composition of contact force.
Figure 7 . Figure 8 .
Figure 7. Finite element analysis model of sprocket wheel. Figure 8. Equivalent stress of the sprocket wheel in the dangerous condition, according to the static structural analysis results.
Table 2 .
Constraints for various components
|
v3-fos-license
|
2017-06-06T14:07:01.000Z
|
2017-06-01T00:00:00.000
|
3320440
|
{
"extfieldsofstudy": [
"Physics",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41467-017-01304-x.pdf",
"pdf_hash": "440fbb0d686d87f9832a8a36085f1c15163c53aa",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46187",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "031e065f7f9e5eb4aa47842b55b6fa3d6e2067b2",
"year": 2017
}
|
pes2o/s2orc
|
Mechanical on-chip microwave circulator
Nonreciprocal circuit elements form an integral part of modern measurement and communication systems. Mathematically they require breaking of time-reversal symmetry, typically achieved using magnetic materials and more recently using the quantum Hall effect, parametric permittivity modulation or Josephson nonlinearities. Here we demonstrate an on-chip magnetic-free circulator based on reservoir-engineered electromechanic interactions. Directional circulation is achieved with controlled phase-sensitive interference of six distinct electro-mechanical signal conversion paths. The presented circulator is compact, its silicon-on-insulator platform is compatible with both superconducting qubits and silicon photonics, and its noise performance is close to the quantum limit. With a high dynamic range, a tunable bandwidth of up to 30 MHz and an in situ reconfigurability as beam splitter or wavelength converter, it could pave the way for superconducting qubit processors with multiplexed on-chip signal processing and readout.
conversion [19,20] and amplification [21].Very recently, several theoretical proposals [6,22,23] have pointed out that optomechanical systems can lead to nonreciprocity and first isolators have just been demonstrated in the optical domain [24][25][26].Here, we present an on-chip microwave circulator using a new and tunable silicon electromechanical system.
The main elements of the microchip circulator device are shown in Fig. 1 a-b.The circuit is comprised of three high-impedance spiral inductors (L i ) capacitively coupled to the in-plane vibrational modes of a dielectric nanostring mechanical resonator.The nanostring oscillator consists of two thin silicon beams that are connected by two symmetric tethers and fabricated from a high resistivity silicon-on-insulator device layer [27].Four aluminum electrodes are aligned and evaporated on top of the two nanostrings, forming one half of the vacuum gap capacitors that are coupled to three microwave resonators and one DC voltage bias line as shown schematically in Fig. 1c (see App.A for details).
The voltage bias line can be used to generate an attractive force which pulls the nanobeam and tunes the operating-point frequencies of the device [9]. Fig. 1d shows the measured resonance frequency change as a function of the applied bias voltage V dc. As expected, resonators 1 and 3 are tuned to higher frequency due to an increased vacuum gap, while resonator 2 is tuned to lower frequency. A large tunable bandwidth of up to 30 MHz, as obtained for resonator 2, together with the ability to excite the motion directly and to modulate the electromechanical coupling in situ, represents an important step towards new optomechanical experiments and more practical on-chip reciprocal and nonreciprocal devices.
As a first step we carefully calibrate and characterize the individual electromechanical couplings and noise properties.We then measure the bidirectional frequency conversion between two microwave resonator modes as mediated by one mechanical mode [10].The incoming signal photons can also be distributed to two ports with varying probability as a function of the parametric drive strength and in direct analogy to a tunable beam splitter.We present the experimental results, the relevant sample parameters and the theoretical analysis of this bidirectional frequency conversion process in App.B.
Directionality is achieved by engaging the second mechanical mode, a method which was developed in parallel to this work [28,29] for demonstrating nonreciprocity in single-port electromechanical systems.
FIG. 1. Microchip circulator and tunability. a, Scanning electron micrograph of the electromechanical device including three microwave resonators, two physical ports, one voltage bias input (V dc) and an inset of the spiral inductor cross-overs (green dashed boxed area). b, Enlarged view of the silicon nanostring mechanical oscillator with four vacuum-gap capacitors coupled to the three inductors and one voltage bias. Insets show details of the nanobeam as indicated by the dashed and dotted rectangles. c, Electrode design and electrical circuit diagram of the device. The input modes a i,in couple inductively to the microwave resonators with inductances L i, coil capacitances C i, additional stray capacitances C s,i, and the motional capacitances C m,i. The reflected tones a i,out pass through a separate chain of amplifiers each, and are measured at room temperature using a phase-locked spectrum analyzer (not shown). The simulated displacement of the lowest-frequency in-plane flexural modes of the nanostring is shown in the two insets. d, Resonator reflection measurement of the three microwave resonators of an identical device, as a function of the applied bias voltage, and a fit (dashed lines) to Δω = α1 V² + α2 V⁴ with tunabilities α1/2π = 0.53 MHz/V² and α2/2π = 0.05 MHz/V⁴ and a total tunable bandwidth of 30 MHz for resonator 2 at 9.8 GHz.
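The tuning law quoted in the caption can be evaluated directly; the sketch below uses the fitted coefficients for resonator 2, while the bias-voltage values are arbitrary illustration points.

alpha1 = 0.53  # MHz/V^2, from the caption fit
alpha2 = 0.05  # MHz/V^4, from the caption fit

def delta_f_mhz(V):
    """Frequency shift of resonator 2 versus DC bias: Δω/2π = α1 V^2 + α2 V^4."""
    return alpha1 * V**2 + alpha2 * V**4

for V in [0.0, 1.5, 3.0, 4.5]:
    print(f"V = {V} V -> Δω/2π ≈ {delta_f_mhz(V):.1f} MHz")
# At about 4.5 V the shift reaches roughly 31 MHz, consistent with the quoted 30 MHz tuning range.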
We begin with the theoretical model describing two microwave cavities with resonance frequencies ω i and total linewidths κ i, with i = 1, 2, parametrically coupled to two distinct modes of a mechanical resonator with resonance frequencies ω m,j and damping rates γ m,j, with j = 1, 2. To establish the parametric coupling, we apply four microwave tones, with frequencies detuned by δ j from the lower motional sidebands of the resonances, as shown in Fig. 2a. In a reference frame rotating at the frequencies ω i and ω m,j + δ j, the linearized Hamiltonian in the resolved sideband regime (ω m,j ≫ κ 1, κ 2) is given by Eq. (1) (h = 1), where a i (b j) is the annihilation operator for cavity i (mechanical mode j), G ij = g 0,ij √n ij and g 0,ij are the effective and vacuum electromechanical coupling rates between the mechanical mode j and cavity i respectively, n ij is the total number of photons inside cavity i due to the drive with detuning ∆ ij, and φ ij is the relative phase set by the drives. Here, ∆ 11 = ∆ 21 = ω m,1 + δ 1 and ∆ 22 = ∆ 12 = ω m,2 + δ 2 are the detunings of the drive tones with respect to the cavities, and H off describes the time-dependent coupling of the mechanical modes to the cavity fields due to the off-resonant drive tones. These additional coupling terms create cross-damping [30] and renormalize the mechanical modes, and can only be neglected in the weak coupling regime G ij, κ j ≪ ω j, |ω m,2 − ω m,1|. To see how the nonreciprocity arises, we use the quantum Langevin equations of motion along with the input-output theorem to express the scattering matrix S ij of the system described by the Hamiltonian (1), relating the input photons a in,i (ω i) at port i to the output photons a out,j (ω j) at port j via a out,i = Σ j=1,2 S ij a in,i with i = 1, 2. The dynamics of the four-mode system described by Hamiltonian (1) is fully captured by a set of linear equations of motion, as verified in App. C. Solving these equations in the frequency domain, using the input-output relations, and setting φ 22 = φ, φ 11 = φ 21 = φ 12 = 0, the ratio λ of backward to forward transmission is obtained as Eq. (2). Here, Σ m,j = 1 + 2i[(−1)^j δ − ω]/γ m,j is the inverse of the mechanical susceptibility divided by the mechanical linewidth γ m,j and C ij = 4G ij²/(κ i γ m,j) is the optomechanical cooperativity. Note that in Eq. (2) we assume the device satisfies the impedance matching condition on resonance, i.e. S ii (ω = 0) = 0, which can be achieved in the high cooperativity limit (C ij ≫ 1). Inspection of equation (2) reveals the crucial role of the relative phase φ between the drive tones and the detuning δ in obtaining nonreciprocal transmission. When the cooperativities for all four optomechanical couplings are equal (C ij = C), perfect isolation, i.e. λ = 0, occurs for the phase given by Eq. (3). Equation (3) shows that on resonance (ω = 0) tan[φ] ∝ δ, highlighting the importance of the detuning δ for obtaining nonreciprocity. Tuning all four drives to the exact red sideband frequencies (δ = 0) results in bidirectional behavior (λ = 1). At the optimum phase φ given by Eq. (3), at ω = 0, and for two mechanical modes with identical decay rates (γ m,1 = γ m,2 = γ), the transmission in the forward direction is given by Eq. (4), where η i is the resonator coupling ratio and κ i = κ int,i + κ ext,i is the total damping rate.
Here κ int,i denotes the internal loss rate and κ ext,i the loss rate due to the cavity-to-waveguide coupling. Equation (4) shows that the maximum of the transmission in the forward direction occurs when 2C = 1 + 4δ²/γ² and for large cooperativities C ≫ 1. These conditions, as implemented in our experiment, enable the observation of asymmetric frequency conversion with strong isolation in the backward direction and small insertion loss in the forward direction.
Figure 2b shows the measured transmission of the wavelength conversion in the forward |S 21|² and backward |S 12|² directions as a function of probe detuning for two different phases, as set by one of the four phase-locked microwave drives. At φ = −102.6 degrees and over a frequency range of 1.5 kHz we measure high transmission from cavity 1 to 2 with an insertion loss of 2.4 dB, while in the backward direction the transmission is suppressed by up to 40.4 dB. Likewise, at the positive phase of φ = 102.6 degrees the transmission from cavity 1 to 2 is suppressed while the transmission from cavity 2 to 1 is high. In both cases we observe excellent agreement with theory (solid lines). Fig. 2c shows the S parameters for the whole range of phases φ, which are symmetric and bidirectional around φ = 0. We find excellent agreement with theory over the full range of measured phases, with less than 10% deviation from independently calibrated drive photon numbers and without any other free parameters.
For bidirectional wavelength conversion, higher cooperativity enhances the bandwidth.In contrast, the bandwidth of the nonreciprocal conversion is independent of cooperativity and set only by the intrinsic mechanical linewidths γ m,i , which can be seen in Eq (2).This highlights the fact that the isolation appears when the entire signal energy is dissipated in the mechanical environment, a lossy bath that can be engineered effectively [7].
In the present case it is the off-resonant coupling between the resonators and the mechanical oscillator which modifies this bath.The applied drives create an effective interaction between the mechanical modes, where one mode acts as a reservoir for the other and vice versa.This changes both the damping rates and the eigenfrequencies of the mechanical modes.It therefore increases the instantaneous bandwidth of the conversion and automatically introduces the needed detuning, which is fully taken into account in the theory.
The described two-port isolator can be extended to an effective three-port device by parametrically coupling the third microwave resonator capacitively to the dielectric nanostring, as shown in Fig. 1a. The third resonator, at a resonance frequency of ω 3/2π = 11.30 GHz, is coupled to the waveguide with η 3 = 0.52 and to the two in-plane mechanical modes with (g 0,31, g 0,32)/2π = (22, 45) Hz. Similar to the isolator, we establish a parametric coupling between cavity and mechanical modes using six microwave pumps with frequencies slightly detuned from the lower motional sidebands of the resonances, which for certain pump phase combinations can operate as a three-port circulator for microwave photons, see Fig. 3a. Using an extra microwave source as probe signal, we measure the power transmission between all ports and directions, as shown in Fig. 3b for a single fixed phase of φ = −54 degrees, optimized experimentally for forward circulation.
At this phase we see high transmission in the forward direction S 21,32,13 with an insertion loss of (3.8, 3.8, 4.4) dB and an isolation in the backward direction S 12,23,31 of up to (18.5, 23, 23) dB.The full dependence of the circulator scattering parameters on the drive phase is shown in Fig. 3 c where we see excellent agreement with theory.The added noise photon number of the device is found to be (n add,21 , n add,32 , n add,13 ) = (4, 6.5, 3.6) in the forward direction and (n add,12 , n add,23 , n add,31 ) = (4, 4, 5.5) in the backward direction, limited by the thermal occupation of the mechanical modes and discussed in more detail in App.D.
In conclusion, we realized a frequency tunable microwave isolator / circulator that is highly directional and operates with low loss and added noise.Improvements of the circuit properties will help increase the instantaneous bandwidth and further decrease the transmission losses of the device.The external voltage bias offers new ways to achieve directional amplification and squeezing of microwave fields in the near future.Direct integration with superconducting qubits should allow for on-chip single photon routing as a starting point for more compact circuit QED experiments.
I. SUPPLEMENTARY INFORMATION
Appendix A: Circuit properties
The electromechanical microwave circuit shown in Fig. 1a includes three high-impedance microwave spiral inductors (L i) capacitively coupled to the in-plane vibrational modes of a dielectric nanostring mechanical resonator, creating three LC resonators with frequencies ω i = 1/√(L i C i) with i = 1, 2, 3. The nanostring resonator, fabricated from a high-resistivity smart-cut silicon-on-insulator wafer with 220 nm device layer thickness, has a length of 9.4 µm and consists of two metalized beams that are connected with two tethers at their ends. The vacuum gap size for the mechanically compliant capacitor, fabricated with an inverse shadow technique [31], is approximately 60 nm.
The electromechanical coupling between the nanostring mechanical resonator and each LC circuit is given by where v is the amplitude coordinate of the in-plane mode, CΣ,i is the participation ratio of the vacuum gap capacitance C m,i to the total capacitance of the circuit C Σ,i = C m,i + C s,i , where C s,i is the stray capacitance of the circuit including the intrinsic self-capacitance of the inductor coils.Eq. (A1) indicates that large electromechanical coupling g 0i requires a large participation ratio.We can make the coil capacitance C L,i relatively small by using a suspended and tightly wound rectangular spiral inductor with a wire width of 500 nm and wire-to-wire pitch of 1µm [32].Knowing the inductances L i of the fabricated inductors based on modified Wheeler, as well as the actually measured resonance frequencies ω i along with vacuum-gap capacitance C m (from FEM simulations), we can find the total stray capacitance including the intrinsic self-capacitance of the each inductor coil correspondingly.Careful thermometry calibrated mechanical noise spectroscopy measurements similar to the ones in [32] yield the measured electromechanical coupling for each mode combination as outlined in the table below.We use finite-element method (FEM) numerical simulations to find the relevant in-plane mechanical modes of the structure and optimize their zero point displacement amplitudes and mechanical quality factor.Our simulations are consistent with the measured mechanical frequencies for a tensile stress of ∼600 MP in a ∼70 nm thick electron beam evaporated aluminum layer [33].The associated effective mass and zero-point displacement amplitude along with the measured linewidths and resonance frequencies of the first two in-plane modes of the nanostring are presented in the table below.To understand the optomechanical frequency conversion, we first theoretically model our system to see how frequency conversion arises.Figure 4 a shows an electromechanical system, in which two microwave cavities with resonance frequencies ω 1 and ω 2 and linewidths κ 1 and κ 2 are coupled to a mechanical oscillator with frequency ω m and damping rate γ.The electromechanical coupling is driven by two strong drive fields, E 1 and E 2 , near the red sideband of the respective microwave modes at ω d,1(2) = ω 1(2) − ω m , see Fig. 4 b.In the resolved-sideband limit (ω m κ 1(2) , γ) the linearized electromechanical Hamiltonian in the rotating frames with respect to the external driving fields is given by (h = 1) where a 1(2) is the annihilation operator for the microwave signal field 1 (microwave signal field 2), b is the annihilation operator of the mechanical mode, ∆ 1(2) = ω 1(2) − ω d1(2) = ω m is the detuning between the external driving field and the relevant cavity resonance, and G i = g 0i √ n i is the effective electromechanical coupling rate between the mechanical resonator and cavity i with n i = 2Ei being the total number of photons inside the cavity.Note that, the fast-oscillating counter-rotating terms at ±2ω m are omitted from the Hamiltonian under the rotating wave approximation.
The first and second terms of Hamiltonian (B1) describe the free energy of the mechanical and cavity modes while the last term of the Hamiltonian indicates a beam splitter-like interaction between mechanical degree of freedom and microwave cavity modes.In fact this term allows both optomechanical cooling (with rate Γ i = 4G 2 i /κ i ) and bidirectional photon conversion between two distinct microwave frequencies.In the photon conversion process, first an input microwave signal at frequency ω 1 with amplitude a in,1 (ω 1 ) is down-converted into the mechanical mode at frequency ω m , i.e. a 1 (ω 1 ) Next, during an up-conversion process the mechanical mode transfers its energy to the output of the other microwave cavity at frequency ω 2 and amplitude a out,2 (ω 2 ), i.e. b(ω m ) Likewise, an input microwave signal at frequency ω 2 can be converted to frequency ω 1 by reversing the conversion process.In fact, the Hermitian aspect of the Hamiltonian (B1) makes the conversion process bidirectional and holds the time-reversal symmetry.
The photon conversion efficiency is defined as the ratio of the output-signal photon flux to the input-signal photon flux. Since the conversion process is bidirectional, the same efficiency holds in both directions. In the steady state and in the weak coupling regime the conversion efficiency reduces to the expression in Eq. (B2), where C 1(2) = 4G 1(2)²/(κ 1(2) γ m) is the electromechanical cooperativity for cavity 1 (2) and η i is the output coupling ratio, in which κ i = κ int,i + κ ext,i is the total damping rate while κ int,i and κ ext,i are the intrinsic and extrinsic decay rates of the microwave cavities, respectively. Likewise, the reflection coefficients due to impedance mismatch follow from the same solution. Note that for lossless microwave cavities (η i = 1), near-unity photon conversion can be achieved in the limit C 1 = C 2 = C and C ≫ 1. The former condition balances the photon-phonon conversion rate for each cavity, while the latter condition guarantees that the mechanical damping rate γ m is much weaker than the damping rates Γ i = γ m C i. Under these two conditions, ideal photon conversion is achieved, i.e. |T|² = 1 (perfect transmission) and |S 11|² = |S 22|² = 0 (no reflection). The denominator of Eq. (B2) indicates that the bandwidth of the conversion is given by Γ T = γ m + Γ 1 + Γ 2, which is the total back-action-damped linewidth of the mechanical resonator in the presence of the two microwave drive fields.
We perform coherent microwave frequency conversion using the intermediate nanostring resonator as a coupling element between two superconducting coil resonators at ω 1 /2π = 9.55 GHz and ω 2 /2π = 9.82 GHz as shown in Fig 1 a.The microwave cavities are accessible by ports", i.e. semi-infinite transmission lines giving the modes finite energy decay rates leading to the cavity linewidths κ 1 /2π = 2.42 MHz and κ 2 /2π = 1.98 MHz with associated output coupling ratios η 1 = 0.74 and η 2 = 0.86, indicating that both cavities are strongly overcoupled to the two distinct physical ports 1 and 2. The fundamental mode of the mechanical oscillator has a resonance frequency of ω m /2π = 4.34MHz with the corresponding damping rate of γ m /2π = 4Hz.Measuring the mechanical resonator noise spectrum along with the off-resonant reflection coefficients of each cavity and measurement line, we calibrate the gain and attenuation in each input-output line and accurately back out the vacuum optomechanical coupling rate for each cavity of g 01 /2π = 33Hz and g 02 /2π = 13Hz.Another important aspect of such a transducer is the dynamic range of the device.In the inset of Fig. 4 c we show measured maximum transmission as a function of the applied signal power.Our results demonstrate that high conversion efficiencies can be maintained up to about −80 dBm input signal power, corresponding to about 10 5 signal photons inside the cavities.At even higher signal powers the transmission efficiency is degraded abruptly, because the probe tone acts as an additional strong drive invalidating the transducer model, and also because of an increase of the resonance frequency shifts and resonator losses.
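For orientation, the cooperativities and the resonant conversion efficiency can be estimated from the parameters above. The sketch below uses the standard weak-coupling conversion expression η1 η2 · 4C1C2/(1 + C1 + C2)² from the optomechanical conversion literature (the explicit form of Eq. (B2) is not reproduced in the text) together with illustrative drive photon numbers; it is an order-of-magnitude estimate, not the authors' calibration.

import numpy as np

two_pi = 2 * np.pi
kappa = np.array([2.42e6, 1.98e6]) * two_pi     # total cavity linewidths (rad/s)
eta   = np.array([0.74, 0.86])                  # output coupling ratios
gamma_m = 4.0 * two_pi                          # mechanical linewidth (rad/s)
g0    = np.array([33.0, 13.0]) * two_pi         # vacuum couplings (rad/s)

def conversion(n_drive):
    """n_drive: assumed drive photon numbers (n1, n2) in the two cavities."""
    G = g0 * np.sqrt(n_drive)                   # effective couplings G_i = g0_i sqrt(n_i)
    C = 4 * G**2 / (kappa * gamma_m)            # cooperativities C_i
    Gamma = 4 * G**2 / kappa                    # back-action damping rates Γ_i = 4 G_i^2 / κ_i
    eff = eta[0] * eta[1] * 4 * C[0] * C[1] / (1 + C[0] + C[1])**2
    bandwidth_hz = (gamma_m + Gamma.sum()) / two_pi   # Γ_T = γ_m + Γ_1 + Γ_2
    return eff, bandwidth_hz

eff, bw = conversion(np.array([1e6, 5e6]))      # illustrative drive photon numbers
print(f"efficiency ≈ {eff:.2f}, conversion bandwidth ≈ {bw/1e3:.1f} kHz")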
Appendix C: General theory of a coupled electromechanical system
Hamiltonian of a multi-mode electromechanical transducer
In this section we present a general theory to describe the nonreciprocal behavior of our on-chip electromechanical transducer, shown in Fig 1a of the main paper.We begin with an optomechanical system comprised of three microwave cavities with frequencies ω i and linewidths κ i where i = 1, 2, 3 that are coupled to two vibrational modes of a mechanical oscillator with frequencies ω m.i and damping rates γ m,i where i = 1, 2. To tune a desired coupling into resonance, we assume the cavities are coherently driven with six microwave tones, with frequencies detuned from the lower motional sidebands of the resonances by δ 0,i .The Hamiltonian of the system is (h = 1) [34] where a i is the annihilation operator for the cavity i, b j is the annihilation operator of the mechanical mode j, and describes the Hamiltonian of the pumps with amplitude E ij = E * ij , frequency ω d,ij , and phase φ ij .We can linearize Hamiltonian (C1) by expanding the cavity modes around their steady-state field amplitudes, is the mean number of photons inside the cavity i induced by the microwave pump due to driving mechanical mode j, the κ i = κ int,i + κ ext,i is the total damping rate of the cavity while κ int,i and κ ext,i show the intrinsic and extrinsic decay rate of the microwave cavities, respectively.Here, ∆ ij = ω i − ω d,ij is the detuning of the drive tone with respect to cavity i.In the rotating frame with respect to g 0,ij b j e −i(ωm,j +δ0,j )t + b † j e i(ωm,j +δ0,j )t .(C3) By setting the effective cavity detunings so that ∆ 11 = ∆ 21 = ∆ 31 = ω m,1 + δ 0,1 and ∆ 12 = ∆ 22 = ∆ 32 = ω m,2 + δ 0,2 and neglecting the terms rotating at ±2ω m,1 (2) and ω m,1 + ω m,2 , the above Hamiltonian reduces to where is the effective coupling rate between the mechanical mode j and cavity i and H off describes off-resonant/time dependent interaction between mechanical modes and the cavity fields, and it is given by where δω m = ω m,2 − ω m,1 + δ 0,2 − δ 0,1 and we define following off-resonant optomechanical coupling parameters The off-resonant Hamiltonian (C5) has an essential role in the nonreciprocity aspect of our device, therefore, it is important to discuss the physical roots of such off-resonant couplings [30,35].Inspection of Hamiltonians (C4) and (C5) reveals that each drive tone generates two different types of interactions: Resonant coupling in which the drive tone couples a single mechanical mode to a single cavity mode, described by the time-independent part of the Hamiltonian (C4).Each drive tone also generates an interaction which couples the other mechanical mode to the cavity off-resonantly.The Hamiltonians (C5) explain this off-resonant coupling between cavity fields and mechanical modes.As we will see, these off-resonant couplings alter the mechanical damping rate, which changes the isolation bandwidth and also cools the mechanical modes.In addition, the coupling also introduces mechanical frequency shifts and introduces an effective detuning for the drive tones.Note that, within the rotating wave approximation (RWA) the non-resonant/time-dependent components of the effective linearized interactions can be neglected in the weak coupling regime and when the cavity decay rates κ i are much smaller than the two mechanical frequencies ω m,i and their difference Finally, we note that for the isolator case we deal with two cavities coupled two mechanical modes, which mathematically is equivalent to set G 31 = G 32 = F 31 = F 32 = 0 in our general model.In this special case, the Hamiltonian (C4) reduces to 
the Hamiltonian (1) presented in the paper with
Equations of motion and effective model
The full quantum treatment of the system can be given in terms of the quantum Langevin equations where we add to the Heisenberg equations the quantum noise acting on the mechanical resonators b in,i with damping rates γ i as well as the cavities input fluctuations a in,i with damping rates κ ext,i .The resulting Langevin equations, including the off-resonate terms, for the cavity modes and mechanical resonators are where i = 1, 2, 3 and j = 1, 2.
In order to study the dynamics of the system we solve the time-dependent quantum Langevin equations (C10).We use an iterative method to solve these equations by defining a new set of auxiliary operators (toy modes) and cutting the iteration sequence at higher order dependence to O(n δω m ; δω n m ) with n ≥ 2, which yields where i = 1, 2, 3.The auxiliary modes A ± i = a i e ±iδωmt , B 1 = b 1 e iδωmt and B 2 = b 2 e −iδωmt describe the off-resonant components of the equations of motion.Here, we take δω m to be much larger than the relevant system frequencies, i.e. δω m γ m,i , δ 0,i , ω, and can thus adiabatically eliminate the auxiliary modes by taking Ḃj = Ȧ± i = 0 in Eqs.(C11), which results in the following equations for the auxiliary modes , (C12)
In the limit of δω m → ∞, the contribution of all auxiliary modes can be totally neglected in the dynamics of the system, i.e. {B j , A ± i } → 0. In this case the off-resonant interactions between the mechanical modes and cavities are negligible and we can safely ignore the time-dependent components of the Hamiltonian (i.e.H off = 0).However, in our system due to finite value of δω m ≈ κ i /2, we cannot ignore these off-resonant interactions.We can simply further the equations of motion for the main modes by substituting Eqs.(C12) into the equations of motion for a i and b j in Eqs.(C11) and assuming δω m , κ where δ j and Γ m,j are the effective detuning and damping rates of the mechanical modes, respectively, and they are given by Note that in the derivation of Eqs.(C13) we assume that the off-resonant interaction does not considerably modify the self-interaction and damping rate of the cavity modes.Inspection of Eqs.(C13) reveals that the off-resonant coupling between mechanical modes and cavities shifts the resonance frequency and damps/cools the mechanical modes by introducing a cross-damping between them.The strength of the frequency shift and the cross-damping is given by the off-resonant optomechanical coupling parameters F ij , which indicates that the drive tones creates an effective coupling between the two mechanical modes.In the weak coupling regime and for very large δω m this cross-coupling is negligible, thus δ j ≈ δ 0,j and Γ m,j ≈ γ m,j .We can solve the Eqs.(C13) in the Fourier domain to obtain the microwave cavities' variables.Eliminating the mechanical degrees of freedom from the equations of motion (C13) and writing the remaining equations in the matrix form, we obtain where χ −1 j (ω) = Γ m,j /2 − i(ω + δ j ) is the mechanical susceptibility for mode j and we introduced the drift matrix By substituting the solutions of Eq. (C15) into the corresponding input-output formula for the cavities variables, i.e. a out,j = √ κ ext,j a j − a in,j , we obtain where we defined T = Diag √ κ ext,1 , √ κ ext,2 , √ κ ext,3 .
3. Scattering matrix and nonreciprocity for a two-port device
In this section we provide the details of our analysis in the isolator section of the main paper and examine our model to see how nonreciprocity arises in a two-port electromechanical system. Here, we are only interested in the response of an electromechanical system comprised of two microwave cavities and two mechanical modes. Therefore, by setting G_3j → 0 and δ_1 = −δ_2 = δ in Eq. (C16), and assuming φ_22 = φ, φ_11 = φ_12 = φ_21 = 0, we can find the ratio of backward to forward transmission as specified in Eq. (2) of the paper. Here, Σ_m,j = 1 + 2i[(−1)^j δ − ω]/Γ_m,j is the inverse of the mechanical susceptibility divided by the effective mechanical linewidth Γ_m,j. Examination of Eq. (C17) shows that its numerator and denominator are not equal and have different relative phases. This asymmetry is the main source of the nonreciprocity and of the appearance of isolation in the system. In particular, at the phase given by Eq. (C18) the numerator of Eq. (C17) vanishes, so the backward transmission S_12 is canceled while the forward transmission S_21 remains non-zero. Rewriting Eq. (C18) gives Eq. (C19). Neglecting the contribution of the off-resonant terms in the response of the system, i.e. Γ_m,j → γ_m,j, Eq. (C19) reduces to Eq. (3) of the paper. At the optimum phase (C18) and at cavity resonance, the transmission in the forward direction takes a simple closed form.
For equal mechanical damping Γ_m,1 = Γ_m,2 = Γ (equivalent to γ_m,1 = γ_m,2 = γ of the main text) and equal cooperativities for all four optomechanical couplings (C_ij = C), this expression reduces to the form specified in Eq. (4) of the paper. For the particular cooperativity 2C = 1 + 4δ²/Γ², the power transmission in the forward direction takes a particularly simple form. Neglecting the off-resonant interaction, all damping rates reduce to Γ_m,j ≈ γ_m,j, which is consistent with our notation in the main text. We also note that, for the isolator system discussed in the main text, the frequency shifts due to the off-resonant interaction are (δ_1, δ_2)/2π = (−84, 233) Hz, while the cross-damping rates are (Γ_m,1, Γ_m,2)/2π = (190, 407) Hz.
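To make the interference mechanism behind Eqs. (C16)-(C19) concrete, the sketch below assembles the linearized coupled-mode equations for two cavities and two mechanical modes, inverts the frequency-domain system matrix, and applies the input-output relation a_out = √κ_ext a − a_in to obtain S21 and S12. The parameter values, mode ordering and helper names are illustrative assumptions rather than the device's calibrated numbers or code; the phase scan simply locates the point where the backward path interferes destructively.

```python
import numpy as np

# Illustrative parameters in arbitrary angular-frequency units; NOT the device's calibrated values.
kappa = np.array([1.0e6, 1.0e6])        # total cavity linewidths kappa_1, kappa_2
kappa_ext = 0.8 * kappa                  # external (port) coupling rates
gamma = np.array([100.0, 100.0])         # mechanical damping rates gamma_m,1, gamma_m,2
delta = np.array([+500.0, -500.0])       # effective detunings, delta_1 = -delta_2 = delta
C = 40.0                                 # equal cooperativity for all four couplings
G = np.sqrt(C * np.outer(kappa, gamma) / 4.0)   # G_ij from C_ij = 4 G_ij^2 / (kappa_i * gamma_j)

def smatrix(omega, phi):
    """2x2 cavity scattering matrix at probe detuning omega for a pump phase phi on G_22."""
    Gc = G * np.exp(1j * np.array([[0.0, 0.0], [0.0, phi]]))   # only phi_22 is varied, as in the text
    A = np.zeros((4, 4), dtype=complex)                        # mode ordering: a1, a2, b1, b2
    for i in range(2):                                         # cavity rows
        A[i, i] = kappa[i] / 2 - 1j * omega
        A[i, 2:] = 1j * Gc[i, :]
    for j in range(2):                                         # mechanical rows: chi_j^-1 = Gamma/2 - i(omega + delta_j)
        A[2 + j, 2 + j] = gamma[j] / 2 - 1j * (omega + delta[j])
        A[2 + j, :2] = 1j * np.conj(Gc[:, j])
    L = np.zeros((4, 2), dtype=complex)
    L[0, 0], L[1, 1] = np.sqrt(kappa_ext)
    return L.T @ np.linalg.solve(A, L) - np.eye(2)             # a_out = sqrt(kappa_ext)*a - a_in

# Scan the pump phase at cavity resonance and locate the maximal forward/backward contrast.
phis = np.linspace(-np.pi, np.pi, 721)
contrast = [20 * np.log10(abs(smatrix(0.0, p)[1, 0]) / abs(smatrix(0.0, p)[0, 1])) for p in phis]
best = phis[int(np.argmax(contrast))]
print(f"max |S21/S12| contrast {max(contrast):.1f} dB at phi = {np.degrees(best):.1f} deg")
```

Because only the relative pump phase and the opposite signs of δ_1 and δ_2 matter for the interference, the same routine could in principle be enlarged to three cavities to sketch the circulator case.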
Theoretical model for the circulator
The theoretical model we presented in Eqs. (C11), or equivalently Eq. (C16), fully describes the nonreciprocal behavior of the system for the case of the circulator. In order to check this, in Fig. 5 we show both the measured experimental data and the theoretical prediction. The theoretical model is in excellent agreement with the experiment and accurately describes the nonreciprocity of photon transmission for both forward and backward circulation. Thermal noise enters the cavities (mechanical resonators) for i = 1, 2, 3 (j = 1, 2) at temperature T_i. The output of the cavities is then sent through a chain of amplifiers. The electromagnetic modes at the output of the amplifiers are expressed in terms of the effective gain G_i of the amplifier chain at port i and the added-noise operator c_amp,i of the amplifiers.
We can now write the expression for the single-sided power spectral density measured by a spectrum analyzer in the presence of all relevant noise sources. Using the white correlation functions for the noise operators and the gain G_i (in dB) of each amplifier chain, we find S_noise,i(ω) = ħω 10^{G_i/10} (1 + n_amp,i + n_add,ij), (D4) where n_amp,i is the total noise added by the amplifier chain at port i and n_add,ij is the total noise added by the cavities and mechanical resonators associated with the photon conversion from cavity j to cavity i.
Measuring the output noise spectrum and having calibrated the gains of the amplifiers at each port, (G_1, G_2, G_3) = (67.5, 64, 60.5) dB, we can accurately infer the amplifiers' added noise quanta at each port, (n_amp,1, n_amp,2, n_amp,3) = (23, 23, 33) ± 2. The only remaining unknown parameter in Eq. (D4) is n_add,ij, which can be found by measuring the noise properties of the three cavities when all six pumps are on and comparing them to the case when the pumps are off. In Fig. 6 we show the measured added noise photons for all six transmission parameters of the circulator. On resonance, where the directionality is maximized, we find (n_add,21, n_add,32, n_add,13) = (4, 6.5, 3.6) in the forward direction and (n_add,12, n_add,23, n_add,31) = (4, 4, 5.5) in the backward direction.
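Equation (D4) can be inverted to extract the added noise quanta from a measured output spectrum once the amplifier gains are calibrated. The snippet below illustrates this bookkeeping; the spectrum value is a constructed placeholder (chosen so that n_add = 6.5 is recovered), while the gains and amplifier noise quanta are the values quoted above.

```python
import numpy as np

hbar = 1.054571817e-34                  # J*s
omega = 2 * np.pi * 9.55e9              # rad/s; cavity-1 frequency quoted in the Fig. 4 caption

G_dB = {1: 67.5, 2: 64.0, 3: 60.5}      # calibrated amplifier-chain gains (dB)
n_amp = {1: 23.0, 2: 23.0, 3: 33.0}     # amplifier added-noise quanta at each port

def added_quanta(S_noise, port):
    """Invert Eq. (D4): n_add = S_noise / (hbar*omega*10^(G/10)) - 1 - n_amp."""
    return S_noise / (hbar * omega * 10 ** (G_dB[port] / 10)) - 1.0 - n_amp[port]

# Placeholder measured single-sided PSD at port 2 (W/Hz), constructed so that n_add = 6.5.
S_meas = hbar * omega * 10 ** (G_dB[2] / 10) * (1 + n_amp[2] + 6.5)
print(f"inferred n_add at port 2: {added_quanta(S_meas, 2):.2f}")
```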
FIG. 2. Optomechanical isolator. a, Mode coupling diagram for optomechanically induced nonreciprocity. Two microwave cavities (C1 and C2) are coupled to two mechanical modes (M1 and M2) with the optomechanical coupling rates Gij (where i, j = 1, 2), inducing two distinct signal conversion paths. Power spectral density (PSD) of the two microwave cavities and arrows indicating the frequency of the four microwave pump tones, slightly detuned by δi from the lower motional sidebands of the resonances. All four pumps are phase-locked while the signal tone is applied. Only one of the microwave source phases is varied to find the optimal interference condition for directional transmission between port 1 and 2. b, Measured power transmission (dots) in the forward |S21|² (cavity 1 → cavity 2) and backward |S12|² (cavity 2 → cavity 1) directions as a function of probe detuning for two different phases φ = ±102.6 degrees. The solid lines show the results of the coupled-mode theory model discussed in the text. c, Experimental data (top) and theoretical model (bottom) of the measured transmission coefficients |S12|² and |S21|² as a function of signal detuning and pump phase φ. Dashed lines indicate the line plot locations of panel b.
FIG. 3. Optomechanical circulator.a, Mode coupling diagram describing the coupling between three microwave cavities (C1, C2 and C3) and two mechanical modes (M1 and M2) with optomechanical coupling rates Gij (where i = 1, 2, 3 and j = 1, 2), creating a circulatory frequency conversion between the three cavity modes.b, Measured power transmission (dots) in forward (|S21| 2 , |S32| 2 and |S13| 2 ) and backward directions (|S12| 2 , |S23| 2 and |S31| 2 ) as a function of probe detuning for a pump phase φ = −54 degrees.The solid lines show the prediction of the coupled-mode theory model discussed in the text.c, Measured S parameters (top) and theoretical model (bottom) as a function of detuning and pump phase.Dashed-lines indicate the line plot positions shown in panel b.
FIG. 4. Bidirectional frequency conversion. a, The microwave-mechanical mode diagram for the frequency conversion. Two microwave cavities C1 and C2 are parametrically coupled to a mechanical mode with coupling rates G1 and G2, which gives rise to frequency conversion between the two microwave cavities. b, Power spectral densities (PSD) of the mechanical mode and microwave cavities and the drive tone frequencies indicated with vertical arrows near the red sidebands of the microwave modes at ω_d,1(2) = ω_1(2) − ωm. c, Experimental demonstration (dots) and theoretical prediction (solid lines) of the frequency conversion between two microwave cavities at resonance frequencies (ω1, ω2)/2π = (9.55, 9.82) GHz as a function of cooperativity C2 for C1 = 95. Here, |T|² = |S12|·|S21| (yellow dots), |S11|² (red dots) and |S22|² (blue dots) show the magnitude of the transmission and reflection coefficients on resonance with the cavities, respectively. As predicted by Eq. (B2), the transmission between the two cavities is maximum for C1 = C2 ≈ 95. The inset shows the dynamic range of the device, where the transmission coefficient is measured as a function of the signal input power P_signal or the mean total number of signal photons inside the microwave cavities n_signal.
|
v3-fos-license
|
2017-03-31T19:32:15.706Z
|
2014-05-07T00:00:00.000
|
17471521
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=46324",
"pdf_hash": "f9e877c0e0ca72fa95687b8dba95b6e6bcea4cdf",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46188",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f9e877c0e0ca72fa95687b8dba95b6e6bcea4cdf",
"year": 2014
}
|
pes2o/s2orc
|
Malakoplakia of the Testis
Malakoplakia is an uncommon chronic inflammatory disease usually affecting the urogenital tract and often associated with infection due to E. coli. It is characterised by the presence of Von Hansemann cells and intracytoplasmic inclusion bodies called Michaelis-Gutmann bodies. The testes are affected in 12% of cases. The lesion mainly occurs in middle-aged men, appearing clinically as epididymo-orchitis or testicular enlargement with a fibrous consistency and some soft areas. Orchidectomy is the only way to differentiate the lesion from other malignant or infective processes. This is a case report of a young patient with testicular malakoplakia.
Introduction
Malakoplakia is an uncommon chronic inflammatory disease usually affecting the urogenital tract and often associated with infection due to E. coli [1]. The condition was originally described by Michaelis and Gutmann in 1902 [2]. The term malakoplakia was coined by Von Hansemann (from the Greek malakos, soft, and plakos, plaque) in 1903 [3]. The urinary bladder is the most commonly affected site, though involvement of extravesical sites such as the kidneys, testis, prostate and colon has also been reported [4]. Malakoplakia is characterised by the presence of large cells with abundant eosinophilic cytoplasm, called Von Hansemann cells, within whose cytoplasm are present calcified inclusion bodies called Michaelis-Gutmann (MG) bodies, which exhibit a concentric laminated (targetoid or owl's eye) appearance with a basic dye like haematoxylin [4] [5]. This is a case report of a young patient with testicular malakoplakia.
Case Report
A 24-year-old patient presented to the surgical emergency room with pain in the left hemiscrotum for 3 weeks. There were no urinary complaints. On examination, there was swelling of the left hemiscrotum with erythematous overlying skin. Palpation revealed a firm, tender swelling with a local rise in temperature. There was no evidence of inguinal lymphadenopathy. A clinical diagnosis of pyocele was made. Urine examination showed a few pus cells. Ultrasound showed an enlarged left testicle with hypoechoic echotexture and highly increased vascularity on colour Doppler study. A 9 mm hypo- to isoechoic wedge-shaped lesion was noted at its anterior aspect with no obvious vascularity within it, suggestive of a haematoma. A diagnosis of left orchitis with a small intratesticular haematoma was made and the patient was administered anti-inflammatory treatment with analgesia. The condition did not improve clinically after 5 days of treatment. The urine culture report revealed infection with E. coli. A decision for scrotal exploration was taken. Intraoperatively, the tunica albuginea was thickened and showed haemorrhagic areas. There was evidence of thick, purulent fluid within the left testis with yellowish plaques (Figure 1). Left orchidectomy was performed and histopathological examination of the specimen revealed atrophy of the tubules along with interstitial infiltration by a large number of histiocytes with abundant granular eosinophilic cytoplasm. The histiocytes showed intracytoplasmic Michaelis-Gutmann bodies (Figure 2). PAS stain confirmed the same. A final diagnosis of malakoplakia of the testis was made. The culture report of the purulent fluid revealed Klebsiella pneumoniae sensitive to quinolones. Ciprofloxacin was administered for 3 months. The patient has remained asymptomatic over one year of follow-up to date.
Discussion
The first case of testicular malakoplakia was published in 1958 by Haukohl and Chinchinian [6]. The lesion occurs mainly in middle-aged men, appearing clinically as epididymo-orchitis; the testes are affected in 12% of cases. Several theories have been put forth, with three factors playing a major role: altered phagocytic function of macrophages, gram-negative infection and an abnormal immune response [7]. It may be an expression of microtubular/microfilamental dysfunction [5]. Ineffective phagocytosis occurs due to a defect in the lysosomal response of macrophages to bacterial infections, usually by E. coli, as seen in our case. There seems to be an imbalance between cyclic adenosine monophosphate (cAMP) and cyclic guanosine monophosphate (cGMP), which causes inadequate lysosomal degranulation in the monocytes [7].
The association of coliform urinary infection with testicular malakoplakia can be explained by the fact that testicular infection may be acquired by retrograde spread from the urinary tract and is intratubular initially.The Sertoli cells and macrophages interact with bacteria, forming intracellular phagosomes which may fuse to form giant cytosegrosomes which undergo calcification resulting in MG bodies [4].
Grossly, the lesions are yellowish soft plaques, as seen in our case, or nodules. Microscopically, Von Hansemann cells and intensely PAS-positive structures, the MG bodies, are pathognomonic of malakoplakia [4]. In a study by McClure, out of the six cases in whom testicular tissue was taken and cultured for microorganisms, E. coli was cultured in four cases, with Klebsiella pneumoniae and Proteus species in the other cases [4]. An increased frequency of malakoplakia in immunocompromised patients is well established, seen in up to 40% of cases [8]. Other conditions which can coexist include cancer, diabetes, alcoholic liver disease and tuberculosis [7]. No such coexistent illness was observed in our case.
Orchidectomy is the only way to differentiate the lesion from other malignant or infectious processes like granulomatous orchitis.Although an infectious aetiology is evident, no antimicrobial therapy has been successful in the long term.Fluoroquinolones, especially ciprofloxacin, are the first choice drugs due to 80% to 90% effectiveness [7].Patients with malakoplakia should be followed up periodically.
Conclusion
Malakoplakia of the testis is an uncommon chronic inflammatory condition which should be considered in the differential diagnosis of testicular swellings especially in association with gram-negative infections.
Figure 1. Necrotic material with yellow plaques in the testicle.
|
v3-fos-license
|
2022-09-21T15:05:21.390Z
|
2022-09-01T00:00:00.000
|
252398363
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/27/18/6099/pdf?version=1663578239",
"pdf_hash": "94e0c8d2400889a672018eafaa4e375c67fdbc34",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46189",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "040d73334acfe69082f9169ad90f8430c0d9d8f8",
"year": 2022
}
|
pes2o/s2orc
|
The Cis-Effect Explained Using Next-Generation QTAIM
We used next-generation QTAIM (NG-QTAIM) to explain the cis-effect for two families of molecules: C2X2 (X = H, F, Cl) and N2X2 (X = H, F, Cl). We explained why the cis-effect is the exception rather than the rule. This was undertaken by tracking the motion of the bond critical point (BCP) of the stress tensor trajectories Tσ(s) used to sample the Uσ-space cis- and trans-characteristics. The Tσ(s) were constructed by subjecting the C1-C2 BCP and N1-N2 BCP to torsions ± θ and summing all possible Tσ(s) from the bonding environment. During this process, care was taken to fully account for multi-reference effects. We associated bond-bending and bond-twisting components of the Tσ(s) with cis- and trans-characteristics, respectively, based on the relative ease of motion of the electronic charge density ρ(rb). Qualitative agreement is found with existing experimental data and predictions are made where experimental data is not available.
Earlier, some of the current authors provided a scalar physics-inspired coupling mechanism explaining the cis-effect in terms of electronic and nuclear degrees of freedom for three families of molecules, including halogen-substituted ethene and diazenes [18]. We undertook a static investigation of the properties of the central X = X or N = N bond paths and found that those of the cis-isomers were more bent: the difference between the length of the bond path and the internuclear separation was up to 1.5% larger in the cis-isomers than in the corresponding trans-isomers. In our earlier contribution, we therefore concluded that the physical origin of the cis-effect was associated with greater bond-path bending. This earlier work, however, only provided correlations of the bond-path bending with the energy and did not explain why the cis-effect is the exception rather than the rule. In this work, the physical basis of the cis-effect will be provided in terms of the least and most preferred directions of electronic charge density motion.
Recently, some of the current authors used an electronic charge-density-based analysis to investigate steric effects within the formulation of next-generation Quantum Theory of Atoms in Molecules (NG-QTAIM) [19]. We found that the presence of chiral contributions suggested that steric effects, rather than hyperconjugation, explained the staggered geometry of ethane [20]. This recent work on steric effects relates to the current investigation on cis-effects, since in both cases we subject the central C = C or N = N bond to a torsion to probe either steric or cis-effects. Low/high values of the NG-QTAIM interpretation of chirality (C σ ) were associated with low/high steric effects due to the absence/presence of an asymmetry. The chirality C σ [21] was earlier used to redefine a related quantity for cumulenes, the bond-twist T σ [22].
In this investigation, we will use NG-QTAIM to explain why the cis-effect was previously found, in our scalar investigation, to be associated with bond bending [18]. This will be undertaken by subjecting the axial bonds, C1-C2 and N1-N2, to a torsion θ to sample the directional response of the electronic charge density ρ(r b ) at the bond cross-section. This will provide a better understanding of the greater (topological) stability of the cis-isomer over the trans-isomer in these halogen-containing species; see Scheme 1. Recently, some of the current authors used an electronic charge-density-based analysis to investigate steric effects within the formulation of next-generation Quantum Theory of Atoms in Molecules (NG-QTAIM) [19]. We found that the presence of chiral contributions suggested that steric effects, rather than hyperconjugation, explained the staggered geometry of ethane [20]. This recent work on steric effects relates to the current investigation on cis-effects, since in both cases we subject the central C = C or N = N bond to a torsion to probe either steric or cis-effects. Low/high values of the NG-QTAIM interpretation of chirality (Cσ) were associated with low/high steric effects due to the absence/presence of an asymmetry. The chirality Cσ [21] was earlier used to redefine a related quantity for cumulenes, the bond-twist Tσ [22].
In this investigation, we will use NG-QTAIM to explain why the cis-effect was previously found, in our scalar investigation, to be associated with bond bending [18]. This will be undertaken by subjecting the axial bonds, C1-C2 and N1-N2, to a torsion to sample the directional response of the electronic charge density ρ(rb) at the bond cross-section. This will provide a better understanding of the greater (topological) stability of the cisisomer over the trans-isomer in these halogen-containing species; see Scheme 1.
We use Bader's formulation of the quantum stress tensor σ(r) [30] to characterize the forces on the electron density distribution in open systems; σ(r) is defined in equation (1) in terms of the one-body density matrix γ(r, r′), γ(r, r′) = ∫ ψ(r, r_2, …, r_N) ψ*(r′, r_2, …, r_N) dr_2 ⋯ dr_N. (2) The stress tensor is then any quantity σ(r) that can satisfy equation (2): any divergence-free tensor can be added [30][31][32]. Bader's formulation of the stress tensor σ(r), equation (1), is a standard option in the AIMAll QTAIM package [33]. Earlier, Bader's formulation of σ(r) demonstrated superior performance compared with the Hessian of ρ(r) for distinguishing the S_a- and R_a-geometric stereoisomers of lactic acid [34] and will therefore be used in this investigation.
In this investigation, we include the entire bonding environment, including all contributions to the U σ -space cis- and trans-characteristics, by considering the C1-C2 BCP T σ (s) of the asymmetric, i.e., 'reference' carbon atom (C1) or the N1-N2 BCP T σ (s) of the nitrogen atom (N1); see Scheme 1 and the Computational Details section. The C1-C2 BCP T σ (s) and N1-N2 BCP T σ (s) are created by subjecting these BCP bond paths to a set of torsions θ; see the Computational Details section.
The bond-twist T σ is the difference in the maximum projections, the dot product of the stress tensor e 1σ eigenvector and the BCP displacement dr, of the T σ (s) values between the counter-clockwise (CCW) and clockwise (CW) torsion θ.
Equation (3) for the bond-twist T σ quantifies the BCP-induced bond twist under torsion for the CCW vs. CW directions, where the largest-magnitude stress tensor eigenvalue (λ 1σ) is associated with e 1σ; see Figures 1 and 2. The eigenvector e 1σ corresponds to the direction along which electrons at the BCP are subject to the most compressive forces. Therefore, e 1σ corresponds to the direction along which the BCP electrons will be displaced most readily when the BCP is subjected to a torsion [35]. Higher values of the bond-twist T σ correspond to greater asymmetry, and therefore to a dominance of the trans- compared with the cis-isomer in U σ space. This reflects the structural symmetry, with respect to the positioning of the halogen substituents, of the trans- rather than the cis-isomer; see Figure 1 and Tables 1 and 2. Conversely, the eigenvector e 2σ corresponds to the direction along which the electrons at the BCP are subject to the least compressive forces. Therefore, e 2σ corresponds to the direction along which the BCP electrons will be least readily displaced when the BCP undergoes a torsional distortion. The bond-flexing F σ associated with e 2σ is defined by equation (4); see Figures 1 and 2. Equation (4) provides a U σ -space measure of the 'flexing-strain' or bond bending that a bond path is under in the cis- or trans-isomer configurations. This is consistent with the greater 'flexing-strain' or bond bending that previously correlated with a greater presence of the cis-effect [18]. Higher values of F σ correspond to the dominance of the cis- compared with the trans-isomer in U σ space, because bond bending reflects the symmetry of the cis-isomer rather than the trans-isomer with respect to the positioning of the halogen substituents.
The bond-axiality A σ is part of the U σ -space distortion set ∑{T σ ,F σ ,A σ }, which provides a measure of the chiral asymmetry. It is defined by equation (5), which quantifies the direction of axial displacement of the bond critical point (BCP) in response to the bond torsion (CCW vs. CW), i.e., the sliding of the BCP along the bond path. We will, however, not use A σ , as it does not comprise the bond cross-section; it is provided in the Supplementary Materials S5 and S6. Instead, we will use the so-called U σ -space bond cross-section set ∑{T σ ,F σ } developed for cis- and trans-isomers.
The (+/−) sign of the bond-twist T σ and bond-flexing F σ determines the S σ (T σ > 0, F σ > 0) or R σ (T σ < 0, F σ < 0) character; see Tables 1 and 2. The bond cross-section set ∑{T σ ,F σ } is related to the cross-section of a BCP bond path that is quantified by the λ 1σ and λ 2σ eigenvalues associated with the e 1σ and e 2σ eigenvectors, respectively. Note, the e 1σ and e 2σ eigenvectors are the directions along which the BCP electrons are displaced most readily and least readily, respectively, when the BCP is subject to a torsional distortion. The trans-isomer is dominant in U σ space if the magnitude of the bond-twist T σ value is larger for the trans- than for the cis-isomer. Conversely, dominance of the cis-isomer is determined by the presence of a larger magnitude bond-flexing F σ value for the cis-isomer compared with the trans-isomer; see Tables 1 and 2.
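As a concrete illustration of how the bond cross-section set {T σ ,F σ } could be assembled from Eqs. (3) and (4), the sketch below computes T σ and F σ from arrays of BCP displacements dr and stress-tensor eigenvectors e 1σ and e 2σ sampled along the CCW and CW torsions, and assigns the S σ /R σ character from their signs. The array layout, helper names and random placeholder data are assumptions for illustration only; in practice these quantities would be exported from the AIMAll/QuantVec analysis described in the Computational Details.

```python
import numpy as np

def max_projection(dr, e):
    """Maximum projection of BCP displacements dr (n_steps, 3) onto eigenvectors e (n_steps, 3)."""
    return np.max(np.einsum("ij,ij->i", dr, e))

def bond_twist(dr_ccw, e1_ccw, dr_cw, e1_cw):
    """T_sigma (Eq. 3): difference of maximum e1-projections between the CCW and CW torsions."""
    return max_projection(dr_ccw, e1_ccw) - max_projection(dr_cw, e1_cw)

def bond_flex(dr_ccw, e2_ccw, dr_cw, e2_cw):
    """F_sigma (Eq. 4): the same construction using the least-preferred eigenvector e2."""
    return max_projection(dr_ccw, e2_ccw) - max_projection(dr_cw, e2_cw)

def character(T, F):
    """S_sigma if T > 0 and F > 0, R_sigma if T < 0 and F < 0, otherwise mixed (see text)."""
    return "S_sigma" if (T > 0 and F > 0) else "R_sigma" if (T < 0 and F < 0) else "mixed"

# Random placeholder scan data (91 steps of 1 degree), standing in for exported QTAIM output.
rng = np.random.default_rng(0)
dr_ccw, dr_cw = rng.normal(scale=1e-3, size=(2, 91, 3))
e1_ccw, e1_cw, e2_ccw, e2_cw = rng.normal(size=(4, 91, 3))

T = bond_twist(dr_ccw, e1_ccw, dr_cw, e1_cw)
F = bond_flex(dr_ccw, e2_ccw, dr_cw, e2_cw)
print(T, F, character(T, F))
```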
Computational Details
The electronic wavefunction for molecular structures incorporating a chemically conventional double bond is usually well-represented by a single-reference wavefunction in the 'eclipsed' configurations 0° (cis) or 180° (trans). It is also well-known that as the dihedral angle across the double bond deviates from the 'eclipsed' configurations, the nature of the wavefunction changes, becoming fully multi-reference in nature at the twisted 90° 'staggered' configuration. The multi-reference character is determined for ethene using the frequently used T1 measure [36], where values of T1 > 0.02 indicate that a single-reference description is inadequate. For this reason, in all of this work, for both the C2X2 (X = H, F, Cl) and N2X2 (X = H, F, Cl) molecules, we use a multi-reference CAS-SCF(2,2) method [37,38], using Slater determinants for the active space, implemented in Gaussian G09.E01 [39] with symmetry disabled, an 'ultrafine' integration grid, 'VeryTight' geometry convergence criteria and an SCF convergence criterion of 10^-12. The cc-pVTZ triple-zeta basis set was used during geometry optimization and the dihedral coordinate scan constrained geometry optimization process. The magnitude of the dihedral angle scan steps was 1°. Additionally, the second atom used in each sequence defining the dihedral scan angle, C1 and N1, respectively, for the ethene and diazene derivatives, was constrained to be fixed at the origin of the Cartesian spatial coordinates. All initial 'eclipsed' (cis- and trans-) optimized molecular geometries were generated (and checked to be energy minima with no imaginary vibrational frequencies) with these settings; see Supplementary Materials S2 for the optimized structures and tabulated experimental data. The final single-point wavefunctions and densities for each structure produced during the dihedral scans were calculated, as recommended for accurate NG-QTAIM properties [40], using a quadruple-zeta basis set (cc-pVQZ).
The direction of torsion is determined to be CCW (0.0° ≤ θ ≤ +90.0°) or CW (−90.0° ≤ θ ≤ 0.0°) from an increase or a decrease in the dihedral angle, respectively. An exception is made for N2Cl2, where the respective angular limits used were −80° and +80°. These latter limits are chosen due to the well-known destabilizing interactions between the nitrogen lone pairs and the relatively weak N-Cl bonds [41,42], which cause dissociation of the molecule into N2 and Cl2 when a larger twist is applied: we observed and confirmed this dissociation for dihedral twists > 80°.
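A small bookkeeping sketch of the scan grids just described is given below; it only enumerates the CCW/CW dihedral steps (including the reduced ±80° limit for N2X2, X = Cl) and does not attempt to reproduce the CASSCF input files themselves. The molecule labels are taken from the text.

```python
# Enumerate the constrained dihedral-scan grids: 1-degree steps, CCW for increasing and
# CW for decreasing dihedral angle, with the reduced +/-80 degree limit for N2X2 (X = Cl).
def scan_angles(molecule: str):
    limit = 80 if molecule == "N2X2 (X = Cl)" else 90
    ccw = list(range(0, limit + 1))          # 0 ... +limit degrees (CCW)
    cw = list(range(0, -limit - 1, -1))      # 0 ... -limit degrees (CW)
    return ccw, cw

for mol in ["C2X2 (X = H)", "C2X2 (X = F)", "C2X2 (X = Cl)",
            "N2X2 (X = H)", "N2X2 (X = F)", "N2X2 (X = Cl)"]:
    ccw, cw = scan_angles(mol)
    print(f"{mol}: CCW up to {ccw[-1]:+d} deg, CW down to {cw[-1]:+d} deg, "
          f"{len(ccw) + len(cw) - 1} geometries")
```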
QTAIM and stress tensor analysis were then performed on each single-point wavefunction obtained in the previous step with the AIMAll [33] and QuantVec [43] suite. In addition, all molecular graphs were confirmed to be free of non-nuclear attractor critical points.
Results and Discussions
The scalar distance measures geometric bond length (GBL) and bond path length (BPL) used in this investigation on ethene, doubly substituted ethene and diazene are insufficient to quantify the presence of the cis-effect and are provided in the Supplementary Materials S3. The variation in the (scalar) relative energy ∆E for the ethene, doubly substituted ethene and diazene molecules does not provide any insight into the cis-effect for these molecules either, and is provided in the Supplementary Materials S4. The intermediate and the complete C1-C2 BCP U σ -space distortion sets are provided in the Supplementary Materials S5 and S6, respectively.
The sum of the bond cross-section sets ∑{T σ ,F σ } of the C1-C2 BCP T σ (s) was calculated for all four possible isomers of the formally achiral molecules ethene and doubly substituted ethene, C2X2 (X = H, F, Cl). The results for the molecular graph of pure ethene are provided as a control to enable a better understanding of the effect of the halogen atom substitutions; see Table 1 and Scheme 1. The corresponding results for the formally achiral diazenes N2X2 (X = H, F, Cl), comprising a single isomer N1-N2 BCP T σ (s), are presented in Table 2. The magnitude of the values of the bond cross-section set ∑{T σ ,F σ } increases with atomic weight, as is demonstrated for F2 and Cl2 substitution of ethene; see Table 1. This relationship between the magnitude of the ∑{T σ ,F σ } values and the halogen substituent also occurred in a recent investigation of singly and doubly halogen-substituted ethane [44]. This dependency of {T σ ,F σ } on the atomic weight of the substituent, however, does not occur for the diazenes.
The magnitude of the bond-twist T σ is significantly smaller for the cis- compared with the trans-isomer for C2X2 (X = F, Cl) and slightly smaller for N2X2 (X = Cl). The magnitude of the bond-flexing F σ is significantly larger for the cis- compared with the trans-isomer for C2X2 (X = F, Cl) and N2X2 (X = F, Cl).
These results for the bond cross-section set ∑{Tσ,Fσ} are consistent with the presence of the cis-effect and therefore indicate the occurrence of the cis-effect in U σ space for C 2 X 2 (X = F, Cl) and N 2 X 2 (X = F, Cl). The very large component of the bond-twist T σ for N 2 X 2 (X = H) indicates a complete lack of the cis-effect and a dominance of the trans-isomer in U σ space for this molecule.
All of the investigated molecular graphs comprised a significant degree of chiral character as indicated by the magnitudes of the bond-twist T σ and bond-flexing F σ , particularly for C 2 X 2 (X = F, Cl).
Conclusions
In this investigation, NG-QTAIM was used to determine the presence or absence of the cis-effect for the C2X2 (X = H, F, Cl) and N2X2 (X = H, F, Cl) molecules. Qualitative agreement with experimental data for differences in the energies of the cis- and trans-isomers was found.
The molecules of this investigation are formally achiral according to the Cahn-Ingold-Prelog (CIP) priority rules [45], but all comprise at least a degree of chiral character in U σ space, on the basis of the magnitude of the T σ values, with C2X2 (X = Cl) displaying a very significant degree of chirality. This finding reflects the conventional understanding that steric effects are among the reasons for the differences between the relative energetic stabilities of cis- and trans-isomers, consistent with our previous association of chiral character in U σ space with the steric effects of ethane [20].
We found that both C2X2 (X = F, Cl) and N2X2 (X = F, Cl) display the cis-effect. This includes the prediction of a cis-effect in N2X2 (X = Cl), for which no experimental data on the cis-isomer and trans-isomer energy difference are available. The cis-effect is determined on the basis of the much larger values of the bond-flexing F σ for the cis- compared with the trans-isomer.
We provided a physical explanation as to why the cis-effect is the exception rather than the rule, by defining a dominant bond-flexing F σ component of the bond cross-section set {T σ ,F σ } as characterizing the cis-effect. This is on the basis that it is more difficult to bend (F σ ) than to twist (T σ ) the C1-C2 BCP bond path and N1-N2 BCP bond path. This difference in the difficulty of performing bond-bending (F σ ) and bond-twisting (T σ ) distortions is explained by their construction, using the least preferred e 2σ and most preferred e 1σ eigenvectors, respectively, that determine the relative ease of motion of the electronic charge density ρ(r b ).
Suggestions for future work include the exploration of the newly discovered NG-QTAIM bond cross-section set {T σ ,F σ } for cis- and trans-isomers, which could be undertaken by manipulating the cis- and trans-isomer character in U σ space with laser irradiation. We make this suggestion since NG-QTAIM chirality has already been found to be reversed by the application of an electric field [46]. Reversing the cis- and trans-isomer character in U σ space is possible with laser irradiation that is fast enough to avoid disrupting atomic positions. Such a reversal in U σ space could result in the cis- and trans-geometric isomers comprising trans- and cis-isomer assignments in U σ space, respectively.
|
v3-fos-license
|
2019-05-07T13:28:48.575Z
|
2019-04-21T00:00:00.000
|
146066574
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/joph/2019/1959082.pdf",
"pdf_hash": "83b57be666890aac255cb69dc90485b9297b882e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46190",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "83b57be666890aac255cb69dc90485b9297b882e",
"year": 2019
}
|
pes2o/s2orc
|
Comparative Study between Pars Plana Vitrectomy with Internal Limiting Membrane Peel and Pars Plana Vitrectomy with Internal Limiting Membrane Flap Technique for Management of Traumatic Full Thickness Macular Holes
Purpose To compare the efficacy of PPV and ILM peel versus PPV and IFT in patients with traumatic FTMH. Methods Retrospective interventional comparative case series including two groups of patients with traumatic FTMH. Patients were divided into group I (ILM peel) and group II (IFT). The main outcome measure was closure of the macular hole and restoration of the foveal microstructure. The independent-samples T-test and ANOVA test were used to study the mean between 2 groups and calculate the P value, whereas the bivariate correlation procedure studied the interaction between the variables tested. Results Group I included 28 patients. Mean preoperative MLD was 757 µm. Mean preoperative BCVA was approximately 20/320. Group II included 12 patients. Mean preoperative MLD was 529.5 µm. Mean preoperative BCVA was 20/320. Group I had a macular hole closure rate of 75% versus 92% in group II P=0.05. Mean BCVA improvement was 2.5 lines in group I versus 5 lines in group II P=0.02. Disrupted ELM and IS/OS was the most salient finding in both groups. Conclusion IFT has a significantly superior anatomic and functional outcome compared to ILM peel in traumatic FTMH.
Introduction
The key success of pars plana vitrectomy (PPV) and gas tamponade for FTMH introduced by Kelly and Wendel [1,2] was attributed to counteracting the anteroposterior vitreous traction at the perifoveal area. In addition, the fluid-gas exchange flattened the subretinal fluid cuff surrounding the hole, and the gas bubble then prevented fluid currents from interfering with the healing process [3][4][5][6][7]. The introduction of internal limiting membrane (ILM) maculorhexis was a significant rider to the original technique that greatly spurred the success rates of surgery, although with a better response in smaller holes compared with larger holes > 400 µm in diameter [8][9][10]. Peeling off the ILM from the vicinity of an FTMH had a dual benefit. Firstly, it eliminated the tangential forces created by glial cells migrating through ILM microrips. Secondly, it induced shearing of the Müller cells' foot plates, thereby triggering glial cell proliferation along the interface created by the gas bubble and eventually inducing closure of the hole [11]. While this scenario applied to primary FTMH caused by anomalous vitreofoveolar traction, similar success was not achieved in FTMH secondary to trauma [12,13]. The pathogenic mechanisms entailed in traumatic FTMH formation included direct injury from blunt trauma inducing the classic trampoline effect or from open globe injury, and indirect injury from a propagating shock wave of chorioretinitis sclopetaria or pressure necrosis of the foveal area by subfoveal hemorrhage [14][15][16][17]. These mechanisms steered the pathological cascade to a common endpoint, which is hole formation due to tissue loss.
This meant the presence of a significant pathological element that could not be rectified by the aforementioned surgical maneuvers and that hindered anatomical closure and structural recovery of the retinal layers. Accordingly, traumatic FTMH acquired a notoriety for frequent initial failure, late reopening, and worse final visual outcome compared with the primary variant [18,19]. The aim of the current study is to compare the efficacy of PPV and ILM peel versus PPV and the ILM flap technique (IFT) in terms of anatomic and functional outcomes in patients with traumatic FTMH.
Patients and Methods
This was a retrospective interventional comparative case series that analyzed data of 40 consecutive patients with traumatic FTMH who were treated in a private ophthalmic center, Magrabi Eye Hospital, Tanta, Egypt, over the past 5 years. Prior to 2017, our surgical protocol for treating traumatic holes consisted of PPV and ILM peel. From 2017 onwards, all patients underwent PPV and IFT. Preoperative data included age, gender, eye involved, type of trauma, and duration of the disease. Best-corrected visual acuity (BCVA) was measured using the Snellen notation and converted to the logarithm of the minimum angle of resolution (logMAR) for statistical analysis. Diagnosis of FTMH was based on biomicroscopic examination and optical coherence tomography (OCT) imaging (Cirrus HD-OCT 4000 (Carl Zeiss Meditech, Inc., Dublin, California, USA)) or Heidelberg Spectralis Spectral-Domain OCT (SD-OCT (Heidelberg Engineering, Inc., Heidelberg, Germany)), using high-definition 5-line raster scans and 3-dimensional 512 × 128 macular cube scans passing through the fovea. Biomicroscopically, an FTMH was defined as a central round retinal defect with a rim of elevated retina. Weiss's ring and/or prefoveolar opacity may be present or absent. The size of the central retinal defect was calibrated against the diameter of one of the large tributaries of the central retinal vein close to the optic disc margin [3]. On OCT imaging, an FTMH was defined as an anatomic defect in the fovea involving all neural layers from the ILM to the retinal pigment epithelium (RPE) detected on at least one OCT B-scan. The size of the hole was assessed using the minimum linear diameter (MLD), which was measured using the software's built-in caliper feature. The MLD was determined by drawing a horizontal line parallel to the RPE between the narrowest hole points in the midretina, i.e., at the shortest distance across the full-thickness defect [20]. The study recruited exclusively patients with naïve FTMH with an unequivocal history of blunt ocular trauma. Exclusion criteria included recurrent or persistent macular holes following previous surgery, associated retinal detachment or proliferative vitreoretinopathy, significant corneal opacity that would hinder ILM surgery, associated consecutive optic atrophy due to traumatic optic neuropathy, or a follow-up duration less than 3 months. Recruited patients were classified into two treatment arms. Group I included patients who underwent PPV with ILM peel, whereas group II included patients who underwent PPV with IFT. The main outcome measure was closure of the macular hole and restoration of the foveal microstructure. Secondary outcome measures were the correlation between preoperative MLD, baseline BCVA and duration of the hole prior to surgical intervention versus the anatomical outcome (type of closure and status of foveal microstructure) and the functional outcome (postoperative BCVA), in addition to the development of complications. Selection of patients for enrollment in the study and all surgical procedures entailed were undertaken by a single vitreoretinal surgeon (HG). The current study was approved by the Institutional Review Board of Magrabi Eye Hospital in Egypt. The study adhered strictly to the tenets of the Declaration of Helsinki (2013 revision). All individuals enrolled in the study received a thorough explanation of the surgical procedures entailed, the expected outcomes and possible complications. Afterwards, they were asked to sign an informed consent before undertaking surgery.
The consent included a statement that authorized the authors to publish patients' data for research purposes in an anonymous manner that does not allow identification of the patient.
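The Snellen-to-logMAR conversion used for the statistical analysis follows the standard relation logMAR = log10(Snellen denominator/numerator); a minimal helper is sketched below, with example values mirroring the 20/320 and 20/40 acuities quoted in this study. The helper itself is illustrative and not part of the study's software.

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """logMAR = log10(denominator / numerator) for a Snellen fraction such as 20/320."""
    return math.log10(denominator / numerator)

print(round(snellen_to_logmar(20, 320), 2))   # ~1.20, the mean preoperative BCVA of ~20/320
print(round(snellen_to_logmar(20, 40), 2))    # 0.30, the '0.3 logMAR or better' threshold
```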
Surgical Technique.
Recruited patients who presented with concomitant significant cataract that could hinder adequate visualization during PPV and ILM manipulation underwent PPV combined with standard phacoemulsification with foldable intraocular lens implantation within the capsular bag. The surgical technique in all cases consisted of 23-gauge 3-port PPV, followed by triamcinolone acetonide- (TA-) assisted induction of posterior vitreous detachment (PVD), which was accomplished by applying aspiration over the optic nerve head (ONH) using the vitreous cutter. Once induced, PVD was carried up to the equator. We routinely used TA for ILM peel. After PVD induction, 0.2 ml of TA suspension was sprayed onto the macular area. The supernatant suspension was aspirated from the vitreous cavity while allowing large-sized TA particles to settle over the ILM. Peeling was started by directly pinching the ILM at a point of natural weakness over the inferior temporal arcade using a 23-gauge Eckardt end-gripping forceps (D.O.R.C. Dutch Ophthalmic Research Center (International) B.V., Netherlands). Once a flap was created, it was slightly elevated over the retinal surface and then ripped tangentially in a rhexis fashion for at least 2 disc diameters (DD) from the hole. The ILM peeling maneuver was performed centripetally towards the fovea to avoid enlargement of the hole. During ILM removal, the peeled ILM flap with overlying TA particles was easily distinguished from the unpeeled ILM. Additional clues for ILM identification included its peculiar glistening reflex, which provided clear contrast with the dull-white appearance of the denuded retina in the peeled area, and/or petechial surface hemorrhages in the peeled area. In cases with inadequate visualization of the ILM, we resorted to ILM-blue stain (D.O.R.C. Dutch Ophthalmic Research Center (International) B.V., Netherlands).
In group I, the ILM was completely removed around the macular hole, whereas in group II, the ILM peel was stopped at the edges of the macular hole, forming an island of ILM floating in the vitreous cavity with a 360° attachment to the hole. Redundant peripheral edges of the flap were trimmed by the vitreous cutter using shaving mode with minimal suction. No attempts were made to fold over, dip, or tuck the flap inside the hole, to avoid traumatizing the RPE. The retinal periphery was then inspected by scleral depression to check for iatrogenic holes, followed by fluid-air exchange. After removal of the 3 cannulas, air/C2F6 (hexafluoroethane) gas exchange was performed using two 30-gauge needles, one for injection of 14% C2F6 and the other for simultaneous air venting (Supplemental digital content, video 1 demonstrates the IFT using TA). Postoperatively, patients were asked to adopt a drinking-bird positioning protocol, in which the patient had to maintain a face-down posture every other 15 minutes for 50% of his/her waking time for 1 week or until 50% of the gas was absorbed as judged by biomicroscopic examination.
Postoperative Follow-Up.
During the postoperative period, patients were examined at 1 day, 1 week, 1 month, and 3 months thereafter. Postoperative data included BCVA, intraocular pressure (IOP) measurement, assessment of macular hole closure biomicroscopically and on OCT examination, and development of postoperative complications.
Macular Hole Closure.
On biomicroscopy, macular hole closure was defined as complete apposition of the hole margins and restoration of the foveal light reflex. Patients were then classified according to the closure type and restoration of foveal microstructure on OCT imaging as follows.
(1) Closure Type. U-type configuration was defined as closed macular hole with normal foveal contour; V-type configuration was defined as closed macular hole with steep foveal contour, whereas W-type configuration was considered an open flat macular hole with persistent neurosensory retinal defect [21].
(2) Foveal Microstructure. Category 1 included eyes with restored external limiting membrane (ELM) and inner segment/outer segment (IS/OS) junction; category 2 included eyes with restored ELM and disrupted IS/OS junction, whereas category 3 included eyes with disrupted both ELM and IS/OS junction. Category 4 included eyes with persistent open hole after surgery.
Independent-Samples T-Test.
The independent-samples T-test procedure compares means for two groups of cases. Ideally, for this test, the subjects should be randomly assigned to two groups, so that any difference in response is due to the treatment and not due to other factors. For each variable, sample size, mean, standard deviation, and standard error of the mean were calculated. For the difference in means, the mean and standard error were calculated.
Analysis of Variance (ANOVA): (F-Test).
ANOVA is a procedure used for testing the differences among the means of two or more treatments. It was noted that if the means of the subgroups are greatly different, the variance of the combined groups is much larger than the variance of the separate groups. The ANOVA format for the analysis of differences in means is based on this fact.
Correlation Matrix.
The bivariate correlation procedure computes Pearson's correlation coefficient, which measures how variables are related. Two variables can be perfectly related, but if the relationship is not linear, Pearson's correlation coefficient is not an appropriate statistic. The resulting r values were checked against an r table to determine the significance level.
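As a concrete illustration of the three procedures described above, the snippet below runs an independent-samples t-test, a one-way ANOVA and a Pearson correlation on made-up logMAR and MLD values; the numbers are placeholders and not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder postoperative logMAR values for the two treatment arms (not the study data).
bcva_group1 = rng.normal(loc=0.9, scale=0.3, size=28)   # ILM peel arm, n = 28
bcva_group2 = rng.normal(loc=0.7, scale=0.3, size=12)   # IFT arm, n = 12

t_stat, p_ttest = stats.ttest_ind(bcva_group1, bcva_group2, equal_var=False)
f_stat, p_anova = stats.f_oneway(bcva_group1, bcva_group2)

# Correlation of preoperative MLD (micrometres) with final BCVA within one group (placeholders).
mld = rng.normal(loc=757, scale=150, size=28)
r, p_corr = stats.pearsonr(mld, bcva_group1)

print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.3f}")
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Pearson: r = {r:.2f}, p = {p_corr:.3f}")
```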
Correlation between Preoperative Parameters and Anatomical Outcome (Type of Macular Hole Closure and Foveal Microstructure)
Statistical analysis in group I and group II, in terms of the correlation between preoperative MLD and disease duration prior to surgical intervention versus the type of macular hole closure and the degree of restoration of the foveal microstructure (absent, partial, or complete), revealed no statistical significance between these variables.
Correlation between Preoperative Parameters and Final BCVA
Statistical analysis revealed that preoperative BCVA and preoperative MLD were statistically significant parameters influencing the postoperative BCVA in group I patients only (p = 0.03 and p = 0.004, respectively). Conversely, duration of disease prior to surgical intervention was not a significant factor influencing postoperative BCVA in either group (Tables 4 and 5).
Discussion
In the present study, we report our experience in using PPV and ILM peel versus PPV and IFT for the management of traumatic FTMH. Analysis of the anatomical and functional outcomes in group I revealed a macular hole closure rate of 75%, of which 21.4% was U-type. BCVA improved by a mean of 2.5 lines. Three patients (11%) had a final BCVA of 0.3 logMAR or better (Snellen ≥ 20/40). In group II, the macular hole closure rate was 92%, of which 50% was U-type. BCVA improved by a mean of 5 lines. In comparison, Kuhn et al. [22] reported 17 eyes with traumatic macular hole that were treated with PPV and ILM peel. The authors had a macular hole closure rate of 100% and improvement of BCVA ≥ 2 lines in 94% of eyes. It is worthy of note that all eyes in that series had either stage 2 or stage 3 holes at presentation, in comparison to the present study. Another report described traumatic macular holes secondary to retinal hemorrhages in shaken baby syndrome; the authors performed PPV and ILM peel for 4 patients and reported a macular hole closure rate of 75%. The mean macular hole diameter was 700 µm. A more recent retrospective comparative case series by Ghoraba et al. [23] compared the use of C3F8 and silicone oil in 2 groups of patients who underwent PPV and ILM peel for traumatic macular holes. The authors reported a primary closure rate of 81.8% and a final closure rate of 90.9% after reoperation, although no information was provided on the preoperative macular hole diameter. The authors mentioned that the overall mean improvement of BCVA was 3 and 4 lines in the silicone oil and C3F8 groups, respectively. There was no information on subgroup stratification in terms of lines of vision gained, lost, or unchanged and how that correlated with the foveal microstructure (Table 6).
To our knowledge, the present study is the first report comparing the outcome of the ILM peel technique and IFT in traumatic macular holes. The paucity of literature comparing both techniques in this particular category of macular holes is a significant deterrent to purposeful validation of our findings in the current study. Moreover, most of the published data on traumatic macular holes were derived from retrospective studies and case reports. Nevertheless, we could compare our results to other studies that compared both techniques in other categories of recalcitrant macular holes, such as large holes and myopic macular holes. Table 7 summarizes the outcomes of different studies that compared PPV and ILM peel versus the inverted ILM flap technique in treating different recalcitrant macular holes, in comparison with the outcome of the present study.
In summary, the results of the present study demonstrated that IFT is significantly superior to ILM peel in terms of anatomical macular hole closure and final BCVA. It is worthy of note that, in the IFT group of the present study, we adopted Casini et al.'s [28] modification of the classic inverted ILM flap technique described by Michalewska et al. [24], in the sense that we did not attempt to invert the ILM flap and fold it inside the hole, to avoid damaging the RPE at the bed of the hole. The rationale of our modified approach is that shearing of the foot plates of the Müller cells during ILM peel and the residual attachment of the ILM flap to the hole edges would suffice to incite glial cell proliferation that eventually fills up the macular hole and promotes its closure [11,24,31].
Limitations of the current study included its retrospective design, which dictated inhomogeneity of the compiled data between the ILM peel group and the IFT group in terms of the number of eyes recruited, macular hole MLD, and duration of follow-up. For instance, 50% of patients in group I had a baseline MLD > 800 µm versus 8.3% in group II. Given that our statistical analysis revealed a significant correlation between preoperative MLD and final BCVA in group I, this could mean that group I patients had a worse visual outcome due to an initially much larger MLD. However, we could argue that the statistical analysis revealed no significant correlation between baseline MLD and macular hole closure or restoration of the foveal microstructure in either group. By extrapolation, the cause of the worse final BCVA in group I was related to an inferior macular hole closure rate rather than the baseline MLD, which adds to the strength of our results, as it corroborates the higher efficacy of IFT.
Conclusion
PPV and IFT is associated with significantly superior anatomic and functional outcomes of traumatic FTMH compared to PPV and ILM peel. Randomized comparative clinical trials focused on the surgical management of traumatic FTMH are warranted.
Data Availability
The statistical data used to support the findings of this study are included within the article. The data collected from history taking and clinical examination of patients recruited in the current study are confidential. Access to these data is restricted by Magrabi Eye Hospital, Tanta, Egypt, in accordance with the hospital's patients' data protection policy. Data are available for researchers who meet the criteria for access to confidential data through contacting the hospital's medical director, Professor Hammouda Ghoraba.
Additional Points
Traumatic FTMH acquired a notoriety for morbid outcomes due to sequelae of trauma such as tissue loss and retinal atrophy. Despite a myriad of surgical maneuvers, there is no current consensus on the ideal surgical technique. IFT is a promising approach that would improve the final outcome compared to ILM peel.
Disclosure
The study was conducted in Magrabi Eye Hospital, Tanta, Egypt. The manuscript involves the use of triamcinolone acetonide (TA) as an adjuvant for ILM peel. Currently, TA is used as an off-label ocular therapeutic agent that is not approved by the FDA.
Conflicts of Interest
None of the authors has competing financial, professional, or personal interest that might have influenced the performance or presentation of the work described in this manuscript. None of the authors has commercial associations that might pose a conflict of interest in connection with the submitted article. None of the authors has proprietary interest in any of the materials discussed in the study.
Supplementary Materials
Supplemental digital content 1. Video demonstrates the IFT using TA.mp4. (Supplementary Materials)
|
v3-fos-license
|
2023-08-13T15:16:55.722Z
|
2023-01-01T00:00:00.000
|
260850442
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/50/e3sconf_interagromash2023_06008.pdf",
"pdf_hash": "481d1b5aa34c10730ff2e875d6af9ff30c60d709",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46191",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"sha1": "12a3c3df5ffc87faeca57e1a6b097a9ad129fe69",
"year": 2023
}
|
pes2o/s2orc
|
Implementing a cluster-based method for managing solid waste in the coastal zone of the Black Sea
The analysis of existing problems of the intensively developing infrastructure of the coastal zone of the Black Sea in the field of waste accumulation is presented, and optimal ways of solving them at the initial stages of designing waste-processing enterprises are proposed. The purpose of the work is to show that the creation of a full cycle of solid waste recycling will significantly improve the economic and environmental performance of the Ministry of Ecology and Nature Conservation of the Republic of Crimea. A calculation of the required industrial capacity of the waste-processing enterprise is made, and an approach to waste treatment that differs from the existing archaic disposal methods and is based on cluster distribution is proposed. Clustering will make it possible to carry out deep processing of raw materials, reduce the volume of landfilled waste to zero, and transport waste to the place of final processing by rail and road, which will considerably simplify logistics. Such measures will significantly reduce the negative impact on the environment, generate useful capacity in the form of heat and electricity for domestic and industrial consumption, and attract additional investment to the region, increasing the revenue side of the regional budget as a whole. The proposed approach is well aligned with the national Ecology Project.
Introduction
The problem of protecting the aquatic environment of the Black Sea from adverse anthropogenic impacts is complex in nature. It closely intertwines questions of which parameters adequately reflect the state of the aquatic environment and how to study them, of setting limits for the impacts most dangerous to the ecosystem, and of engineering methods for regulating the anthropogenic load on marine ecosystems [1][2][3][4].
Ideally, one would certainly like to have environmentally friendly industrial plants on the Black Sea coast and along the rivers flowing into it, environmentally sound agriculture, environmentally friendly transport, no uncontrolled potential pollution sources, etc. This should be aspired to, but something can be attempted without waiting for a major overhaul of industrial and agricultural technology [5][6][7][8][9].
Protecting the natural environment and managing its quality is only possible if the state of the environment is constantly monitored, changes related to anthropogenic activities are identified and future trends in this state are foreseen. The environmental monitoring system should provide information:
- on the sources of environmental pollution;
- on environmental factors (chemical, physical, biological) that lead to pollution of the environment [9][10][11][12][13];
- on the state of biosphere elements (reactions or responses of fauna and flora, atmosphere, hydrosphere, soil to external impacts);
- on the quality of biosphere elements (biotic and abiotic components of the biosphere).
The objects of monitoring are man and his health, populations of fauna and flora, microorganisms, atmospheric air, surface and groundwater, soil, subsoil, near-Earth space, industrial and domestic wastes, effluents, emissions, physical impacts, biogeocenoses and, finally, the biosphere as a whole. Each of these objects corresponds to its own type of monitoring. In order to preserve the ecological balance on the territory of the Russian Federation, the authorities, science and civil society should, through open dialogue, work out a new concept and technologies for reducing buried waste, increasing recycling volumes, and safely utilizing industrial and domestic waste.
It is known that about 94% of waste in Russia remains in special landfills due to an acute shortage of production capacity and a lack of deep processing technology. Industrial waste is of greatest concern: it has been accumulating for years and decades, and so far its volume is only increasing. The current situation in the Republic of Crimea may create additional problems for the leadership of the region and increase environmental risks already in the current decade, while the delayed development of the "rubbish" industry and insufficient attention from the state authorities not only deprive the Russian economy of new technological capacity but also limit the market for additional investment from partner countries. The past year, 2020, set records in many areas of the Russian and global economy; the "rubbish" industry was no exception, with an absolute record for the amount of accumulated rubbish. According to the State programs of the Russian Federation "Environment protection" for the period from 2012 to 2020 and "Development of industry and increase of its competitiveness", the Strategy for the treatment, recycling and neutralization of production and consumption waste for the period up to 2030, and the Strategy of ecological safety of the Russian Federation for the period up to 2025, the development and implementation of municipal solid waste (MSW) disposal projects is a priority for the Russian economy in general and for the regions. At present, there are four companies in the Republic of Crimea that provide services for the disposal, burial and neutralization of biological waste (LLC "Krym-Eco Hydrotech", LLC "Plant for neutralization of epidemiologically hazardous waste", LLC "Ecoservice Group" and the State Budgetary Institution of the RK "Crimean Republican Clinical Center of Phthisiology and Pulmonology"), but their activities do not solve the systemic problem and often have a monopoly character. The main problem with the management of biological waste of animal origin in the territory of the Republic of Crimea remains the lack of infrastructure for waste neutralization. While the collection, transportation, disposal and neutralization of biological waste of animal origin are not regulated by the state, free market relations in this area, especially in the absence of infrastructure, cannot solve the accumulated problems [13][14][15][16][17].
The problem of waste removal and disposal [17][18][19][20][21] in Crimea is a legacy of the Ukrainian period, when the necessary infrastructure was lost and no new infrastructure was created. The problem of MSW collection and disposal is still very acute on the peninsula: there are not enough landfills, and there are no waste processing plants. At the beginning of 2020, nine landfills were in operation, with residual life spans of two months to seven years. Their occupancy rates range from 33.2% to 95%, and most of them have been in operation since the 1970s.
Materials and research methods
The approaches currently in use, which amount to minimal loading of waste sorting facilities on top of archaic landfill disposal of MSW, are outdated and call for a different recycling approach. At the same time, the acute shortage of industrial recycling capacity has serious consequences, the solution to which has remained only in the plans of the region's executive authorities for many years. The existing problems underline the need for serious measures and solutions today. Designing and constructing new cluster sites, optimising the stages of waste detection, collection and sorting, and transporting waste to its final destination in the form of a recycling plant will make it possible to:
- stop the growth of tariffs for the population, improve the quality of waste disposal, and reduce the number of unauthorised dumps;
- increase the number of container sites, bring the existing ones in line with Russian legislation, and have citizens sort waste by type of collection container;
- support small municipalities with the necessary federal subsidies for MSW management;
- stop the massive non-payment by legal entities through direct payments for each legally disposed cubic meter of waste;
- change the existing type of disposal, in the form of archaic landfills, to clustering through the gradual construction of a waste processing complex;
- reclassify part of the budget line 'expenditure' as 'revenue'.
MSW detection method using remotely sensed data from the earth's surface
The efficient use of natural resources is directly linked to the use of information and telecommunication technologies. One means of studying the Earth's surface for MSW contamination is remote sensing (RS) using constellations of aerospace vehicles. The development of a set of software and hardware tools for analysing data received from spacecraft will serve as the basis for the cluster approach to MSW detection and disposal, ensuring that legitimate information about the state of the environment is obtained quickly and that the executive branch of the region makes correct managerial decisions. This method improves the accuracy of determining pollution sources remotely [21][22][23][24][25], which is especially important in the development of the natural resources of the Black Sea coast and Krasnodar Territory. One of the main tasks for a safe ecological situation in the region is testing hypotheses about the distribution of multidimensional random variables, assessing the states of the objects under study, and mapping them. The MSW detection method using remote sensing data relies on nonparametric algorithms of multivariate pattern recognition, evaluation of the results, and testing of the results in the Black Sea water area.
To make the most of remote sensing detection of MSW, it is necessary to create a set of software tools, NAMAPR (Nonparametric Algorithm for Multialternative Pattern Recognition), aimed at processing data obtained from spacecraft and making management decisions based on nonparametric, cluster-type statistics [4]. The software product is implemented in the Python 3 development environment.
The functionality of the software should include:
- processing of the primary RS data, providing object classification by transforming the signal image into a vector of features;
- evaluation of the scale and weight of the signal image through a normalizing vector;
- spatial distribution of the object states based on the RS data;
- estimation of the states of Black Sea objects and pollution sources according to the spectral data.
A multi-alternative pattern recognition algorithm should be taken as the base.
When processing the primary RS data, the source information is the signal image x_n mapped to a feature vector. Each received signal image x_n is part of the training sample Z = {(x_i, σ(x_i)), i = 1, …, n}, where x_i = (x_i^1, …, x_i^k) are the spectral data from the spacecraft and σ(x_i) are the directives indicating whether the detected object on the ground surface belongs to MSW.
Each processed signal image x_n of the training sample Z must be expertly evaluated through a normalising vector h_n = (h_n,1, …, h_n,M), where h_n,j is the degree to which the signal image x_n belongs to the j-th directive class σ_j. The value h_n,j can be interpreted as the degree of membership in the classes Ω_j, j = 1, …, M; this characterises the properties of the MSW on the ground surface, for example density, moisture, mechanical connectivity and others. The training sample Z is used to make decisions based on non-parametric, cluster-type statistics and is used in NAMAPR module 3, "Localisation, distribution and mapping information", and module 1, "Processing of primary RS data for MSW". Based on the sample Z, the feature vector and the normalizing vector, the task contained in module 2, "Classification of detected MSW objects", is solved. Module 3 of the software package enables localization of the pollution source by mapping information using spectral data from the remote sensing spacecraft. Module 4, based on a control sample Z′ = {(y_i, σ(y_i)), i = 1, …, n′}, where σ(y_i) = (σ_i^1, σ_i^2) are directives for the coordinates of elements of the earth's surface characterised by spectral data y_i = (y_i^1, …, y_i^k), assesses the pollution of the Black Sea coastal zone.
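The NAMAPR package itself is not reproduced here, so the following is only a minimal Python sketch of the kind of nonparametric multi-alternative classification the text describes: each spectral feature vector is scored against every directive class with a Gaussian Parzen-window (kernel density) estimate and assigned to the best-scoring class. The class labels, bandwidth h and toy spectra are illustrative assumptions, not values from this study.

```python
import numpy as np

def parzen_class_scores(x, X_train, y_train, h=0.1):
    """Score a spectral feature vector x against each directive class with a
    Gaussian Parzen-window (kernel density) estimate."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]                        # training spectra of class c
        d2 = np.sum((Xc - x) ** 2, axis=1) / (2 * h ** 2) # scaled squared distances
        scores[c] = float(np.mean(np.exp(-d2)))           # kernel density at x
    return scores

# Toy example with two spectral bands; class 0 = "clean surface", 1 = "waste site".
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                     rng.normal(0.7, 0.05, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

pixel = np.array([0.68, 0.72])                  # spectral signature of one RS pixel
scores = parzen_class_scores(pixel, X_train, y_train)
label = max(scores, key=scores.get)             # multi-alternative decision
print(scores, "->", label)
```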
In conclusion, it should be noted that the developed interface of the NAMAPR program allows manual control of the initial data received from a remote sensing satellite, their sequential processing, and presentation of the results [25][26][27][28][29] when solving problems of environmental monitoring and detecting sources of pollution and accumulations of solid waste in the Krasnodar Territory. The cluster method for MSW recycling contains many functions and forms a complete recycling and disposal cycle [29][30][31][32][33].
This method represents a complete industrial production cycle and includes:
- a recycling line that enables automated sorting and deep processing. As a result, the enterprise can produce finished products from recovered polymer: containers of any size and stiffness, shakers, disposable tableware, and other food and industrial containers;
- a loading line for the primary reception hopper, which receives the waste not suitable for recycling. Deliveries can be made by rail or road. Trains arriving at the incineration plant are subject to mandatory weighing, metering and radiation control procedures. Unloading is carried out into the briquette press, whose volume is designed for a temporary storage period of up to three weeks. The finished briquettes enter the main combustion plant, which has two zones. In the first zone, thermal treatment at temperatures above 1260 °C removes harmful substances and dioxins. The second zone contains the afterburning compartments for the generated gases and allows complete removal of organic compounds, neutralising the flue gases and lowering their temperature;
- a power generation stage: after combustion at high temperatures, steam is generated in the main body of the boiler and fed to the turbine generator to produce heat and power. The useful capacity is delivered to end-users via newly created or existing backbone energy networks. A rough, illustrative estimate of the recoverable electricity from such a line is sketched below.
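To make the energy-recovery step concrete, here is a back-of-envelope sketch in Python. The throughput, lower heating value and net electrical efficiency are generic assumptions for a waste-to-energy line, not figures reported in this study.

```python
# Back-of-envelope estimate of recoverable electricity from incinerated MSW.
# All numbers below are illustrative assumptions, not figures from this study.
tonnes_per_day = 500            # assumed throughput of the briquette/combustion line
lhv_mj_per_kg = 10.0            # typical lower heating value of mixed MSW
net_elec_efficiency = 0.22      # typical net electrical efficiency of a WtE plant

heat_mj_per_day = tonnes_per_day * 1000 * lhv_mj_per_kg
elec_mwh_per_day = heat_mj_per_day * net_elec_efficiency / 3600   # MJ -> MWh
print(f"~{elec_mwh_per_day:.0f} MWh of electricity per day")      # ~306 MWh/day
```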
Results and discussion
In this study, the authors present a science-based methodology for detecting MSW on the territory of the Crimean peninsula by means of remote sensing. Particular attention is paid to monitoring and detecting environmental pollution by household and industrial waste. The structure of the software, based on nonparametric statistics of the signal images, is proposed. The resulting algorithmic tool, NAMAPR, is intended for operational estimation of anthropogenic influences and early diagnosis of violations of ecological safety. The testing methods are based on studying the signal received from a spacecraft, using the feature vector and the normalising vector. Approbation of the results will make it possible to manage the bioenergetic potential of natural resources and anthropogenic systems in the Krasnodar region. The methodology for forming and operating technogenic systems is based on the principle of the ecological safety of the region. A safe environment for man and society is only possible with comprehensive scientific analysis, early forecasting of the possible damage from harmful industries, respect for natural energy resources, and a full cycle of interaction between such systems. The effectiveness of the interaction between humans (society) and the environment is largely determined by the recyclability of the waste produced by the production cycle. At the same time, the depletion of natural energy resources can be slowed by their full or partial replacement. For this purpose, the authors have developed a scientifically substantiated methodology of cluster waste recycling and a technology for renewable energy resources based on stages of deep recycling [33][34][35][36]. It is based on principles of controlling the physical and mechanical properties of energy materials and of the complex disperse systems formed in the technological cycle.
Conclusion
Thus, new technologies and the introduction of the cluster approach to the processing and utilization of MSW on the territory of the Crimean peninsula provide a unique opportunity to move from archaic methods of waste management to industrial processing. The commissioning of the deep-processing waste recycling complex will reduce the negative impact on the environment by reducing the volume of buried waste and involving it in economic turnover.
Acknowledgments
This study was supported by the Russian Federation State Task № FNNN-2021-0005.
|
v3-fos-license
|
2022-04-27T07:44:07.823Z
|
2022-04-26T00:00:00.000
|
248393967
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/cin/2022/5273698.pdf",
"pdf_hash": "3f47ad64b8b12c1a0fd4d5f35587732015ee994e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46193",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "c9ab1569db440c5878a5b05ca39359bb12a9bfe6",
"year": 2022
}
|
pes2o/s2orc
|
Automatic Gray Image Coloring Method Based on Convolutional Network
Image coloring is time-consuming and laborious work, and for a given work, color collocation is an important factor in determining its quality. Automatic image coloring is therefore a topic of great research significance and application value. With the development of computer hardware, deep learning technology has achieved satisfactory results in the field of automatic coloring. According to the source of color information, automatic coloring methods can be divided into three types: image coloring based on prior knowledge, image coloring based on reference pictures, and interactive coloring. These methods can meet the needs of most users, but they have disadvantages; for example, users cannot color multiple objects in a picture from different reference images. Aiming at this problem, a deep learning approach based on instance segmentation of color images and image fusion technology is proposed to implement multi-region mixed coloring. It can be divided into foreground coloring based on a reference picture and background coloring based on prior knowledge. In order to identify multiple objects and background areas in the image and fuse the final coloring results together, a method of image coloring based on CNN is proposed in this paper. Firstly, CNN is used to extract their semantic information, respectively. According to the extracted semantic information, the color of the designated area of the reference image is transferred to the designated area of the grayscale image. During the transformation, images combined with semantic information are input into the CNN model to obtain the content feature map of the grayscale image and the style feature map of the reference image. Then, a random noise map is iterated so that the noise map approaches the content feature map as a whole and the specific target region approaches the designated area of the style feature map. Experimental results show that the proposed method has a good effect on image coloring and has great advantages in network volume and coloring effect.
Introduction
With the emergence of digital media technology and the popularity of the Internet, the animation industry has developed and advanced greatly [1][2][3]. Animation works usually have two forms of expression, two-dimensional animation and three-dimensional animation; two-dimensional animation works have strong expressiveness, with more natural character drawing and coloring, while three-dimensional animation works are not limited by the physics engine. At present, two-dimensional animation works still have wide influence. Generally, ordinary animation video requires at least 25 frames per second to ensure the continuity of the video, so a 25-minute animation video requires 37,500 frames of images [4,5]. Although intermediate frames can be drawn using key frames as reference, the heavy workload still requires the cooperation of multiple workers. In addition, after the middle-frame line draft image is completed by an ordinary painter, it should be checked and modified by the animation instructor to maintain the consistency of the middle-frame action and color and to ensure the continuity of the character's action in the line draft video [6]. Therefore, research on the coloring and auxiliary rendering of animation line draft images can not only help new artists improve drawing efficiency but also reduce the manpower and material resources required for drawing and coloring line drafts. In general, the key frames usually refer to the first and last frames of an animation shot. Post-production mainly completes the synthesis of characters and backgrounds, the addition of light and shadow effects, and film editing and dubbing [7,8].
Coloring is a very important stage after animation line draft image creation, and it is time-consuming and tedious; current cartoon makers adopt commercial equipment and software to speed up line art coloring, but this has not greatly improved production efficiency. In these papers, new automatic coloring methods for line art images are proposed based on reference color images, and the methods are extended to similar color areas [9][10][11][12]. Inspired by the successful application of generative models in image synthesis tasks in recent years, researchers have used deep convolutional neural networks (DCNNs) and put forward many methods for automatic coloring of line draft images, but the coloring results of these methods are not controllable and are often accompanied by color artifacts. In recent years, thanks to the prosperity of Internet technology, the digital media industry has become a core industry of the 21st-century knowledge economy, including film and television advertising and online games, and a film, television work or online game that attracts people's attention often needs a dazzling poster [13][14][15]. Good works are reflected not only in content design but also in color collocation. Whether for the creation of pictures or videos, colorization is an extremely important link. However, this is not an easy task; the choice and collocation of color test the artist's artistic foundation and are time-consuming. Moreover, for a good work that has already been created, if the color of one of the objects is unsatisfactory and needs to be recolored, the existing approach is to convert it to gray and recolor it directly, which is a huge undertaking. Therefore, multi-area coloring of images is a significant research topic for both academia and industry [16,17].
Convolutional neural networks are simple and efficient and differ from previous deep learning models. General deep learning models contain a single overall neural network, but the model used here breaks this structural mode: it consists of two subnetworks, namely a generator subnetwork and a discriminator subnetwork. The generator is used to extract image features and generate synthetic images. The discriminator is used to discriminate between real and fake images, giving a probability that an image is real or generated. In this process, both the generator model and the discriminator model are trained continuously [18]. As the number of iterations increases, the generator's ability to forge convincing images becomes stronger, the discriminator's ability to identify true and false images becomes stronger, and the two finally tend to converge. Therefore, the CNN model is widely used in image processing and is one of the most commonly used models in the field of image coloring [19][20][21][22].
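The generator–discriminator training described in this paragraph is, in effect, an adversarial (GAN-style) loop. The following is a minimal PyTorch sketch of that loop; the layer sizes, batch size and random stand-in for "real" images are toy assumptions for illustration, not the architecture actually used in the paper.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; dimensions are illustrative assumptions.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 784)                       # stand-in for a batch of real images
for step in range(100):
    # Discriminator step: push real images toward 1 and generated images toward 0.
    z = torch.randn(16, 64)
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label generated images as real.
    z = torch.randn(16, 64)
    loss_g = bce(D(G(z)), torch.ones(16, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```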
Related Works
An et al. [23] asked users to draw a color curve as graffiti and set the gradient range of the curve to control the spread of the graffiti. Specifically, the method takes a set of diffusion curves as constraints and obtains the final image by solving Poisson's equation. However, all the above methods require a lot of manual interaction to achieve the target coloring. In order to reduce the manual work and realize coloring in a specified color style, researchers proposed coloring methods based on a reference image. JWA et al. [24] used a graph structure to represent the relationship between different regions of a line draft image and solved the matching problem through quadratic programming. However, complex line draft images are usually difficult to segment accurately, and the same semantic region may be divided into multiple blocks. More recently, researchers have proposed deep-learning-based coloring of line draft images guided by a reference image, which avoids the requirement for accurate image segmentation. Chen et al. [25] used conditional generative adversarial networks (cGANs) to color grayscale images without requiring users to interactively fine-tune the coloring results. However, this method is only suitable for learning the relationship between grayscale and color images, not line images. Active learning frameworks learn domain classification labels on small data sets and help users select the data to be labeled in unlabeled sets, so as to continuously update the model parameters and improve the accuracy of classification labels for unlabeled regions. Zeng et al. [26] proposed an adaptive active learning method that combines information density calculation with least-uncertainty calculation to select labeled instances, unlike previous methods that select data based on uncertainty alone. Farid et al. [27] noted that the coloring task involves inferring three-dimensional information, such as the RGB channels, from the one-dimensional information of a grayscale image, that is, its intensity or brightness. The mapping between the one-dimensional and three-dimensional information is not unique; colorization is ambiguous in nature, and appropriate external information needs to be provided. Therefore, a coloring algorithm based on brightness-weighted color mixing and fast feature-space distance calculation can achieve high-quality static images at a small fraction of the computational cost and improve the speed of the algorithm.
Kotecha et al. [28] trained an automatic system for the colorization of black-and-white images, training the model to predict the color information of every pixel by using a deep network to learn the detailed features of color images. Berger et al. [29] proposed a new automatic coloring method for comics. Image features include global features and local features: global features include the overall outline of the image, while local features include some of its details. The method is based on a convolutional neural network containing two subnetworks, a local feature extraction network and a global feature extraction network, in order to process cartoon images at any resolution. In recent years, with the popularity of deep learning, the mainstream of cartoon coloring has gradually developed into two different approaches: simple coloring algorithms and deep learning models. Among the deep-learning-based methods, a variety of models have been used to complete image translation tasks, and many researchers have tried to use them to solve the task of automatic picture coloring. Thakur et al. [30] pointed out that there are few papers on image processing using unsupervised learning with CNNs, so they proposed DCGAN, a deep convolutional network, to realize CNN-based supervised and unsupervised learning, respectively. How to cut image regions accurately is also a major factor affecting the final coloring accuracy and quality, and many researchers have studied image region segmentation. Oladi et al. [31] decided to consider not only adjacent pixels with similar intensity but also distant pixels with the same texture, combining the two to enhance the visual effect. Through experiments, they found that better results can be obtained when pixels near edges are colored based on texture similarity and pixels in smooth regions are colored based on intensity similarity. The method can also be used to color comics, and they developed a set of interface tools that allow users to tag, color and modify target images. Qiao et al. [32] implemented two coloring methods based on U-NET. The main innovations of this method are as follows: first, a deep neural network is trained to directly predict the mapping from grayscale images with color points to color images; second, the network also provides users with a data-driven color palette, suggesting the ideal color of the gray map at a given location. This approach reduces the workload for users, and it can also compute the global histogram of a color reference map to color the gray map [10,33].
From the above analysis, we know that the methods above have studied automatic gray image coloring extensively. However, some problems still exist; for example, the CNN model has not yet been applied to this field in the way proposed here, so the research remains largely a blank, which gives it great theoretical and practical application value [34].
This paper consists of five parts. The first and second parts give the research status and background. The third part describes automatic gray image coloring with the CNN model. The fourth part presents the experimental results, which are compared and analyzed against relevant comparison algorithms. Finally, the fifth part gives the conclusion of this paper.
The Process of Automatic Gray Image Coloring.
For any enterprise, capital is the source of its life; financing is a way to revitalize enterprise capital, improve the effective utilization rate of capital, and obtain profits. With the modern new production organization mode, the new financing mode produced by the supply chain, supply chain finance financing has become a hot spot; supply chain finance is called the general trend, and enterprises must have their reasons and conditions for supply chain finance financing. As a result, those with the greatest impact of stress and the greatest capacity to take the most drastic and effective action for change are the most likely to achieve the best performance. This paper will establish the supply chain financial performance evaluation index system of warehousing and logistics enterprises from the four dimensions of pressure, action, ability, and driving factors. The whole system of the method is given in Figure 1.
In addition to the CNN model, the VGG network achieved good results in the ILSVRC localization and classification tasks, respectively. The VGG network inherited the main convolution-pooling structure of AlexNet; its designers abandoned large convolution kernels and replaced them with multiple 3 × 3 convolution kernels, which reduces the number of network parameters while increasing the network depth. It can be regarded as a deeper version of AlexNet, and a deeper network better fits complex nonlinear problems. Even so, the number of VGG parameters is still very large; the weights of a VGG network occupy roughly 500 MB, so the model takes up a lot of storage space. However, thanks to its excellent feature extraction ability, it is very suitable as an auxiliary feature extractor in some image processing tasks. A residual network does not refer to one specific network but to a structure that can be used in any network model. The residual structure is a connection mode that prevents network degradation through a skip connection. In addition, even with the ReLU activation function, the gradient can vanish as the number of network layers increases, while the residual structure can solve this problem. The residual structure adopts skip connections in the network structure. In conclusion, the CNN model shows better performance than the VGG network; hence, CNN is selected in this paper.
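As a concrete illustration of the skip connection described above, here is a minimal PyTorch residual block. It is a generic sketch of the technique, not the specific block used in this paper; the channel count and input size are arbitrary.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """3x3 conv -> BN -> ReLU -> 3x3 conv -> BN, plus the skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # skip connection: gradients also flow through "+ x"

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)    # torch.Size([1, 64, 56, 56])
```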
Convolutional Neural Network.
In image processing, matrix convolution is often used to compute image features. There are two types of matrix convolution: full convolution and valid convolution. Assuming that X is an m×m matrix and k is an n×n kernel, full convolution slides the kernel over every position where it overlaps X at all (zero-padding the border), giving an output of size (m+n−1)×(m+n−1); valid (effective) convolution keeps only the positions where the kernel lies entirely inside X, giving an output of size (m−n+1)×(m−n+1). An indicator χ(i, j) = 0 or 1 can be used to mark whether position (i, j) of the input contributes to a given output element.
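A small numerical illustration of the two output sizes, assuming SciPy is available; the matrix and kernel below are arbitrary toy values.

```python
import numpy as np
from scipy.signal import convolve2d

X = np.arange(16, dtype=float).reshape(4, 4)   # m = 4
k = np.ones((3, 3)) / 9.0                      # n = 3, simple averaging kernel

full = convolve2d(X, k, mode="full")    # (m+n-1) x (m+n-1) = 6 x 6
valid = convolve2d(X, k, mode="valid")  # (m-n+1) x (m-n+1) = 2 x 2
print(full.shape, valid.shape)
```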
Convolution layer (when the previous layer is the input layer): the data are fed to the input layer in 3-D form (height × width × channels), and the convolution kernels of the first layer convolve the input data. A bias term is added to each output, so the output of the convolution layer takes the standard form x_j^l = f(Σ_{i∈M_j} x_i^{l−1} ∗ k_{ij}^l + b_j^l), where M_j is the set of input maps connected to output map j, k_{ij}^l are the kernel weights, b_j^l is the bias, and f is the activation function.
The variance of the extracted features should be positive, that is, it should exist and be finite. After the input passes through the convolution layer, we obtain its feature maps, which we hope to use to train the classifier. Theoretically, all of the extracted feature maps could be used, but this is costly; to address this we can aggregate them statistically, for example by replacing the original features in a local region with their average, which is faster and less prone to over-fitting than using all of the features. This aggregation is called pooling, and pooling is divided into average pooling and max pooling. Taking average pooling as an example, each pooled unit is weighted and, as after each convolution operation, a bias unit is added, so the output of the subsampling layer takes the standard form x_j^l = f(β_j^l · down(x_j^{l−1}) + b_j^l), where down(·) denotes the pooling (down-sampling) operation and β_j^l is the multiplicative weight. If the subsampling layer is followed by a convolution layer, the calculation is the same as that described for the multilayer neural network. A schematic diagram of a typical convolutional neural network is shown in Figure 2.
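A minimal PyTorch sketch of the two layer types just described; the channel counts and input size are arbitrary toy choices, not the paper's network.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                     # 3-channel input in 3-D form
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # kernels plus one bias per output map
pool = nn.AvgPool2d(kernel_size=2)                # average pooling (subsampling layer)

feature_maps = torch.relu(conv(x))                # convolution layer output
pooled = pool(feature_maps)                       # subsampling layer output
print(feature_maps.shape, pooled.shape)           # [1, 8, 32, 32] and [1, 8, 16, 16]
```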
Introduction to Experimental Environment and Data Set.
The experiments run on a Windows 10 OS with an RTX 2070S GPU (8 GB of video memory), an AMD Ryzen 2400G CPU, and 16 GB of DDR4 memory. The software environment is the deep learning framework PyTorch 1.8 with Python 3.7 and CUDA 10.0. The data set is Places365 outdoor scenery, including buildings, cabins, landscapes, courtyards, and more than 50 other categories, and training runs for 10 epochs. The comparison algorithms are trained on the same data set, also for 10 epochs. In this experiment, the initial learning rate is set to 0.02 by experience, the weight decay coefficient is 0.0001, the update weight is 0.1, the update weight decay coefficient is 0.0002, the maximum number of iterations is 10,000 over 600 training passes, and stochastic gradient descent is used with a batch size of 50.
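For concreteness, the unambiguous hyperparameters above (learning rate 0.02, weight decay 1e-4, SGD, batch size 50) would be configured in PyTorch roughly as follows; the momentum value, the stand-in module and the random tensors are illustrative assumptions only.

```python
import torch

# Sketch of the quoted optimizer settings: lr = 0.02, weight decay = 1e-4, SGD,
# batch size 50. The momentum and the placeholder module are assumptions.
model = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)   # placeholder for the coloring net
optimizer = torch.optim.SGD(model.parameters(), lr=0.02,
                            momentum=0.9, weight_decay=1e-4)

batch = torch.randn(50, 1, 64, 64)        # batch of 50 grayscale inputs
target = torch.randn(50, 2, 64, 64)       # stand-in for the predicted color channels
loss = torch.nn.functional.mse_loss(model(batch), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```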
Experimental Results
Analysis. Firstly, we compare the results of foreground coloring against the reference standard using a random noise map and an image conversion network, respectively, including a comparison of their coloring efficiency; then the two foreground coloring methods, combined respectively with U-NET and Poisson fusion, are shown to achieve whole-picture coloring. Multi-region coloring methods based on a randomly generated noise map and on a forward transformation network are thus proposed, respectively. Figure 3 shows the coloring effects for different categories of images. Transformation of regional colours is possible with both random noise maps and image transformation networks, but there are large differences between the two in speed and in how faithfully they reproduce the colours of the reference image, and each colour style needs to be trained separately, while the image transformation network is built from cyclic transformations. The network generates the noise map, then updates the parameters of the image conversion network during training and saves the optimal solution.
When one or more target objects appear in the picture, a different reference image can be selected for each object, and all objects in the picture can be colored at the same time by selecting the reference images. According to the input semantic map of the gray image, one or more objects in the image are colored. In addition, because semantic information is added as a strong constraint, the two coloring methods in this paper are more strongly constrained and therefore obtain better coloring effects. Figure 4 shows the coloring results for the same and different categories, which makes the coloring results more diverse and is conducive to the user's image creation. The results show the combination of foreground coloring, background coloring, and Poisson fusion.
It can be seen that the effect of simple fusion depends directly on the result of instance segmentation, and the quality of the segmentation determines the fusion effect. However, current instance segmentation technology can only outline the general target, and edge processing is still lacking. Therefore, this paper uses the CNN algorithm to fuse the background and foreground after coloring, so that the edge transitions smoothly.
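A minimal sketch of the kind of smooth foreground/background fusion described here, using a feathered (Gaussian-blurred) instance mask in NumPy/SciPy; the mask, images and blur width are toy assumptions, and the paper's actual fusion network is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(foreground, background, mask, sigma=3.0):
    """Blend a colored foreground region into a colored background using a
    feathered (blurred) instance mask so the edge transitions smoothly."""
    soft = gaussian_filter(mask.astype(float), sigma)   # feather the hard mask
    soft = np.clip(soft, 0.0, 1.0)[..., None]           # broadcast over RGB channels
    return soft * foreground + (1.0 - soft) * background

# Toy data: 64x64 RGB results from the two coloring branches and a hard mask.
rng = np.random.default_rng(0)
fg = rng.random((64, 64, 3))
bg = rng.random((64, 64, 3))
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1.0                                # instance segmentation mask
print(fuse(fg, bg, mask).shape)                         # (64, 64, 3)
```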
Because the final coloring effect is difficult to measure mathematically, the richer the color, the greater the final loss, but the final effect is acceptable. Figures 5 and 6 show comparisons of some experimental results: Figure 5 shows the recoloring effect on color photos, and Figure 6 shows the coloring effect on black-and-white photos. From the perspective of recoloring color images, the images colored by the proposed method are more colorful and handle details and light-and-shadow effects better than the other algorithms. DCGAN's results are always dark, and although the algorithm in this paper does not recognize the ground in group E, the other groups of images have relatively good results.
In the colorful image coloring shown in Figure 5, the method presented in this paper has good semantic properties, vivid colors, and good restoration of the sky. The BP algorithm is also generally good, but its restoration of the sky is not accurate, and the RNN results are dull. The algorithm in this paper has a better coloring effect at the gaps between leaves and sky and is more accurate in coloring buildings, while the BP algorithm tints them green. The proposed algorithm colors the ground relatively accurately, while the BP algorithm may treat it as ocean. The coloring of several groups of results from the RNN algorithm is too dull.
As shown in Figure 6, comparison with the original picture shows that the coloring produced by the CNN model proposed in this paper is more realistic and natural; that is to say, this paper obtains the best image coloring effect.
As shown in Figure 7, by adding a few color-prompt graffiti lines to each part of the scene, the model can render the other, uncolored parts of the image area according to the prompts. Moreover, this rendering is not pure color filling: the transition of light and dark colors makes the whole coloring effect more natural and not rigid. For example, in the second group of building pictures, only a pure blue graffiti hint line is given rather than a solid color fill. However, the interaction between colors is not very obvious, and there is some room for improvement; for example, in the first set of interaction diagrams there were two yellow graffiti hint lines, and although the other parts were not covered by the yellow lines, the central sleeve of the generated athlete was also rendered yellow, so there is some gap from the expected painting effect. Nevertheless, the MSE generally showed a downward trend. Figure 8 shows the effect comparison before and after color refinement. In the CNN algorithm, each gray pixel performs a full-image search and match, and the source pixel with the smallest error is selected for matching; therefore, all pixels in the target image should be able to find matching pixels in the source image. However, we are still unable to obtain a method that is fully satisfactory in both color quality and coloring speed, and making the before- and after-refinement stages a win-win is the ultimate goal of this study. To improve quality, more and more matching guide factors have been proposed, instead of relying only on the brightness mean and brightness variance as the sole criteria for the matching search; for example, the slope and kurtosis, which capture nonlinear properties of images, can guide pixel matching. To improve speed, a tree structure is used to classify pixels as much as possible, so that the matching process has a clear goal rather than blindly carrying out a full-image search.
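To make the matching step concrete, here is a minimal sketch of luminance-statistics pixel matching with a tree structure, in the spirit of what the paragraph describes: each target pixel is matched to the source pixel with the closest local brightness mean and standard deviation using a k-d tree, and the source chroma is transferred. The window size, toy images and two-channel "chroma" representation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import uniform_filter

def local_stats(lum, win=5):
    """Per-pixel local mean and standard deviation of luminance."""
    mean = uniform_filter(lum, win)
    sq_mean = uniform_filter(lum ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return np.stack([mean, std], axis=-1)

# Toy data: the "source" is a colored image (luminance + 2 chroma channels),
# the "target" is the grayscale image to be colored.
rng = np.random.default_rng(1)
src_lum = rng.random((64, 64))
src_chroma = rng.random((64, 64, 2))
tgt_lum = rng.random((64, 64))

src_feat = local_stats(src_lum).reshape(-1, 2)
tgt_feat = local_stats(tgt_lum).reshape(-1, 2)

tree = cKDTree(src_feat)                 # tree structure instead of brute-force search
_, idx = tree.query(tgt_feat, k=1)
tgt_chroma = src_chroma.reshape(-1, 2)[idx].reshape(64, 64, 2)
print(tgt_chroma.shape)                  # transferred chroma channels for the gray image
```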
Conclusions
In recent years, with the vigorous development of computer hardware, artificial intelligence has appeared more and more frequently in people's lives. Automatic image coloring can not only add interest to everyday life but also improve the efficiency of some work, such as poster making, so gray image coloring technology has attracted the attention of many researchers. A survey of the large body of literature shows that existing coloring technology can meet most coloring needs but still has some shortcomings, such as the inability to color one or more specific areas of a picture, or to change the color of only one or more specific areas. Aiming at these deficiencies, this paper uses semantic and image fusion technology to realize multi-region coloring through image segmentation, coloring the multiple targets and the background of the image separately. Background coloring is carried out end-to-end by the CNN, while target coloring is done according to a color reference image. Target coloring is divided into two methods: one is iterative image coloring and the other is a trained image conversion network.
Automatic gray image coloring is an important research direction in the field of image processing. With the continuous development of deep learning in recent years, automatic coloring of gray images is gradually being realized with deep learning models, and simple coloring algorithms are generally inferior to deep learning models. Interactive coloring, reference-based coloring and automatic coloring can all be realized by deep learning. However, there are still some limitations in using deep learning for automatic gray image coloring; for example, the color effect is not always learned well, the recognition of line contours in gray images is not accurate, the evaluation methods are not unified, and there is no inclusive, professional gray image coloring platform.
Building on classification-based coloring networks, this paper proposes a new CNN that uses a traditional Gaussian convolution encoder and a stacked dilated convolution structure to perform automatic coloring of gray images. Compared with the results of other mainstream methods, it has advantages in the final coloring effect and in model size.
Data Availability
The dataset can be accessed upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
v3-fos-license
|
2017-04-14T08:11:48.341Z
|
2010-11-01T00:00:00.000
|
15293076
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0000899&type=printable",
"pdf_hash": "cd2b362106153b1c8bac0efb5313f686ae8b562b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46194",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "cd2b362106153b1c8bac0efb5313f686ae8b562b",
"year": 2010
}
|
pes2o/s2orc
|
Chagas Cardiomyopathy Manifestations and Trypanosoma cruzi Genotypes Circulating in Chronic Chagasic Patients
Chagas disease caused by Trypanosoma cruzi is a complex disease that is endemic and an important problem in public health in Latin America. The T. cruzi parasite is classified into six discrete taxonomic units (DTUs) based on the recently proposed nomenclature (TcI, TcII, TcIII, TcIV, TcV and TcVI). The discovery of genetic variability within TcI showed the presence of five genotypes (Ia, Ib, Ic, Id and Ie) related to the transmission cycle of Chagas disease. In Colombia, TcI is more prevalent but TcII has also been reported, as has mixed infection by both TcI and TcII in the same Chagasic patient. The objectives of this study were to determine the T. cruzi DTUs that are circulating in Colombian chronic Chagasic patients and to obtain more information about the molecular epidemiology of Chagas disease in Colombia. We also assessed the presence of electrocardiographic, radiologic and echocardiographic abnormalities with the purpose of correlating T. cruzi genetic variability and cardiac disease. Molecular characterization was performed in Colombian adult chronic Chagasic patients based on the intergenic region of the mini-exon gene, the 24Sα and 18S regions of rDNA and the variable region of satellite DNA, whereby the presence of T. cruzi I, II, III and IV was detected. In our population, mixed infections also occurred, with TcI-TcII, TcI-TcIII and TcI-TcIV, and analysis of TcI genotypes showed the presence of genotypes Ia and Id. Patients infected with TcI demonstrated a higher prevalence of cardiac alterations than those infected with TcII. These results corroborate the predominance of TcI in Colombia and show the first report of TcIII and TcIV in Colombian Chagasic patients. Findings also indicate that Chagas cardiomyopathy manifestations are more correlated with TcI than with TcII in Colombia.
Introduction
Chagas disease, caused by the parasite Trypanosoma cruzi, is a complex zoonosis that is widely distributed throughout the American continent. The infection can be acquired through triatomine faeces, blood transfusion, oral and congenital transmission, and laboratory accidents. Chagas disease represents an important public health problem, with the Pan American Health Organization estimating in 2005 that at least 7.7 million people had T. cruzi infection and another 110 million were at risk [1]. Also, immigration of infected people from endemic countries is now making Chagas disease a relevant health issue in other regions, including Europe and the United States [2]. Chagas disease comprises two stages: the acute phase occurs about one week after initial infection, and about 30-40% of infected patients develop the chronic phase of the disease, in which cardiomyopathy is the most frequent and severe clinical manifestation [2]. The T. cruzi parasite comprises a heterogeneous population that displays clonal propagation due to the different cycles of transmission, with the possibility of recombination exchange that can be found in nature and has been previously reported in vitro [3,4,5]. Recently a new nomenclature for T. cruzi has been adopted that includes six Discrete Taxonomic Units (DTUs), named T. cruzi I (TcI), T. cruzi II (TcII), T. cruzi III (TcIII), T. cruzi IV (TcIV), T. cruzi V (TcV) and T. cruzi VI (TcVI), based on different molecular markers and biological features [6]. Recent studies based on mini-exon gene sequences have shown polymorphism in this region, reporting four genotypes within TcI Colombian isolates; these genotypes have also been reported in other regions of South America, where five TcI genotypes have been detected [7]. Different molecular markers, including a set of 48 microsatellite loci, have also shown the great diversity within TcI [8,9,10,11]. Primers designed based on the sequences of TcI Colombian isolates confirmed the existence of three genotypes (Ia, Ib and Id) and a new genotype found in the Southern Cone countries named TcIe [7,12]; in addition, the use of Internal Transcribed Spacers 1 and 2 clustered the genotypes Ia, Ib and Id as being related to transmission cycles of Chagas disease [13].
Genetic variability has been clearly demonstrated in T. cruzi, with reports of a homogeneous group (TcII) and heterogeneous groups considered hybrids arising from recombination events (TcIII-TcVI) [3,4,5,14,15,16,17]. Among these hybrids, TcV and TcVI are regarded as products of recombination between TcII and TcIII, and TcIII/TcIV as potential products of recombination between TcI and TcII [5,16], although this last statement is still controversial.
The molecular epidemiology of T. cruzi may have important implications for the features of the disease. However, few correlations have been reported relating T. cruzi genetic variability to disease outcome; TcI appears more related to patients with cardiomyopathy in Colombia and Venezuela, and TcII-TcVI more related to patients with digestive syndrome (megaesophagus/megacolon) [2,18]. In Colombia, TcI is predominant in patients, insect vectors and reservoirs, but TcII has also been reported. The first description of nine chronic Chagasic patients infected with TcII was reported by Zafra et al., 2008 [19], and mixed infection with TcI and TcII in the same patient has also been reported [20]. Direct detection of T. cruzi DTUs in the blood of chronic Chagasic patients was established by amplification of the 24Sα rDNA divergent domain and the use of mitochondrial house-keeping genes [19]. In that study, molecular characterisation of T. cruzi DTUs showed that most of the patients were infected with TcI and some patients were found to be infected with TcII (9.9%). Recently, a new approach to detecting T. cruzi DTUs in chronic Chagasic patients was developed, showing that TcI was the predominant DTU; TcII was also detected, and the genetic characteristics of the TcII parasites found in Colombia were similar to those of TcII found in Bolivia and Chile [21].
The objective of our study was to characterise and determine T. cruzi DTUs in chronic Chagasic patients from Colombia and to correlate the molecular variability of the parasite with the presence or absence of cardiac disease manifestations exhibited by the patients.
Ethics statement, sample collection and DNA isolation
A total of 240 seropositive chronic Chagasic patients were included in the study, as part of the Colombian population recruited for the BENEFIT trial (BENznidazol Evaluation For Interrupting Trypanosomiasis). Samples were taken as part of the main BENEFIT trial, which has recruited to date 2150 patients from Argentina, Bolivia, Brazil, El Salvador and Colombia. Written and oral consent was obtained from all patients included in the BENEFIT trial, and the study is approved by all local and national IRBs. Furthermore, the study is approved by the Ethics Research Committee of the WHO as one of the funding agencies of the BENEFIT trial [22]. All patients, regardless of a positive or negative baseline PCR, are being followed as part of the main trial, whose outcome is a composite of the clinical events referenced in the text [22]. Following the inclusion and exclusion criteria for the BENEFIT study, all patients had cardiomyopathy, as defined by the pre-established ECG or Echo abnormalities. Twenty serologically negative control patients from non-endemic regions were also included. A 10-mL blood sample was collected from all patients and control subjects. Blood samples were mixed with an equal volume of 6 M guanidine HCl/0.2 M EDTA solution immediately after sample collection. The samples were immersed in boiling water for 15 min. After cooling, two 200-µL aliquots were taken from each patient blood lysate and successive phenol-chloroform extractions were performed on this material as previously reported [23]. The DNA was then stored at −20°C. The DNA purity and concentrations were determined using an Eppendorf Biophotometer 6131 at 260/280 nm.
Author Summary
Trypanosoma cruzi, the aetiological agent of Chagas disease, infects over 8 million people in Latin America. Currently, six genetic groups or DTUs have been identified in this highly genetically diverse parasite. Many authors have considered that establishment of the disease is driven by this genetic variability in T. cruzi, but few comparisons have been made to address this premise. We performed an analysis including 240 ascertained chronic Chagasic patients, evaluating cardiac alterations by electrocardiogram, radiology and echocardiogram. We also performed molecular characterisation on samples from these patients, showing that in Colombia T. cruzi I is predominant but others such as TcII, TcIII and TcIV can be found in low proportions as mixed infections TcI/TcII-TcIV. We conclude that TcI is more related to cardiomyopathy than TcII, and we present the first report of TcIII and TcIV in chronic Chagasic patients from Colombia. These results will help to elucidate the molecular epidemiology of T. cruzi in this country.
50 U/mL of iTaq polymerase, 6 mM MgCl2, SYBR Green I, 20 nM fluorescein); 50 pM of the TcZ1 and SatRv primers and 3 µL of DNA; the thermal profile and acquisition of fluorescence were as previously reported [27]. Molecular identification of genotypes within TcI was accomplished using primer 1-A (5′-TGT GTG TGT ATG TAT GTG TGT GTG) [12]. Each reaction was carried out in duplicate, and twenty microliters of PCR product from each reaction were analysed by electrophoresis on a 2% agarose gel and visualised by staining with ethidium bromide. Positive controls were always included in the PCR assays using the CG strain (TcI), VS strain (TcII), CM17 strain (TcIII), CANIII strain (TcIV), MN cl 2 (TcV), CL Brener strain (TcVI) and 444 (T. rangeli strain).
Characterisation of presence/absence of electrocardiographic, radiologic and echocardiogram alterations
Seventeen cardiac abnormalities were evaluated during the electrocardiographic, radiologic and echocardiographic characterisation of each patient. The 17 alterations are as follows: right bundle-branch block (1), left bundle-branch block (2), left anterior fascicular block (3), left posterior fascicular block (4), ventricular premature beats (5), first degree atrioventricular block (6), Mobitz type I atrioventricular block (7), sinus bradycardia (8), primary ST-T changes (9), abnormal Q-waves (10), low voltage QRS (11), atrial fibrillation (12), Mobitz type II atrioventricular block (13), complete atrioventricular block (14), complex ventricular arrhythmias (15), evidence of regional wall motion abnormality (16), and reduced global left ventricular function and increased cardiothoracic ratio (17) [22]. Variables were taken as categorical and the results were analysed by presence or absence of each abnormality. The prevalence of the abnormalities was determined based on the 240 patients evaluated. Independence tests using the Chi-square test (p<0.05) were performed in 20 random TcI and 20 TcII patients to find possible associations between the presence/absence of cardiac abnormalities in patients characterised as TcI, TcII-TcIV and possible mixed infections TcI/TcII-TcIV, and to reveal statistically significant differences in the effect of specific DTUs on the presence/absence of cardiac abnormalities. A Student t test (p<0.05) was performed, followed by a Tukey test (p<0.05), to assess the statistical mean differences between each cardiac abnormality and the T. cruzi DTUs. Lastly, to ensure that the results obtained were not attributable to randomness, all the results were randomized in PopTools 3.1.0 with 10,000 replicates (p<0.05). Regarding the qPCR strategy to genotype T. cruzi, this assay was only performed to confirm TcIII and TcIV according to the melting temperatures previously established (Figure 2B) [25].
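The statistical comparisons described above can be reproduced in outline with standard SciPy routines. The sketch below uses randomly generated presence/absence indicators purely as placeholders (they are not study data) and shows a chi-square test of independence, a two-sample t test, and a simple 10,000-replicate permutation check analogous to the PopTools randomization.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical presence/absence (1/0) of one cardiac alteration in 20 TcI and
# 20 TcII patients; these are random placeholders, not the study data.
tci = rng.integers(0, 2, 20)
tcii = rng.integers(0, 2, 20)

# Chi-square test of independence on the 2x2 contingency table
table = np.array([[tci.sum(), 20 - tci.sum()],
                  [tcii.sum(), 20 - tcii.sum()]])
chi2, p_chi, _, _ = stats.chi2_contingency(table)

# Two-sample Student t test on the presence/absence indicators
t_stat, p_t = stats.ttest_ind(tci, tcii)

# 10,000-replicate randomization check, analogous to the PopTools procedure
obs = tci.mean() - tcii.mean()
pooled = np.concatenate([tci, tcii])
diffs = []
for _ in range(10000):
    perm = rng.permutation(pooled)
    diffs.append(perm[:20].mean() - perm[20:].mean())
p_perm = float(np.mean(np.abs(diffs) >= abs(obs)))
print(p_chi, p_t, p_perm)
```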
T. cruzi I genotypes and TcII-TcVI DTUs identification
In the single infections, TcI genotypes were detected (95/240), and amplification results were observed for TcIa (46/95) and TcId (8/95) genotypes. In the mixed infections, only genotype Id was detected (16/22). Due to the low sensitivity of this molecular marker no amplification was observed in some samples that were positive by amplification of the intergenic region of the mini-exon gene.
When the amplification patterns of the rDNA 18S region and the 24Sα D7 domain were obtained, 24/25 patients with single infections were infected with TcII. Regarding the mixed infections identified by the mini-exon marker as TcI/TcII-TcVI, a markedly low frequency of TcII was observed: only 1/22 patients were infected with TcII, 5/22 with TcIII and 10/22 with TcIV, all in mixed infection with genotype TcId (Figure 3A-3B). The presence of TcIII and TcIV was corroborated using the melting temperature analysis in the qPCR assays based on the satellite DNA region, showing that the samples characterized as TcIII (5/22) and TcIV (10/22) were confirmed as those DTUs (Figure 1; Figure 3C).
Characterisation of cardiomyopathy and correlation with molecular characterisation of the T. cruzi infection
Statistical analyses were performed to define the correlation between T. cruzi genetic variability and the presence/absence of cardiac abnormalities in chronic Chagasic patients. The prevalence of the cardiac alterations was estimated (Figure 4). Due to the predominance of patients infected with TcI and the low number of patients infected with TcII, twenty random TcI and 20 TcII samples were selected for the statistical analyses. The independence test (Chi-square, p<0.05) showed associations between the presence/absence of cardiac alterations and infection by specific T. cruzi DTUs (p = 0.037 for TcI and p = 0.039 for TcII), and Student t-tests showed mean differences in the presence of cardiac alterations between patients characterised as TcI or TcII (p = 0.033). The prevalence of cardiac alterations in TcI and TcII was estimated based on the 20 samples previously selected (Figure 4). Significant and non-significant pair-wise mean comparisons using the Tukey test on the selected samples showed that the prevalence of most cardiac alterations was elevated depending on TcI or TcII infection (Figure 4). Likewise, to ensure that the results were not attributable to randomness, all the data were randomised in PopTools 3.1.0 with 10,000 replicates, and it was confirmed that the results were not attributable to randomness (p = 0.037). The cardiac abnormalities were also assessed by comparing the prevalence of these alterations between genotype TcIa and genotype TcId; Chi-square tests based on 8 random TcIa samples and 8 TcId samples showed an association between cardiac alterations and TcIa (p = 0.011) and no association between cardiac alterations and TcId (p = 0.061). Furthermore, the Student t test showed strong mean differences between the cardiac alterations of TcIa and TcId (p = 0.023), demonstrating that patients infected with the genotype related to the domestic cycle of transmission present more cardiac alterations than those infected with the genotype related to the sylvatic cycle of transmission.
Discussion
The main purpose of defining T. cruzi nomenclature must be related with the biological, clinical and pathological characteristics associated with specific populations of T. cruzi [14,18]. To our knowledge, few correlations have been reported. Differences in the host humoral response to specific T. cruzi genotypes have been described, although these findings were weakened by the low reliability of the diagnostic tests used, which led to a high proportion of false negatives due to variability in the T. cruzi strain used for diagnosis. The incrimination of TcI in severe forms of myocarditis in cardiac samples from chronic Chagasic patients in Argentina, and the absence of any specific clinical manifestation related to T. cruzi DTUs in Bolivian Chagasic patients, illustrate the pleomorphism of T. cruzi [23,28,29,30]. Regarding the genetic variability of the parasite, prognostic markers based on mitochondrial genes, in which specific mutations can trigger the complications of the chronic phase of disease in asymptomatic patients, have also been demonstrated [31,32]. Despite the genetic variability, it is important to consider the presence of T. cruzi clones that can be found in different tissues. Several studies have demonstrated a specific histiotropism of T. cruzi in mice, showing differences in the pathological, immunological and clinical features the parasite can elicit in the host [18,33,34]. Moreover, some authors have shown that the T. cruzi population in a patient's bloodstream could be dissimilar to the parasite population that causes tissue damage [35]. Differences were found between T. cruzi populations in the bloodstreams of patients with chronic Chagasic cardiomyopathy and of Chagasic patients without cardiomyopathy [36]. Also, microsatellite analyses have shown multiclonality in samples of heart tissue and of the bloodstream of infected patients [37,38], demonstrating that specific populations of T. cruzi can probably determine the disease outcome.
The presence of TcIII and TcIV could possibly be explained by the selection of T. cruzi during the amplification procedures; as mentioned before, the predominant DTU in Colombia is TcI, and the amplification procedures are biased towards this specific DTU, not allowing the amplification in axenic culture of other DTUs present at low parasite densities. This has been evidenced in T. cruzi isolates that are considered mixed infections, especially in congenital cases [37,39,40,41]. Another factor is that sylvatic reservoirs might be selecting specific clones of T. cruzi. Recently, a possible association between T. cruzi DTUs and sylvatic reservoirs has been shown: TcI is related to opossums in the arboreal ecotope and TcII-TcVI to armadillos in the terrestrial ecotope, where it was not possible to find opossums infected with TcII-TcVI, suggesting the possible selection of T. cruzi DTUs in the reservoirs [38,39,40]. Selection of T. cruzi populations could also be caused by the contact of reduviid insects with humans and sylvatic reservoirs during their bloodmeals, such that they can acquire different T. cruzi populations each time they feed [14,16].
Most of the patients recruited in this study came from an endemic region of Colombia (Santander) where wild reservoirs and sylvatic triatomines have been reported [19,20,42]. This diversification of sylvatic triatomines could explain the unexpected transmission of genotypes TcIII and TcIV. Also, the possible interaction of sylvatic triatomines in the domestic cycle of T. cruzi transmission might explain the appearance of TcII, TcIII and TcIV in the chronic Chagasic patients. Infection with TcII-TcIV in Rhodnius prolixus and Panstrongylus geniculatus has been reported and might explain the presence of TcII-TcIV and its association with the domestic cycle, where parasites of these DTUs are infecting the patients [43]. Some hypotheses propose that reservoirs from arboreal ecotopes, such as didelphids and primates, are always infected with TcI, whereas those associated with terrestrial ecotopes, such as armadillos (which have been found infected with TcIII) and some sylvatic rodents, are infected with TcII-TcVI [44]. Recent reports show that this distribution is not absolute, because Monodelphis brevicaudata and Philander frenata have been found infected with TcIII, and D. aurita, primates and wild non-human primates have been found infected with TcII, TcI and TcIV, respectively [45,46,47,48]. The reservoirs play an important role in the epidemiology of Chagas disease and may underlie the finding of patients infected with TcIII and TcIV, given that rodents such as P. semispinosus and Rattus rattus have been found infected with TcIII or TcIV in Colombia [49,50].
The most interesting observation is the presence of TcIII (5/22 patients) and TcIV (10/22 patients) in the chronic Chagasic Colombian patients. Hybridisation events have been observed in T. cruzi based on the use of satellite DNA, rRNA sequences and phylogenetic inferences [5,51,52,53,54]. TcIII and TcIV correspond to Zymodeme III, related to the sylvatic cycle of transmission in the Amazon basin and also related to TcI [55,56,57]. TcIII and TcIV are reported as a possible product of a recombination event between TcII and TcI. Herein, we found the presence of genotype TcId in mixed infections and a low number of patients infected with TcII. We recently discovered the existence of genotypes within TcI isolates [8,9,12]. These results have been corroborated using the internal transcribed spacer 2 (ITS-2), where three genotypes were clearly grouped [13]. A 48-locus microsatellite analysis corroborated the sylvatic and domestic-peridomestic genotypes (Ia and Id), and the recent report of genotype TcIe supports the idea that TcI is a highly diverse DTU that requires further investigation in order to uncover hidden information within this DTU [7,9]. Our results demonstrate the presence of TcIV in infected patients, as previously reported in Venezuela [58]. In addition, we report the presence of TcIV in human mixed infections with genotype TcId. Recent studies have shown the presence of TcIa and TcId genotypes in chronic Chagasic patients from Argentina; interestingly, genotype Ia is the most prevalent in the bloodstream, while Id is more prevalent in cardiac tissue explants, suggesting TcI genotype histiotropism [29]. Our results agree with these findings, as TcIa was found in 46 patients and TcId in 24 patients (8 from single infections plus 16 from mixed infections).
Statistical significance was obtained when independence tests were performed using categorical data for the presence or absence of cardiac alterations detected by electrocardiographic, radiologic and echocardiographic methods. Significant results were obtained for TcI and TcII, reflecting that the genetic variability of T. cruzi may represent an important factor for disease establishment. Moreover, the findings of this study suggest that TcI is the predominant genotype associated with manifestations of cardiomyopathy in chronic Chagasic patients. These results are consistent with reports from Argentina, where severe myocarditis was found in patients infected with TcI and moderate myocarditis was caused by TcV and TcVI, although TcII can also cause severe myocarditis of a lower grade [29]. The T. cruzi population distribution in the bloodstream and in cardiac tissue has been shown to be quite different in previous studies [18,33,34]. Therefore, studies based on paired cardiac tissue and bloodstream samples are now needed to compare the T. cruzi genotypes circulating in Colombian Chagasic patients with those probably involved in producing organ damage in infected patients.
In conclusion, we report for the first time the presence of TcIII and TcIV in chronic Colombian Chagasic patients. We also confirm the presence of TcI and TcII in chronic Chagasic patients and found that TcI is associated with more cardiomyopathy abnormalities than TcII. In addition, we describe the predominance of TcI in Colombia and the occurrence of mixed TcI/TcII-TcIV infections in the same patient. It is important to consider that our study was conducted in a restricted area and did not cover the whole T. cruzi diversity in chronic Chagasic patients; moreover, the genotypes detected in the bloodstream and the populations that cause organ damage may be dissimilar, as has been previously reported in Colombia (TcI in the bloodstream and TcII in cardiac tissue of the same patient) [20,29,33,34]. New studies covering most of the endemic areas of Colombia, together with molecular characterisation directly from infected organs, are required to determine the T. cruzi populations circulating in Colombian patients. New studies are also necessary to understand the specific T. cruzi populations that are generating the tissue damage in infected patients.
|
v3-fos-license
|
2024-07-13T06:17:31.002Z
|
2024-07-01T00:00:00.000
|
271112337
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "9fb64c3d4ee19537d71a3bcd2b254d869e3b33a8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46197",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ab75c48575a267b05490665328cdd339c7bfe37b",
"year": 2024
}
|
pes2o/s2orc
|
Trajectories in long-term condition accumulation and mortality in older adults: a group-based trajectory modelling approach using the English Longitudinal Study of Ageing
Abstract Objectives To classify older adults into clusters based on accumulating long-term conditions (LTC) as trajectories, characterise clusters and quantify their associations with all-cause mortality. Design We conducted a longitudinal study using the English Longitudinal Study of Ageing over 9 years (n=15 091 aged 50 years and older). Group-based trajectory modelling was used to classify people into clusters based on accumulating LTC over time. Derived clusters were used to quantify the associations between trajectory memberships, sociodemographic characteristics and all-cause mortality by conducting regression models. Results Five distinct clusters of accumulating LTC trajectories were identified and characterised as: ‘no LTC’ (18.57%), ‘single LTC’ (31.21%), ‘evolving multimorbidity’ (25.82%), ‘moderate multimorbidity’ (17.12%) and ‘high multimorbidity’ (7.27%). Increasing age was consistently associated with a larger number of LTCs. Ethnic minorities (adjusted OR=2.04; 95% CI 1.40 to 3.00) were associated with the ‘high multimorbidity’ cluster. Higher education and paid employment were associated with a lower likelihood of progression over time towards an increased number of LTCs. All the clusters had higher all-cause mortality than the ‘no LTC’ cluster. Conclusions The development of multimorbidity in the number of conditions over time follows distinct trajectories. These are determined by non-modifiable (age, ethnicity) and modifiable factors (education and employment). Stratifying risk through clustering will enable practitioners to identify older adults with a higher likelihood of worsening LTC over time to tailor effective interventions to prevent mortality.
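Group-based trajectory modelling is normally fitted with dedicated maximum-likelihood software; the sketch below is only a crude illustrative stand-in that clusters simulated per-wave LTC counts with k-means, so the simulated data, the five-cluster choice and the use of scikit-learn are assumptions for illustration rather than the authors' actual pipeline.

    # Illustrative stand-in for trajectory clustering: k-means on simulated
    # per-wave long-term condition (LTC) counts. This is NOT group-based
    # trajectory modelling proper, only a rough analogue for intuition.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    n_people, n_waves = 1000, 5

    # Simulate LTC counts: each person starts at a random level and accumulates
    # conditions at a person-specific rate, capped at 10 conditions.
    start = rng.integers(0, 4, size=n_people)
    rate = rng.uniform(0.0, 1.0, size=n_people)
    waves = np.arange(n_waves)
    counts = np.clip(start[:, None] + np.outer(rate, waves) +
                     rng.normal(0, 0.3, size=(n_people, n_waves)), 0, 10)

    # Cluster each person's whole trajectory into five groups.
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(counts)

    # Mean trajectory per cluster, loosely analogous to the reported groups.
    for k in range(5):
        mean_traj = counts[km.labels_ == k].mean(axis=0)
        print(f"Cluster {k}: " + " ".join(f"{v:.1f}" for v in mean_traj))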
only 5 clusters as I can see. Were the fit statistics "worsening" when adding more clusters? This could have been included in the supplementary table. I understand the model as there is a number between 0 and 10 describing MLTC at each wave of the survey. I miss detailed information about missing data. And also, on how many participants have been followed over how many waves of the survey, see comments above. Some data on this is given at the start of the results, but how many of the persons participated in only one wave? How could they be used in trajectory analyses? Table 1 gives an informative overview of the background variables related to trajectories. It is a bit confusing when the first column gives percentages vertically but the others horizontally. Could this be indicated in some way? The identified clusters were interesting. They seem parallel to each other except for the No-LTC group which has no increase. Aren't they in sum showing that there is a similar growth in all groups concerning the number of chronic conditions? What does this modeling add to the one-time counting? It is peculiar that there is no group that seems to have a stable number of chronic conditions, except for those with no conditions. The discussion should elaborate on these possible problems with using this trajectory modeling. On page 14 line 18 it is stated "An interesting finding was that clusters with different initial levels and rates of change in MLTC indicating individual differences in the process of health deterioration." As mentioned above, doesn't this show that the numbers are growing with the same slope by time for all, except for those who were "well" when included? This is, if I understand this right, not quite what is stated in the conclusion in the abstract: "The development of MLTC and the increase in the number of conditions over time follow distinct trajectories." It may depend on when you start following the person? There are shown clear associations between cluster and ethnicity, education, and employment, well-known risk factors. The gender difference is more questionable as there is only one significant association between sex and clustering, with a CI that is very near to 1, and there is no clear trend. Perhaps this finding is given too much weight? The discussion ends with arguing that this study can help provide knowledge for policy and planning, but one could question the novelty of the result. The topic of MLTC and challenges for health care are surely a very central issue for studying, and modeling latent clusters is an interesting method to gain new knowledge, but maybe this way of handling data in the present study did not add much to the well-known risk factors of ethnic minorities and low socioeconomic status?
REVIEWER
DAISUKE KATO Mie University Graduate School of Medicine, Department of Family Medicine REVIEW RETURNED 29-Aug-2023
GENERAL COMMENTS
I appreciate very much the opportunity to review this paper, which is of great academic value. The authors have, in my opinion, addressed a very important research topic in this study.
In this study, the authors stated that they have succeeded in identifying an association between multiple long-term conditions (MLTC) trajectories and mortality. I agree with that.
On the other hand, I would like you to make one change to the description of the paper that I hope will enhance the value of this study.
In the abstract, the authors stated in the objective part that the aim was to clarify the association between clusters and mortality, but in the conclusion part, they emphasized the importance of identifying older people at high risk of MLTC (i.e., prone to increasing numbers of diseases over time) and providing them with effective interventions.
In other words, the current description will make it difficult for readers to find consistency in the aims and conclusions of the abstract. I would like the authors to revise this point.
There were no particular points of concern with regard to the content of the main text.
No Reviewer comments Author response 1
The paper "Trajectories of multiple long-term conditions and mortality in older adults: A retrospective cohort study using English Longitudinal Study of Ageing (ELSA)" is based on repeated surveys among older adults (defined as above 50 years), including approx.15 000 persons.The paper is overall well written and concise about this very important theme.
Thank you for the time to review our manuscript.
2
The introduction is brief but to the point, introducing MLTC as an increasing challenge and pointing out a lack of longitudinal studies on how patterns of diseases evolve over time. It could have given some more details about the known risk factors. They also describe the gap in knowledge about trajectories as "critical", which should be justified with more theories about the possible usefulness of identifying such trajectories. The introduction presents aims to (1) classify older adults with MLTC into clusters based on the accumulation of conditions as trajectories over time; (2) characterize clusters, and (3) study associations between derived clusters and all-cause mortality.
Thank you for these insights. We have kept the introduction relatively brief and referenced a range of key sources which shed further light on known risk factors.
We take on board the reviewer's comment about the gap in knowledge being critical. Whilst there is a gap, we acknowledge that a 'critical gap' may not reflect the current state of knowledge in this field. We have removed the word 'critical' from our text.
We acknowledge the reviewer's question about known risk factors regarding MLTC. We think that our study addresses this in both the discussion and conclusion sections of the manuscript.
3
Concerning the number of participants: since approx. 12,000 were included from the start, how are the persons included later than 2002 handled? How many persons were participating in the 2004/5 wave used as the baseline in this study? What about participants included later, and those dying, are they all included in the analyses? And how are the missing answers treated? Thank you. We mention in the "Data sources and study population" section some general information about the ELSA dataset. We mention that "it included 12,099 people at study entry in 2002 (wave 1)", so this is not our study baseline population. Below, we mention that our study population was drawn from waves 2 to 6, as wave 2 was the first time point collecting long-term conditions and wave 6 the most recent wave with available data on all-cause mortality status.
Our study is cross-sectional, so we captured the number of long-term conditions of all participants at a single time point, irrespective of whether they were included in wave 2 or later.
With respect to the missing number of long-term conditions, we excluded these participants from the analyses (n=123). This information has already been included in the manuscript.
4
In the MLTC paragraph (P7L28++) 10 conditions are listed, and these rather few conditions are mentioned as a limitation earlier, but in line 35++ a combination of more specific diseases is described. Was there a more detailed list of diagnoses available?
Thank you for your comment. The 10 LTC we have included in our analysis are the following: hypertension, diabetes, cancer, lung disease, cardiovascular disease, stroke, mental health disorder, arthritis, Parkinson's disease, and dementia.
However, there were some other LTC, including depression, asthma, Alzheimer's disease, heart attack, angina, heart murmur, abnormal heart rhythm, and congestive heart failure, that were combined into the corresponding 10 categories above as the numbers were small, as we have already made clear in the manuscript.
No more information is provided in ELSA regarding a more detailed list of diagnoses. Covariates (P8L4++) are used from baseline; how is this handled when new persons were included?
In the statistical analyses part, the group-based trajectory modeling (GBTM) and fit statistics are described adequately. On page 12 L3 six clusters are mentioned but five were chosen, but Figure 2 shows only 5 clusters as far as I can see.
Were the fit statistics "worsening" when adding more clusters? This could have been included in the supplementary table.
Thank you. All the covariates were handled at the baseline for each participant's relevant baseline time point.
We have added in the supplements the statistics for the sixth cluster, which were worse than for the fifth cluster, so this is the reason why we chose five clusters. I understand the model as there is a number between 0 and 10 describing MLTC at each wave of the survey. I miss detailed information about missing data. And also, on how many participants have been followed over how many waves of the survey, see comments above. Some data on this is given at the start of the results, but how many of the persons participated in only one wave? How could they be used in trajectory analyses? Thank you. With respect to missing covariates at baseline, we used data provided in the nearest subsequent waves. With respect to the missing number of LTC, we excluded these participants from the analyses (n=123). All this information has already been included in the manuscript.
We used the participants with participation in at least 2 waves to be able to incorporate them into the trajectory model. There were 4,965 participants who participated in all waves and 5,555 who participated in at least 2 waves. Table 1 gives an informative overview of the background variables related to trajectories. It is a bit confusing when the first column gives percentages vertically but the others horizontally. Could this be indicated in some way? Thank you for your comment. We have added the following footnote in Table 1 to make that clearer: "Note: The percentages in the "total" column are presented vertically, whereas in the other five columns horizontally." The identified clusters were interesting. They seem parallel to each other except for the No-LTC group which has no increase. Aren't they in sum showing that there is a similar growth in all groups concerning the number of chronic conditions? What does this modeling add to the one-time counting? They might be similar to each other, but the number of long-term conditions they start and end with is different, so this is what our study shows: the development of multimorbidity and the increase in the number of conditions over time follow distinct trajectories. It is peculiar that there is no group that seems to have a stable number of chronic conditions, except for those with no conditions. The discussion should elaborate on these possible problems with using this trajectory modeling.
Thank you for your comment. There is no group with a stable number of conditions as the study population is older people and it is anticipated that the mean number of conditions will increase as we follow them over time (waves). We have added the following text in the discussion: "…or due to the older population as it is anticipated that the mean number of conditions will increase as we follow them over time (waves)." On page 14 line 18 it is stated "An interesting finding was that clusters with different initial levels and rates of change in MLTC indicating individual differences in the process of health deterioration." As mentioned above, doesn't this show that the numbers are growing with the same slope by time for all, except for those who were "well" when included? This is, if I understand this right, not quite what is stated in the conclusion in the abstract: "The development of MLTC and the increase in the number of conditions over time follow distinct trajectories." It may depend on when you start following the person? Thank you. We have removed the word "increase" as you stated the slopes are similar. However, the development in the mean number of conditions is different.
Clear associations are shown between cluster and ethnicity, education, and employment, which are well-known risk factors. The gender difference is more questionable as there is only one significant association between sex and clustering, with a CI that is very near to 1, and there is no clear trend. Perhaps this finding is given too much weight? Thank you for your comment. We have revised the abstract and the key findings in the discussion, not focusing on this specific finding as you suggested.
The discussion ends with arguing that this study can help provide knowledge for policy and planning, but one could question the novelty of the result. The topic of MLTC and challenges for health care are surely a very central issue for studying, and modeling latent clusters is an interesting method to gain new knowledge, but maybe this way of handling data in the present study did not add much to the well-known risk factors of ethnic minorities and low socioeconomic status? Thank you for this comment. A key aspect we present in our paper, as the reviewer identifies, is the method used in the study to model clusters. In this context, we think that this work does add to the literature on known risk factors, although we recognise that this is something that should not be overstated. We have been careful in this paper not to overstate our findings and have situated these in the context of existing literature. In the limitations, we have been careful to acknowledge that 'the results of this study should be interpreted with some caution,' although we do feel that our analysis does add to the current evidence base and as such, it is an additional contribution to research and future policy and planning in this field. We have amended the wording in the last paragraph of the manuscript to frame, more precisely, our conclusions: 'Considering LTC clusters has potential to enable future researchers and practitioners to provide evidence in identifying older adults in England at a higher risk of worsening MLTC over time and further tailoring effective interventions for at-risk individuals.' Reviewer: 2 13 I appreciate very much the opportunity to review this paper, which is of great academic value. The authors have, in my opinion, addressed a very important research topic in this study. In this study, the authors stated that they have succeeded in identifying an association between multiple long-term conditions (MLTC) trajectories and mortality. I agree with that. On the other hand, I would like you to make one change to the description of the paper that I hope will enhance the value of this study. In the abstract, the authors stated in the objective part that the aim was to clarify the association between clusters and mortality, but in the conclusion part, they emphasized the importance of identifying older people at high risk of MLTC (i.e., prone to increasing numbers of diseases over time) and providing them with effective interventions. In other words, the current description will make it difficult for readers to find consistency in the aims and conclusions of the abstract. I would like the authors to revise this point. Thank you for taking the time to review our paper. The association with mortality is mentioned in the objective part as the 3rd objective. The 1st is to classify older adults into clusters based on accumulating long-term conditions (LTC) as trajectories, and the 2nd to characterise these clusters. Thus, in the conclusion, we state the message of this study regarding the first 2 objectives. We have now added this sentence in terms of the 3rd objective (mortality): "Stratifying risk through clustering will enable practitioners to identify older adults with a higher likelihood of worsening LTC over time to tailor effective interventions to prevent mortality". 14 There were no particular points of concern with regard to the content of the main text.
Thank you for the time to review our manuscript.
VERSION 2
GENERAL COMMENTS
In my opinion, the introduction is still short and adding some more information about prior knowledge of the risk factors addressed in this study in the text would be useful.
Why is this study now reclassified as a cross-sectional study? Isn't this a longitudinal study, at least concerning trajectories?
The paragraph on multimorbidity is still unclear. Is the longer list what the participants were asked about? And then you combine them in the ten groups in this present study? I see that you have a lot of references also here, but this could simply be clarified in the text by rephrasing.
Regarding the number of participants, it's still unclear to me how many participants were included from the different waves, and nothing is mentioned about this in the manuscript and only partly in the response letter. The estimation of the trajectories is probably related to how many observations you have for each participant. This information should be given in the paper. And also that only two waves need to be included. Was no one excluded because they only participated once? Regarding mortality, data seems to be collected from waves 2, 3, 4 and 6 (why not 5?). I cannot find more information on whether all these deaths were included in the prediction model. Maybe I am misunderstanding the model, but it seems like the "future" trajectory is used as a predictor. This model should be explained better. Although this modelling is elegant and it's interesting to look at possible latent patterns of disease development, there is still, as mentioned in the first review, a question about the usefulness of this model for practical policy-making or clinical practice. There seems to be a main predictor for the "class-membership": the number of chronic conditions at baseline. During the study period approximately one condition is added in all groups (except one). In the introduction, it is stated that "Understanding the trajectory that an older adult will follow in the progression towards an increased number of LTC could help predict when intervention is needed and inform targeted and earlier preventive interventions." I'm still not convinced that this model with trajectories could help in this case beyond what's known before based on cross-sectional study of chronic conditions. This is a challenge to the authors.
VERSION 2 -AUTHOR RESPONSE No Reviewer comments Response 1
In my opinion, the introduction is still short and adding some more information about prior knowledge of the risk factors addressed in this study in the text would be useful.
Thank you for this comment. We have added the following text about prior knowledge of known risk factors.
There are a range of risk factors for multimorbidity, although these may vary 'quantitively and qualitatively across life stages, ethnicities, sexes, socioeconomic groups and geographies' (9). The most significant risk factor in multimorbidity, in virtually all contexts, is older age (9,10).
Other documented risk factors include low education, obesity, hypertension, depression, and low physical function, which were generally positively associated with multimorbidity (10).
2
Why is this study now reclassified as a cross-sectional study? Isn't this a longitudinal study, at least concerning trajectories?
Thank you for your comment. As per the other reviewer's request, we have changed the title and text to cross-sectional. We have not followed up the patients but examined the trajectories based on 5 specific time points (Waves 2 to 6). This is a repeated cross-sectional study. If the reviewer would like it changed back to longitudinal, please can the editor kindly liaise with both reviewers to find consensus on how to proceed with these differing views on the terminology used. The paragraph on multimorbidity is still unclear. Is the longer list what the participants were asked about? And then you combine them in the ten groups in this present study? I see that you have a lot of references also here, but this could simply be clarified in the text by rephrasing.
The following ten conditions were included: hypertension, diabetes, cancer, lung disease, cardiovascular disease, stroke, mental health disorder, arthritis, Parkinson's disease, and dementia. We have added additional clarification to the MLTC paragraph. The references are needed to justify the availability and the selection of the conditions; they also provided much needed additional detail on the MLTC selection. Regarding the number of participants, it's still unclear to me how many participants were included from the different waves, and nothing is mentioned about this in the manuscript and only partly in the response letter. The estimation of the trajectories is probably related to how many observations you have for each participant. This information should be given in the paper. And also that only two waves need to be included. Was no one excluded because they only participated once?
Thank you. This is an open cohort. The number of participants is different from wave to wave. There were 9,170 participants in wave 2 and we identified 15,091 individuals participating in at least one wave during the follow-up period. The median number of observations was 4 and everyone included in the model had at least 2 observations. Regarding mortality, data seems to be collected from waves 2, 3, 4 and 6 (why not 5?). I cannot find more information on whether all these deaths were included in the prediction model. Maybe I am misunderstanding the model, but it seems like the "future" trajectory is used as a predictor. This model should be explained better.
Thank you for this comment. There was no mortality information for wave 5 in the dataset, hence it has not been included. We have clarified this in the text.
We state in the manuscript "All-cause mortality was reported by end-of-life interviews on waves 2, 3, 4 and 6 with relatives and friends after death." Then, for the identified trajectory clusters, we estimated the odds of death compared to the relatively healthy cluster. Although this modelling is elegant and it's interesting to look at possible latent patterns of disease development, there is still, as mentioned in the first review, a question about the usefulness of this model for practical policymaking or clinical practice. There seems to be a main predictor for the "class-membership": the number of chronic conditions at baseline. During the study period approximately one condition is added in all groups (except one). In the introduction, it is stated that "Understanding the trajectory that an older adult will follow in the progression towards an increased number of LTC could help predict when intervention is needed and inform targeted and earlier preventive interventions." Thank you for this insight. We do not wish to overstate our research findings. In this respect we have removed the sentence: "Understanding the trajectory that an older adult will follow in the progression towards an increased number of LTC could help predict when intervention is needed and inform targeted and earlier preventive interventions." Thank you for this comment.
Thank you for giving us the opportunity to clarify further. Initially, we had specified that this is a longitudinal study; however, after incorporating reviewer 2's comments we changed that. However, we agree with you, and we have changed all the text accordingly.
This is a longitudinal study in which we analyse repeatedly collected data from the same population over an extended period of time.
2
The next point is regarding the included participants. The authors state: "There were 9,170 participants in wave 2 and we identified 15,091 individuals participating in at least one wave during the follow-up period". As this was an open cohort, there seem to be > 5,000 persons that were not part of the baseline (N = 9,170), but only 129 of the 15,091 were not included, and no one because they participated in only one wave. Was there no one who participated in only one wave and not at baseline? What happened to a person who did not participate at baseline and in one wave later, is this person not among the 14,962? If so, the statement above is not correct. It is probably possible to be more precise here. And still, I do not understand why the authors are not including information about the distribution of the number of waves for the included participants, as it is mentioned in the response letter at least partly.
Thank you for this comment.
There were 9,170 participants in wave 2, and we identified 15,091 individuals participating in at least one wave during the follow-up period. Six participants were excluded, as they had no information on LTC. Then, after excluding those (n = 123) with missing data on covariates, 14,962 people were included in the final analysis.
If a person did not participate at baseline and in one wave later then this person was also included in the analysis.
Yes, there were some participants not included in baseline but at a wave after baseline.
As this was an open cohort, there seem to be > 5,000 persons that were not part of the baseline (N = 9,170), but only 129 of the 15,091 were not included, and no one because they participated in only one wave. Was there no one who participated in only one wave and not at baseline? What happened to a person who did not participate at baseline and in one wave later, is this person not among the 14,962? If so, the statement above is not correct. It is probably possible to be more precise here. And still, I do not understand why the authors are not including information about the distribution of the number of waves for the included participants, as it is mentioned in the response letter at least partly. The participants are followed over several waves (most of the participants in 4 or more) and data analyses are based on data following these individuals, even if not all are found at all waves. Isn't this a longitudinal study, as the authors wrote from the start? I just ask, and think the authors are the ones that must argue for what is the best description.
|
v3-fos-license
|
2022-03-08T16:27:49.477Z
|
2022-03-03T00:00:00.000
|
247281040
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.atlas-publishing.org/index.php/AJB/article/download/229/204",
"pdf_hash": "f46832533a149a238a62db89e6a92cc279bd2de4",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46198",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "49d362e1718babce7a9bebc73106c438ccb50bcc",
"year": 2022
}
|
pes2o/s2orc
|
Effect of Different Ethephon Concentrations on Shiss Removal in 'Khlass' and 'Sukkary' Date Palm Varieties
After pollination, aborted date fruits known as tricarpel or shiss remain on the bunch in most varieties and compete with fruits for water and nutrients. In 'Sukkary', they drop at the end of the kimri stage, while in 'Khlass' they remain until harvest. Most of them remain as bisr and only a few turn into tamr, which is not appreciated in the date market; they can only be used as paste, which has a low price compared to dates. In an attempt to get rid of shiss, we sprayed Ethephon at different concentrations on bunches of 'Sukkary' and 'Khlass' after fruit set, at the hababook stage. Besides the shiss drop, undesirable fruit drop also occurs. We are looking for the optimum Ethephon concentration at which shiss drop exceeds fruit drop. In 'Khlass', the Ethephon concentration of 800 ppm showed the highest shiss drop (81%) together with a fruit drop of 20% occurring at the same time, while in 'Sukkary', the concentration of 600 ppm was the best, giving a shiss drop of 44% together with a fruit drop of 12%. We consider that the concentration of 800 ppm at the hababook stage is the ideal concentration to generate optimum shiss drop with a reasonable percentage of fruit drop. We therefore highly recommend a trial with this concentration on 'Sukkary' as well.
Introduction
Ethephon, or 2-chloroethylphosphonic acid (CEPA), is a systemic plant growth regulator which, in its liquid state at the proper pH, does not yield ethylene; however, when the pH is elevated, it breaks down to form ethylene (Arteca, 1996). At pH higher than 4, it breaks down into ethylene, chloride and phosphate ions. It stimulates endogenous ethylene production by releasing ethylene in the plant tissue, as the cell cytoplasm has a pH higher than 4 (Nicotra, 1982).
The main role of ethylene is to make changes in fruit texture, softening, colour, and other processes involved in ripening. It is also known as the aging hormone in plants. It is well known that Ethephon can promote fruit abscission, and Ethephon has performed well as a fruit-thinning agent for many crops (Abeles et al., 1992). Ebert and Bangerth (1982) reported that ethylene inhibited the synthesis and translocation of Indole-3-Acetic Acid (IAA) within the fruits, thus reducing sink strength and ultimately inducing the separation area in the peduncle, which causes fruit drop (Roberts et al., 2002).
El Hamadi et al. (1983) used Ethephon at different concentrations from 200 to 400 ppm after fruit set and deduced that the level of thinning increased with increasing concentration. Mohamed et al. (2015) concluded that Ethephon at 1,000 ppm, applied ten days after pollination, is suitable for obtaining an economic yield with the best fruit quality. Bakr et al. (2006) tested the effect of Ethephon on fruit thinning compared with cytophex at different concentrations and dates of application on the 'Samany' date palm variety. They concluded that 'Samany' fruit set was decreased when Ethephon was sprayed 18 days after pollination, especially at 300 ppm.
No study has been done before on the use of Ethephon to remove aborted date fruits, known as tri-carpel or shiss, that remain on the bunch and compete with fruits for water and nutrients. That was the objective of this investigation.
Materials and Methods
Location of the experiment: This trial was carried out in the Experimental farm "Naam" of Yousef Bin Abdul Latif and Sons Agriculture Co. Ltd. (YALA) in Qassim, Saudi Arabia.
The weather conditions of the farm during the 12 days of the experiment on each of the two date varieties 'Sukkary' and 'Khlass' are summarized in the Figures 1 and 2 as recorded by the weather station of the farm.
For 'Sukkary' variety, during the 12 days of study (6-18 May 2019), the reference evapotranspiration (ET0) ranged between 4.3 and 5.8 mm day-1, the cumulative precipitation was 2 mm and the maximum wind speed was 5.5 m/s. The maximum temperature recorded during the day ranged between 35.4 and 41.2°C (Figure 1).
For 'Khlass' variety, during the 12 days of study (2-14 July 2019), the reference evapotranspiration (ET0) ranged between 5.4 and 6.7 mm day-1, there was no precipitation during this period, and the maximum wind speed was 3.3 m/s. The maximum temperature recorded during the day ranged between 43 and 45.5°C (Figure 2).
The Experiment: The spray started first on 'Sukkary', as it is an early maturing variety, then on 'Khlass'. 'Sukkary' was sprayed on 6 May 2019 and the evaluation was done 12 days later on 18 May 2019. 'Khlass' was sprayed on 2 July 2019 and the evaluation was done 12 days later on 14 July 2019.
The concentrations of Ethephon applied on 'Sukkary' were: 1000 ppm, 600 ppm and 400 ppm in addition to the control where only pure water is sprayed. After evaluation of results on 'Sukkary', we changed the concentrations on 'Khlass' to respectively 1000 ppm, 800 ppm, 600 ppm in addition to pure water as a control.
Three trees per variety were used for this experiment. One bunch per tree per treatment was sprayed, giving a total of three bunches per treatment.
Before the spray, the total numbers of fruits and shiss were counted. After the spray, the bunches were evaluated 12 days later by counting the shiss and fruits that had dropped.
Results and Discussion
When bunches of 'Sukkary' were sprayed with 1,000 ppm of Ethephon, 94% of shiss dropped together with 54% of fruits (Table 1). At 600 ppm Ethephon, 44% of the shiss dropped, against 12% of fruit drop. At the concentration of 400 ppm Ethephon, 28% of the shiss dropped and 17% of fruits. For the control, where pure water was sprayed on the bunches, 4% of the shiss dropped, together with 8% of fruit drop.
The concentration of 1,000 ppm Ethephon generated a desirably high shiss drop, significantly different from the one generated by the concentration of 600 ppm Ethephon, but unfortunately it did the same for fruit drop (Table 1). This is why we recorded a high correlation between Ethephon concentration on one side and shiss drop (R² = 0.9902) and fruit drop percentages (R² = 0.8164) on the other side (Figure 3). Therefore, we have to look for an optimal concentration of Ethephon that generates a significant increase in shiss drop with a lower percentage of fruit drop. It seems that an Ethephon concentration between 1,000 ppm and 600 ppm (800 ppm, for example) might be the optimal concentration to be tried in order to get more shiss drop and, at the same time, less fruit drop. For this reason, we replaced the 400 ppm concentration of Ethephon by 800 ppm in the same season in the next experiment on 'Khlass', which flowers after 'Sukkary'. On the other hand, when bunches of 'Khlass' were sprayed with 1,000 ppm of Ethephon, a total of 89% of shiss dropped together with 37% of fruits (Table 2). At 800 ppm Ethephon, 81% of shiss dropped, which is not significantly different from the shiss drop percentage in the case of 1,000 ppm Ethephon. The concentration of 800 ppm Ethephon also generated 20% of fruit drop, statistically similar to the fruit drop in the case of 600 ppm Ethephon (24%). At the concentration of 600 ppm Ethephon, 64% of the shiss dropped and 24% of fruits. For the control, where pure water was sprayed on the bunches, 4% of the shiss dropped, with no drop recorded in the fruits.
In 'Khlass', the Ethephon concentration of 800 ppm showed the optimum balance between shiss drop (81%) and fruit drop (20%), as it generated a shiss drop percentage similar to that generated by the highest Ethephon concentration (1,000 ppm), while keeping the undesirable fruit drop percentage (20%) similar to that generated by the lower 600 ppm concentration (24%). This was reflected in a strong correlation between Ethephon concentration and shiss drop (R² = 0.9687) (Figure 4a) and a low correlation between Ethephon concentration and fruit drop (R² = 0.5556) (Figure 4b).
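The reported correlations can be reproduced in spirit with a simple linear fit; the sketch below uses the 'Khlass' percentages quoted in the text, and the exact R² values may differ slightly from the paper's figures depending on how the regression was set up.

    # Sketch: linear fit of drop percentage against Ethephon concentration
    # for 'Khlass', using the percentages quoted in the text.
    import numpy as np

    conc = np.array([0, 600, 800, 1000])        # ppm (0 = water control)
    shiss_drop = np.array([4, 64, 81, 89])      # %
    fruit_drop = np.array([0, 24, 20, 37])      # %

    def r_squared(x, y):
        # Coefficient of determination of a simple linear regression of y on x.
        slope, intercept = np.polyfit(x, y, 1)
        y_hat = slope * x + intercept
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1 - ss_res / ss_tot

    print(f"R^2 concentration vs shiss drop: {r_squared(conc, shiss_drop):.4f}")
    print(f"R^2 concentration vs fruit drop: {r_squared(conc, fruit_drop):.4f}")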
In a chemical thinning experiment, Mohamed et al. (2015) sprayed Ethephon ten days after pollination (10 DAP) at 500 and 1,000 ppm on the 'Khlass' and 'Ruzeiz' date varieties. When the evaluation was made two months after pollination, they found that the fruit drop in both 'Khlass' and 'Ruzeiz' was not concentration-dependent. They reported a fruit drop in 'Khlass' equal to 45.5% and 38.6%, respectively, for 500 ppm and 1,000 ppm of Ethephon, and in 'Ruzeiz' equal to 16.3% and 17.9%, respectively, for 500 ppm and 1,000 ppm of Ethephon. The effect of the increase in their Ethephon concentration was only seen at harvest, when they evaluated the fruits retained on the bunch, which in 'Khlass' was 63.9% and 44.9%, respectively, for the 500 ppm and 1,000 ppm Ethephon treatments, and in 'Ruzeiz' was 77.1% and 62.5%, respectively, for the 500 ppm and 1,000 ppm Ethephon treatments. Ghazzawy et al. (2019) concluded for 'Khlass' that Ethephon applied at different concentrations 5 days after pollination (DAP) generated an average of 42.1% of fruit drop against 43.4% when applied 10 days after pollination. They also reported, for both application times, that the greater the concentration of Ethephon, the lower the fruit drop becomes. When Ethephon is applied 5 DAP, the fruit drop is 50.1%, 40.2%, 35.8% and 36%, respectively, for the control, 100 ppm, 200 ppm and 300 ppm of Ethephon.
Conclusion
To remove aborted fruits of date palm (known as tri-carpel or shiss) that remain on the bunch and compete with the fruits for water and nutrients, we sprayed Ethephon at the hababook stage at different concentrations on bunches of the 'Sukkary' and 'Khlass' varieties.
In 'Khlass', the Ethephon concentration of 800 ppm showed the optimal drop (more shiss, fewer fruits), equal to 81% for shiss and 20% for fruits, while in 'Sukkary', the concentration of 600 ppm was optimal, with a shiss drop of 44% and a fruit drop of 12%.
We consider that the concentration of 800 ppm at the hababook stage might be the ideal concentration to generate an optimal shiss drop with a reasonable percentage of fruit drop. We therefore highly recommend a trial using this concentration on 'Sukkary'.
|
v3-fos-license
|
2020-05-04T19:02:08.829Z
|
2020-04-21T00:00:00.000
|
218485641
|
{
"extfieldsofstudy": [
"Medicine",
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.medrxiv.org/content/medrxiv/early/2020/05/09/2020.04.28.20075036.full.pdf",
"pdf_hash": "d8b782740e3caca7166a3a93e90cb713ec9b978b",
"pdf_src": "MedRxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46201",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "d8b782740e3caca7166a3a93e90cb713ec9b978b",
"year": 2020
}
|
pes2o/s2orc
|
A simulation-based procedure to estimate base rates from Covid-19 antibody test results I: Deterministic test reliabilities
We design a procedure (the complete Python code may be obtained at: https://github.com/abhishta91/antibody_montecarlo) using Monte Carlo (MC) simulation to establish the point estimators described below and confidence intervals for the base rate of occurrence of an attribute (e.g., antibodies against Covid-19) in an aggregate population (e.g., medical care workers) based on a test. The requirements for the procedure are the test's sample size (N) and total number of positives (X), and the data on test's reliability. The modus is the prior which generates the largest frequency of observations in the MC simulation with precisely the number of test positives (maximum-likelihood estimator). The median is the upper bound of the set of priors accounting for half of the total relevant observations in the MC simulation with numbers of positives identical to the test's number of positives. Our rather preliminary findings are: The median and the confidence intervals suffice universally; The estimator X/N may be outside of the two-sided 95% confidence interval; Conditions such that the modus, the median and another promising estimator which takes the reliability of the test into account, are quite close; Conditions such that the modus and the latter estimator must be regarded as logically inconsistent; Conditions inducing rankings among various estimators relevant for issues concerning over- or underestimation.
Introduction
The Corona crisis revealed several bottle necks regarding testing. Many of these bottle necks are physical, but one is cognitive: how to interpret the results of a test. Medical experts seem to have problems in interpreting and combining statistical information (cf., e.g., Uffrage et al. [2000]). They, as well as politicians, journalists, or the general public, may suffer from the so-called base-rate fallacy (cf., Bar-Hillel [1980]): The base-rate fallacy is people's tendency to ignore base rates in favor of, e.g., individuating information (when such is available), rather than integrate the two. This tendency has important implications for understanding judgment phenomena in many clinical, legal, and social-psychological settings.
The base rate in the quote above can be associated with the incidence of an attribute in a larger population, such as the occurrence of antibodies against the Corona virus in a certain region or profession, breast cancer among females, or Down syndrome among unborn children with mothers aged 41. The individuating information in the quote above can be associated with information obtained from a(n individual) test (result).
A widely accepted technique integrating the two kinds of information mentioned involves Bayesian reasoning, in which a prior distribution (base rate) is updated on the basis of information gained from a (possibly imperfect) test, such that the latter can be interpreted on an individual level. It is safe to say that this technique is not very well known throughout the various scientific communities, let alone to the general public. It is also safe to say that the technique yields counter-intuitive answers. There are at least two sides to this science-versus-intuition gap: on the one hand human intuition seems underrated and should be taken more seriously, and on the other intuition can be helped by representing statistical data in a more user friendly manner (cf., e.g., Cosmides & Tooby [1996], Gigerenzer & Hoffrage [1995]).
The following stylized problem has been used recently for didactic purposes to inform the general public about the limited use of testing in case the general population has a low incidence of an attribute (cf., Volkskrant [2020]).
Example 1. A test for antibodies against Corona has the following reliability: if a person really has antibodies, the test gives a positive result with 75% probability, hence the test gives a negative result with the complementary probability, i.e., 25%; if a person really does not have antibodies, the test gives a negative result with 95% probability, hence the test gives a positive result with the complementary probability of 5%. This information can be summarized as follows:

                  REAL: Pos    Neg
    TEST: Pos          0.75   0.05
    TEST: Neg          0.25   0.95

The number 0.05 is also known as the rate of false positives (a.k.a. type I error rate), and the number 0.25 is known as the rate of false negatives (a.k.a. type II error rate).
Now, suppose 2% of the general public have antibodies against Corona. This is the base rate (a.k.a. prior in statistical jargon), and we test 10,000 people taking all these probabilities mentioned as given and (exactly) true. Then, the following natural question arises.
• How many people will test positive (in expectation)?
If 10,000 people are tested, then approximately 200 will have antibodies for real, and the complementary number 9,800 will not. The two numbers above the matrix (shown below) represent the expected number of people who have antibodies against Corona (left) and those who do not (right); these numbers may be recovered from the matrix by adding the numbers in the corresponding columns. The two numbers to the right of the matrix represent the expected numbers of people who receive a positive test result, i.e., 640, and a negative one, i.e., 9,360. These numbers are obtained by adding the numbers in the corresponding row of the matrix.

                   REAL: Pos     Neg
                         200    9800
    TEST: Pos            150     490      640
    TEST: Neg             50    9310     9360
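A short computation of these expected counts, assuming the 2% prior and the reliability numbers of Example 1, is sketched here in Python (the language of the paper's accompanying code); the variable names are illustrative choices.

    # Expected test outcomes for N people, a base rate (prior) of antibodies,
    # and the reliability numbers of Example 1.
    N = 10_000
    prior = 0.02            # base rate of antibodies
    m11, m12 = 0.75, 0.05   # P(test pos | antibodies), P(test pos | no antibodies)

    with_ab = prior * N                 # 200 people truly with antibodies
    without_ab = (1 - prior) * N        # 9800 people truly without

    true_pos = m11 * with_ab            # 150
    false_pos = m12 * without_ab        # 490
    false_neg = (1 - m11) * with_ab     # 50
    true_neg = (1 - m12) * without_ab   # 9310

    print("expected positives:", true_pos + false_pos)   # 640
    print("expected negatives:", false_neg + true_neg)   # 9360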
We now continue with an analysis based on Bayesian reasoning in order to make sense of these numbers, to answer the ensuing natural questions.
• What is the probability that a person truly has antibodies if tested positive?
• What is the probability that a person truly has antibodies if tested negative?
The probability that a person really has antibodies if tested positive is approximated by 150/640, i.e., roughly 0.234. This leaves a lot of room for doubt and insecurity, as the probability that the test result is correct is less than 24%. This means that the vast majority of people receiving a positive test result receive a misleading diagnosis.
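The same numbers give the posterior probabilities directly via Bayes' rule; the sketch below simply restates the computation in code.

    # Posterior probability of antibodies given a positive (or negative) test,
    # via Bayes' rule with the numbers of Example 1.
    prior = 0.02
    m11, m12 = 0.75, 0.05   # P(pos | antibodies), P(pos | no antibodies)

    p_pos = prior * m11 + (1 - prior) * m12           # P(test positive) = 0.064
    p_ab_given_pos = prior * m11 / p_pos              # about 0.234

    p_neg = 1 - p_pos                                 # 0.936
    p_ab_given_neg = prior * (1 - m11) / p_neg        # about 0.0053

    print(f"P(antibodies | positive) = {p_ab_given_pos:.3f}")
    print(f"P(antibodies | negative) = {p_ab_given_neg:.4f}")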
The example above shows that the information gain from a test may be quite disappointing in quality if the incidence levels on a total population level are low. This perceived low quality of information of a positive test result may be a great impediment to promote or justify testing, and it may de-legitimize taking appropriate measures (e.g., wearing face masks, washing hands, forbidding mass meetings or travel), especially if other, non-cognitive, bottle necks occur. For instance, it may be quite costly (reportedly some 45 Euro per test in Robbio 2 in Italy) or rather time-consuming to test an individual, hence a re-test after a positive test result would be unattractive looking at it from the resourceprovision side of the problem, although re-testing in this case will be much, much more informative. An additional bottle neck might be that tests may not be available in sufficient numbers. 3 Then, a priority or a legitimization problem arises: to use the scarce test for testing people for the first time, or for retesting positives. Especially combinations of these bottle necks, and they have materialized at crucial moments in the Corona crisis, may lead to questioning the usefulness of testing at all.
The aim of this paper is however not to contribute to solving the issue of the base-rate fallacy, nor distributional dilemmas induced by the scarcity of tests. We are interested in solving another bottle neck namely the practical, more basic problem of lack of knowledge (hence unavailability) of a prior distribution (or base rates or incidence rates of occurrence) of an attribute in a chosen aggregate population. We however think there is a psychological connection between the missing base-rate problem and the base-rate fallacy. We suggest that it is very likely that a missing base rate shifts the interpretation of the test's result unpredictably anywhere between giving a lot of weight (if not all) to the individuating information, or vice versa in which having no anchor for the base rate at all might psychologically mean base rate equals zero.
The reasons why base rates might be lacking can be numerous. Take a Corona test, and suppose that the reliability data were obtained (correctly) in China or Italy, where the illness occurred early and in rather large numbers. If one were to use this test in, for instance, Noord Brabant, the earliest hot spot of Corona in the Netherlands, the validity of the reliability data might be upheld, but the great missing parameter would be the prior, i.e., the incidence of antibodies to the Corona virus on a population level. Assuming the priors to be the same as in Italy or China would be without any scientific basis.
An additional aim of this paper is to be able to provide answers regarding priors on the basis of relatively low numbers of tests. Obviously, larger tests provide better answers if the base rate is stationary. We have the following reasons for this additional ambition. In case of a disease spreading, the assumption of stationarity is frivolous, so then more is not necessarily better; more recent might be better. Moreover, crucial measures may be triggered by data on an aggregate level, but cannot be delayed until results from large numbers of tests have accrued. Furthermore, a sequence of estimated priors (using low numbers of tests) taken at different moments in time may provide information regarding the stationarity issue, in other words: is it spreading or not? Additionally, one might have the wish to restrict attention to specific groups each possibly having another base rate, e.g., people working in medical care or care for the elderly, primary school children and teachers, or family members of those working in jobs with a high probability of exposure to Corona.

[2] https://it.businessinsider.com/esclusiva-cosa-rivelano-i-primi-test-di-robbio-primo-paeseitaliano-a-fare-i-test-sullimmunita-a-tutti-i-cittadini/
[3] At the moment of writing, a problem in the Netherlands. The Dutch government had the aim of testing 17,000 people per day from a certain date onwards, but this date has gone by and the maximum daily number of tests taken in reality is approximately 7,000.
Example 2. We could, for instance, use some of the data above to come up with estimates of the probability that antibodies occur in a population. One option is to look at the number of positives, which is 640 out of 10,000, but this naive estimate of 6.4% yields a much too high number compared to the real 2% underlying the computations. A seemingly better option is to solve the equation

X = N (p · m11 + (1 − p) · m12),   subject to X = 640 and N = 10,000.

This yields exactly p = 0.02, which is the precise prior used for the illustration. So, then we have an estimator, but we have absolutely no idea about how reliable this number is. Let m11 denote the true positive rate of the test and m12 its false positive rate. Then it is easy to confirm that the estimator p̂ for p, given the parameters presented, is computed in general terms by

p̂ = (m11 − m12)^(−1) · (X/N − m12).     (1)

However, even for the given numbers p = 0.02, m11 = 0.75, m12 = 0.05, reaching this (640) or any given number of positives results from a combination of three random processes. Suppose that the number of positives turns out to be 654 instead, which, by the way, may occur with a likelihood quite close to the likelihood of 640 positives occurring; then, although the real p does not change, its estimator would be

p̂ = (0.75 − 0.05)^(−1) · (640/10,000 − 0.05 + 14/10,000) = 0.022.

Observe furthermore that any test result with X/N < m12 is hard to interpret, or with X/N > m11 for that matter, because logic dictates that the probability computed should belong to the unit interval.
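The estimator in Eq. (1) amounts to a one-line inversion of the expected-positives relation. The sketch below is purely illustrative (the function name is ours) and uses the numbers of the example, m11 = 0.75 and m12 = 0.05.

def p_hat(X, N, m11=0.75, m12=0.05):
    """Base-rate estimator of Eq. (1): invert E[X]/N = p*m11 + (1-p)*m12."""
    return (X / N - m12) / (m11 - m12)

print(p_hat(640, 10_000))   # 0.02  -> recovers the prior used in the illustration
print(p_hat(654, 10_000))   # ~0.022 -> a nearby outcome of the same random process
print(p_hat(400, 10_000))   # negative, since X/N < m12: hard to interpret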
The organization of the remainder of this note is minimalistic. In the next section, we present results of our Monte Carlo simulations, which are used to derive confidence intervals and point estimators for base rates assuming the reliability data to be perfect. The conclusions concentrate on perceived regularities in doing a series of such estimations, and on reflections on the feasibility of the aims we started with. The Python codes for anybody wishing to experiment with the tools are available at the github repository. [4]

[4] See https://github.com/abhishta91/antibody montecarlo
Monte Carlo simulation & estimators of priors
We are interested in finding a point estimator or a confidence interval for the base-rate probability of a certain attribute, based on a test for this attribute. We operate under the specific assumption that the reported reliability data are true. For this purpose we employ the procedure presented in the next subsection. The results for three hypothetical cases are presented and compared. Note that the Monte Carlo simulation can be adapted to many, if not all, inputs one might desire.
Pseudo code
For a certain test (or sample) size N, meaning the total number of people tested, we find a certain number of positives X. A quick approximation using (1), i.e., p̂ = (m11 − m12)^(−1) (X/N − m12), may be convenient to establish a region in the unit interval in which the base rate most likely to underlie the statistical process providing the test outcome is to be found.
In what follows, we make a grid of mesh size 0.001 on the most promising region or interval, to be examined more closely. For a given grid point p̃ in the latter interval we perform the following loop, in pseudo-code.
Step 1. Draw N times with probability p̃ of success to determine TP(p̃, k), the number of True Positives. Go to Step 2.
Step 2. Draw TP(p̃, k) times with probability m11 of success to determine TPtp(p̃, k), the number of True Positives tested positive. Go to Step 3.
Step 3. Draw N − TP(p̃, k) times with probability m22 of success to determine TNtn(p̃, k), the number of True Negatives tested negative; then set TNtp(p̃, k) := N − TP(p̃, k) − TNtn(p̃, k), the number of True Negatives tested positive, i.e., the so-called false positives. Go to Step 4.
Step 4. Register whether TPtp(p̃, k) + TNtp(p̃, k) = X. If k < K, increase k by one and go to Step 1; otherwise, if k = K, go to Step 5.
This sub-loop runs K times for each grid point p̃, and the larger loop over all grid points registers tp(p̃, X, N) for each p̃, which is simply the number of times in the total Monte Carlo simulation under base rate p̃ that exactly the outcome X occurs.
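The authors' Python implementation is available at the repository cited above. As an independent, minimal sketch of the loop just described (using a flat set of replications per grid point rather than the nested 1000 × 1000 sampling detailed in the next subsection, and with the number of replications chosen purely for illustration):

import numpy as np

rng = np.random.default_rng(0)

def mc_hits(p_grid, X, N, m11, m12, n_reps=100_000):
    """For each candidate prior, count how often the simulated number of
    positives equals the observed X, i.e. the quantity tp(p, X, N) in the text."""
    m22 = 1.0 - m12
    hits = np.zeros(len(p_grid), dtype=int)
    for i, p in enumerate(p_grid):
        TP = rng.binomial(N, p, size=n_reps)    # Step 1: true positives in the sample
        TPtp = rng.binomial(TP, m11)            # Step 2: true positives tested positive
        TNtn = rng.binomial(N - TP, m22)        # Step 3: true negatives tested negative
        TNtp = (N - TP) - TNtn                  #         false positives
        hits[i] = np.sum(TPtp + TNtp == X)      # Step 4: register hits equal to X
    return hits

p_grid = np.arange(0.0, 0.101, 0.001)           # grid of candidate priors, mesh 0.001
hits = mc_hits(p_grid, X=640, N=10_000, m11=0.75, m12=0.05)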
Interpretation of results from the MC simulation
By Monte Carlo simulation we generate a large number of positives from a test of size N for a fixed, known candidate prior which is taken as underlying the simulation, and record how many of the positives, out of the total number of positives generated by our Monte Carlo simulation, equal precisely X. We rank the, say, G = 400 candidate priors according to an (evenly meshed) grid of a relevant interval, p_1 < p_2 < ... < p_G.
For a candidate prior, say p_j, we take, say, 1000 samples of size N. For each such sample, we generate a pair consisting of the number of real positives and the number of real negatives by drawing independently N observations with probability p_j (respectively 1 − p_j) of having (not having) the attribute. Then, for each such pair of numbers, say (TP, TN), of true positives and true negatives in the sample, i.e., TP + TN = N, we draw 1000 samples taking TP draws with the probability of testing positive equal to the upper left element of M and taking TN draws with the probability of testing positive equal to the upper right element of M. The former are then the True Positives tested positive (TPtp) and the latter are the True Negatives tested positive (TNtp).
The sum of those two numbers, TPtp + TNtp, then provides one observation of positives, X^j_k. Taking independent samples, we find one million different realizations of positives, say X^j_1, X^j_2, ..., X^j_{10^6}. Then, we record among them the number of positives for known prior p_j being exactly equal to the number of positives resulting from the test, as follows:

tp(p_j, X, N) = #{ k : X^j_k = X }.

We do the same for the whole range of candidate priors in exactly the same manner.
We then construct a histogram of the relative numbers of hits equal to X for each prior, i.e.,

x_i = tp(p_i, X, N) / Σ_{j=1}^{G} tp(p_j, X, N),   for i = 1, 2, ..., G.

Observe that x_i ≥ 0 for all i = 1, 2, ..., G and that Σ_{i=1}^{G} x_i = 1. Then, the number x_i tells us that the prior p_i accounted for generating a proportion x_i of all realizations in the entire Monte Carlo simulation yielding X positives. So, alternatively these numbers can be interpreted as probabilities.
Let, in the same vein,

c_α = min { c : Σ_{i=1}^{c} x_i ≥ α }.

Then, an interpretation for the latter expression immediately comes to mind which is close to the one of a cumulative probability distribution, namely the first c_α of the (ranked) priors that account for a proportion α of all realizations in the entire Monte Carlo simulation which yielded exactly X positives. The 'area under the curve' formed by the histogram between the lower bound of the range examined and p_{c_α}, the latter included, is (approximately) α. Continuing along this interesting analogous interpretation we coin the following expressions.
These notions can be interpreted in line with the more standard notions with the same names widely used in statistics. Modus(X, N, M, G) is the smallest prior which yields the highest number (proportion) of positives equal to X in our Monte Carlo simulation for sample size N, using deterministic reliability matrix M and a grid dividing a relevant interval of priors into G parts of equal length. There might be more than one such prior, and in order to obtain a unique prior as Modus we took the lowest. So, knowing only little, this prior could be interpreted as a maximum likelihood estimator, and for the (admittedly few) cases examined we seem to have (with p̂ given by Eq. (1)) Modus(X, N, M, G) ≈ max(0, p̂). Next, Median(X, N, M, G) is the smallest prior such that the set of priors smaller than or equal to it is responsible for (approximately) half of the simulated hits equal to X.
We interpret CI_{1−α}(X, N, M, G) as our confidence interval among the priors, in the sense that it gives us the set of priors accounting for a proportion 1 − α of the outcomes yielding X hits in the Monte Carlo simulation. The restriction in the first part of the notion applies to the case that X/N exceeds the type I error rate, which intuitively seems a rather convenient turn of events. If the second part applies, i.e., we have the more extreme case of the relative number of hits (X/N) being lower than the type I error rate (m12), we may obtain with great likelihood Modus(X, N, M, G) → 0 = max(0, p̂) < Median(X, N, M, G).
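Continuing the illustrative sketch above (helper names are ours), the modus, median and a confidence set can be read off the normalized hit counts. The authors define CI_{1−α} via the priors accounting for a proportion 1 − α of the hits; the equal-tail version below is one concrete way to implement that reading.

import numpy as np

def summarize(p_grid, hits, alpha=0.05):
    """Turn Monte Carlo hit counts into the point and interval estimators of the text."""
    x = hits / hits.sum()                       # weights x_i: non-negative, summing to 1
    modus = p_grid[np.argmax(x)]                # smallest prior attaining the maximum
    cum = np.cumsum(x)
    median = p_grid[np.searchsorted(cum, 0.5)]  # smallest prior covering half of the hits
    lo = p_grid[np.searchsorted(cum, alpha / 2)]
    hi = p_grid[np.searchsorted(cum, 1 - alpha / 2)]
    return modus, median, (lo, hi)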
The effect on the size of the confidence intervals is significant. The size of the corresponding interval for N = 1,000 is more than double the size of that for N = 10,000, whereas the confidence interval for N = 125 is almost three times the latter size.
Observe that the modus changes only very slightly over the three histograms, if at all, but equals zero. The median for the three cases is positive; it shifts considerably, and the higher N is, the closer the median gets to zero. This seems quite intuitive, as unlikely results, in the sense that X/N < m12, should occur less and less frequently as the sample size increases. We obtain the following ranking (for each case studied in this subsection):

p̂ < Modus(X, N, M, G) = 0 < Median(X, N, M, G) < X/N.
The modus appears to be at zero, which will simply not do as a point estimator of the prior: it is logically inconsistent to have positives if the prior is truly equal to zero. For the confidence intervals we find, again in line with intuition, that for larger N, keeping the ratio X/N fixed, the size of the confidence interval shrinks.
Discussion of findings

The figures in this subsection share a few common qualitative features, but the first two seemingly share more qualitative features among them, and with the first set of three histograms, than with the third histogram. Again the histograms appear single peaked; the first two seem rather symmetric, the last one seems skewed.
The median and the modus appear quite close in the first two figures. Furthermore, observe that the median again changes only very slightly over the three histograms, but the confidence intervals change tremendously in size.
Conclusion
For the first couple of weeks as the Corona crisis developed, we were merely bewildered spectators on the sideline, wondering how to make sense of phenomena with relevant data and estimates lacking universally. Frankly, we questioned the validity of many of the statements made by scientists, politicians and serious media. Quite recently we found an opportunity to make constructive use of our experience in designing Monte Carlo simulations for problems in which analytical distributions of relevant phenomena are very hard to obtain. We designed a tool [9] to find base rates underlying certain tests.
Actually, we set out on a larger idea, of which this is the first preliminary paper. [10] We propose a procedure based on Monte Carlo simulation with the following inputs: a sample of size N from a certain population is taken, X is the number of positives, and M is the matrix summarizing the reliability of the test, i.e.,

M = [ m11  m12 ]
    [ m21  m22 ]

(columns: having / not having the attribute; rows: positive / negative test result). This matrix satisfies 1 = m11 + m21 = m12 + m22, where m11 may be called the true positive rate, m21 is the false negative rate (or type II error rate), m12 is the false positive rate (or type I error rate) and m22 is the true negative rate. We may distinguish several point estimators for the base rate p of certain populations, and the following two are seemingly [11] frequently used:

p̂_u = X/N   and   p̂ = (m11 − m12)^(−1) (X/N − m12).

The subscript u stands for 'unadjusted.' The first estimator has been used in recent studies (e.g., Bendavid et al. [2020]) as a quick-fire solution disregarding test reliabilities; the second should, however, be considered as a slightly more precise point estimator incorporating the probabilities of false positives in the test. In general, the two estimators coincide only by sheer 'luck.' Furthermore, p̂_u < m12 implies p̂ < 0 if m11 > m12.
In this paper we add three new estimators of the base rate in a population. Two are point estimators, the third is an interval estimator, or confidence interval. We must stress that for the present procedure we assume the matrix M to be deterministic.
The modus is the smallest prior which yielded the highest number, and hence proportion, of positives equal to X in our MC simulation for sample size N using deterministic reliability matrix M. The median is the upper bound of the set of ranked priors, starting at the lowest value, responsible for (approximately) half of the simulated hits equal to X in the MC simulation. We interpret our (1 − α)-confidence interval among the priors as the set of priors accounting for a proportion 1 − α of the outcomes yielding X hits in the Monte Carlo simulation.

[9] Due to time pressure, we did a hasty check on the literature. So, none of this line of thinking/modeling might be new, and we apologize for wasting your time. However, our sincere intention was to offer some help.
[10] The second paper, to appear in a couple of days, proceeds on this one, but will take another hurdle in estimating base rates, namely the real-life problem of test reliability matrices which are estimates themselves (hence, with all components being stochastic).
[11] Seemingly, because none of the reports we found use explicit formulas. Recalculating one of the reported numbers in Bendavid et al. [2020] yields a perfect match. In a report (in German) by Streeck et al. [2020] only specificity m22 > 0.99 is mentioned, which bounds m12, but not m11. Taking both specificity and sensitivity equal to 99% yields an outcome which is compatible with their estimation.
We focus on the following findings regarding this collection of point and interval estimators. By elimination of alternatives, the final bullet point gives the most preferred pair of estimators, in our opinion.
• In many cases the median, modus and p̂ are quite close, and are to be found rather central in any standard two-sided confidence interval.
• Confidence intervals shrink in size as the number N increases, i.e., the discriminatory power of the procedure increases in the usual manner.
• The median is always in the range of the most used confidence intervals (90%, 95%, 99% two-sided).
• The sample size N has negligible influence on the median, the modus and p̂ relative to the size of the corresponding two-sided 95%-confidence intervals generated, provided that the resulting histogram is close to symmetric. So, rather small samples may provide rather reliable estimators for cases yielding symmetric histograms.
• It may happen that p̂_u = X/N does not fall into the two-sided 95%-confidence interval of the procedure (cf., e.g., Figures 1–7). This rules out this estimator as a universally applicable point estimator, in our opinion.
• It may happen that p̂ is negative, which rules this estimator out as a universally applicable point estimator by logic.
• It may happen that the modus is equal to zero (cf., e.g., Figures 4 − 6), which rules out the modus as a universally applicable point estimator by logic.
• The sample size N is of significant influence on the median and of no influence on the modus and p̂ (as the latter are smaller than or equal to 0) for low ratios X/N. The median decreases considerably if N is increased.
• Both the median and the confidence intervals universally make sense as concepts, as well as estimators.
4 Appendix: the procedure applied to two data points from a recent study

On Saturday April 18, while trying to finalize this preliminary paper, we found a study reporting on tests in the county of Santa Clara in California (Bendavid et al. [2020]). We gladly refer to the paper for more details of this interesting (also) preliminary report. In a rather precisely described case, the authors found a number of 50 positives in a test of size 3330. So, for the first two inputs necessary, we took X = 50 and N = 3330. Determining M, the matrix summarizing the test reliability, was a little bit more problematic for us. The authors provided a lot of numbers regarding the test validity which are highly relevant to our framework, but frankly, we were quite dazzled by them. We took the liberty of generating the following matrix of test reliability (the underlying numbers were found in Bendavid et al. [2020]), under the presumption that this is indeed what the authors intended for the unadjusted case:

M = [ 0.803  0.005 ]
    [ 0.197  0.995 ]

This matrix was obtained by interpreting the statement: "... provides us with a combined sensitivity of 80.3% (95% CI 72.1–87.0%) and a specificity of 99.5% (95% CI 98.3–99.9%)."
Following standard practice, we took m11 = 0.803 and m12 = 0.005, which immediately induces all four entries in the reliability matrix.
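Plugging these inputs into the two closed-form point estimators recalled in the conclusions gives a quick back-of-the-envelope cross-check of the Monte Carlo results reported below; the few lines here are our own illustration, not output of the procedure.

X, N = 50, 3330
m11, m12 = 0.803, 0.005

p_u = X / N                               # unadjusted estimator: ~0.0150
p_adjusted = (p_u - m12) / (m11 - m12)    # reliability-adjusted estimator: ~0.0126
print(p_u, p_adjusted)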
Findings
We ran our procedure [12] using these numbers and obtained the results visualized in Figure 10. We interpret the least sophisticated framework, i.e., we do the rough estimation on the total population level, which happens to yield the lowest valued estimator of all estimators of the base rate presented in Bendavid et al. [2020].

Figure 10: The output of our Monte Carlo simulation based procedure obtained from our interpretation of the reliability matrix in Bendavid et al. [2020], applied to the aggregate findings. The median, the modus and the 95%-confidence interval are indicated.

Figure 10 is rather illustrative on its own, but for the reader's convenience we summarize some relevant candidate estimators below.

[12] The Python code may be found at https://github.com/abhishta91/antibody montecarlo
|
v3-fos-license
|
2019-05-21T13:04:18.087Z
|
2019-01-22T00:00:00.000
|
159173584
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=90060",
"pdf_hash": "efef43d6656d6f846d090fb9f3817d6a66b3d16b",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46202",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "3ef984d0cdbd11037ce72777822c9b4802f261b7",
"year": 2019
}
|
pes2o/s2orc
|
Economic Cycle, Uncertainty of Economic Policy and Cash Holding of Listed Companies
Based on a systematic review and economic analysis of the theoretical literature, we consider not only the separate impacts of the economic cycle and of economic policy uncertainty (EPU) on the cash holding ratio, but also their combined impact. We develop our research hypotheses using data from 2004-2015 on A-share companies listed on the Shanghai and Shenzhen securities exchanges as the research sample. The empirical results show that: 1) The economic cycle is negatively correlated with the cash holdings of listed companies, while EPU is positively correlated with cash holdings. 2) During booms, cash holdings are significantly positively correlated with both current and lagged EPU; during recessions, cash holdings are significantly negatively correlated with current EPU, while positively correlated with lagged EPU. 3) We further examine the role of the economic cycle and EPU on the value of cash holdings, and find that EPU reduces the value of cash holdings. 4) When the economy is booming, an increase in EPU reduces the market value of corporate cash holdings, but not significantly; during recessions, an increase in EPU increases the market value of cash holdings.
Introduction
Cash is equivalent to the company's "blood", and holding liquid assets such as cash helps companies to seize valuable investment opportunities in the future [1]. In the theoretical literature, scholars' research on corporate cash holdings has been long-standing. As early as 1936, Keynes [1] proposed that the liquidity of cor… the cash holdings of listed companies. EPU mainly affects the company's cash holding decisions in two ways: on the one hand, the higher the EPU, the more carefully management will make investment decisions, and the total amount of corporate investment will fall [6], which raises the level of corporate cash holdings from the outflow perspective; on the other hand, an increase in EPU leads financial institutions to treat corporate financing needs more rigorously, increasing corporate financing constraints and prompting companies to increase cash holdings for precautionary motives.
There is a lot of debate about the relationship between economic cycles and EPU. Most scholars have found that EPU is countercyclical. Johannsen [7] found that fiscal policy uncertainty leads to a sharp decline in consumption, investment and output, and the economic cycle turns downward. Mumtaz et al. [8] modeled time-varying monetary policy uncertainty with stochastic volatility and found that when uncertainty increases, the nominal interest rate, output growth, and inflation fall. But Lee [9] found that as long as the market guarantees that productive companies can survive, EPU will promote exploratory research and innovation, and contribute to overall economic growth.
When the macroeconomic environment changes, the government carries out macroeconomic regulation and control in order to maintain market stability and achieve its governance goals. For example, during recessions, stable economic policies can reduce the impact of the economic environment on corporate financing, thereby weakening the impact of the economic cycle on corporate cash holdings, and vice versa.
The purpose of this paper is to study the impact of the macroeconomic cycle and EPU on the cash holding level of listed companies. Compared with the existing literature, the innovation of this paper lies in studying the combined impact of the macroeconomic cycle and EPU on the company's cash ratio. This paper theoretically enriches our understanding of the mechanisms and economic consequences of interacting macroeconomic factors that affect business decision-making.
The significance of this paper is to remind policy makers to take the stage of the economic cycle into account when introducing policies. When the economy is booming, the market should be allowed to play a greater role and intervention should be reduced; in recessions, appropriate policy intervention is beneficial to the whole market.
The rest of the paper is structured as follows: the second part presents the literature review and hypothesis development, the third part the research design, the fourth part the empirical analysis, and finally the conclusions and limitations.

Literature Review and Hypothesis

The existing literature on cash holding decision-making focuses, at the micro level, on the impact of the company's financial status [10], governance structure [11] and industry competition [12]. Most of the research on the impact of the macro environment on corporate cash holding decisions is related to financing constraints. Fazzari [13] and others used various investment models to show that the investment of companies facing larger financing constraints is more sensitive to cash flow. The study by Opler et al. [14] also documents the relevance of financial constraints for corporate cash holding decisions.
Bernanke and Gertler [15] found that the external financing ability of enterprises is strongly affected by economic cycle fluctuations. Since capital markets often feature a series of barriers that are considered uncontrollable, external financing costs are generally higher than internal financing costs, resulting in financing constraints. Baum et al. [16] found that when macroeconomic uncertainty is high, companies increase their cash holdings for precautionary motives, improving their ability to cope with future risks and investments. Generally speaking, during economic booms external financing is easier to obtain, and companies reduce their cash holdings and increase the scale of transactions. However, when the economic cycle enters a contraction period, the company's financing capacity declines, and the management team will increase the company's cash holdings to actively respond to possible financial distress or to ensure the steady growth of the company's investment. Therefore, compared with periods of economic expansion, enterprises hold higher levels of cash during economic contractions; Jiang Long and Liu Xiaosong [17] and Ni Huiping and Zhao Shan [18] also support this view. Based on the above analysis, we propose the following hypothesis: H1: The economic cycle is negatively correlated with the cash holding level of listed companies.
EPU and Cash Holding
EPU will increase the uncertainty of the environment in which the company is located, and increase the systemic risks faced by enterprises [19], which will affect the company's cash holdings in two ways: cash inflows and cash outflows.
Bai et al. [20] explored the global financial crisis and found that the uncertainty faced by enterprises increased significantly during the crisis; the interaction between financial friction and increasing uncertainty at the enterprise level led to a sharp decline in credit, greater financing difficulty and higher financing costs. The sharp increase has led to a greater restriction on cash inflows, which reduces corporate cash holdings. On the other hand, EPU has a greater impact on corporate cash outflows. Bloom [21] believes that uncertainty increases the size of po… [6] found that EPU and corporate investment levels are negatively correlated, that is, economic policy uncertainty reduces the company's cash outflows, thereby increasing the level of corporate cash holdings.
Generally speaking, when enterprises face high EPU, the uncertainty of their expectations about the future increases, and they cannot accurately predict the possibility of a future cash shortage; out of precautionary motives, enterprises will increase cash holdings. On the other hand, when EPU increases, it is easier for management or major shareholders to increase the company's cash holdings in order to facilitate their own gains without being detected by regulators, thereby further increasing the company's cash holding level. We therefore hypothesize the following: H2: EPU is positively correlated with the cash holding level of listed companies.
Economic Cycle, EPU and Cash Holding
Mitchell & Burns [22] defined the economic cycle as fluctuation in a country's overall economic activity, including four continuous and recurring phases: recovery, expansion, boom, and depression. This paper adopts the above division. The government's introduction of economic policies can play a multiplier role if it is compatible with the economic cycle. The impact of EPU on the cash holding level of listed companies is asymmetric. When the economy is booming, the active market increases the amount of investment and reduces the level of cash holdings. At this time, as EPU increases, the level of corporate investment declines and investment efficiency increases, so that the cash holding level of enterprises rises. On the other hand, during the boom period, the market is in an expanding and active state and the flow of funds accelerates, so the financing pressure faced by enterprises is relatively small and their cash holdings are at a low level. At this time, an increase in EPU will cause enterprises to rapidly increase cash holdings out of precautionary motives, which is reflected in the immediate response of cash holdings to EPU.
When the economic cycle is in recession, the financing constraints faced by enterprises are relatively large. At this time, an increase in EPU will further tighten the financing constraints of enterprises in the short term, as enterprises are subject both to the objective financing environment and to EPU; it will also affect the amount of corporate investment. In general, the level of corporate cash holdings will decrease in the short term, which is reflected in a negative correlation between economic policy uncertainty and cash holdings. After a period of time, the market will gradually absorb the impact and reach a new financing equilibrium. The change in investment brought about by EPU will shift the cash shortage cost curve of enterprises, and enterprises will increase cash holdings for precautionary motives. Based on this, we present the following hypothesis: H3a: When the economic cycle is prosperous, EPU is positively correlated with cash holdings in the short term, and remains positively correlated with cash holdings in the subsequent period.
H3b: When the economic cycle is in recession, EPU is negatively correlated with cash holdings in the short term, and positively correlated with cash holdings in the subsequent period.

Economic Cycle, EPU and Cash Holding Value

On the one hand, EPU provides a reasonable excuse for management and major shareholders to maintain a large amount of cash for personal gain. On the other hand, an increase in EPU also makes the supervision of agency problems more difficult to implement. It is therefore expected that EPU will reduce the value of corporate cash holdings when the economy is in a prosperous phase. When the economic cycle is in recession, the whole market is in a downturn, and the production and operation of enterprises can only be maintained at a low level. When EPU increases, the company increases cash holdings out of precautionary motives and reduces the risk of bankruptcy, thereby enhancing corporate value. In the recession stage, the reduction in the company's shortage costs is greater than the increase in holding costs, thereby increasing the value of corporate cash holdings. Based on the above analysis, this paper proposes the following hypotheses: H5a: When the economic cycle is prosperous, EPU is negatively related to the value of corporate cash holdings.
H5b: When the economic cycle is in recession, EPU is positively correlated with the value of corporate cash holdings.
Model Design and Variable Definition
Considering that research on the cash holding level of listed companies involves a variety of influencing factors, this paper builds the following baseline model, based on the models of Opler et al. [14] and Jiang Long and Liu Xiaosong [17]:

CASH_{i,t} = β0 + β1 EPU_t + β2 GDP_t + Σ βk Controls_{i,t} + Industry + ε_{i,t}

The above model is mainly used to observe the degree of impact of EPU on cash holdings. Since the response of enterprises to macro factors usually has a certain degree of delay, this paper also verifies the impact of one-period-lagged EPU on the cash holding level of listed companies:

CASH_{i,t} = β0 + β1 EPU_{t−1} + β2 GDP_t + Σ βk Controls_{i,t} + Industry + ε_{i,t}

Here β denotes the regression coefficients, i indexes the company, and t indexes the year. Since the models mostly involve period variables, in order to eliminate possible deviations in data extraction, this paper uses annual variables; t is the current period. The main variables involved in the above models are described below (an illustrative estimation sketch follows the variable definitions).
1) Dependent variables
Cash holding level (CASH). This paper mainly explores the extent to which the cash holding level of listed companies is affected by the macroeconomic environment. Considering differences in company scale, we use a size-adjusted cash holding level, namely the ratio of the company's cash holdings to its total assets. In addition to monetary funds, the cash in this indicator also includes short-term investments (pre-2006) and trading financial assets (post-2006).
Corporate value (V). We use the sum of the stock market value under the SFC algorithm and the book value of debt. The calculation of the stock market value distinguishes between A shares (including AB shares) and B shares: if (circulating A shares + restricted A shares) is not equal to 0, then the total market value of listed shares = A-share closing price × (total number of shares − H shares − overseas shares − B shares) + B-share closing price × RMB exchange rate × number of B shares. If (circulating A shares + restricted A shares) = 0 and the total number of B shares is not equal to 0, then the total market value = B-share closing price × RMB exchange rate × total number of B shares.
2) Explanatory variables
Economic Policy Uncertainty (EPU). Economic policy uncertainty is one of the systemic risks faced by enterprises, and it affects various corporate decisions to different degrees. This paper draws on the China economic policy uncertainty index developed by Baker et al. [29] and uses the calculation method of Rao Pingui et al. [6] to compute quarterly economic policy uncertainty.
3) Economic cycle
This paper represents the economic cycle by the GDP growth rate.
4) Control variables
Following Opler et al. [14], Lu Zhengfei and Zhu Jigao [4], Jiang Long and Liu Xiaosong [17] and other studies, the basic model introduces company scale (SIZE), main business income (MBI), the asset-liability ratio (LEV), the net working capital ratio (NWC), the short-term debt ratio (SD), industry, and other controls. The control variables involved in the cash value model include the operating cash flow ratio (CFO), the non-cash asset ratio (NA), the interest expense ratio (I), the cash dividend payout ratio (D), and the capital expenditure ratio (CAPEX) (Table 1).
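As an illustration only (the authors report using Stata 14.0 and Excel), a pooled OLS version of the kind of baseline specification sketched above could be estimated in Python as follows; the data frame, column names and synthetic numbers are hypothetical placeholders, not the paper's actual data or exact model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic firm-year panel, purely to make the sketch runnable.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "CASH": rng.uniform(0.0, 0.4, n),   # cash / total assets
    "EPU": rng.uniform(50, 300, n),     # economic policy uncertainty index
    "GDP": rng.uniform(6, 12, n),       # GDP growth rate (economic cycle proxy)
    "SIZE": rng.normal(22, 1, n),
    "MBI": rng.uniform(0.0, 2.0, n),
    "LEV": rng.uniform(0.0, 1.0, n),
    "NWC": rng.normal(0.0, 0.2, n),
    "SD": rng.uniform(0.0, 0.5, n),
    "industry": rng.integers(0, 10, n),
})

model = smf.ols(
    "CASH ~ EPU + GDP + SIZE + MBI + LEV + NWC + SD + C(industry)", data=df
).fit()
print(model.params)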
Sample Selection and Data Source
We take the A-share companies listed on the Shanghai and Shenzhen stock exchanges as the research sample, with the economic policy uncertainty data drawn from Baker et al. [29]. We use STATA 14.0 and EXCEL to process the data.
Test Results of the Impact of the Economic Cycle on the Cash Holding
Column (1) of Table 3 presents the baseline regression without the main terms.
The R² is 0.4535, above 0.45, which indicates that the overall model fits well.
The coefficients of the net working capital ratio (NWC) and company scale (SIZE) are negative and significant at the 1% level, indicating that the higher the net working capital ratio, or the larger the company, the lower the cash holding ratio. There is a significant negative correlation between financial leverage and the corporate cash holding ratio. The cash holding level is significantly positively correlated with main business income and significantly negatively correlated with the short-term bank loan interest rate.
Table 3. Regression analysis of the first model.

Column (2) of Table 3 adds the economic cycle variable. The coefficient of the GDP growth rate is negative, indicating that when the economy is booming, companies will invest more, thereby reducing the cash holding level. These results support H1.
Test Results of the Impact of Economic Cycle and EPU on the Cash Holding
Columns (3) and (4) of Table 3 report the regressions of current-period EPU and one-period-lagged EPU on the listed companies' cash holding level. According to the regression results, the coefficient of EPU is significantly positive in both the current period and the lagged period, indicating that when EPU increases, the cash holding level of listed companies increases, which supports H2. We divide the economic cycle into four groups by descending order of GDP growth rate to study the impact of macroeconomic policies on the cash holdings of listed companies in different stages of the economic cycle. The top quartile forms the prosperity group and the bottom quartile the recession group. Columns (5)-(8) of Table 3 report the impact of EPU on the cash holdings of listed companies in different stages of the economic cycle. In general, these four regressions all have a large R², above 0.32, indicating that the models fit well. Columns (5) and (6) report the regression results for EPU and the cash holding level of enterprises during periods of economic prosperity. It can be seen from the regression results that the impact of economic policy on corporate cash holdings during the boom period has a certain lag: the coefficient of the first-order lagged variable is significantly larger than that of the contemporaneous variable, indicating that the impact of EPU on the current period is significantly smaller than its impact on the next period. For this reason, this paper only considers the first-order lagged variable in the subsequent cash value model. When the economic cycle is in the prosperous stage, EPU in the short term is positively correlated with the cash holding level of listed companies, and EPU in the subsequent period is still positively correlated with the cash holding level, which is consistent with Hypothesis 3a.
Columns (7) and (8) report the regressions of EPU and the cash holding level of enterprises during periods of economic recession. From the regression results, it can be seen that when the economy is in recession, EPU further increases the financing pressure on enterprises in the short term, which is reflected in a decline in their cash holdings. However, after a period of time, companies further increase cash holdings out of precautionary motives. In the economic recession, EPU in the short term is negatively correlated with the cash holding level of listed companies, while EPU in the subsequent period is positively correlated with the cash holding level, which is consistent with Hypothesis 3b.
Robustness Test
Since the economic cycle and EPU are macroeconomic factors, they are highly exogenous to individual enterprises, so we do not consider endogeneity in the model. The robustness tests are carried out in the following ways, and the regression results are almost indistinguishable from the corresponding results reported above. 2) Change of period: since an economic cycle is usually taken to span about 10 years, and the cash value model requires two lags, this paper selects data for the 12-year period from 2004 to 2015; in the robustness regressions we further extend the research period to 2004-2017 and find almost no change in the regression results. In the test of Hypothesis 2, only the one-period lagged variable was used for the relationship between EPU and the subsequent cash holding level of listed companies; in the robustness regressions more lags were adopted. We found that EPU no longer has a significant impact on cash holdings after two or more lags, so the lag effect of economic policy uncertainty may only exist at the first lag.
Conclusions and Recommendations
During recessions, enterprises face greater financing constraints brought about by EPU, which is reflected in a short-term decline in cash holdings; after the market has absorbed the impact of these financing constraints, the effect of EPU on cash holdings is manifested more through expectations.
This paper further examines the role of the economic cycle and EPU in the value of cash holdings of listed companies. The empirical results show that EPU reduces the cash holding value of listed companies. When the economy is booming, an increase in EPU reduces the market value of cash holdings, but not significantly; when the economy is in recession, an increase in EPU increases the market value of corporate cash holdings. Therefore, the significance of this paper is to remind policy makers to take the stage of the economic cycle into account when introducing policies. When the economy is booming, the market should be allowed to play a greater role and intervention should be reduced; in recessions, appropriate policy intervention is beneficial to the whole market.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
|
v3-fos-license
|
2021-08-13T13:24:52.257Z
|
2021-08-13T00:00:00.000
|
236992169
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.725832/pdf",
"pdf_hash": "ed9f06d0fb6a125b38dc70a00846a4c75c9f37c6",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46205",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "ed9f06d0fb6a125b38dc70a00846a4c75c9f37c6",
"year": 2021
}
|
pes2o/s2orc
|
Early Environmental and Biological Influences on Preschool Motor Skills: Implications for Early Childhood Care and Education
Early motor skills underpin the more complex and specialized movements required for physical activity. Therefore, the design of interventions that enhance higher levels of early motor skills may encourage subsequent participation in physical activity. To do so, it is necessary to determine the influence of certain factors (some of which appear very early) on early motor skills. The objective of this study was to examine the influence of some very early environmental variables (delivery mode, feeding type during the first 4 months of life) and some biological variables (sex and age in months) on preschool motor skills, considered both globally and specifically. The sample was composed of 43 preschool students aged 5–6 years. The participants' parents completed an ad hoc questionnaire, reporting on delivery mode, feeding type, sex, and age in months. The children's motor skills were assessed using observational methodology in the school setting, while the children participated in their regular motor skills sessions. A Nomothetic/Punctual/Multidimensional observational design was used. Results revealed that certain preschool motor skills were specifically influenced by delivery mode, feeding type, sex, and age. Children born by vaginal delivery showed higher scores than children born via C-section in throwing (p = 0.000; d = 0.63); total control of objects (p = 0.004; d = 0.97); total gross motor skills (p = 0.005; d = 0.95); and total motor skills (p = 0.002; d = 1.04). Children who were exclusively breastfed outperformed those who were formula-fed in throwing (p = 0.016; d = 0.75); visual-motor integration (p = 0.005; d = 0.94); total control of objects (p = 0.002; d = 1.02); total gross motor skills (p = 0.023; d = 0.82); and total motor skills (p = 0.042; d = 0.74). Boys outperformed girls in throwing (p = 0.041; d = 0.74) and total control of objects (p = 0.024; d = 0.63); while the opposite occurred in static balance (p = 0.000; d = 1.2); visual-motor coordination (p = 0.020; d = 0.79); and total fine motor skills (p = 0.032; d = 0.72). Older children (aged 69–74 months) obtained higher scores than younger ones (aged 63–68 months) in dynamic balance (p = 0.030; d = 0.66); visual-motor integration (p = 0.034; d = 0.63); and total balance (p = 0.013; d = 0.75). Implications for early childhood care and education are discussed since this is a critical period for motor skill development and learning.
INTRODUCTION
The World Health Organization has declared that 81% of all school-aged children fail to engage in the minimum recommended amount of daily physical activity (World Health Organization, 2010; Bull et al., 2020). This means that a large number of children do not receive the many physical, mental, and socio-emotional benefits of regular physical activity. This can be corrected, however, since physical activity (or the lack thereof) is a modifiable behavior. An initial step is to identify and determine the factors underlying this lack of physical activity in children. These variables include the level of motor skills acquired during early childhood (De Niet et al., 2021; Moghaddaszadeh and Belcastro, 2021).

Motor skills include the movement and coordination of one's muscles and body (Matheis and Estabillo, 2018). They are classified into two groups: (1) Gross motor skills and (2) Fine motor skills (Gonzalez et al., 2019; Goodway et al., 2019; Meylia et al., 2020). (1) Gross motor skills refer to developmental aspects associated with the child's ability to move using their large muscle groups to perform activities such as walking and jumping. (2) Fine motor skills refer to precise movements using smaller muscle groups to perform more delicate tasks such as picking up small objects, threading beads, and writing. They require control and coordination of the distal musculature of the hands and fingers.

Both gross and fine motor skills can be divided into more specific typologies (Goodway et al., 2019; Bolger et al., 2020; Meylia et al., 2020). (1) Three types of gross motor skills have been established: (1.1) Locomotor skills: these are movements having the fundamental objective of moving the body from one point in space to another, such as running, jumping, rolling, etc. (1.2) Balance: this is the ability to maintain a controlled position or posture during a specific task. Here, differentiation is made between: (1.2.1) Static balance: it is the ability to maintain postural stability and orientation with the center of mass over the base of support while the body is at rest. It is necessary, for example, to perform squats; and (1.2.2) Dynamic balance: it refers to the same ability to maintain postural stability and orientation with the center of mass over the base of support but while the body parts are in motion. An example of a dynamic balance task is stair climbing. (1.3) Object control skills: skills that allow the individual to move or receive objects, be it with the feet, hands, or even the body. Differentiation is made between: (1.3.1) Propulsive skills: they involve sending an object away from the body, such as throwing, or batting a ball; and (1.3.2) Receptive skills: they involve receiving an object, such as catching a ball, or a frisbee (Kokstejn et al., 2017; Bolger et al., 2020).

As for (2) Fine motor skills, two separate elements have been established: (2.1) Visual-motor coordination (also referred to as Fine motor coordination): it refers to small muscle movements with a visual component. It includes abilities such as finger dexterity, motor sequencing, and fine motor speed and accuracy. These skills are used in tasks such as building with blocks, finger tapping, and imitative hand movements. (2.2) Visual-motor integration (also called Visual-spatial integration or Fine motor integration): it involves the organization of small muscle movements in the hand and fingers with the processing of visual stimuli.
It implies that visual information from the environment is processed and integrated using fine motor movements. It requires more visual perception than Visual-motor coordination. Visual-motor integration skills are often captured by tasks that involve writing and copying (Goodway et al., 2019).
Early motor skills are essential for subsequent physical activity. They are the basis of more advanced, complex, and specialized movements needed to participate in games, sports, and other context-specific physical activity (Chang et al., 2020;Moghaddaszadeh and Belcastro, 2021). Therefore, promoting and obtaining a suitable level of early motor skills is a positive element that may stimulate and enhance the onset and maintenance of physical activity. Children with good motor skills are perceived as being competent, leading to increased enjoyment and engagement in more and wider variety of motor and physical activity experiences. Increased physical activity provides more opportunities to promote motor skill development. Therefore, a positive spiral or dynamic relationship is evident between motor skills and physical activity (Stodden et al., 2008). On the other hand, less-skilled children will have a lower perceived competence and will perceive many tasks as being more difficult and challenging, therefore being less likely to engage in them. Hence, having good motor skills, even in early childhood, may contribute to becoming a physically active individual, or even an elite athlete (De Niet et al., 2021).
Despite the clear importance of these early motor skills in the life and development of children, they tend to be overlooked on a research and practical/educational level (Lopes et al., 2021). This has led to an increase in the number of children with poor motor skills and an upward trend of motor difficulties over recent years (Honrubia-Montesinos et al., 2021;Lopes et al., 2021).
One of the issues that seems to have contributed to this lack of research and promotion of early childhood motor skills is the misconception that they will develop naturally over time. However, to attain an appropriate motor skill level, these skills must be learned, practiced, and reinforced over time (Honrubia-Montesinos et al., 2021;Moghaddaszadeh and Belcastro, 2021). Preschool years are an especially important life phase for the development and learning of motor skills (Wang et al., 2020;Lopes et al., 2021). During these years, development occurs quickly and it is closely linked to the quality and quantity of the stimuli received by the children. Therefore, during early childhood children should be offered enriched environments, allowing them to achieve their full motor potential (Lopes et al., 2021;Moghaddaszadeh and Belcastro, 2021). Early Childhood Education classrooms are an ideal context for this since a large number of children attend these schools, spending many hours there (European Commission/EACEA/Eurydice, 2019; Spanish Ministry of Educational Professional Training, 2020).
However, we should note that, for Early Childhood Education experiences to be effective, they should be intentionally designed taking into account the child's current level of development (Darling-Hammond et al., 2020). However, a problem arises for educators. Even children in the same academic year and enrolled in the same class may display different motor skills levels, given their distinct characteristics and past and present experiences. Numerous and diverse factors may affect motor skill levels in children (Wang et al., 2020). Learning more about these potential factors and their influences on children's motor skills is necessary to design individualized interventions that optimize motor skills for all children.
Given all of this, as well as the current literature on the topic, the purpose of this study was to provide knowledge as to the influence of certain variables (some of which are present from a very early age) on global and specific motor skills in preschool-age children (5- and 6-year-olds). The following variables of potential influence on preschool motor skills were considered: delivery mode, feeding type during the first four months of life, sex, and age (specifically, the relative age effect), since, according to the literature, there is still an ongoing discussion regarding their relationship with motor skills.
Each of these variables is discussed below.
Delivery Mode
A possible impact of delivery mode on the neurodevelopment of children has been considered. The mode of delivery has been directly related to biochemical and structural changes in the central nervous system, although their consequences are not well-known. Thus far, the literature in this area (especially referring to the influence of the delivery mode on motor skills) has been inconclusive, since studies are scarce and they offer conflicting results (Blazkova et al., 2020; Takács et al., 2020). Vaginal delivery is considered to be the ideal mode of delivery for the child's development. It is the most natural delivery mode and tends to lead to a lower number of complications for both mother and child (World Health Organization, 2018). In this mode of delivery, certain inherent mechanisms may be produced, possibly triggering certain protective and strengthening processes for the child's appropriate development (Tribe et al., 2018). Over recent years, however, the rate of cesarean deliveries (C-sections) has increased considerably in numerous countries (World Health Organization, 2018). This has been due to an overuse of the procedure rather than to medical indications (World Health Organization, 2018), for example mothers' wishes to have a planned birth (King, 2021). Many studies have warned of the harmful consequences that this may entail, since, like any other surgery, C-sections are associated with short- and long-term risks that may persist years after the intervention and which may affect the child's health and development (Chojnacki et al., 2019; King, 2021). Specifically, regarding the effect of C-sections on motor development, further research is necessary, given that the literature is limited and does not offer conclusive results.
Some studies have found no evidence to affirm that children born by C-section display poorer gross and fine motor skills than those born by vaginal delivery (Zhou et al., 2019;Takács et al., 2020).
Other studies have found the opposite results, suggesting that delivery mode affects the child's motor development. Rebelo et al. (2020) studied the influence of delivery mode on motor skills (both gross and fine) in children aged 12-48 months. Their results indicated that: (1) children born via vaginal delivery had better motor skills, both gross and fine, as compared to those born via C-section. More specifically, in older children (36-48 months), differences based on delivery mode were statistically significant in object control, visual-motor coordination, and visual-motor integration skills, as well as in the score on total gross motor skills and total fine motor skills. No statistically significant differences were found for locomotor and balance skills; (2) the effect of delivery mode on motor skills became more pronounced as the children became older. Blazkova et al. (2020) also found that the mode of delivery had a major effect on visual-motor integration skills in 5-year-olds: those born via vaginal delivery had higher visual-motor integration than those born by C-section. No additional measures regarding motor skills were included in said study.
In summary, along with the disparate results found between studies regarding the influence (or lack thereof) of delivery mode on motor skills, it has been found that most of the studies offer only partial results and fail to consider all of the specific gross and fine motor skills that have been identified in the literature (and explained above). Given these limitations and this lack of knowledge, it was decided to include this variable in this study.
Feeding Type
Appropriate feeding practices are vital for children's optimal growth and development. Breastfeeding is recognized as the gold standard for infant nutrition (Chen et al., 2021). Many of the components of breast milk offer multiple benefits to the child's health, growth, and development over the middle and long term. Breastfeeding has been associated with appropriate cerebral development, improved immunity, and a decreased risk of infections, metabolic diseases (including obesity and diabetes), asthma, and cardiovascular problems. It may also result in better mental health, improved cognitive and language development, and better academic performance (Grace et al., 2017; Jardí et al., 2018). Few studies exist, however, regarding its effect on motor development (Hernández Luengo et al., 2019). Moreover, these studies have focused more on analyzing the effects of a longer or shorter duration of breastfeeding on motor development than on the effects of breastfeeding as compared to other types of infant feeding (such as formula feeding). Among the limited studies that have considered this topic, results have been inconclusive, and studies analyzing the effects beyond 3 years of age are even scarcer. Bellando et al. (2020) found that, at the age of 3 months, breastfed infants displayed better motor development than formula-fed infants. However, at 12 and 24 months, no differences were found between the two groups. Similar results have been found by Michels et al. (2017), who suggested that the type of feeding during the first 4 months of life does not impact the ages at which gross motor milestones (standing and walking alone) are achieved.
Other studies have offered distinct results, suggesting that associations exist between infant feeding type and motor skills. Jardí et al. (2018) found that children who were exclusively breastfed for the first 4 months of life (as compared to those who were formula-fed or mixed-fed during that time) displayed better motor development at 6 months and 1 year of age. Results found by Kádár et al. (2021) suggest the same. At 1 year of age, children who were exclusively breastfed for 6 months showed the lowest incidence of delays in their motor development. Those who were exclusively formula-fed during this time had the highest incidence of delays.
This set of studies presents some important results, although motor development is only considered in a general manner, without differentiating between different motor skills. Very few studies have made this differentiation, and those that have done so reveal discrepancies as to whether or not the infant feeding type influences gross and fine motor skills. Sacker et al. (2006) reported that at 9 months of age, children who had never been breastfed were the most likely to have delays in motor development (both gross and fine). Similarly, and further supporting the positive effects of breastfeeding as compared to other infant feeding types for gross and fine motor skills, Dee et al. (2007) found that breastfeeding was a protective factor against developmental delays (for both gross and fine motor skills) in children aged 1-5. The results of Leventakou et al. (2015) were somewhat different, however. They found that at 18 months, no differences existed in the gross motor skill level of children who were never breastfed as compared to those who were. On the other hand, differences were found between children in terms of fine motor skills, which were lower in children who were never breastfed. Therefore, according to these authors, fine motor skills are more sensitive to the effects of feeding than gross motor skills. We are unaware of studies that have analyzed the effects of infant feeding type on different specific gross and fine motor skills.
Given the wide variety of results and this gap, there is clearly a need for additional research to determine the impact of early feeding type on motor skills, specifically considering its influence on different specific gross and fine motor skills. Existing studies have failed to consider this issue.
Sex
Numerous studies have suggested differences in the motor skills of boys and girls (Kokstejn et al., 2017; Matarma et al., 2020; Mecías-Calvo et al., 2021). These differences have been primarily explained by the different stereotyped activities, sporting or otherwise, that are carried out by the two sexes, and not by differences in their physical characteristics (body type, body composition, strength, and limb length), since, before puberty, these characteristics are quite similar in boys and girls (Bolger et al., 2020; Matarma et al., 2020). Some studies, however, have failed to find differences in preschool motor skills between boys and girls.
Discrepancies exist even among those who defend the idea that there are differences in motor skills according to sex. The influence of sex on infant motor skills appears to depend on the specific motor skill at hand, but there is no consensus as to the specific associations. Thus, discrepancies exist as to which sex displays better performance on each of the motor skills.
Regarding gross motor skills, some studies have found that boys outperform girls (Bolger et al., 2018, 2020; Wang et al., 2020), while other studies have suggested that girls outperform boys (Matarma et al., 2020), and others have found no differences between both sexes (Peyre et al., 2019; Martínez-Moreno et al., 2020). In terms of fine motor skills, girls have been found to have better performance than boys (Kokstejn et al., 2017; Peyre et al., 2019; Mecías-Calvo et al., 2021), although other studies have suggested that fine motor skills are very similar between both sexes.
These discrepancies regarding which motor skills present differences and which do not, and whether said differences favor boys or girls, become even greater when we consider the different specific skills making up the gross motor skills. Some studies have suggested that locomotor skills are higher in girls (Bolger et al., 2018, 2020; Wang et al., 2020), while other works claim that they are higher in boys (Robinson, 2010), and other studies have failed to detect any significant differences between both sexes (Bakhtiar, 2014; Foulkes et al., 2015; Barnett et al., 2016; Bolger et al., 2018, 2020). As for balance skills, some studies have shown that these skills are higher in girls (Venetsanou and Kambas, 2016; Kokstejn et al., 2017; Mecías-Calvo et al., 2021), while others indicate that they are similar for both sexes (Singh et al., 2015; Barnett et al., 2016). As for object control skills, some studies show higher levels in boys (Foulkes et al., 2015; Barnett et al., 2016; Venetsanou and Kambas, 2016; Kokstejn et al., 2017; Bolger et al., 2018, 2020; Mecías-Calvo et al., 2021), while others find similar levels between both sexes (LeGear et al., 2012; Bakhtiar, 2014). We are unaware of studies that have focused on the analysis of potential differences based on sex for the distinct specific preschool fine motor skills.
Given the disparity of results and this gap, additional research is clearly necessary in this area. Therefore, in our study, we have included the sex variable to analyze its influence on (global and specific) motor skills.
Age
It is well-known that as children grow, their motor skills improve (Bolger et al., 2018). What is not so well-known is whether significant differences exist between the motor skills of children born in the same year. In Spain (where this research was conducted), the educational policy groups together children based on their date of birth, with all children born in the same natural year (January 1 to December 31) being grouped in the same academic year. This is an attempt to seek the minimum number of differences between children in the same academic year, and to offer appropriate experiences for all. However, in fact, this means of grouping leads to cases of an almost full year's difference in the age of some students who are in the same academic year (12 months minus 1 day). That is, there is a chronological age difference between children of the same cohort. The results of this phenomenon are referred to as the "relative age effect" (RAE) (Gladwell, 2008). The RAE refers to the effects of being relatively younger or older than peers. It may result in children who are born earlier in their year of birth outperforming children of the same cohort who were born later in the year. Therefore, being born later potentially puts these children at a disadvantage as compared to their peers with earlier birthdays. The size of the RAE is inversely correlated with age, such that the RAE is more prominent in early grades (Aune et al., 2018).
Some studies have found that, even as early as in Early Childhood Education, children born at the beginning of the year displayed higher levels of gross and fine motor skills than their peers with later birthdays (Mecías-Calvo et al., 2021; Navarro-Patón et al., 2021). This appears to be due not only to their nervous and muscular systems having matured for a longer period of time, but also to their increased opportunities for motor practice, experiences, and feedback, issues that may help to refine their motor skills (Bolger et al., 2020; Cupeiro et al., 2020).
Although these studies have revealed the existence of an RAE on preschool motor skills, it should be noted that their results also suggest that the RAE does not affect all of the preschool motor skills, with discrepancies arising when attempting to determine which motor skills have an RAE and which do not. In addition, the use of distinct assessment instruments and the consideration of different motor skills prevent the comparison of studies. For example, Imbernón-Giménez et al. (2020) found an RAE on the control of objects, visual-motor integration, and the total gross motor skill score. However, they did not find it on locomotor skills, balance, or the total score for fine motor skills. Visual-motor coordination and the total motor skill score were not considered in this study. Mecías-Calvo et al. (2021) did not find results that coincide with those of these prior authors, since they found an RAE on balance but not on object control -unlike Imbernón-Giménez et al. (2020)-. Navarro-Patón et al. (2021), in turn, failed to find an RAE on the total motor skills score -an aspect that also diverges from Mecías-Calvo et al. (2021)-. More specific results were found by Imamoglu and Ziyagil (2017). These authors analyzed the RAE on locomotor and object control skills, detecting that only some -not all- locomotion skills were affected by this effect. The object control skills, as suggested by Mecías-Calvo et al. (2021), did not reveal an RAE, unlike the results of Imbernón-Giménez et al. (2020) and Navarro-Patón et al. (2021).
Given the wide variety of results, based on partial studies that do not consider all of the specific gross and fine motor skills identified in the literature, further research is necessary to determine which of the specific preschoolers' gross and fine motor skills are influenced by an RAE.
AIM
The objective of this study was to analyze whether there were influences of delivery mode, type of feeding during the first 4 months of life, sex, and age (more precisely, the RAE) on motor skills (considered at both a global and specific level), assessed in 5- and 6-year-old preschoolers.
Based on the existing literature on this area, we proposed the following hypotheses:
- H1: Differences will be found in childhood motor skills based on the delivery mode: children born by vaginal delivery will have higher motor skills than children born by C-section.
- H2: Differences will be found in childhood motor skills based on the type of feeding during the first 4 months of life: children who were exclusively breastfed during this time will have higher motor skills than children fed with formula or mixed-fed.
- H3: Differences will be found in childhood motor skills based on sex: boys will outperform girls on certain motor skills while, in other motor skills, the opposite will occur. Moreover, in other skills, no differences will be found between both sexes.
- H4: There will be an RAE on certain preschool motor skills: children born over the first half of the year will outperform their classmates who were born over the second half of the same year on some motor skills.
We believe that determining whether these variables have an influence on the motor skills of 5- and 6-year-olds may be of great assistance to educators as well as health, sports, and physical activity professionals, who may subsequently design more effective personalized interventions. These results may also be relevant for policymakers when implementing public health, social, and educational policies that promote appropriate motor skill development from very early ages, thereby enhancing physical activity and healthy lifestyles in children.
Methodology and Design
Data for this study are a subset of a broader research project focusing on the analysis of diverse childhood skills and competencies. A multimethod and mixed methods approach was used (Elliott, 2007; Sánchez-Algarra and Anguera, 2013; Anguera et al., 2018a). The multimethod approach consisted of selective methodology to determine the delivery mode, feeding type during the first 4 months of life, sex, and age (a questionnaire was used for this), as well as information referring to the sample's inclusion/exclusion criteria (questionnaires and standardized batteries were used); and observational methodology to observe preschool motor skills in the school context while the children participated in their regular motor skills sessions. Our study was also carried out from a mixed methods approach because observational methodology is itself considered a mixed method, since it integrates qualitative and quantitative elements in a succession of QUAL-QUAN-QUAL macro-stages (Sánchez-Algarra and Anguera, 2013; Anguera et al., 2018a, 2020a). In the first QUAL stage, an ad hoc observation instrument is built and applied to code the behaviors that are the subject of the study, taking into account the natural setting in which they occur. Then, in the QUAN stage, the quality of the observational data is tested and quantitative analyses are carried out to respond to the study objectives. The quantitative results obtained are qualitatively interpreted in the third and last stage (QUAL stage), considering the research problem and the literature. All of this permits a seamless integration.
Observational methodology plays an essential role in our study. It is a robust scientific method for analyzing regular behavior (like the motor behaviors studied in this work) in natural settings (such as the scholastic one, the context in which this research was carried out) (Suárez et al., 2018; Escolano-Pérez et al., 2019a,b; Anguera et al., 2020a,b; Sagastui et al., 2020). Furthermore, observational methodology is the most appropriate methodology for studying the behavior of young children (like those in this study, who were 5 and 6 years of age) (Anguera, 2001; Early Head Start National Resource Center, 2013; Blanco-Villaseñor and Escolano-Pérez, 2017; Escolano-Pérez et al., 2017; Escolano-Pérez et al., 2019b).
Of the eight types of existing observational designs (Anguera et al., 2018b), we employed a Nomothetic/Punctual/Multidimensional design. It was: "Nomothetic" because various units of observation were studied (43 children); "Punctual" because for each child, an observation session was carried out to study each of the motor skills of interest in the study; and "Multidimensional" because different response levels were observed, that is, distinct aspects were observed regarding the gross and fine motor skills, thereby following the theoretical proposal of distinct authors (Matheis and Estabillo, 2018;Gonzalez et al., 2019;Goodway et al., 2019;Meylia et al., 2020). These response levels are reflected in the observation instrument used (available in the Supplementary Material).
The observation was active, based on scientific criteria, non-participatory and direct (the level of perceptibility of the behaviors was complete). It was performed by direct observation of recorded film (Anguera et al., 2018b).
Participants
Preschool children aged 5 and 6 (N = 43: 15 boys and 28 girls; 34.88% and 65.12%, respectively) in the third year of Early Childhood Education (M age = 68.6 months; SD age = 3.59) from an intentionally selected public school participated in the study. The school was located in a middle-upper (socioeconomic) class neighborhood, in a city in the northeast of Spain.
All children had the following characteristics (meeting exclusion/inclusion criteria established for study participants): (1) absence of a history of pre, peri, or postnatal problems, neurological disease, sensory disturbance, mental or other clinically diagnosed impairment (such as attention-deficit hyperactivity disorder, developmental coordination disorder, developmental dysphasia, etc.) or special needs, according to the information provided by the parents of the children; (2) according to the school's management team, all participants were enrolled in the school since the 1st year of Early Childhood Education. That is, they were completing the entire second cycle of this educational stage (from 3 to 6 years of age) at this school; (3) they had appropriate IQ for their age, according to the assessment carried out by the research team using the BADyG-I (Battery of Differential and General Abilities I; Yuste and Yuste, 2001).
The study was part of a broader research project endorsed by the Research Unit of the University of Zaragoza. All participants were treated in accordance with the principles of the Declaration of Helsinki. Written informed consent was required from the children's parents.
Instruments Used for Selective Methodology
An ad hoc questionnaire to be completed by the participants' parents was used to determine the following: (1) information related to the exclusion criterion referring to a history of pre, peri, or postnatal problems, neurological disease, sensory disturbance, mental or other clinically diagnosed impairment (such as attention-deficit hyperactivity disorder, developmental coordination disorder, developmental dysphasia, etc.), or special needs; (2) information on the delivery mode, type of feeding during the first 4 months of life, sex, and age of each participant. More specifically, and regarding these variables, the questionnaire requested that the following be indicated: (a) delivery mode: select between vaginal delivery and C-section, according to the classification used in similar past studies, such as those by Khalaf et al. (2015) and Grace et al. (2017); (b) feeding type during the first 4 months of life: select between exclusive breastfeeding; exclusive formula or artificial milk feeding; and mixed-feeding (a combination of breast and formula feeding), according to the classification proposed by other similar past studies (Tozzi et al., 2012; Michels et al., 2017; Jardí et al., 2018; Chojnacki et al., 2019). It should be noted that this age (4 months) was selected because, according to studies conducted in Spanish contexts, this is a turning point in infant feeding: most Spanish mothers tend to stop breastfeeding at this point, given that their maternity leave ends and they have to return to work, and many then resort to other feeding options for their children (Jardí et al., 2018; Cabedo et al., 2019); (c) sex: select between masculine and feminine; (d) date of birth, indicating the day, month, and year. The questionnaire also contained a section for additional "considerations" allowing parents to clarify any responses.
To gather information referring to the inclusion criterion of being enrolled in the school since the 1st year of Early Childhood Education (that is, to be completing the entire second cycle of this educational stage in the school), another ad hoc questionnaire was used, to be completed by the preschool management team.
To determine whether all of the participants complied with the inclusion criterion of having an appropriate IQ for their age, the BADyG-I (Battery of Differential and General Abilities I; Yuste and Yuste, 2001) was used. It is one of the most widely used instruments in Spain (where the study was conducted) to measure student IQ, since it has suitable psychometric properties and provides a complete measurement including distinct verbal (Numerical-Quantitative Concepts, Information, and Graphic Vocabulary) and non-verbal (Reasoning with Figures, and Logic Puzzles) fields. BADyG-I offers a Verbal, Non-verbal, and General IQ.
Instruments Used for Observational Methodology
According to the GREOM (Guidelines for Reporting Evaluations Based on Observational Methodology; Portell et al., 2015), it is necessary to differentiate between recording (to record or code data) and observation (to observe a specific topic) instruments.
Recording instruments
The following recording instruments were used: (1) a video camera to record the children's motor sessions and (2) the free software Lince v.1.2.1 (Gabin et al., 2012) to code actions indicative of infant motor skills.
Observation instrument
We created a modified version of the original ad hoc observation instrument by Escolano-Pérez et al. (2020). The modifications included new categories, the elimination of other categories, and some more specific definitions. The observation instrument was a combination of a field format and systems of categories, given that the observational design was multidimensional (Anguera et al., 2018b). This observation instrument consists of a total of 26 criteria. Each criterion was broken down into a system of exhaustive, mutually exclusive categories. Overall, the observation instrument contained 82 categories. The selection of criteria and categories was based on the information provided by theoretical and empirical studies on childhood motor skills (Hestbaek et al., 2017;Oberer et al., 2017;Goodway et al., 2019;Haywood and Getchell, 2019); the Spanish Early Childhood Education curriculum, which determines the motor skills worked on during this educational stage (Education Science Ministry of Spanish Government, 2007), and the information obtained from the reality observed. The observation instrument is fully available (criterion; criterion description; category systems; category description, and category code) in the Supplementary Material. Table 1 shows its criteria and categories.
Data Analysis Software
All analyses were carried out using IBM SPSS version 25 (IBM Corp, 2017).
Procedure
The preschool management team was informed of the purpose, procedure, and benefits of the study. Once their approval was obtained, the parents were also informed and asked to complete the informed consent to authorize the participation of their children in the study.
Then, the parents that signed and delivered the informed consent were given an ad hoc questionnaire so that they could provide the information for the participant's exclusion criteria (having a history of pre, peri, or postnatal problems, neurological disease, sensory disturbance, mental or other clinically diagnosed impairment, or special needs), as well as information related to delivery mode, feeding type during the first 4 months of life, sex, and age. The preschool management team was given an ad hoc questionnaire to determine whether the potential participants complied with the first inclusion criterion: having been a student at the school since 1st year of Early Childhood Education.
Children who did not present exclusion criteria and who complied with the first inclusion criterion were assessed by the research team using the BADyG-I to verify their compliance with the second inclusion criterion: having a suitable IQ for their age. BADyG-I was administered following the instructions of its manual. To observe the children's motor skills, the research team designed recreational motor activities, taking the following into account: (1) the study objective; (2) theoretical and empirical studies on childhood motor development (Hestbaek et al., 2017; Oberer et al., 2017; Goodway et al., 2019; Haywood and Getchell, 2019); (3) the Spanish Early Childhood Education curriculum, which determines the content related to motor skills to be worked on during this educational stage and the pedagogical resources to be used for it, with play being especially highlighted (Education Science Ministry of Spanish Government, 2007); (4) the spatial-temporal characteristics of the motor skill sessions carried out by the children during their regular school programming. Based on all of this, seven recreational motor activities were created, requiring the use of the gross and fine motor skills that are of interest in this study (and previously defined in the Introduction Section). These skills are: locomotor skills; static balance; dynamic balance; receptive skills; propulsive skills (all referring to gross motor skills); and visual-motor coordination and visual-motor integration (referring to fine motor skills). All of the recreational motor activities designed were accompanied by a brief fantasy-type story about animals, which was used to attract the children's attention, increase their motivation, and encourage their engagement in the activities. This was done because imagination and fantasy, together with play, are the most common pedagogical resources used in Early Childhood Education (McLachlan et al., 2018), and animals are a common focus of attention in preschoolers (Born, 2018). Specifically, the seven recreational activities designed to promote the use of the different motor skills were:
- Leaping hare: this game required the use of locomotor skills. From a specific point, the child was to jump with both feet together, as far as possible. The child could not use his/her hands to help when landing the jump, so the landing was made on the feet. The child had three successive attempts (without recovery time) to do this.
- Blind frog: this recreational activity required static balance skills. The child was to remain as long as possible with his/her eyes closed, in a squatting position over the balls of the feet, keeping his/her body bent and the arms extended horizontally to the sides. If they lasted <5 s in this position, they could try again a second time (without recovery time).
- Jumping flea: this game involved dynamic balance. The child was to jump up and down without leaving a 25 cm square area painted on the ground, looking forward (not at the ground).
- Flamethrower dragon: this game referred to propulsive skills. The child was to horizontally throw a tennis ball from the height of his/her shoulder so that it passed through a 30 cm diameter hoop that was 1.5 m away. They had to throw the ball 8 times (four successive throws with each hand, without recovery time between the throws made by each hand).
- Ball-catching dog: this game implied receptive skills. The child was to catch a ball thrown by an adult from a distance of 1.5 m. The adult made four successive throws.
- Centipede wiping its feet: this game entailed visual-motor coordination. Using their thumb, the child was to touch the fingertips of the other four fingers of the same hand. They were to do this successively, beginning with the pinky finger until reaching the index finger. Once touching this finger, they were to repeat these movements in reverse order, that is, from the index finger to the pinky finger. This series of movements was to be carried out once with each hand. They had three successive attempts to accomplish the task.
- Cunning fox: this game involved visual-motor integration. The child was to copy consecutively six shapes appearing on a sheet. The child could not review the figure to copy it. During the copying, the child could erase, but not after the completion of the figure. The six figures to be copied were: a cross, triangle, square, arrow cross, rhombus, and triangle within another triangle.
The observation sessions were carried out in the school's motor development room, where the children's regular motor skills activities took place. The participants making up each class group attended the motor sessions at their regular time, together with their teacher, as usual. Before beginning each recreational motor activity, the teacher read the children the fantasy-based story corresponding to that activity. Visual instructions and a demonstration of each motor skill required were presented, maintaining the regular work method for the children and complying with the pedagogical guidelines indicated in the literature (Hamilton and Liu, 2018). The participants were familiar with this working method, but not with the tasks, which were new to them; i.e., it was the first time that the participants performed these tasks. The tasks were carried out in five motor sessions that were developed on alternate days (respecting, as already indicated, the school schedule of the children's motor sessions). The tasks performed in each motor session were the following: 1st session, Blind frog; 2nd session, Ball-catching dog and, 20 min later, Leaping hare; 3rd session, Jumping flea and, 20 min later, Centipede wiping its feet; 4th session, Cunning fox; and 5th session, Flamethrower dragon. This distribution of the tasks was carried out taking into account the usual duration of the motor sessions. The execution of each participant in each recreational motor activity was recorded for its subsequent observation and analysis. These recordings were imported into the Lince software and were coded using the observation instrument (available in the Supplementary Material). An expert observer in observational methodology, Early Childhood Education, and motor development coded all of the observation sessions (301 sessions). Two months later, they were once again coded to calculate intra-observer reliability. A second observer, also an expert in these areas, coded all of the observation sessions to calculate the inter-observer reliability. To do so, the coded data were converted into a matrix of codes.
Data Analysis
Data quality was calculated from a classic perspective by assessing the correlations between the categories of the observation instrument coded in each of the two recordings made by the first observer (intra-observer reliability), as well as the correlations between the categories coded in one recording by the first observer and those coded by the second observer (inter-observer reliability), using Pearson's, Kendall's Tau-b, and Spearman's correlation coefficients. In addition, Cohen's Kappa was used as an index of association.
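For readers who wish to reproduce this type of reliability check outside of SPSS, the following minimal sketch (our own illustration, not the study's actual script) shows how the same indices could be obtained in Python. It assumes that the two codings being compared have already been aligned as numeric category codes for the same observation sessions; the variable names and example codes are hypothetical.

# Reliability indices for two aligned coding passes (hypothetical data).
from scipy.stats import pearsonr, kendalltau, spearmanr
from sklearn.metrics import cohen_kappa_score

def reliability(codes_a, codes_b):
    # Correlation-based agreement plus Cohen's Kappa, as described above.
    return {
        "pearson_r": pearsonr(codes_a, codes_b)[0],
        "kendall_tau_b": kendalltau(codes_a, codes_b)[0],
        "spearman_rho": spearmanr(codes_a, codes_b)[0],
        "cohen_kappa": cohen_kappa_score(codes_a, codes_b),
    }

# Example: the same eight session segments coded twice (or by two observers).
observer_1 = [3, 1, 2, 2, 4, 1, 3, 2]
observer_2 = [3, 1, 2, 2, 4, 1, 3, 3]
print(reliability(observer_1, observer_2))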
To determine whether the variables of interest (delivery mode, feeding type during the first 4 months of life, sex, and age) influence motor skills, it was necessary to transform the observational data. For each participant, each category observed during the execution of each recreational motor activity was transformed into a score based on its degree of suitability for the execution of that activity, according to the literature on this area (Goodway et al., 2019; Haywood and Getchell, 2019). For each participant, the scores obtained in each activity were added. Thus, every participant obtained seven scores, each referring to one of the seven specific motor skills studied in this work: locomotor skills; static balance; dynamic balance; propulsive skills; receptive skills; visual-motor coordination; and visual-motor integration. Based on these scores, the following scores were also calculated: total balance score (sum of the scores obtained on static balance and dynamic balance); total object control skills score (sum of the scores on propulsive and receptive skills); total gross motor skills score (sum of the scores on locomotor skills, static balance, dynamic balance, propulsive skills, and receptive skills); total fine motor skills score (sum of the scores on visual-motor coordination and visual-motor integration); and total motor skills score (sum of the scores on the seven specific motor skills: locomotor skills, static balance, dynamic balance, propulsive skills, receptive skills, visual-motor coordination, and visual-motor integration). Therefore, each participant received a total of 12 scores.
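As an illustration of this aggregation step, the short sketch below (hypothetical column names and invented scores, not the authors' code) shows how the 12 scores per participant could be derived from the seven specific-skill scores using pandas.

# Roll the seven specific-skill scores up into the five aggregate scores.
import pandas as pd

df = pd.DataFrame({  # one row per participant (invented values)
    "locomotor": [8, 6], "static_balance": [5, 7], "dynamic_balance": [6, 6],
    "propulsive": [4, 3], "receptive": [3, 4],
    "visuomotor_coordination": [5, 6], "visuomotor_integration": [7, 5],
})

df["total_balance"] = df["static_balance"] + df["dynamic_balance"]
df["total_object_control"] = df["propulsive"] + df["receptive"]
df["total_gross"] = df[["locomotor", "static_balance", "dynamic_balance",
                        "propulsive", "receptive"]].sum(axis=1)
df["total_fine"] = df["visuomotor_coordination"] + df["visuomotor_integration"]
df["total_motor"] = df["total_gross"] + df["total_fine"]
print(df)  # 7 specific + 5 aggregate columns = 12 scores per participant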
To analyze whether there were differences in the motor skills based on delivery mode, the children's motor scores were grouped into two groups: those corresponding to children born via vaginal delivery and those of the children born via C-section.
To analyze whether there were differences in the motor skills based on feeding type during the first 4 months, the children's motor scores were grouped into two groups: one group made up of scores belonging to children who were exclusively breastfed, and another group made up of scores for the rest of the children (those fed exclusively with formula plus children receiving mixed-feeding), that is, those who received formula feeding to a greater or lesser extent. Given the sample size, it was impossible to create three groups based on the three types of feeding that were initially considered in the questionnaire. Therefore, and as in Tozzi et al. (2012), this classification was made based on two groups: exclusive breastfeeding and formula feeding.
To analyze whether there were differences in motor skills based on the participant's sex, their motor scores were grouped based on their sex, creating two groups: boys and girls.
To analyze whether there was an RAE on motor skills, the motor scores of the participants were grouped into two groups based on the half of the year in which they were born, according to the grouping criteria used in past studies (Martínez-Moreno et al., 2020): group 1 = children born during the second half of the year, that is, from July 1 to December 31, who were the youngest participants (their ages were between 63 and 68 months); group 2 = children born during the first half of the year, that is, from January 1 to June 30, who were the oldest participants (their ages ranged from 69 to 74 months).
We calculated descriptive statistics in terms of group means (M) and standard deviations (SD). In all of the analyses comparing means, it was verified whether the data followed a normal distribution using the Shapiro-Wilk test. In cases in which the data followed a normal distribution, a one-way ANOVA was used. In all other cases, that is, for data not having a normal distribution, the Mann-Whitney U test was used, although significant differences were never obtained with this test. All p-values lower than 0.05 (two-tailed) were considered statistically significant. For each of the differences obtained, the effect size was calculated using Cohen's d (Cohen, 1988), applied to the comparison of the means between groups, establishing the cut-off points of 0.00-0.19 = negligible; 0.20-0.49 = small; 0.50-0.79 = medium; and 0.80 or above = high.
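The decision rule described above can be summarized in a short Python sketch (hypothetical; the study itself used SPSS, and the group scores below are invented for illustration): test normality in both groups, then compare them with a one-way ANOVA or a Mann-Whitney U test, and report Cohen's d.

import numpy as np
from scipy.stats import shapiro, f_oneway, mannwhitneyu

def cohens_d(a, b):
    # Cohen's d for two independent groups, using the pooled standard deviation.
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

def compare_groups(a, b, alpha=0.05):
    normal = shapiro(a).pvalue > alpha and shapiro(b).pvalue > alpha
    if normal:
        stat, p = f_oneway(a, b)      # one-way ANOVA (two groups)
        test = "ANOVA"
    else:
        stat, p = mannwhitneyu(a, b)  # non-parametric alternative
        test = "Mann-Whitney U"
    return {"test": test, "statistic": stat, "p": p, "cohens_d": cohens_d(a, b)}

# Example: comparing one motor score between two delivery-mode groups.
print(compare_groups([12, 14, 11, 15, 13, 14], [10, 11, 9, 12, 10, 11]))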
RESULTS
For the rest of the criteria, the Kappa value was 1. Therefore, the intra- and inter-observer reliability was found to be excellent, as was the quality of our observational data.
Significant differences were obtained in some of the motor skills measured based on delivery mode, type of feeding during the first 4 months of life, sex, and age.
Regarding the mode of delivery, children born by vaginal delivery were always found to have higher scores than children born via C-section (Table 2), except in static balance. These differences were statistically significant in throwing, total object control skills, total gross motor skills, and total motor skills. Regarding the type of feeding during the first 4 months of life (exclusive breastfeeding or formula feeding), significant differences were found in throwing, visual-motor integration, total object control skills, total gross motor skills, and total motor skills, with the children who were exclusively breastfed obtaining the higher scores (Table 3).
As for sex (Table 4), statistically significant differences were found, with higher scores in boys for throwing and total object control skills, and higher scores in girls for static balance, visual-motor coordination, and total fine motor skills. Regarding age, statistically significant differences were found in dynamic balance, total balance, and visual-motor integration. In all cases, the higher scores were obtained by the group made up of the older children (aged 69-74 months), that is, those born in the first half of the year. Therefore, there was an RAE on the indicated motor skills.
DISCUSSION
This study has examined whether there were influences of delivery mode, feeding type during the first 4 months of life, sex, and age (more precisely, the RAE) on motor skills (considered globally and specifically) evaluated in 5- and 6-year-old preschoolers. The results obtained suggest that this is a complex topic, given that the influence of each of these variables on the studied motor skills is specific. In other words, their influence varies depending on the specific motor skill considered and on whether or not an overall score is used. Therefore, given that some (but not all) of the examined motor skills were found to be influenced by delivery mode, type of feeding during the first 4 months of life, sex, or age, it may be determined that two of the initially proposed hypotheses were corroborated (H3 and H4), while the other two were only partially supported (H1 and H2). It is difficult to make direct comparisons of these results with those from the literature, and any such comparison should be made carefully given the heterogeneity of the samples from each study (different ages, distinct socioeconomic and cultural contexts, etc.), the different motor skills studied, and the distinct activities/tasks and instruments used.
H1 stated that differences existed in the children's motor skills based on delivery mode. We expected to find that children born via vaginal delivery would have higher motor skills than those born via cesarean section. The results indicate that not all of the motor skills revealed differences between the two types of children. Children born via vaginal delivery displayed higher scores on: throwing, total object control, total gross motor skills, and total motor skills; there was a medium or large effect size in all of the cases. For the remaining motor skill scores, no significant differences were found. Therefore, only some gross motor skills (and no fine motor skills), as well as the total motor skills score, were found to be influenced by delivery mode. These results support the findings of past studies such as those by Rebelo et al. (2020), who also found that the influence of delivery mode on motor skills varied depending on the type of motor skill considered. Our results are coherent with those of these authors, as they suggest differences favoring children born by vaginal delivery for object control skills and total motor skills, and no differences for locomotor and balance skills. However, unlike the results found by these authors, we did not find an influence of delivery mode on visual-motor coordination, visual-motor integration, or total fine motor skills (all referring to fine motor skills). Similarly, our results vary from those found by Blazkova et al. (2020), who also found differences in visual-motor integration based on the children's delivery mode. Considering our results, and unlike those of other studies (Grace et al., 2017), it is not possible to absolutely declare that being born by C-section will result in poorer motor skills. However, it may be suggested that its influence appears to be specific to certain motor skills. The discrepancies arising between studies may be due not only to the previously mentioned variables (different sample characteristics, motor skills, tasks, and instruments used) but also to the classification of the delivery modes used in each study. In our study, even though a classification previously used in the literature was employed, no differentiation was made as to whether the vaginal delivery involved the use of instruments (for example, forceps, vacuum, or spatulas), or whether the C-section was planned or due to an emergency. Some authors have indicated that these aspects, not considered in our work, may have distinct effects on children's motor development (Tribe et al., 2018; Takács et al., 2020).
As for H2, referring to the existence of differences in the children's motor skills based on the type of feeding received during the first 4 months of life, we expected to find that children who were exclusively breastfed during this period would have better motor skills than those fed with formula or via a mixed-feeding mode. The results indicate that only some of the motor skills presented differences based on feeding type (although the effect size was always medium or large). These skills are: throwing; visual-motor integration; total object control; total gross motor skills; and total motor skills, with the children who were exclusively breastfed obtaining the higher scores. For the rest of the motor skills analyzed, no statistically significant differences were found. These results are distinct from those of Bellando et al. (2020) and Michels et al. (2017), who did not find any effect of feeding type on motor skills beyond the first 3 months of life. Our results are along the lines of those of Jardí et al. (2018) and Kádár et al. (2021), since we found that the influence of infant feeding type on motor skills continues for longer periods of time. Furthermore, as in Dee et al. (2007), this influence is found for some gross and some fine motor skills. In our study, more gross motor skill scores (4) than fine ones (1) were influenced by feeding type, allowing us to conclude that gross motor skills appear to be more sensitive to the influences of feeding type than fine motor skills, unlike the findings of Leventakou et al. (2015).
Once again, these differences may be due to distinct factors. In addition to those mentioned above, the feeding time period that participants were asked about in the distinct studies should be considered. As explained above, in our study, parents were asked about the feeding type for the first 4 months of life, given that, in Spain, this is when maternity leave ends and mothers tend to go back to work, often deciding to no longer breastfeed (Jardí et al., 2018; Cabedo et al., 2019). Therefore, asking about the feeding type beyond these first 4 months of life would probably not have resulted in the creation of a group of children who were exclusively breastfed. Other studies on breastfeeding and infant feeding, carried out in other countries (not Spain), also used this time period as the turning point in infant feeding (Michels et al., 2017). However, other works have used a cut-off point of 6 months (Grace et al., 2017; Kádár et al., 2021). This temporal difference may contribute to the distinct results found among the different studies. Some works have also considered the frequency of feeding (how often a child was breastfed or how much milk was drunk each day) (Khan et al., 2019), an issue that may also lead to the variable results of the literature.
As for the influence of sex on the children's motor skills (H3), as we hypothesized, the results indicate that boys outperformed girls on certain skills (throwing and total object control), while girls outperformed boys on other motor skills (static balance, visual-motor coordination, and total fine motor skills). In all of the cases, the effect size was medium or large. Also in line with the hypothesis, for certain motor skills (the remaining motor skills studied), no significant differences were found between both sexes. Our results are coherent with those found by other authors who also failed to detect differences in locomotor skills (Bakhtiar, 2014; Foulkes et al., 2015; Barnett et al., 2016; Bolger et al., 2018, 2020), and who found better object control skills in boys (Foulkes et al., 2015; Barnett et al., 2016; Venetsanou and Kambas, 2016; Kokstejn et al., 2017; Bolger et al., 2018, 2020; Mecías-Calvo et al., 2021). It should be noted that other studies have obtained distinct results, defending the existence of differences in locomotor skills between the sexes, in favor of boys (Robinson, 2010) or girls (Bolger et al., 2018, 2020; Wang et al., 2020), or no differences in object control (LeGear et al., 2012; Bakhtiar, 2014). As for balance skills, our results have demonstrated better static balance in girls, but no difference between both sexes in dynamic balance and total balance. The lack of a difference between both sexes in total balance is in line with the findings of other studies (Singh et al., 2015; Barnett et al., 2016), although it contradicts others that found higher scores in girls (Venetsanou and Kambas, 2016; Kokstejn et al., 2017; Mecías-Calvo et al., 2021). The remaining results referring to the balance skills (better static balance in girls and no differences in dynamic balance) cannot be compared with past works, since those studies do not differentiate between both types of balance skills, offering an overall score on balance skills. Therefore, our work offers additional information to overcome this information deficiency in the literature regarding the specific balance skills of preschoolers.
Our results also offer novel information on the influence of sex on specific preschool fine motor skills, given that there was a gap in the literature regarding this issue. Our results indicate higher scores for girls on visual-motor coordination and total fine motor skills and a lack of differences in visual-motor integration. Therefore, it may be concluded that sex appears to influence motor skills in a distinct manner, when considered both globally and specifically. According to many authors, these differences between girls and boys are not necessarily due to their physical characteristics (since, before puberty, they are quite similar) but rather may be caused by the distinct experiences of boys and girls participating in different activities. This may be related to gender stereotypes, often promoted by parents and teachers (Bolger et al., 2020; Matarma et al., 2020). Therefore, girls tend to be more likely to participate in cultural and artistic activities (painting, drawing, handicrafts, or playing an instrument, which are more related to fine motor skills) and are less likely to be involved in sporting activities (more associated with gross motor skills). When they do participate in physical and sports activities, these tend to be ones such as ballet (associated with balance) as opposed to ball sports such as soccer or tennis (related to object control) (Hernández Luengo et al., 2019; Bolger et al., 2020; Matarma et al., 2020). However, coeducation and gender equality policies are becoming increasingly frequent in our country (Venegas et al., 2019), which may explain why, in our study, there was a larger number of motor skills in which no differences were found based on sex, as compared to those in which differences did indeed exist.
H4 refers to the RAE on preschool motor skills. We hypothesized that children born during the first half of the year would outperform those born during the second half of the same year on certain motor skills. Our results corroborate this hypothesis. Children born during the first half of the year, that is, the older children, displayed better visual-motor integration, dynamic balance, and total balance, with a medium to large effect size in all of the cases. No differences were found in the remaining motor skills examined. These results support some of the results found in the literature but contradict others. While some other authors also found an RAE on balance and on visual-motor integration, other studies have not confirmed the existence of differences in balance (Navarro-Patón et al., 2021). In our study, as in other works, no differences were found in total object control (Imamoglu and Ziyagil, 2017; Mecías-Calvo et al., 2021) or total motor skills. Other works contrast with these results (the existence of differences in object control: Imbernón-Giménez et al., 2020; Navarro-Patón et al., 2021; and in total motor skills: Mecías-Calvo et al., 2021). Our study also failed to find an RAE on visual-motor coordination, unlike other studies (Navarro-Patón et al., 2021). It also did not find an RAE on locomotor skills. As for the latter, Imamoglu and Ziyagil (2017) found differences in some of these skills, but not in others, suggesting a great specificity of the RAE on motor skills, since even within one type of motor skill, such as locomotor skills, the results may vary depending on the specific task or activity being analyzed. Therefore, as mentioned above, the distinct tasks used to assess the motor skills in the diverse studies make the direct comparison of results quite difficult (De Niet et al., 2021) and may contribute to the variety of results in the literature. We are unaware of studies that have analyzed the RAE on the remaining specific motor skills considered in our study: throwing, catching, static balance, and dynamic balance, aspects in which, except for the latter, we have not detected an RAE.
To conclude, the distinct motor skills analyzed reveal distinct degrees of sensitivity to the influence of delivery mode, infant feeding type, sex, and RAE. Vaginal delivery, having been exclusively breastfed for the first 4 months of life, and being older than one's peers (as opposed to being born via C-section, having been formula-fed, and being younger than one's classmates) are characteristics that appear to favor certain motor skills.
Although not all of the motor skills are positively influenced by these aspects, no motor skills are negatively influenced by them. Sex influences some (but not all) motor skills, with boys outperforming girls for some skills, and the opposite being found for others.
The specificity of the results obtained suggests the need to design individualized interventions aiming at improving the motor skills which may be at risk for each child, based on their present (such as sex and age) and past (such as delivery mode and feeding type) characteristics. According to other authors, this series of results allows us to conclude that certain biological events (such as sex and age), and some experiences in very early life (such as delivery mode and type of feeding during the first 4 months), are especially influential on preschool motor skills and that the influences of some very early experiences on human development may be evident even years later (Nelson and Gabard-Durnam, 2020).
Although these results should be carefully considered due to the limitations of this study (see below), they may be quite relevant, given the current lack of knowledge on preschool motor skills (Imbernón et al., 2021). This study attempts to fill this gap, providing information on the influence of certain factors on these skills, an essential aspect for designing effective interventions that respond to the distinct needs of children. It should be noted that a highlight of our study is the analysis of the influence of four factors on these motor skills. In the majority of studies of this type, only one factor is considered (Barnett et al., 2016). Therefore, our study offers information that may be of great interest as it permits a deeper understanding of preschool motor skills. It is especially relevant and useful for teachers, other professionals, and researchers working with children in healthcare, educational, social, or sporting environments. Our results should also be considered by policymakers, given that they suggest the need to implement public policy strategies aiming to improve children's motor skills and that would, thereby, promote a physically active and healthy lifestyle. This will be considered in greater detail below.
As for the contributions and implications of this study for daily teaching practices, we consider that the information about the recreational motor activities and the assessment process of motor skills, as well as the observation instrument offered, can be very useful. This is even more so if we consider that: (1) the Education Science Ministry of Spanish Government (2007) and other international institutions (Early Head Start National Resource Center, 2013) indicate that preschooler development and learning must be assessed using direct and systematic observation; and (2) many early childhood teachers recognize their lack of knowledge, skills, and resources in the motor assessment field (Cueto et al., 2017). Therefore, we believe that the detailed and extensive assessment process of preschool motor skills conducted via systematic observation is another strong point of this work. This means that the assessment of motor skills was: (1) objective -not subjective, as some teachers had admitted to being (Cueto et al., 2017)- and not based on third-party information, as often occurs in studies (Khalaf et al., 2015; Takács et al., 2020), despite the limitations that this may imply (Blanco-Villaseñor and Escolano-Pérez, 2017); (2) carried out in the child's natural setting, such as the school, capturing the spontaneous motor execution of the children during recreational activities that are significant and of interest to them. This assessment is characterized by a high ecological validity, and allows us to overcome the ecological validity issues present in other studies on motor skill assessment (Tamplain et al., 2020); and (3) carried out using an instrument created based on the objective and context of the study. In other words, it did not use an instrument created from a clinical perspective, like the majority of the tools intended for the assessment of children's motor skills, which, despite their limitations, are often used in studies carried out in scholastic contexts (Lindsay et al., 2018; Klingberg et al., 2019; Morley et al., 2019). It should be noted that, despite the previously mentioned relevance of Early Childhood Education for the appropriate development and learning of children's motor skills, the literature highlights a lack of instruments available to assess said skills in the educational environment (Klingberg et al., 2019; Morley et al., 2019). Our study, and specifically the observation instrument offered, which is also free of charge, contributes to eliminating this gap. This instrument also overcomes the limitations of the instruments for motor skill assessment developed from a clinical perspective, which are widely used without considering the context in which the assessment is performed [such as the Movement Assessment Battery for Children-2 (MABC-2), the Motor-Proficiency-Test (MOT4-6), or the Test for Gross Motor Development-2 (TGMD-2)]: (1) these instruments require specific materials, unavailable in the school setting (Platvoet et al., 2018); (2) in these instruments, aimed at the assessment of children with risks or difficulties, the spectrum of levels of the motor skills assessed tends to be limited. Consequently, they do not allow for the determination of the large variability of skills that may be demonstrated by children with more typical development, or even the levels of motor skills that may be demonstrated by children with more advanced or highly stimulated development (Klingberg et al., 2019; Morley et al., 2019).
The observation instrument used in this study overcomes these limitations since: (1) it has no equipment requirements; and (2) it permits the assessment of a broad spectrum of motor skill levels (from low to high performance), considered at both a global and a specific level. This is noteworthy, since many motor skill assessment instruments permit the evaluation of only certain specific motor skills, rather than all the skills that have been identified at a theoretical level. Thus, many motor skill assessment instruments reveal inconsistencies between theory and practice: although the classification into gross and fine motor skills is widely accepted at a theoretical level, along with the differentiation of various specific motor skills within each category, most assessment instruments do not reflect this structure and are too restrictive and insufficient to assess the large set of abilities making up motor skills (De Niet et al., 2021). Furthermore, in line with the recommendations of the most recent literature (Palmer et al., 2021), this instrument combines process-oriented assessment of motor skills (how a movement is performed) with product-oriented assessment (the outcome of a movement). To date, product-based measures are the most common (Chang et al., 2020), and very few studies use both product- and process-based assessments to measure preschool motor skills (Szeszulski et al., 2021). Our study has addressed this gap. Although both approaches are considered, the process approach carries more weight in our instrument, since it provides more useful information with which teachers can give children instructional, skill-specific feedback on their performance, a necessary element for educational practices aimed at the development of motor skills to be more effective (Bolger et al., 2020).
Ultimately, all of the characteristics that define our instrument make it useful and appropriate for teachers to perform an objective, thorough, and in-depth assessment of preschoolers' specific gross and fine motor skills. Based on this information, educational practices responding to the needs of each child may be designed.
All of these issues may also be of interest to researchers, given that the comprehensive assessment of children's motor skills is a significant concern in the contemporary child motor research field (Chang et al., 2020).
Our results suggest the need to develop public health, social, and educational policies that promote children's motor skills. In clinical practice, awareness should therefore be raised so that obstetricians adopt the World Health Organization (2018) recommendations to reduce unnecessary cesarean sections and develop high-quality antenatal education programs that inform parents about the effects of cesarean delivery, in order to avoid C-sections on demand. The same applies to feeding. In addition to informing parents of the benefits of breastfeeding, a social environment that favors it should be created. Social policies should be implemented and facilities offered to promote this practice, such as extended maternity leave or the creation of breastfeeding rooms at workplaces and social venues. Our results also suggest the need to reflect on the organizational policies of the school system, given the RAE detected for certain motor skills. Grouping students based on the half-year in which they were born, rather than the entire year, would result in more similar levels of motor skills among children attending the same class, thus promoting a more beneficial educational experience for all.
Certain study limitations should be considered. The information on the mode of delivery and the type of feeding during the first 4 months of life was collected retrospectively from the parents; therefore, some recall bias may have taken place. However, the retrospective collection of this type of data is widely used in the literature to obtain perinatal data and to characterize child developmental histories (Khalaf et al., 2015; Bornstein et al., 2020), given the difficulty (and even impossibility) of obtaining data from medical or other professional records. Some authors indicate that the validity and reliability of parental recall are assured when the data are collected within 1-3 years after the relevant event took place (Grace et al., 2017). Other authors extend this period up to 6 years (Keenan et al., 2017), or even up to 20 years, after the event (Natland et al., 2012). In line with these authors (Natland et al., 2012; Keenan et al., 2017), we can consider the information provided by the parents in this study to be valid and reliable.
Some authors have indicated that the validity and reliability of parental recall are affected by aspects such as the specificity of the event in question (Bornstein et al., 2020). To facilitate and increase the validity and reliability of parental recall, as mentioned previously, parents in this study were asked only about general aspects of the delivery, specifically whether it was vaginal or cesarean, without requesting more detailed information. In the future, it would be interesting to collect as much information as possible about other, more specific aspects of vaginal or cesarean delivery (for example, whether instruments were used during vaginal delivery), although this implies assuming a greater risk of parental recall bias.
A similar situation is found for feeding type. As previously mentioned, no information was collected in our study on the frequency or duration of breastfeeding or on the type of feeding used after the first 4 months, aspects which may also affect children's motor skills (Khan et al., 2019). In the questionnaire administered to the parents, the response options did not include providing breast milk in a bottle, another possible feeding type. However, none of the parents indicated this in the "considerations" section of the questionnaire. These limitations may be worth considering in future studies.
It should also be considered that this study used a single-time-point design. In the future, it may therefore be interesting to carry out a follow-up study to determine whether the influence of the variables studied here on the distinct motor skills changes as the children grow up.
Another aspect to be considered is the small sample size and its non-random nature. It should be noted, however, that observational studies do not seek the representativeness of the sample but rather its intensive study: there is greater interest in obtaining a large amount of detailed information on the natural behavior of a small number of participants than in representativeness with respect to a larger population (Anguera, 2003). Nevertheless, in the future, it may be interesting to increase the number of participants, which would also assist in analyzing the influence on motor skills of each of the three infant feeding types considered in the questionnaire. However, it should be taken into account that, given that the participants are minors and that their motor skills are assessed using observational methodology, increasing the sample size may entail great complexity, effort, and dedication (Salamon, 2017; Maddox, 2019). Therefore, before increasing the sample size, it may be advisable to conduct a generalizability analysis and an optimization plan to assess the costs and benefits.
Another limitation of this study is that it did not consider potential confounders. This is a common limitation in this type of study, given the complexity of motor skills and their development (Hernández Luengo et al., 2019). In the future, it may therefore be interesting to also consider the effect on motor skills of potential interactions between the variables analyzed in this study, as well as other variables that were not considered and which, apart from or in addition to these factors, may also affect children's motor skills. These variables may refer to the child (anthropometric measurements such as weight and body mass index; the type of activities, sporting and otherwise, carried out in their free time; etc.), the parents (mother's age at birth, parents' education, smoking during pregnancy, etc.), the family context (quality of home stimulation received, presence of older siblings acting as models for developing motor skills, etc.), and the social context in which the child develops (such as proximity and ease of access to sports facilities). Numerous factors may influence childhood motor skills. Although it was not within the scope of this study to include every such variable in the analyses, given this complexity they should be carefully considered when interpreting our results and conducting further research in this area. In the future, interdisciplinary collaborations will be necessary to better understand how and why these and other potential factors exert specific influences on motor skills.
CONCLUSION
Preschool motor skills are a complex topic. They show distinct degrees of sensitivity to different early environmental and biological variables, such as delivery mode, type of feeding during the first 4 months of life, sex, and age. More specifically, vaginal delivery, having been exclusively breastfed for the first 4 months of life, and being older than one's peers (as opposed to being born via C-section, having been formula-fed, and being younger than one's classmates) favor certain, though not all, preschool motor skills, and none of the motor skills assessed was negatively influenced by these factors. Sex influences some (but not all) motor skills, with boys outperforming girls in some skills and girls outperforming boys in others.
These results have important practical implications for teachers, other professionals, and researchers working with children in healthcare, educational, social, or sporting environments. They should also be considered by policymakers, given that they suggest the need to implement public health, social, and educational strategies aimed at improving children's motor skills and, thereby, promoting a physically active and healthy lifestyle.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Research Unit of the University of Zaragoza. The research was also approved by the school management team. In accordance with Organic Law 15/1999 of December on the Protection of Personal Data (1999, Official State Gazette no. 298, of December 14), all parents of the participants signed the informed consent authorizing their children's participation in the study and the recording of the children. Furthermore, and following the guidelines of the aforementioned law, observers signed a confidentiality agreement. No special ethical approval was required for this research since the Spanish public education system and national regulations do not require such approval. Each participant received a small reward (two chocolates) in gratitude for their participation. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
EE-P was involved in conceptual and methodological structure, literature review, data collection, systematic observation, manuscript drafting, and discussion. CS-L was involved in methodological structure and data analysis. MH-N was involved in data collection and systematic observation. All of the authors contributed to revising the manuscript and provided final approval of the version to be published.
FUNDING
The authors gratefully acknowledge the support of the Spanish government's subproject Integration ways between qualitative and quantitative data, multiple case development, and synthesis review as main axis for an innovative future in physical activity and sports research [PGC2018-098742-B-C31] (2019-2021) (Ministerio de Ciencia, Innovación y Universidades/Agencia Estatal de Investigación/Fondo Europeo de Desarrollo Regional), part of the coordinated project New approach of research in physical activity and sport from mixed methods perspective (NARPAS_MM) [SPGC201800X098742CV0]. EE-P and MH-N also wish to acknowledge the support of the Aragon Government Research Group, Grupo de Investigación de Referencia Educación y Diversidad (EDI) [Grant number S49_20R] and the Department of Psychology and Sociology of the University of Zaragoza.